Tuesday, July 19, 2022

Spark writing to S3 failed: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument

Symptom:

When using Spark to write to S3, the insert query failed:

Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:893)
at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:869)
at org.apache.hadoop.fs.s3a.S3AUtils.getEncryptionAlgorithm(S3AUtils.java:1580)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.execution.datasources.FindDataSourceTable.$anonfun$readDataSourceTable$1(DataSourceStrategy.scala:252)
at org.sparkproject.guava.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)

Env:

spark-3.2.1-bin-hadoop3.2
hadoop-aws-3.2.3.jar
aws-java-sdk-bundle-1.11.375.jar
guava-14.0.1.jar
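
The JVM descriptor in the error message tells us exactly which overload is missing. The sketch below (a hypothetical helper, not part of any Spark or Hadoop tooling) decodes it to checkArgument(boolean, String, Object, Object). Guava 14.0.1 only has the varargs form of checkArgument, while hadoop-aws-3.2.3 was compiled against a newer Guava that added the two-Object overload, so this points to a Guava version mismatch on the classpath.

```python
# Hypothetical helper (not a Spark/Hadoop tool): decode a JVM method
# descriptor, like the one in the NoSuchMethodError above, into Java types.
PRIMITIVES = {"Z": "boolean", "B": "byte", "C": "char", "S": "short",
              "I": "int", "J": "long", "F": "float", "D": "double", "V": "void"}

def decode_descriptor(desc):
    args, i = [], 1                      # skip the leading '('
    while desc[i] != ")":
        dims = 0
        while desc[i] == "[":            # array dimensions
            dims, i = dims + 1, i + 1
        if desc[i] == "L":               # object type: Lpkg/Class;
            end = desc.index(";", i)
            t = desc[i + 1:end].replace("/", ".")
            i = end + 1
        else:                            # primitive type
            t = PRIMITIVES[desc[i]]
            i += 1
        args.append(t + "[]" * dims)
    ret = desc[i + 1:]                   # everything after ')'
    if ret in PRIMITIVES:
        ret_type = PRIMITIVES[ret]
    else:
        ret_type = ret[1:-1].replace("/", ".")  # strip 'L' and ';'
    return args, ret_type

args, ret = decode_descriptor(
    "(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V")
print("checkArgument(%s) -> %s" % (", ".join(args), ret))
# checkArgument(boolean, java.lang.String, java.lang.Object, java.lang.Object) -> void
```

Decoding the descriptor this way is handy for any NoSuchMethodError: if the decoded signature is absent from the jar actually on your classpath, you are looking at a version mismatch.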

Spark writing to S3 failed: java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.

Symptom:

When using Spark to write to S3, the insert query failed:

java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
at org.apache.hadoop.fs.s3a.impl.StoreContext.createThrottledExecutor(StoreContext.java:292)
at org.apache.hadoop.fs.s3a.impl.DeleteOperation.<init>(DeleteOperation.java:206)
at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:2468)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:532)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.abortJob(FileOutputCommitter.java:551)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:242)
at org.apache.spark.sql.rapids.GpuFileFormatWriter$.write(GpuFileFormatWriter.scala:262)

Env:

spark-3.2.1-bin-hadoop3.2

hadoop-aws-3.2.1.jar

aws-java-sdk-bundle-1.11.375.jar
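
The constructor signature here takes an unshaded com.google.common ListeningExecutorService, which suggests hadoop-aws-3.2.1 and the Hadoop jars bundled with this Spark build are not the same Hadoop release (newer Hadoop releases switched to a shaded Guava internally). A quick way to catch this class of problem is to verify that every hadoop-* jar under $SPARK_HOME/jars carries the same version. A minimal sketch, where check_hadoop_jar_versions is a hypothetical helper and the jar names are illustrative:

```python
import re

def check_hadoop_jar_versions(jar_names):
    """Map each hadoop-* artifact to its version; more than one distinct
    version across the artifacts usually means a mismatched drop-in jar."""
    pat = re.compile(r"^(hadoop-[a-z0-9-]+?)-(\d[\d.]*)\.jar$")
    versions = {}
    for name in jar_names:
        m = pat.match(name)
        if m:
            versions[m.group(1)] = m.group(2)
    return versions

# Illustrative listing of $SPARK_HOME/jars, not an actual Spark distribution.
jars = ["hadoop-client-api-3.3.1.jar", "hadoop-client-runtime-3.3.1.jar",
        "hadoop-aws-3.2.1.jar", "aws-java-sdk-bundle-1.11.375.jar"]
versions = check_hadoop_jar_versions(jars)
mismatched = len(set(versions.values())) > 1
print(versions, "MISMATCH" if mismatched else "OK")
```

If the versions disagree, the usual fix is to drop in a hadoop-aws jar whose version exactly matches the bundled Hadoop jars, together with the aws-java-sdk-bundle version that hadoop-aws release was built against.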

Thursday, September 23, 2021

How to access Azure Open Dataset from Spark

Goal:

This article explains how to access Azure Open Dataset from Spark.

Env:

spark-3.1.1-bin-hadoop2.7

Monday, May 3, 2021

Understand Decimal precision and scale calculation in Spark using GPU or CPU mode

Goal:

This article researches how Spark calculates Decimal precision and scale in GPU and CPU modes.

Basically, we will test Addition/Subtraction/Multiplication/Division/Modulo/Union in this post.
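
For reference, the CPU-side promotion formulas Spark applies (following the Hive/SQL Server convention, before capping precision at 38) can be sketched as plain functions. These are my reading of Spark's DecimalPrecision rules, not code from Spark itself:

```python
# Result (precision, scale) for Decimal(p1, s1) <op> Decimal(p2, s2),
# per Spark's DecimalPrecision promotion rules (before the 38-digit cap).

def add_sub(p1, s1, p2, s2):
    scale = max(s1, s2)
    return max(p1 - s1, p2 - s2) + scale + 1, scale

def multiply(p1, s1, p2, s2):
    return p1 + p2 + 1, s1 + s2

def divide(p1, s1, p2, s2):
    scale = max(6, s1 + p2 + 1)
    return p1 - s1 + s2 + scale, scale

def modulo(p1, s1, p2, s2):
    scale = max(s1, s2)
    return min(p1 - s1, p2 - s2) + scale, scale

def union(p1, s1, p2, s2):
    scale = max(s1, s2)
    return max(p1 - s1, p2 - s2) + scale, scale

# e.g. Decimal(10,2) + Decimal(10,2)
print(add_sub(10, 2, 10, 2))   # (11, 2)
```

The +1 in addition and multiplication reserves room for a carry digit; division widens the scale aggressively (at least 6 digits), which is why division results often hit the 38-digit cap first.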


Tuesday, April 27, 2021

Rapids Accelerator compatibility related to spark.sql.legacy.parquet.datetimeRebaseModeInWrite

Goal:

This article discusses the compatibility of the Rapids Accelerator for Spark when writing Parquet, focusing on parameters such as spark.sql.legacy.parquet.datetimeRebaseModeInWrite.

Tuesday, April 20, 2021

Spark Code -- Dig into SparkListenerEvent

Goal:

This article digs into the different types of SparkListenerEvent in the Spark event log with some examples. 

Understanding them can help us parse the Spark event log.
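
A Spark event log is a text file with one JSON object per line, each carrying an "Event" field that names the SparkListenerEvent subtype. A minimal parsing sketch, with fabricated sample lines for illustration:

```python
import json
from collections import Counter

# Fabricated sample of a Spark event log: one JSON object per line,
# each with an "Event" field naming the SparkListenerEvent subtype.
sample_log = """\
{"Event":"SparkListenerApplicationStart","App Name":"demo"}
{"Event":"SparkListenerJobStart","Job ID":0}
{"Event":"SparkListenerTaskEnd","Stage ID":0}
{"Event":"SparkListenerTaskEnd","Stage ID":0}
{"Event":"SparkListenerJobEnd","Job ID":0}
"""

def count_events(lines):
    """Count how many times each SparkListenerEvent type appears."""
    return Counter(json.loads(line)["Event"] for line in lines if line.strip())

print(count_events(sample_log.splitlines()))
```

The same pattern scales to real event logs: stream the file line by line, dispatch on the "Event" field, and pull the fields you care about (job IDs, stage IDs, task metrics) from each record.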

How to use latest version of Rapids Accelerator for Spark on EMR

Goal:

This article shows how to use the latest version of Rapids Accelerator for Spark on EMR. 

Currently, the latest EMR 6.2 only ships with Rapids Accelerator 0.2.0 and the cuDF 0.15 jar.

However, as of today, the latest Rapids Accelerator is 0.4.1 with the cuDF 0.18 jar.

Note: These are NOT official steps for enabling Rapids+Spark on EMR, just some technical research.
