Data Engineering/Systems/Cluster/Spark

From Wikitech

Spark is a powerful engine for processing data on the Analytics Cluster. You can drive it using SQL, Python, R, Java, or Scala.

As of August 2023, we are running Spark 3.1.2.

Command-line interfaces

There are a number of Spark command-line programs available on the analytics clients:

  • spark3-submit
  • spark3-shell
  • spark3R
  • spark3-sql
  • pyspark3
  • spark3-thriftserver

Note that other Spark documentation will use the standard names for these programs, without the 3 (e.g. spark-submit). We have added the 3 to prevent confusion with the programs from Spark 1 and 2.

spark3-sql allows you to interact with Hive tables directly via the Spark SQL engine, in a pure SQL REPL, without having to code in a programming language.

In the rest of this doc, the spark3 shell commands are used, as Spark 3 is the preferred installation. Note that our spark3 configuration defaults pyspark3 to using python3 (and ipython3 for the driver).

How do I ...

Start a spark shell in yarn

Note: The settings presented here are for a medium-size job on the cluster (~15% of the whole cluster)

  • Scala
spark3-shell --master yarn --executor-memory 8G --executor-cores 4 --driver-memory 2G --conf spark.dynamicAllocation.maxExecutors=64
  • Python
pyspark3 --master yarn --executor-memory 8G --executor-cores 4 --driver-memory 2G --conf spark.dynamicAllocation.maxExecutors=64
  • R
spark3R --master yarn --executor-memory 8G --executor-cores 4 --driver-memory 2G --conf spark.dynamicAllocation.maxExecutors=64
  • SQL
spark3-sql --master yarn --executor-memory 8G --executor-cores 4 --driver-memory 2G --conf spark.dynamicAllocation.maxExecutors=64
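All four launchers above take the same resource flags. As a convenience, a small hypothetical Python helper (not part of the cluster tooling; the function name is illustrative) can assemble the medium-size defaults in one place:

```python
# Hypothetical helper: build the common resource flags shared by the
# spark3-* launchers shown above (medium-size job, ~15% of the cluster).
def spark3_resource_flags(executor_memory="8G", executor_cores=4,
                          driver_memory="2G", max_executors=64):
    """Return the list of CLI flags for a medium-size YARN job."""
    return [
        "--master", "yarn",
        "--executor-memory", executor_memory,
        "--executor-cores", str(executor_cores),
        "--driver-memory", driver_memory,
        "--conf", f"spark.dynamicAllocation.maxExecutors={max_executors}",
    ]

# Prepend the launcher you want, e.g. pyspark3:
print(" ".join(["pyspark3"] + spark3_resource_flags()))
```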

Set the python version pyspark should use

As of June 2020, our installation of Spark works with Python 3.5 and 3.7. The default on Debian Stretch nodes is 3.5, and on Debian Buster nodes it is 3.7. Most Hadoop workers are on Stretch, so if you want to launch pyspark in YARN with Python 3.7, you should always specify the pyspark Python version like:

 PYSPARK_PYTHON=python3.7 pyspark3 --master yarn

See spark logs on my local machine when using spark submit

  • If you are running Spark in local mode, spark3-submit writes logs to your console by default.
  • How to get logs written to a file?
    • Spark uses log4j for logging, and the log4j config is usually at /etc/spark2/conf/
    • This uses a ConsoleAppender by default; if you want to write to files instead, an example log4j properties file would be:
# Set everything to be logged to the file
log4j.rootCategory=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/tmp/spark.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

This should write logs to /tmp/spark.log
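If you generate such files often, a hypothetical Python sketch (the helper name is illustrative, not cluster tooling) can render the same minimal properties text for any log path:

```python
# Hypothetical helper: render a minimal log4j 1.x properties file like the
# one above, pointing a FileAppender at a chosen log path.
def log4j_file_config(path="/tmp/spark.log", level="INFO"):
    """Return log4j properties text that sends all logs to `path`."""
    return "\n".join([
        f"log4j.rootCategory={level}, file",
        "log4j.appender.file=org.apache.log4j.FileAppender",
        f"log4j.appender.file.File={path}",
        "log4j.appender.file.layout=org.apache.log4j.PatternLayout",
        ("log4j.appender.file.layout.ConversionPattern="
         "%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n"),
    ])

print(log4j_file_config())
```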

  • On the analytics cluster (stat1007):
    • On the analytics cluster, running a Spark job through spark-submit writes logs to the console too, in both yarn and local modes
    • To write to a file, create a log4j properties file, similar to the one above, that uses the FileAppender
    • Use the --files argument to upload your custom properties file:
spark3-shell --master yarn --executor-memory 2G --executor-cores 1 --driver-memory 4G --files /path/to/your/
  • While running a spark job through Oozie
    • The log4j file path now needs to be a location accessible by all drivers/executors running in different machines
    • Putting the file on a temp directory on Hadoop and using a hdfs:// url should do the trick
    • Note that the logs will be written on the machine where the driver/executors are running - so you'd need access to go look at them

Monitor Spark shell job Resources

If you run some more complicated Spark in the shell and you want to see how YARN is managing resources, have a look at the YARN ResourceManager UI.

Don't hesitate to poke people on #wikimedia-analytics for help!

Use Hive UDF with Spark SQL

Here is an example in R. On stat1007, start a spark shell with the path to jar:

spark3R --master yarn --executor-memory 2G --executor-cores 1 --driver-memory 4G --jars /srv/deployment/analytics/refinery/artifacts/refinery-hive.jar

Then in the R session:

sql("CREATE TEMPORARY FUNCTION is_spider as ''")
sql("Your query")

pyspark and external packages

To use external packages like graphframes:

pyspark3 --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --conf "spark.driver.extraJavaOptions=-Dhttp.proxyHost=webproxy.eqiad.wmnet -Dhttp.proxyPort=8080 -Dhttps.proxyHost=webproxy.eqiad.wmnet -Dhttps.proxyPort=8080"

The proxy settings above avoid dependency resolution hanging at:

resolving dependencies :: org.apache.spark#spark-submit-parent;1.0

confs: [default]
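The proxy string in the pyspark3 command is long and easy to mistype; a hypothetical Python sketch (the helper name is illustrative) can assemble it so the host and port live in one place:

```python
# Hypothetical helper: assemble the extraJavaOptions proxy string used in
# the pyspark3 --packages example above.
PROXY_HOST = "webproxy.eqiad.wmnet"  # from the example above
PROXY_PORT = 8080

def proxy_java_options(host=PROXY_HOST, port=PROXY_PORT):
    """Return the -D proxy flags for both http and https."""
    opts = []
    for scheme in ("http", "https"):
        opts.append(f"-D{scheme}.proxyHost={host}")
        opts.append(f"-D{scheme}.proxyPort={port}")
    return " ".join(opts)

print(proxy_java_options())
```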

SparkR in production (stat100* machines) examples

SparkR: Basic example

From stat100*, and with the latest {SparkR} installed:

Note: This example starts a medium-size application (~15% of the cluster resources)


# - set environmental variables
Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn-client sparkr-shell")

# - start SparkR api session
sparkR.session(master = "yarn", 
   appName = "SparkR", 
   sparkHome = "/usr/lib/spark2/", 
   sparkConfig = list(spark.driver.memory = "2g", 
                      spark.driver.cores = "4", 
                      spark.executor.memory = "8g",
                      spark.dynamicAllocation.maxExecutors = "64",
                      spark.enableHiveSupport = TRUE))

# - a somewhat trivial example w. linear regression on iris 

# - iris becomes a SparkDataFrame
df <- createDataFrame(iris)

# - GLM w. family = "gaussian"
model <- spark.glm(data = df, Sepal_Length ~ Sepal_Width + Petal_Length + Petal_Width, family = "gaussian")

# - summary
summary(model)

# - end SparkR session
sparkR.session.stop()

SparkR: Large(er) file from HDFS

Also from stat100*, and with the latest {SparkR} installed:

Note: This example starts a large application (~30% of the cluster)

### --- flights dataset Multinomial Logistic Regression
### --- SparkDataFrame from HDFS
### --- NOTE: in this example, 'flights.csv' is found in /home/goransm/testData on stat1007

Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn-client sparkr-shell")

### --- Start SparkR session w. Hive Support enabled
sparkR.session(master = "yarn",
               appName = "SparkR",
               sparkHome = "/usr/lib/spark2/",
               sparkConfig = list(spark.driver.memory = "4g",
                                  spark.dynamicAllocation.maxExecutors = "128",
                                  spark.executor.cores  = "4",
                                  spark.executor.memory = "8g",
                                  spark.enableHiveSupport = TRUE))

# - copy flights.csv to HDFS
system('hdfs dfs -put /home/goransm/testData/flights.csv hdfs://analytics-hadoop/user/goransm/flights.csv', 
       wait = T)

# - load flights
df <- read.df("flights.csv",
              source = "csv",
              header = "true",
              inferSchema = "true",
              na.strings = "NA")

# - structure
str(df)

# - dimensionality
dim(df)
# - clean up df from NA values
df <- filter(df, isNotNull(df$AIRLINE) & isNotNull(df$ARRIVAL_DELAY) & isNotNull(df$AIR_TIME) & isNotNull(df$TAXI_IN) & 
                 isNotNull(df$TAXI_OUT) & isNotNull(df$DISTANCE) & isNotNull(df$ELAPSED_TIME))

# - dimensionality
dim(df)

# - Generalized Linear Model w. family = "multinomial"
model <- spark.logit(data = df,
                     AIRLINE ~ ARRIVAL_DELAY + AIR_TIME + TAXI_IN + TAXI_OUT + DISTANCE + ELAPSED_TIME,
                     family = "multinomial")

# - Regression Coefficients
res <- summary(model)

# - delete flights.csv from HDFS
system('hdfs dfs -rm hdfs://analytics-hadoop/user/goransm/flights.csv', wait = T)

# - close SparkR session
sparkR.session.stop()

Spark Resource Settings

Spark jobs are highly configurable and no setting is optimal for all jobs. However, this section provides some good guidelines and starting points.

Regular jobs

A good starting point for regular jobs is the following combination of settings. These settings allow the job to use roughly as much as 15% of cluster resources.

"spark.driver.memory": "2g",
"spark.dynamicAllocation.maxExecutors": 64,
"spark.executor.memory": "8g",
"spark.executor.cores": 4,
"spark.sql.shuffle.partitions": 256

Large jobs

A good starting point for large jobs is the following combination of settings. These settings allow the job to use roughly as much as 30% of cluster resources.

"spark.driver.memory": "4g",
"spark.dynamicAllocation.maxExecutors": 128
"spark.executor.memory": "8g",
"spark.executor.cores": 4,
"spark.sql.shuffle.partitions": 512

Extra large jobs

Many Spark default settings are not optimal for large-scale jobs (roughly, those that handle a terabyte or more of data across stages, or that have tens of thousands of stages). This article from the Facebook technical team gives hints on how to better tune Spark in those cases. In this section we explain how each tuning helps.

Scaling the driver

  • First, make sure your job uses dynamic allocation. It's enabled by default on the analytics-cluster, but can be turned off. This will ensure a better use of resources across the cluster. If your job fails because of errors at shuffle (due to the external shuffle service), the tuning below should help.
  • Allow for more consecutive attempts per stage (default is 4; 10 is suggested): spark.stage.maxConsecutiveAttempts = 10. This tweak helps deal with fetch failures, which usually happen when an executor is no longer available (dead because of OOM or cluster resource preemption, for instance). In such a case, other executors fail fetching data, leading to failed stages. Bumping the number of possible consecutive attempts allows more room for error recovery.
  • Increase the RPC server threads to prevent out-of-memory errors: spark.rpc.io.serverThreads = 64 (there is no clear documentation on why this helps; presumably, since spark.rpc.connect.threads = 64, it is better to have the same number of server threads answering, but I have not found proper information).


  • Manually set spark.yarn.executor.memoryOverhead when using big executors or when using a lot of string values (interned strings are stored in the memory buffer). By default Spark allocates 0.1 * total-executor-memory for the buffer, which can be too small.
  • Increase shuffle file buffer size to reduce the number of disk seeks and system calls made: spark.shuffle.file.buffer = 1 MB and spark.unsafe.sorter.spill.reader.buffer.size = 1 MB
  • Optimize spill file merging by allowing newly computed streams to be merged into existing files (useful when the job spills a lot): spark.file.transferTo = false, spark.shuffle.file.buffer = 1 MB and spark.shuffle.unsafe.file.output.buffer = 5 MB
  • Reduce spilled data size by increasing the compression block size: spark.io.compression.lz4.blockSize = 512KB
  • If needed: enable off-heap memory if GC pauses become problematic (not needed for analytics jobs so far): spark.memory.offHeap.enabled = true and spark.memory.offHeap.size = 3g (don't forget that the off-heap memory is part of the YARN container, therefore your container is of size: executor-memory + spark.memory.offHeap.size)
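The container-size arithmetic from the bullets above can be sketched as follows. This is an illustrative helper (not cluster tooling), assuming Spark's documented default overhead of max(10% of executor memory, 384 MB):

```python
# Sketch: a YARN container holds the executor heap, the memory overhead
# buffer, and (if enabled) the off-heap region.
def yarn_container_mb(executor_memory_mb, overhead_mb=None, offheap_mb=0):
    """Return the approximate YARN container size in MB."""
    # Spark's default overhead is max(10% of executor memory, 384 MB),
    # which is why big executors may need memoryOverhead set manually.
    if overhead_mb is None:
        overhead_mb = max(int(0.10 * executor_memory_mb), 384)
    return executor_memory_mb + overhead_mb + offheap_mb

# An 8g executor with 3g off-heap and default overhead:
print(yarn_container_mb(8192, offheap_mb=3072))
```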

External shuffle service

  • Speed up file retrieval by bumping the cache size available for the file index: spark.shuffle.service.index.cache.size = 2048