
I'm using Spark SQL 1.6.1 and am performing a few joins.

Looking at the Spark UI, I see that there are some jobs with the description "run at ThreadPoolExecutor.java:1142".

Example of some of these jobs

I was wondering: why do some Spark jobs get that description?

  • This has changed in Spark 3.x, where we now see broadcast exchange (runId 299b13c3-e4cf-4b3f-903f-44497bf192d7) $anonfun$withThreadLocalCaptured$1 at FutureTask.java:266 instead. Jul 15, 2023 at 8:56

2 Answers


After some investigation I found out that run at ThreadPoolExecutor.java:1142 Spark jobs are related to queries with join operators that fit the definition of BroadcastHashJoin, where one side of the join is broadcast to the executors for the join.

That BroadcastHashJoin operator uses a ThreadPool for this asynchronous broadcasting (see this and this).
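
If you want to check the connection yourself, you can steer the planner toward or away from BroadcastHashJoin. A minimal sketch (assuming a SparkSession named spark, as in spark-shell; the Datasets are mine):

import org.apache.spark.sql.functions.broadcast

// -1 disables automatic broadcast joins, so the planner falls back to a
// shuffle-based join and the ThreadPoolExecutor jobs should disappear.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1L)

// The broadcast hint forces one side to be broadcast regardless of the
// threshold, which brings the ThreadPoolExecutor jobs back.
val small = spark.range(1)
val big = spark.range(1000000)
big.join(broadcast(small), Seq("id")).show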

scala> spark.version
res16: String = 2.1.0-SNAPSHOT

scala> val left = spark.range(1)
left: org.apache.spark.sql.Dataset[Long] = [id: bigint]

scala> val right = spark.range(1)
right: org.apache.spark.sql.Dataset[Long] = [id: bigint]

scala> left.join(right, Seq("id")).show
+---+
| id|
+---+
|  0|
+---+
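
You can also see the broadcast in the physical plan with explain. The output below is a sketch of what a 2.x session prints (operator names and formatting vary across versions), not a verbatim copy of the run above:

scala> left.join(right, Seq("id")).explain
== Physical Plan ==
*Project [id#0L]
+- *BroadcastHashJoin [id#0L], [id#4L], Inner, BuildRight
   :- *Range (0, 1, step=1, splits=Some(8))
   +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, false]))
      +- *Range (0, 1, step=1, splits=Some(8))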

When you switch to the SQL tab, you should see the Completed Queries section and their Jobs (on the right).

SQL tab in web UI with Completed Queries

In my case the Spark jobs running as "run at ThreadPoolExecutor.java:1142" were ids 12 and 16.

Jobs tab in web UI with "run at ThreadPoolExecutor.java:1142" jobs

They both correspond to join queries.

If you wonder "it makes sense that one of my joins is causing this job to appear, but as far as I know join is a shuffle transformation and not an action, so why is the job described with the ThreadPoolExecutor and not with my action (as is the case with the rest of my jobs)?", then my answer usually goes along these lines:

Spark SQL is an extension of Spark with its own abstractions (Datasets, to name just the one that quickly springs to mind) that have their own operators for execution. One "simple" SQL operation can run one or more Spark jobs. It's at the discretion of Spark SQL's execution engine how many Spark jobs to run or submit (but they do use RDDs under the covers). You don't have to know such low-level details as it's...well...too low-level...given how high-level you sit by using Spark SQL's SQL or Query DSL.
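
If you'd rather observe this from code than from the web UI, here is a small sketch (the listener body and the printing are mine, but SparkListener and the "spark.job.description" property are standard Spark APIs) that logs every job a query submits:

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

// Print every job's id and description as it starts. The web UI shows
// "spark.job.description" when it is set; jobs submitted from the
// broadcast thread pool have no description, so the UI falls back to
// their call site, e.g. "run at ThreadPoolExecutor.java:1142".
spark.sparkContext.addSparkListener(new SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    val desc = Option(jobStart.properties)
      .flatMap(p => Option(p.getProperty("spark.job.description")))
      .getOrElse("(no description)")
    println(s"Job ${jobStart.jobId}: $desc")
  }
})

left.join(right, Seq("id")).show   // a single query, possibly several jobs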

  • Dr. Laskowski to the rescue Jan 19, 2019 at 21:09
  • Seriously impressive, seriously impressive. Awesome! Jun 27, 2019 at 9:45

This also happens when reading from and writing to CSV.

Those operations were where I first witnessed this ThreadPoolExecutor job description.
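
A minimal sketch, assuming the extra jobs come from schema inference (one common cause of a separate scan job on CSV reads); the paths are made up:

// inferSchema forces an extra pass over the file, which shows up as a
// separate job in the UI before the query that actually uses the data.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/tmp/example.csv")   // hypothetical path

df.write.mode("overwrite").csv("/tmp/example_out")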
