Develop a Spark application
Objectives: Develop a simple Spark application.
Complexity: Beginner
Time to complete: 30 min
Source code: Download
This tutorial demonstrates how to develop, debug, and run a simple Spark application that reads data from the versioned layer, and logs this data using log4j.
The tutorial covers the following topics:
Set up the Maven project
Download the source code at the beginning of the tutorial and save it in a folder of your choice, or create a folder structure for your project from scratch:
develop-spark-application
└── src
└── main
├── java
└── resources
└── scala
You can do this with a single bash command:
mkdir -p develop-spark-application/src/main/{java,resources,scala}
The Maven POM file is similar to the one in the Verify Maven Settings example, but with updated parent POM and dependencies sections:
The parent POM is sdk-batch-bom_${scala.compat.version}, because we need to use Spark-related libraries.
<parent>
<groupId>com.here.platform</groupId>
<artifactId>sdk-batch-bom_2.12</artifactId>
<version>2.51.5</version>
<relativePath/>
</parent>
The following dependencies are used:
- com.here.platform.data.client:local-support_${scala.compat.version} to read data from a local data catalog.
- com.here.platform.data.client:spark-support_${scala.compat.version} to read data from data catalogs on the platform.
- org.apache.spark:spark-core_${scala.compat.version} to run a Java/Scala Spark application.
- com.here.platform.pipeline:pipeline-interface_${scala.compat.version} to get information about input catalogs from the PipelineContext.
Dependencies:
<dependencies>
<dependency>
<groupId>com.here.platform.data.client</groupId>
<artifactId>spark-support_${scala.compat.version}</artifactId>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.compat.version}</artifactId>
</dependency>
<dependency>
<groupId>com.here.platform.pipeline</groupId>
<artifactId>pipeline-interface_${scala.compat.version}</artifactId>
</dependency>
<dependency>
<groupId>com.here.platform.data.client</groupId>
<artifactId>local-support_${scala.compat.version}</artifactId>
</dependency>
<dependency>
<groupId>com.here.hrn</groupId>
<artifactId>hrn_${scala.compat.version}</artifactId>
</dependency>
</dependencies>
Once you have added all the necessary dependencies to the pom.xml file, the next step is to write the application code and run it.
Write the source code
As mentioned before, the tutorial shows how to write a simple Spark application that reads data from a versioned layer and outputs the data to the console. The data is added to the versioned layer before the execution of your batch application using the OLP CLI. All data read from the layer is logged to the console using log4j. The log4j configuration is located in the src/main/resources/log4j.properties file.
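If you set up the project from scratch rather than from the downloaded archive, a minimal log4j.properties that sends INFO-level output to the console could look like the following sketch (the file shipped with the tutorial may differ):
# Log INFO and above to the console
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}: %m%n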
Let's look at the implementation of this Spark application. In the code snippet below, you can see that JavaSparkContext / SparkContext is used to distribute a local collection to form a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. The RDD of Partition objects is created by the parallelize() call in the queryMetadata method, which queries metadata from the versioned layer using the QueryApi and then parallelizes this data to get an RDD. Once we have the RDD with the partition metadata, the next steps are to download the partitions, map them to human-readable strings, and log them to the console.
For more information on what the application does, see the comments in the code below.
import akka.actor.ActorSystem;
import akka.actor.CoordinatedShutdown;
import com.here.hrn.HRN;
import com.here.platform.data.client.engine.javadsl.DataEngine;
import com.here.platform.data.client.javadsl.DataClient;
import com.here.platform.data.client.javadsl.Partition;
import com.here.platform.data.client.javadsl.QueryApi;
import com.here.platform.data.client.spark.DataClientSparkContextUtils;
import com.here.platform.pipeline.PipelineContext;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.OptionalLong;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class DevelopSparkApplication {
private static final Logger LOGGER = LoggerFactory.getLogger(DevelopSparkApplication.class);
private static final String LAYER_ID = "versioned-layer-custom-data";
public static void main(String[] args) {
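// Create the Spark context; the Spark master (for example local[*]) is supplied via the spark.master system property at run time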
JavaSparkContext sparkContext =
new JavaSparkContext(new SparkConf().setAppName("SparkPipeline"));
PipelineContext pipelineContext = new PipelineContext();
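// Read the HRN of the input catalog from the pipeline configuration (pipeline-config.conf, key input-catalogs.sparkCatalog)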
HRN inputCatalog = pipelineContext.getConfig().getInputCatalogs().get("sparkCatalog");
ActorSystem sparkActorSystem = ActorSystem.create("DevelopSparkApplication");
try {
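// Query the partition metadata of the versioned layer and distribute it as an RDD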
JavaRDD<Partition> layerMetadata =
queryMetadata(inputCatalog, sparkContext, sparkActorSystem);
CatalogReader catalogReader = new CatalogReader(inputCatalog);
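// Download each partition and decode its payload as a string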
JavaRDD<String> partitionData = layerMetadata.map(catalogReader::read);
partitionData.foreach(
partitionContent -> {
if (partitionContent.contains("THROW_EXCEPTION")) {
throw new RuntimeException("About to throw an exception");
}
LOGGER.info(System.lineSeparator() + partitionContent);
});
} finally {
CoordinatedShutdown.get(sparkActorSystem)
.runAll(CoordinatedShutdown.unknownReason())
.toCompletableFuture()
.join();
}
}
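// Queries the latest catalog version and the partition metadata of the layer, then parallelizes the metadata into an RDD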
private static JavaRDD<Partition> queryMetadata(
HRN catalog, JavaSparkContext sparkContext, ActorSystem sparkActorSystem) {
QueryApi query = DataClient.get(sparkActorSystem).queryApi(catalog);
OptionalLong latestVersion =
query.getLatestVersion(OptionalLong.of(0)).toCompletableFuture().join();
ArrayList<Partition> partitions = new ArrayList<>();
query
.getPartitionsAsIterator(latestVersion.getAsLong(), LAYER_ID, Collections.emptySet())
.toCompletableFuture()
.join()
.forEachRemaining(partitions::add);
return sparkContext.parallelize(partitions);
}
}
class CatalogReader implements Serializable {
private final HRN catalog;
CatalogReader(HRN catalog) {
this.catalog = catalog;
}
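// Downloads the partition payload and converts it to a human-readable string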
String read(Partition partition) {
byte[] downloadedPartition = readRaw(partition);
String partitionContent = new String(downloadedPartition);
return partitionContent;
}
private byte[] readRaw(Partition partition) {
return DataEngine.get(DataClientSparkContextUtils.context().actorSystem())
.readEngine(catalog)
.getDataAsBytes(partition)
.toCompletableFuture()
.join();
}
}
import akka.actor.{ActorSystem, CoordinatedShutdown}
import com.here.hrn.HRN
import com.here.platform.data.client.engine.javadsl.DataEngine
import com.here.platform.data.client.javadsl.{DataClient, Partition}
import com.here.platform.data.client.model.AdditionalFields
import com.here.platform.data.client.spark.DataClientSparkContextUtils
import com.here.platform.pipeline.PipelineContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.rdd.RDD
import org.slf4j.LoggerFactory
import java.io.Serializable
import java.util
import java.util.OptionalLong
object DevelopSparkApplicationScala {
private val Logger = LoggerFactory.getLogger(classOf[DevelopSparkApplication])
private val LayerId = "versioned-layer-custom-data"
def main(args: Array[String]): Unit = {
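// Create the Spark context; the Spark master (for example local[*]) is supplied via the spark.master system property at run time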
val sparkContext = new SparkContext(new SparkConf().setAppName("SparkPipeline"))
val pipelineContext = new PipelineContext
val inputCatalog = pipelineContext.getConfig.getInputCatalogs.get("sparkCatalog")
val sparkActorSystem = ActorSystem.create("DevelopSparkApplication")
try {
val layerMetadata = queryMetadata(inputCatalog, sparkContext, sparkActorSystem)
val catalogReader = new CatalogReaderScala(inputCatalog)
val partitionData = layerMetadata.map(catalogReader.read)
partitionData.foreach(partitionContent => {
if (partitionContent.contains("THROW_EXCEPTION")) {
throw new RuntimeException("About to throw an exception")
}
Logger.info(System.lineSeparator() + partitionContent)
})
} finally {
CoordinatedShutdown
.get(sparkActorSystem)
.runAll(CoordinatedShutdown.unknownReason)
.toCompletableFuture
.join
}
}
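// Queries the latest catalog version and the partition metadata of the layer, then parallelizes the metadata into an RDD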
private def queryMetadata(catalog: HRN,
sparkContext: JavaSparkContext,
sparkActorSystem: ActorSystem): RDD[Partition] = {
val query = DataClient.get(sparkActorSystem).queryApi(catalog)
val latestVersion =
query.getLatestVersion(OptionalLong.of(0)).toCompletableFuture.join
val partitions = new util.ArrayList[Partition]()
query
.getPartitionsAsIterator(latestVersion.getAsLong, LayerId, AdditionalFields.AllFields)
.toCompletableFuture
.join
.forEachRemaining(part => partitions.add(part))
sparkContext.parallelize(partitions)
}
}
class CatalogReaderScala(val catalog: HRN) extends Serializable {
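// Downloads the partition payload and converts it to a human-readable string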
def read(partition: Partition) = {
val downloadedPartition = readRaw(partition)
val partitionContent = new String(downloadedPartition)
partitionContent
}
private def readRaw(partition: Partition) =
DataEngine
.get(DataClientSparkContextUtils.context.actorSystem)
.readEngine(catalog)
.getDataAsBytes(partition)
.toCompletableFuture
.join
}
Once the code is complete, you can prepare the resources and run the application.
Run the application
To run the application, you need to prepare the resources: create a catalog with a versioned layer and put some custom data into the layer.
In this tutorial, we run the application locally; therefore, it is enough to create a local catalog. No authentication or access to the external network is needed, because we use local catalogs. Since local catalogs exist only on your machine, they are not subject to naming conflicts within your realm, and you can use any name you want.
To create a local input catalog with a versioned layer and a generic partitioning scheme, you need the catalog-configuration.json config file, which lets you create a catalog with a layer using a single OLP CLI command. You can find this file in the archive you downloaded at the beginning of the tutorial. Run the following OLP CLI command from the root of the tutorial folder to create a local catalog:
olp local catalog create batch-catalog batch-catalog --config catalog-configuration.json
The structure of the catalog-configuration.json
file is as follows:
{
"id": "develop-spark-input",
"name": "Simulated road topology data archive (From tutorial) spark-connector-input",
"summary": "Archive of simulated road topology data",
"description": "Archive of simulated road topology data.",
"layers": [
{
"id": "versioned-layer-custom-data",
"name": "versioned-layer-custom-data",
"summary": "Simulated data.",
"description": "Simulated road topology data for versioned-layer-custom-data",
"contentType": "application/octet-stream",
"layerType": "versioned",
"volume": {
"volumeType": "durable"
},
"partitioning": {
"scheme": "generic"
}
}
]
}
Note
If a billing tag is required in your realm, update the config file by adding the billingTags: ["YOUR_BILLING_TAG"]
property to the layer
section.
As mentioned in the Set up the Maven project chapter, the PipelineContext
is used to get information about the input catalog from the pipeline-config.conf
file. The structure of the pipeline-config.conf
file is as follows:
pipeline.config {
output-catalog {hrn = "OUTPUT_CATALOG_HRN"}
input-catalogs {
sparkCatalog {hrn = "INPUT_CATALOG_HRN"}
}
}
Although we do not use the output catalog in this tutorial, we need to create it to fill in the output-catalog field in the config file; otherwise, you will get an error about an invalid catalog HRN.
To create a local output catalog, you need the output-catalog-configuration.json config file, which lets you create a catalog with a layer using a single OLP CLI command. You can find this file in the archive you downloaded at the beginning of the tutorial. The structure of the output-catalog-configuration.json file is as follows:
{
"id": "develop-spark-output",
"name": "Simulated road topology data archive (From tutorial) spark-connector-input",
"summary": "Archive of simulated road topology data",
"description": "Archive of simulated road topology data.",
"layers": [
{
"id": "versioned-layer-custom-data",
"name": "versioned-layer-custom-data",
"summary": "Simulated data.",
"description": "Simulated road topology data for versioned-layer-custom-data",
"contentType": "application/octet-stream",
"layerType": "versioned",
"volume": {
"volumeType": "durable"
},
"partitioning": {
"scheme": "generic"
}
}
]
}
Run the following OLP CLI command from the root of the tutorial folder to create a local catalog:
olp local catalog create output-batch-catalog output-batch-catalog --config output-catalog-configuration.json
The next step is to push some data to the input catalog. To do so, run the following OLP CLI command from the root of the tutorial folder:
olp local catalog layer partition put hrn:local:data:::batch-catalog versioned-layer-custom-data --partitions partition:data/partition_content
As a result, the following content is published to the versioned layer:
###########################################
## First HERE Platform Spark Application ##
###########################################
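The data/partition_content file containing this text is part of the downloaded archive. If you set up the project from scratch, you can create an equivalent file yourself, for example with the following bash snippet:
mkdir -p data
cat > data/partition_content << 'EOF'
###########################################
## First HERE Platform Spark Application ##
###########################################
EOF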
Once the input and output catalogs are created and the data is published, you need to replace the INPUT_CATALOG_HRN and OUTPUT_CATALOG_HRN placeholders in the pipeline-config.conf file with the catalog HRNs from the previous command responses.
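For example, with the local catalogs created in this tutorial, the resulting file would look similar to the following sketch; the HRNs shown here assume the local HRN scheme hrn:local:data:::<catalog-id>, so use the exact values returned by the OLP CLI:
pipeline.config {
  output-catalog {hrn = "hrn:local:data:::output-batch-catalog"}
  input-catalogs {
    sparkCatalog {hrn = "hrn:local:data:::batch-catalog"}
  }
}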
After you have replaced the placeholders, run the application from the root of the downloaded tutorial using the following command:
mvn compile exec:java -D"exec.mainClass"="DevelopSparkApplication" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
mvn compile exec:java -D"exec.mainClass"="DevelopSparkApplicationScala" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
The command has the following parameters:
- exec.mainClass – the entry point of your application.
- here.platform.data-client.endpoint-locator.discovery-service-env=local – configures the Data Client Library to use only local catalogs.
- spark.master=local[*] – configures a local Spark run with as many worker threads as there are logical cores on your machine.
- pipeline-config.file=pipeline-config.conf – the config file with information about the input and output catalogs.
After the application finishes successfully, you can see the data that was added to the versioned layer in the console.
Attach the debugger
In this chapter, you will learn how to debug Spark applications using the capabilities of IntelliJ IDEA, specifically how to attach the debugger to an already running process when you start your program from the console.
To configure the debugger, you need to set the MAVEN_OPTS variable. To do so, stop the running application and run one of the following commands. The two forms differ only in the address value: address=*:5005 is the Java 9+ syntax for listening on all network interfaces, while plain address=5005 works on Java 8 and, on Java 9 and later, binds the debug port to localhost only.
export MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
export MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005
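If you run the tutorial from a Windows command prompt, you can set the variable with a plain set instead (this variant is an assumption and not part of the downloaded tutorial):
set MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005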
The option has the following parameters:
- address – the port used for debugging. The tutorial uses port 5005; however, you can use any free port.
- server=y – specifies that the process should listen for incoming debugger connections (act as a server).
- suspend=y – specifies that the process should wait until the debugger has connected.
Let's set a breakpoint before attaching to the running application, for example, in the CatalogReader class where we download and map the partition. From this line, you can get a lot of useful information about the downloaded partition:

Now you can run your application from the root folder using the following command:
mvn compile exec:java -D"exec.mainClass"="DevelopSparkApplication" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
mvn compile exec:java -D"exec.mainClass"="DevelopSparkApplicationScala" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
Make sure that the line Listening for transport dt_socket at address: 5005 appears in the logs.
Now we can attach to the process using Run > Attach to Process and select the process listening on the specified port 5005.

Once you attach to the process, the debugger should stop at the breakpoint as soon as the application starts to download the partition. Now you can step through the code and inspect the contents of the variables and the stack trace.

You can also use the standard Java Debugger instead of the debugger in IntelliJ IDEA.
Project generation using Maven archetype
You can use a Maven archetype to bootstrap a Maven project for a Spark application. In this case, the project is set up faster with the following tasks completed automatically:
- Inclusion of the SDK BOM file.
- Creation of the Maven profile that generates the fat JAR for the platform.
The HERE Data SDK offers the following archetypes:
- batch-direct1ton-java-archetype and batch-direct1ton-scala-archetype for Direct1toN compilation for Java and Scala
- batch-directmton-java-archetype and batch-directmton-scala-archetype for DirectMtoN compilation for Java and Scala
- batch-reftree-java-archetype and batch-reftree-scala-archetype for RefTree compilation for Java and Scala
- batch-mapgroup-java-archetype and batch-mapgroup-scala-archetype for MapGroup compilation for Java and Scala
For more information on batch pipeline design patterns, see the Data Processing Library and Compilation Patterns.
To create a Spark application project using the DirectMtoN
compiler, use the following command for the Java project:
mvn archetype:generate -DarchetypeGroupId=com.here.platform \
-DarchetypeArtifactId=batch-directmton-java-archetype \
-DarchetypeVersion=1.0.880 \
-DgroupId=com.here.platform.tutorial \
-DartifactId=develop-spark-application \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.here.platform.tutorial
mvn archetype:generate -DarchetypeGroupId=com.here.platform ^
-DarchetypeArtifactId=batch-directmton-java-archetype ^
-DarchetypeVersion=1.0.880 ^
-DgroupId=com.here.platform.tutorial ^
-DartifactId=develop-spark-application ^
-Dversion=1.0-SNAPSHOT ^
-Dpackage=com.here.platform.tutorial
For the Scala project use the following command:
mvn archetype:generate -DarchetypeGroupId=com.here.platform \
-DarchetypeArtifactId=batch-directmton-scala-archetype \
-DarchetypeVersion=1.0.880 \
-DgroupId=com.here.platform.tutorial.scala \
-DartifactId=develop-spark-application \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.here.platform.tutorial.scala
mvn archetype:generate -DarchetypeGroupId=com.here.platform ^
-DarchetypeArtifactId=batch-directmton-scala-archetype ^
-DarchetypeVersion=1.0.880 ^
-DgroupId=com.here.platform.tutorial.scala ^
-DartifactId=develop-spark-application ^
-Dversion=1.0-SNAPSHOT ^
-Dpackage=com.here.platform.tutorial.scala
To generate a project using another compiler, change the value of the -DarchetypeArtifactId
property to the desired archetype ID.
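For example, to generate a Java project that uses the RefTree compiler instead, you could run the following command; this sketch assumes that the archetype version 1.0.880 shown above also applies to the RefTree archetype:
mvn archetype:generate -DarchetypeGroupId=com.here.platform \
-DarchetypeArtifactId=batch-reftree-java-archetype \
-DarchetypeVersion=1.0.880 \
-DgroupId=com.here.platform.tutorial \
-DartifactId=develop-spark-application \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.here.platform.tutorial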
Build your project to run locally
To build your project, run the following command in your project folder.
mvn install
To run your pipeline on the platform, you first need to build a fat JAR. To build it, use the following command:
mvn install -Pplatform
For more information on building a fat JAR, see Include the SDK in your project.
Conclusion
In this tutorial, you have learned the stages of Spark application development. To learn how to run a Spark application on the platform and get acquainted with monitoring tools such as Splunk, Grafana, the Spark UI, and the Platform Billing page, see the Run a Spark application on the platform tutorial.
For more details on the topics covered in this tutorial, see the following sources: