Develop a Flink application
Objectives: Develop a simple Flink application.
Complexity: Beginner
Time to complete: 30 min
Source code: Download
This tutorial demonstrates how to develop, debug, and run a simple Flink application that reads data from a stream layer and logs that data using log4j.
The tutorial covers the following topics:
- Set up the Maven project
- Write the source code
- Run the application
- Attach the debugger
- Project generation using Maven archetype
- Build your project to run locally
Set up the Maven project
Download the source code at the beginning of the tutorial and put it in a folder of your choice, or create a folder structure for your project from scratch:
develop-flink-application
└── src
    └── main
        ├── java
        ├── resources
        └── scala
You can do this with a single bash command:
mkdir -p develop-flink-application/src/main/{java,resources,scala}
The Maven POM file is similar to the one in the Verify Maven Settings example, but with updated parent POM and dependencies sections.
The parent POM is sdk-stream-bom_${scala.compat.version}, because we need to use Flink-related libraries.
<parent>
    <groupId>com.here.platform</groupId>
    <artifactId>sdk-stream-bom_2.12</artifactId>
    <version>2.51.5</version>
    <relativePath/>
</parent>
The following dependencies are used:
- com.here.platform.data.client:local-support_${scala.compat.version} to read data from a local data catalog.
- org.apache.flink:flink-clients_${scala.compat.version} to provide an ExecutorFactory for the Flink application.
- org.apache.flink:flink-streaming-java_${scala.compat.version} to run a Java Flink application.
- org.apache.flink:flink-streaming-scala_${scala.compat.version} to run a Scala Flink application.
- com.here.platform.data.client:flink-support_${scala.compat.version} to read data from the data catalogs on the platform.
- org.slf4j:slf4j-log4j12 to log application results to the console and to Splunk on the platform.
- com.here.platform.pipeline:pipeline-interface_${scala.compat.version} to get information about input catalogs from the PipelineContext.
Dependencies:
<dependencies>
    <dependency>
        <groupId>com.here.platform.data.client</groupId>
        <artifactId>flink-support_${scala.compat.version}</artifactId>
    </dependency>
    <dependency>
        <groupId>com.here.platform.data.client</groupId>
        <artifactId>local-support_2.12</artifactId>
        <exclusions>
            <exclusion>
                <groupId>com.here.platform.data.client</groupId>
                <artifactId>client-core_2.12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>com.here.platform.pipeline</groupId>
        <artifactId>pipeline-interface_${scala.compat.version}</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_${scala.compat.version}</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.compat.version}</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.compat.version}</artifactId>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
    </dependency>
</dependencies>
Once you have added all the necessary dependencies to the pom.xml file, the next step is to write the application code and run it.
Write the source code
As mentioned before, this tutorial shows how to write a simple Flink application that reads data from a stream layer and outputs it to the console. The data is added to the stream layer with the OLP CLI while your stream application is running. All data read from the layer is logged to the console using log4j. The log4j configuration is located in the src/main/resources/log4j.properties file.
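The exact configuration ships with the tutorial's source code; a minimal console-only log4j configuration would look roughly like this sketch (the bundled file may differ):
# Sketch of a log4j 1.x configuration that logs INFO and above to the console
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n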
Let's look at the implementation of this Flink application. In the code snippet below, you can see that FlinkQueryApi is used to create a subscription that receives new data added to the stream layer. Once the subscription is created and the DataStreamSource source function that emits new data from the stream layer is declared, we map partitions using the SampleMapper class, which extends RichMapFunction and thus provides setup and teardown methods. The SampleMapper class downloads each partition and logs its content as a human-readable string. The Java and Scala implementations follow.
import com.here.hrn.HRN;
import com.here.platform.data.client.flink.javadsl.FlinkDataClient;
import com.here.platform.data.client.flink.javadsl.FlinkQueryApi;
import com.here.platform.data.client.flink.javadsl.FlinkReadEngine;
import com.here.platform.data.client.javadsl.Partition;
import com.here.platform.data.client.settings.ConsumerSettings;
import com.here.platform.pipeline.PipelineContext;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DevelopFlinkApplication {
  private static final Logger LOGGER = LoggerFactory.getLogger(DevelopFlinkApplication.class);
  private static final String STREAMING_LAYER = "streaming-layer";
  private static final String CONSUMER_GROUP_NAME = "flink";
  private static HRN catalogHrn;

  public static void main(String[] args) throws Exception {
    PipelineContext pipelineContext = new PipelineContext();
    catalogHrn = pipelineContext.getConfig().getInputCatalogs().get("input-catalog");
    final FlinkDataClient flinkDataClient = new FlinkDataClient();
    FlinkQueryApi queryApi = flinkDataClient.queryApi(catalogHrn);
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    ConsumerSettings consumerSettings =
        new ConsumerSettings.Builder()
            .withGroupName(CONSUMER_GROUP_NAME)
            .withLatestOffset()
            .build();
    SourceFunction<Partition> subscriptionFunction =
        queryApi.subscribe(STREAMING_LAYER, consumerSettings);
    DataStreamSource<Partition> partitions = env.addSource(subscriptionFunction);
    partitions.map(new SampleMapper());
    env.execute();
  }

  private static class SampleMapper extends RichMapFunction<Partition, String> {
    private transient FlinkDataClient flinkDataClient;
    private transient FlinkReadEngine flinkReadEngine;

    @Override
    public void open(Configuration parameters) {
      flinkDataClient = new FlinkDataClient();
      flinkReadEngine = flinkDataClient.readEngine(catalogHrn);
    }

    @Override
    public String map(Partition partition) {
      String partitionContent = new String(flinkReadEngine.getDataAsBytes(partition));
      LOGGER.info(partitionContent);
      return partitionContent;
    }

    @Override
    public void close() {
      flinkDataClient.terminate();
    }
  }
}
The same application in Scala:
import com.here.hrn.HRN
import com.here.platform.data.client.flink.javadsl.{FlinkDataClient, FlinkReadEngine}
import com.here.platform.data.client.javadsl.Partition
import com.here.platform.data.client.settings.{ConsumerSettings, LatestOffset}
import com.here.platform.pipeline.PipelineContext
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.slf4j.LoggerFactory

import java.io.Serializable

object DevelopFlinkApplicationScala {
  private final val Logger = LoggerFactory.getLogger(DevelopFlinkApplicationScala.getClass)
  private final val StreamingLayer = "streaming-layer"
  private final val ConsumerGroupName = "flink"

  def main(args: Array[String]): Unit = {
    val pipelineContext = new PipelineContext
    val catalogHrn = pipelineContext.getConfig.getInputCatalogs.get("input-catalog")
    val client = new FlinkDataClient
    val queryApi = client.queryApi(catalogHrn)
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val consumerSettings = ConsumerSettings(ConsumerGroupName, LatestOffset)
    val subscriptionFunction = queryApi.subscribe(StreamingLayer, consumerSettings)
    val partitions = env.addSource(subscriptionFunction)
    partitions.map(new SampleMapper(catalogHrn))
    env.execute
  }

  class SampleMapper(hrn: HRN) extends RichMapFunction[Partition, String] with Serializable {
    @transient
    private lazy val flinkDataClient: FlinkDataClient = new FlinkDataClient()
    @transient
    private lazy val flinkReadEngine: FlinkReadEngine =
      flinkDataClient.readEngine(hrn)

    override def map(partition: Partition): String = {
      val partitionContent = new String(flinkReadEngine.getDataAsBytes(partition))
      Logger.info(partitionContent)
      partitionContent
    }

    override def close(): Unit =
      flinkDataClient.terminate()
  }
}
Once the code is written, you can prepare resources and run the application.
Run the application
To run the application, you need to prepare the resources: create a catalog with a stream layer to which the data will be written.
In this tutorial, we run the application locally, so it is enough to create a local catalog. No authentication and no access to the external network are needed, because we use local catalogs. Since local catalogs exist only on your machine, they are not subject to naming conflicts within your realm, and you can use any name you want.
To create a local catalog with a stream layer, you will need the catalog-with-stream-layer.json config file that you downloaded together with the source code at the beginning of the tutorial. It contains a configuration that lets you create a catalog with a layer using a single OLP CLI command:
olp local catalog create streaming-catalog streaming-catalog --config catalog-with-stream-layer.json
The structure of the catalog-with-stream-layer.json file is as follows:
{
  "id": "input-catalog",
  "name": "Develop Flink Application",
  "summary": "Catalog for Stream Data",
  "description": "The catalog containing the notification stream",
  "layers": [
    {
      "id": "streaming-layer",
      "name": "Notification Stream",
      "summary": "Stream for notification",
      "description": "Stream for notification",
      "layerType": "stream",
      "volume": {
        "volumeType": "durable"
      },
      "partitioning": {
        "scheme": "generic"
      },
      "contentType": "text/plain"
    }
  ]
}
Note
If a billing tag is required in your realm, update the config file by adding the billingTags: ["YOUR_BILLING_TAG"] property to the layer section.
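For example, with a billing tag the layer entry in catalog-with-stream-layer.json would look like the following sketch (YOUR_BILLING_TAG is a placeholder for a tag valid in your realm):
{
  "id": "streaming-layer",
  "name": "Notification Stream",
  "summary": "Stream for notification",
  "description": "Stream for notification",
  "layerType": "stream",
  "billingTags": ["YOUR_BILLING_TAG"],
  "volume": {
    "volumeType": "durable"
  },
  "partitioning": {
    "scheme": "generic"
  },
  "contentType": "text/plain"
}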
As mentioned in the Set up the Maven project chapter, the PipelineContext is used to get information about the input catalog from the pipeline-config.conf file. The structure of the pipeline-config.conf file is as follows:
pipeline.config {
  output-catalog {hrn = "OUTPUT_CATALOG_HRN"}
  input-catalogs {
    input-catalog {hrn = "INPUT_CATALOG_HRN"}
  }
}
Although we do not use the output catalog in this tutorial, we can use the input catalog HRN to replace both the OUTPUT_CATALOG_HRN and INPUT_CATALOG_HRN placeholders in the pipeline-config.conf file.
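For example, assuming the local catalog created above gets the HRN hrn:local:data:::streaming-catalog (local catalog HRNs typically follow the hrn:local:data:::<catalog-id> pattern; use the HRN printed by the OLP CLI for your catalog), the file would look like this:
pipeline.config {
  output-catalog {hrn = "hrn:local:data:::streaming-catalog"}
  input-catalogs {
    input-catalog {hrn = "hrn:local:data:::streaming-catalog"}
  }
}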
After you have replaced the placeholders, run the application from the root of the downloaded tutorial using one of the following commands (Java or Scala):
mvn compile exec:java -D"exec.mainClass"="DevelopFlinkApplication" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
mvn compile exec:java -D"exec.mainClass"="DevelopFlinkApplicationScala" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local \
-Dspark.master=local[*] \
-Dpipeline-config.file=pipeline-config.conf
The command has the following parameters:
- exec.mainClass – the entry point to run your application.
- here.platform.data-client.endpoint-locator.discovery-service-env=local – configures the Data Client Library to use only local catalogs.
- pipeline-config.file – the path to the pipeline configuration file that contains the input catalog HRN.
At this moment, the application is running, but there is no data in the input catalog. Let's put some partitions into it. Open a new console window and run the following bash script, which uploads data with the First HERE Platform Flink Application content to the stream layer 10 times, every 5 seconds:
CATALOG_HRN=$1
FOLDER_WITH_DATA="./src/main/resources/data"

for i in {1..10}
do
  olp local catalog layer stream put ${CATALOG_HRN} streaming-layer --input "${FOLDER_WITH_DATA}"
  sleep 5
done
To run the script, execute the following command:
bash scripts/populate-streaming-data.sh {{YOUR_CATALOG_HRN}}
Once the script is running, you should see log messages with the uploaded content in the console of the running application.
Attach the debugger
In this chapter, you will learn how to debug Flink applications using IntelliJ IDEA; more specifically, how to attach the debugger to a running process when you start your program from the console.
In order to configure the debugger, you need to set the MAVEN_OPTS variable. To do so, stop the running application and run one of the following commands (the second form, address=*:5005, makes the JVM listen on all network interfaces; on JDK 9 and later a bare port number binds to localhost only):
export MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
export MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
The option has the following parameters:
- address – the port that will be used for debugging. The tutorial uses port 5005; however, you can use any free port.
- server=y – specifies that the process should listen for incoming debugger connections (act as a server).
- suspend=n – specifies that the process should not wait for a debugger to attach before starting.
Now you can run your application using one of the following commands:
mvn compile exec:java -D"exec.mainClass"="DevelopFlinkApplication" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local
mvn compile exec:java -D"exec.mainClass"="DevelopFlinkApplicationScala" \
-Dhere.platform.data-client.endpoint-locator.discovery-service-env=local
Make sure that the line Listening for transport dt_socket at address: 5005 appears in the logs.
Let's set a breakpoint before attaching to the running application, for example, on the line where the partitions are mapped. From this line, you can get a lot of useful information about the downloaded partition.

Now we can attach to the process using Run > Attach to Process and select the process listening on the specified 5005 port.

Once the process is attached and your program is running and waiting for data to appear in the stream layer, run the script again to inject data:
bash scripts/populate-streaming-data.sh {{YOUR_CATALOG_HRN}}
After running the scripts/populate-streaming-data.sh script, the debugger should stop at the breakpoint as soon as new data appears in the catalog and your application starts reading it. You can now step through the code and inspect the contents of the variables and the stack trace.

You can also use the standard Java Debugger (jdb) instead of the IntelliJ IDEA debugger.
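For example, with the JDWP options shown above you could attach jdb from a terminal (a minimal sketch; adjust the host and port if you changed them):
jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=5005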
Project generation using Maven archetype
You can use a Maven archetype to bootstrap a Maven project for a Flink application. In this case, the project is set up faster with the following tasks completed automatically:
- Inclusion of the SDK BOM file.
- Creation of the Maven profile that generates the fat JAR for the platform.
For more information on your streaming pipeline options, see SDK Workflows.
To create a Flink application project, use the following command for the Java project (the first variant is for Linux and macOS, the second for the Windows command prompt):
mvn archetype:generate -DarchetypeGroupId=com.here.platform \
-DarchetypeArtifactId=streaming-java-archetype \
-DarchetypeVersion=1.0.880 \
-DgroupId=com.here.platform.tutorial \
-DartifactId=develop-flink-application \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.here.platform.tutorial
mvn archetype:generate -DarchetypeGroupId=com.here.platform ^
-DarchetypeArtifactId=streaming-java-archetype ^
-DarchetypeVersion=1.0.880 ^
-DgroupId=com.here.platform.tutorial ^
-DartifactId=develop-flink-application ^
-Dversion=1.0-SNAPSHOT ^
-Dpackage=com.here.platform.tutorial
For the Scala project, use the following command (again, the first variant is for Linux and macOS, the second for Windows):
mvn archetype:generate -DarchetypeGroupId=com.here.platform \
-DarchetypeArtifactId=streaming-scala-archetype \
-DarchetypeVersion=1.0.880 \
-DgroupId=com.here.platform.tutorial.scala \
-DartifactId=develop-flink-application \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.here.platform.tutorial
mvn archetype:generate -DarchetypeGroupId=com.here.platform ^
-DarchetypeArtifactId=streaming-scala-archetype ^
-DarchetypeVersion=1.0.880 ^
-DgroupId=com.here.platform.tutorial.scala ^
-DartifactId=develop-flink-application ^
-Dversion=1.0-SNAPSHOT ^
-Dpackage=com.here.platform.tutorial
Build your project to run locally
To build your project, run the following command in your project folder.
mvn install
To run your pipeline on the platform, you need to build a fat JAR first. To build it, use the following command.
mvn install -Pplatform
For more information on building a fat JAR, see Include the SDK in your project.
Conclusion
In this tutorial, you have learned the stages of Flink application development. To learn how to run a Flink application on the platform and get acquainted with monitoring tools such as Splunk, Grafana, the Flink UI, and the Platform Billing page, see the Run a Flink application on the platform tutorial.
For more details on the topics covered in this tutorial, see the following sources: