Flink Connector Integration with Volatile Layers

Create Table Sink and Table Source for Volatile Layer

The main entry point of the Flink Connector API is OlpStreamConnectorDescriptorFactory.

Scala
Java
import com.here.platform.data.client.flink.scaladsl.{
  OlpStreamConnection,
  OlpStreamConnectorDescriptorFactory
}
import com.here.platform.data.client.flink.javadsl.OlpStreamConnection;
import com.here.platform.data.client.flink.javadsl.OlpStreamConnectorDescriptorFactory;

An instance of OlpStreamConnection, consisting of a ConnectorDescriptor and a Schema descriptor, is required to register the table source in the TableEnvironment's catalog. The following code snippet shows how to create an instance of OlpStreamConnection using OlpStreamConnectorDescriptorFactory and register it in the TableEnvironment's catalog:

Scala
Java
// define the properties
val sourceProperties =
  Map(
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  ).asJava

// create the Table Connector Descriptor Source
val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-protobuf-input")
    .createConnectorDescriptorWithSchema(sourceProperties)

val tEnv = StreamTableEnvironment.create(env)
// register the Table Source
tEnv
  .connect(streamSource.connectorDescriptor)
  .withSchema(streamSource.schema)
  .inAppendMode()
  .createTemporaryTable("InputTable")
// define the properties
Map<String, String> properties = new HashMap<>();
properties.put("olp.layer.query", "mt_partition=in=(1,2,3)");

// create the Table Connector Descriptor Source
OlpStreamConnection streamSource =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(inputCatalogHrn), "volatile-layer-protobuf-input")
        .createConnectorDescriptorWithSchema(properties);

// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.connect(streamSource.connectorDescriptor())
    .withSchema(streamSource.schema())
    .inAppendMode()
    .createTemporaryTable("InputTable");

The source factory supports the following Volatile layer properties (a combined example is shown after the list):

  • olp.layer.query: required; specifies the RSQL query that is used to query the volatile layer.
  • olp.catalog.layer-schema: applicable only to the parquet and avro data formats. It is an Avro schema string in JSON format.
  • olp.connector.download-parallelism: the maximum number of blobs read in parallel in one Flink task. The number of tasks corresponds to the configured parallelism. As a result, the number of blobs that your pipeline can read in parallel equals the parallelism level times the value of this property. The default value is 10.
  • olp.connector.download-timeout: the overall timeout in milliseconds that is applied to reading a blob from the Blob API. The default value is 300000 milliseconds.
  • olp.connector.publication-window: defines how often metadata is published to the Publish API. The default value is 1000 milliseconds. If the value is set to -1, metadata is not published.
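
The optional tuning settings go into the same property map as the query. The following is a minimal Scala sketch; the tuning values are illustrative only, and the catalog HRN and layer ID reuse the placeholders from the snippets above:

// required RSQL query plus optional tuning properties (values are examples only)
val sourceProperties =
  Map(
    "olp.layer.query" -> "mt_partition=in=(1,2,3)",
    "olp.connector.download-parallelism" -> "20", // read up to 20 blobs in parallel per task
    "olp.connector.download-timeout" -> "600000" // allow up to 10 minutes per blob
  ).asJava

val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-protobuf-input")
    .createConnectorDescriptorWithSchema(sourceProperties)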

Before creating a Table, the factory fetches the catalog configuration using the passed HRN. It then checks the data format and, if one exists, the schema for the passed layerId. As the last step, the Flink Connector automatically translates the layer schema into a Flink Table schema.

The following section describes how the schema translation works.

You create a Table Sink using OlpStreamConnectorDescriptorFactory and register it in the TableEnvironment's catalog as follows:

Scala
Java
val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-protobuf-output")
    .createConnectorDescriptorWithSchema(Map.empty[String, String].asJava)

tEnv
  .connect(streamSink.connectorDescriptor)
  .withSchema(streamSink.schema)
  .inAppendMode()
  .createTemporaryTable("OutputTable")
OlpStreamConnection streamSink =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(outputCatalogHrn), "volatile-layer-protobuf-output")
        .createConnectorDescriptorWithSchema(new HashMap<>());

tEnv.connect(streamSink.connectorDescriptor())
    .withSchema(streamSink.schema())
    .inAppendMode()
    .createTemporaryTable("OutputTable");

The sink factory supports the following properties for Volatile layers (a combined example is shown after the list):

  • olp.catalog.layer-schema: applicable only to the parquet and avro data formats. It is an Avro schema string in JSON format.
  • olp.connector.aggregation-window: an interval in milliseconds that defines how often the sink aggregates rows with the same partition id. The default value is 10000 milliseconds. The property applies only to the avro and parquet formats.
  • olp.connector.upload-parallelism: the maximum number of blobs written in parallel in one Flink task. The number of tasks corresponds to the configured parallelism. As a result, the number of blobs that your pipeline can write in parallel equals the parallelism level times the value of this property. The default value is 10.
  • olp.connector.upload-timeout: the overall timeout in milliseconds that is applied to writing a blob via the Blob API. The default value is 300000 milliseconds.
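
As with the source, the optional sink settings are passed through the property map given to the sink factory. A minimal Scala sketch with illustrative values (for an avro or parquet layer you would add olp.catalog.layer-schema and, optionally, olp.connector.aggregation-window the same way):

// optional sink tuning properties (values are examples only)
val sinkProperties =
  Map(
    "olp.connector.upload-parallelism" -> "20", // write up to 20 blobs in parallel per task
    "olp.connector.upload-timeout" -> "600000" // allow up to 10 minutes per blob
  ).asJava

val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-protobuf-output")
    .createConnectorDescriptorWithSchema(sinkProperties)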

Data Formats

The Flink Connector supports the following data formats for Volatile layer payload:

  • Raw. The decoding and encoding logic is not applied and you get your data payload as an array of bytes. Your Table schema appears as follows:

    root
      |-- data: Array[Byte]
      |-- mt_partition: String
      |-- mt_timestamp: Long
      |-- mt_checksum: String
      |-- mt_crc: String
      |-- mt_dataSize: Long
      |-- mt_compressedDataSize: Long
    

    The column with the payload data is called data. The metadata columns follow the data column and have the mt_ prefix.

    This format is used if your layer content type is configured as application/octet-stream.

  • Protobuf. Flink uses the attached Protobuf schema (that you specify in your layer configuration) to derive a Flink Table schema.

    root
      |-- protobuf_field_1: String
      |-- protobuf_field_2: String
      |-- protobuf_field_3.nested_column: Long
      |-- ...
      |-- mt_partition: String
      |-- mt_timestamp: Long
      |-- mt_checksum: String
      |-- mt_crc: String
      |-- mt_dataSize: Long
      |-- mt_compressedDataSize: Long
    

    The Flink Connector exposes the top-level protobuf fields as the top-level Row columns, followed by the metadata columns.

    This format is used if your layer content type is configured as application/x-protobuf and you have specified a schema. If the schema is not specified, an error is thrown.

    Note:

    Self-referencing protobuf fields are not supported because there is no way to represent them in the Flink TypeInformation-based schema.

  • Avro. Flink uses the passed Avro schema (that you specify in the factory Map) to derive a Flink Table schema.

    root
      |-- avro_field_1: String
      |-- avro_field_2: String
      |-- ...
      |-- mt_partition: String
      |-- mt_timestamp: Long
      |-- mt_checksum: String
      |-- mt_crc: String
      |-- mt_dataSize: Long
      |-- mt_compressedDataSize: Long
    

    The Flink Connector exposes the top-level Avro fields as the top-level Row columns, followed by the metadata columns.

    This format is used if your layer content type is configured as application/x-avro-binary and you have specified a schema. If the schema is not specified, an error is thrown.

  • Parquet. Flink uses the passed Avro schema (that you specify in the factory Map) to derive a Flink Table schema.

    root
      |-- parquet_field_1: String
      |-- parquet_field_2: String
      |-- ...
      |-- mt_partition: String
      |-- mt_timestamp: Long
      |-- mt_checksum: String
      |-- mt_crc: String
      |-- mt_dataSize: Long
      |-- mt_compressedDataSize: Long
    

    The Flink Connector exposes the top-level parquet fields as the top-level Row columns, followed by the metadata columns.

    This format is used if your layer content type is configured as application/x-parquet and you have specified a schema. If the schema is not specified, an error is thrown.

    The Hadoop client is currently not provided by the stream environment. As a result, if you want to use the parquet format, you must include the Hadoop client dependency in your fat JAR:

Maven
sbt
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.3</version>
        <scope>compile</scope>
        <exclusions>
            <exclusion>
                <groupId>org.apache.htrace</groupId>
                <artifactId>htrace-core</artifactId>
            </exclusion>
            <exclusion>
                <groupId>xerces</groupId>
                <artifactId>xercesImpl</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-client" % "2.7.3"
    exclude ("org.apache.htrace", "htrace-core")
    exclude ("xerces", "xercesImpl")
)
  • Other formats

    If your layer uses a format other than those described above, an error is thrown.

Table Source and Sink have the same schema for the same layer.

You can always print your Table schema using the standard Flink API:

Scala
Java
// imagine that we have already registered InputTable
tEnv.from("InputTable").printSchema()
// imagine that we have already registered InputTable
tEnv.from("InputTable").printSchema();
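
Besides SQL, you can also process a registered table with the DataStream API. The following is a minimal Scala sketch, assuming the Flink Scala Table-to-DataStream bridge (toAppendStream) and the raw layer schema shown above; the column positions (data first, then mt_partition) follow that schema and are illustrative:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

// convert the registered raw table into an append-only stream of Rows
val rawRows: DataStream[Row] = tEnv.from("InputTable").toAppendStream[Row]

// compute the payload size per partition; column 0 is "data", column 1 is "mt_partition"
val payloadSizes: DataStream[(String, Int)] = rawRows.map { row =>
  val payload = row.getField(0).asInstanceOf[Array[Byte]]
  val partition = row.getField(1).asInstanceOf[String]
  (partition, payload.length)
}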

Read and Write Raw Data

Using SQL:

Scala
Java
val tEnv = StreamTableEnvironment.create(env)

val sourceProperties =
  Map(
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  ).asJava

val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-raw-input")
    .createConnectorDescriptorWithSchema(sourceProperties)

tEnv
  .connect(streamSource.connectorDescriptor)
  .withSchema(streamSource.schema)
  .inAppendMode()
  .createTemporaryTable("InputTable")

val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-raw-output")
    .createConnectorDescriptorWithSchema(Map.empty[String, String].asJava)

tEnv
  .connect(streamSink.connectorDescriptor)
  .withSchema(streamSink.schema)
  .inAppendMode()
  .createTemporaryTable("OutputTable")

tEnv.sqlUpdate(
  """
INSERT INTO
    OutputTable
SELECT
    data,
    mt_partition,
    mt_timestamp,
    mt_checksum,
    mt_crc,
    mt_dataSize,
    mt_compressedDataSize
FROM InputTable"""
)
// define the properties
Map<String, String> properties = new HashMap<>();
properties.put("olp.layer.query", "mt_partition=in=(1,2,3)");

// create the Table Connector Descriptor Source
OlpStreamConnection streamSource =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(inputCatalogHrn), "volatile-layer-raw-input")
        .createConnectorDescriptorWithSchema(properties);

// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.connect(streamSource.connectorDescriptor())
    .withSchema(streamSource.schema())
    .inAppendMode()
    .createTemporaryTable("InputTable");

OlpStreamConnection streamSink =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(outputCatalogHrn), "volatile-layer-raw-output")
        .createConnectorDescriptorWithSchema(new HashMap<>());

tEnv.connect(streamSink.connectorDescriptor())
    .withSchema(streamSink.schema())
    .inAppendMode()
    .createTemporaryTable("OutputTable");

tEnv.sqlUpdate(
    "INSERT INTO OutputTable SELECT data, mt_partition, mt_timestamp, mt_checksum, mt_crc, mt_dataSize, mt_compressedDataSize FROM InputTable");

Read and Write Protobuf Data

Using SQL:

Scala
Java
// define the properties
val sourceProperties =
  Map(
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  ).asJava

// create the Table Connector Descriptor Source
val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-protobuf-input")
    .createConnectorDescriptorWithSchema(sourceProperties)

val tEnv = StreamTableEnvironment.create(env)
// register the Table Source
tEnv
  .connect(streamSource.connectorDescriptor)
  .withSchema(streamSource.schema)
  .inAppendMode()
  .createTemporaryTable("InputTable")

val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-protobuf-output")
    .createConnectorDescriptorWithSchema(Map.empty[String, String].asJava)

tEnv
  .connect(streamSink.connectorDescriptor)
  .withSchema(streamSink.schema)
  .inAppendMode()
  .createTemporaryTable("OutputTable")

tEnv.sqlUpdate(
  "INSERT INTO OutputTable SELECT * FROM InputTable"
)
// define the properties
Map<String, String> properties = new HashMap<>();
properties.put("olp.layer.query", "mt_partition=in=(1,2,3)");

// create the Table Connector Descriptor Source
OlpStreamConnection streamSource =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(inputCatalogHrn), "volatile-layer-protobuf-input")
        .createConnectorDescriptorWithSchema(properties);

// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.connect(streamSource.connectorDescriptor())
    .withSchema(streamSource.schema())
    .inAppendMode()
    .createTemporaryTable("InputTable");

OlpStreamConnection streamSink =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(outputCatalogHrn), "volatile-layer-protobuf-output")
        .createConnectorDescriptorWithSchema(new HashMap<>());

tEnv.connect(streamSink.connectorDescriptor())
    .withSchema(streamSink.schema())
    .inAppendMode()
    .createTemporaryTable("OutputTable");

// we assume that the input and output tables have the same schema
tEnv.sqlUpdate("INSERT INTO OutputTable SELECT * FROM InputTable");

Read and Write Avro Data

Using SQL:

Scala
Java
val tEnv = StreamTableEnvironment.create(env)

val inputLayerSchema = """
{
  "type" : "record",
  "name" : "Event",
  "namespace" : "my.example",
  "fields" : [
    {"name" : "event_timestamp", "type" : "long"},
    {"name" : "latitude", "type" : "double"},
    {"name" : "longitude", "type" : "double"}
  ]
}
"""

val outputLayerSchema = """
{
  "type" : "record",
  "name" : "Event",
  "namespace" : "my.example",
  "fields" : [
    {"name" : "city", "type" : "string"},
    {"name" : "event_timestamp", "type" : "long"},
    {"name" : "latitude", "type" : "double"},
    {"name" : "longitude", "type" : "double"}
  ]
}
"""

val sourceProperties =
  Map(
    "olp.catalog.layer-schema" -> inputLayerSchema,
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  ).asJava

val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-avro-input")
    .createConnectorDescriptorWithSchema(sourceProperties)

tEnv
  .connect(streamSource.connectorDescriptor)
  .withSchema(streamSource.schema)
  .inAppendMode()
  .createTemporaryTable("InputTable")

val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-avro-output")
    .createConnectorDescriptorWithSchema(
      Map("olp.catalog.layer-schema" -> outputLayerSchema).asJava)

tEnv
  .connect(streamSink.connectorDescriptor)
  .withSchema(streamSink.schema)
  .inAppendMode()
  .createTemporaryTable("OutputTable")

tEnv.sqlUpdate(
  """
INSERT INTO OutputTable
    SELECT
        'Berlin',
        event_timestamp,
        latitude,
        longitude,
        mt_partition,
        mt_timestamp,
        mt_checksum,
        mt_crc,
        mt_dataSize,
        mt_compressedDataSize
    FROM InputTable"""
)
String inputLayerSchema =
    "{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";
String outputLayerSchema =
    "{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"city\", \"type\" : \"string\"}, {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";

// define source properties
Map<String, String> sourceProperties = new HashMap<>();
sourceProperties.put("olp.catalog.layer-schema", inputLayerSchema);
sourceProperties.put("olp.layer.query", "mt_partition=in=(1,2,3)");

// create the Table Connector Descriptor Source
OlpStreamConnection streamSource =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(inputCatalogHrn), "volatile-layer-avro-input")
        .createConnectorDescriptorWithSchema(sourceProperties);

// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.connect(streamSource.connectorDescriptor())
    .withSchema(streamSource.schema())
    .inAppendMode()
    .createTemporaryTable("InputTable");

// define sink properties
Map<String, String> sinkProperties = new HashMap<>();
sinkProperties.put("olp.catalog.layer-schema", outputLayerSchema);

OlpStreamConnection streamSink =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(outputCatalogHrn), "volatile-layer-avro-output")
        .createConnectorDescriptorWithSchema(sinkProperties);

tEnv.connect(streamSink.connectorDescriptor())
    .withSchema(streamSink.schema())
    .inAppendMode()
    .createTemporaryTable("OutputTable");

tEnv.sqlUpdate(
    "INSERT INTO OutputTable SELECT 'Berlin', event_timestamp, latitude, longitude, mt_partition, mt_timestamp, mt_checksum, mt_crc, mt_dataSize, mt_compressedDataSize FROM InputTable");

Read and Write Parquet Data

Using SQL:

Scala
Java
val tEnv = StreamTableEnvironment.create(env)

val inputLayerSchema = """
{
  "type" : "record",
  "name" : "Event",
  "namespace" : "my.example",
  "fields" : [
    {"name" : "event_timestamp", "type" : "long"},
    {"name" : "latitude", "type" : "double"},
    {"name" : "longitude", "type" : "double"}
  ]
}
"""

val outputLayerSchema = """
{
  "type" : "record",
  "name" : "Event",
  "namespace" : "my.example",
  "fields" : [
    {"name" : "city", "type" : "string"},
    {"name" : "event_timestamp", "type" : "long"},
    {"name" : "latitude", "type" : "double"},
    {"name" : "longitude", "type" : "double"}
  ]
}
"""

val sourceProperties =
  Map(
    "olp.catalog.layer-schema" -> inputLayerSchema,
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  ).asJava

val streamSource: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(inputCatalogHrn), "volatile-layer-parquet-input")
    .createConnectorDescriptorWithSchema(sourceProperties)

tEnv
  .connect(streamSource.connectorDescriptor)
  .withSchema(streamSource.schema)
  .inAppendMode()
  .createTemporaryTable("InputTable")

val streamSink: OlpStreamConnection =
  OlpStreamConnectorDescriptorFactory(HRN(outputCatalogHrn), "volatile-layer-parquet-output")
    .createConnectorDescriptorWithSchema(
      Map("olp.catalog.layer-schema" -> outputLayerSchema).asJava)

tEnv
  .connect(streamSink.connectorDescriptor)
  .withSchema(streamSink.schema)
  .inAppendMode()
  .createTemporaryTable("OutputTable")

tEnv.sqlUpdate(
  """
INSERT INTO OutputTable
    SELECT
        'Berlin',
        event_timestamp,
        latitude,
        longitude,
        mt_partition,
        mt_timestamp,
        mt_checksum,
        mt_crc,
        mt_dataSize,
        mt_compressedDataSize
    FROM InputTable"""
)
String inputLayerSchema =
    "{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";
String outputLayerSchema =
    "{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"city\", \"type\" : \"string\"}, {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";

// define source properties
Map<String, String> sourceProperties = new HashMap<>();
sourceProperties.put("olp.catalog.layer-schema", inputLayerSchema);
sourceProperties.put("olp.layer.query", "mt_partition=in=(1,2,3)");

// create the Table Connector Descriptor Source
OlpStreamConnection streamSource =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(inputCatalogHrn), "volatile-layer-parquet-input")
        .createConnectorDescriptorWithSchema(sourceProperties);

// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.connect(streamSource.connectorDescriptor())
    .withSchema(streamSource.schema())
    .inAppendMode()
    .createTemporaryTable("InputTable");

// define sink properties
Map<String, String> sinkProperties = new HashMap<>();
sinkProperties.put("olp.catalog.layer-schema", outputLayerSchema);

OlpStreamConnection streamSink =
    OlpStreamConnectorDescriptorFactory.create(
            HRN.fromString(outputCatalogHrn), "volatile-layer-parquet-output")
        .createConnectorDescriptorWithSchema(sinkProperties);

tEnv.connect(streamSink.connectorDescriptor())
    .withSchema(streamSink.schema())
    .inAppendMode()
    .createTemporaryTable("OutputTable");

tEnv.sqlUpdate(
    "INSERT INTO OutputTable SELECT 'Berlin', event_timestamp, latitude, longitude, mt_partition, mt_timestamp, mt_checksum, mt_crc, mt_dataSize, mt_compressedDataSize FROM InputTable");
