Flink Connector

The Flink Connector implements the standard Flink interfaces that allow you to create source Tables for reading from, and sink Tables for writing to, stream layers.

As a result, you can use both relational APIs that Flink supports: Table API and SQL. In addition, you can convert a Table to a DataStream and use the Flink DataStream API.
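
The following is a minimal Java sketch of this flow, assuming Flink's Table API bridge for Java (Flink 1.13 or later). The table schema and the options in the WITH clause are placeholders rather than the connector's actual option keys; substitute the values described in the Configuration section.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class StreamLayerReadExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Register a source table backed by a stream layer.
        // The connector and layer options below are placeholders; use the
        // option keys and values from the Configuration documentation.
        tableEnv.executeSql(
            "CREATE TABLE InputTable (" +
            "  event_id    STRING," +
            "  event_value DOUBLE" +
            ") WITH (" +
            "  'connector' = '...'," +   // placeholder connector identifier
            "  'layer'     = '...'" +    // placeholder layer configuration
            ")");

        // Relational API: filter the records with SQL ...
        Table filtered = tableEnv.sqlQuery(
            "SELECT event_id, event_value FROM InputTable WHERE event_value > 0");

        // ... then switch to the DataStream API for record-level processing.
        DataStream<Row> stream = tableEnv.toDataStream(filtered);
        stream.print();

        env.execute("Stream layer read example");
    }
}
```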

For information on how to build your app and which dependencies to use, see Dependencies for Stream Pipelines.

Supported Layer Types, Data Formats and Operations

| Layer Type | Protobuf | Avro | Parquet | Raw (octet-stream) | GeoJSON |
|---|---|---|---|---|---|
| Stream layer | Read, Write | not supported | not supported | Read, Write | not applicable |
| Index layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Versioned layer | Read | Read | Read | Read | not applicable |
| Volatile layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Interactive Map layer | not applicable | not applicable | not applicable | not applicable | Read, Write |

Configuration

For Flink connector configuration, see here.

Reading Data Continuously

Flink can read data continuously from non-stream layers. For details, see here.
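
As an illustration only, the sketch below reuses the placeholder DDL pattern from the example above and runs an unbounded aggregation over such a source table: in streaming mode the query never terminates, and the continuously updating result can be consumed as a changelog stream. The schema and connector options are again placeholders, not the connector's actual option keys.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class ContinuousReadExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Placeholder DDL: the connector and layer options depend on the
        // configuration described in the linked documentation.
        tableEnv.executeSql(
            "CREATE TABLE LayerTable (" +
            "  partition_id STRING," +
            "  payload      STRING" +
            ") WITH (" +
            "  'connector' = '...'," +   // placeholder connector identifier
            "  'layer'     = '...'" +    // placeholder layer configuration
            ")");

        // In streaming mode this aggregation is unbounded: every new record
        // read from the layer updates the per-partition count.
        Table counts = tableEnv.sqlQuery(
            "SELECT partition_id, COUNT(*) AS record_count " +
            "FROM LayerTable GROUP BY partition_id");

        // Consume the continuously updating result as a changelog stream.
        DataStream<Row> changelog = tableEnv.toChangelogStream(counts);
        changelog.print();

        env.execute("Continuous read example");
    }
}
```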
