The Flink Connector implements the standard Flink interfaces that allow you to create source Tables for reading from, and sink Tables for writing to, stream layers. As a result, you can use both relational APIs that Flink supports: the Table API and SQL. In addition, you can convert a Table to a DataStream and use the Flink DataStream API.
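For example, with the SQL API you can declare a table over a layer and query it with standard Flink SQL. The sketch below is illustrative only: the table name, schema, and `WITH` options are hypothetical placeholders, not the connector's actual configuration keys.

```sql
-- Hypothetical sketch: the schema and option keys below are placeholders;
-- the real connector identifier and options come from the connector's
-- configuration documentation.
CREATE TABLE sensor_events (
  device_id   STRING,
  temperature DOUBLE,
  event_time  TIMESTAMP(3)
) WITH (
  'connector' = '<connector-id>'
);

-- Query the declared table with standard Flink SQL.
SELECT device_id, AVG(temperature) AS avg_temp
FROM sensor_events
GROUP BY device_id;
```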
For information on how to build your app and which dependencies to use, see Dependencies for Stream Pipelines.
| Layer Type | Protobuf | Avro | Parquet | Raw (octet-stream) | GeoJSON |
|---|---|---|---|---|---|
| Stream layer | Read, Write | not supported | not supported | Read, Write | not applicable |
| Index layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Versioned layer | Read | Read | Read | Read | not applicable |
| Volatile layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Interactive Map layer | not applicable | not applicable | not applicable | not applicable | Read, Write |
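Where the table above lists Write support, the corresponding table can be used as a SQL sink. A minimal hypothetical sketch (the table name, schema, and option key are placeholders, not the connector's actual configuration):

```sql
-- Hypothetical sketch: names and option keys are placeholders.
CREATE TABLE output_readings (
  device_id STRING,
  avg_temp  DOUBLE
) WITH (
  'connector' = '<connector-id>'
);

-- Write a row into the layer via the sink table.
INSERT INTO output_readings VALUES ('dev-1', 21.5);
```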
For Flink Connector configuration, see here.
The Flink Connector can also read data continuously from non-stream layers. For details, see here.