Read interactive map layer data

The Data Client Library provides the class LayerDataFrameReader, a custom Spark DataFrameReader for creating DataFrames that contain the data for all supported layer types, including interactive map layers.

Read process

The read operation works according to the following steps:

  1. The Spark connector analyzes your query and contacts the server to retrieve the information needed to distribute your query across the Spark cluster. Individual filters that are part of your query are already taken into account at this stage.
  2. Spark distributes your query across the workers in the cluster, which then request their individual chunks of data from the server.
  3. The data returned by the server is converted into a generic row format on the worker nodes.
  4. The resulting rows are passed on to the Spark framework to return the finalized DataFrame.

DataFrame columns

Unlike other layer types, interactive map layers use a static row format when working with the Spark framework.

Data columns

A DataFrame for interactive map layers will contain the following columns:

Column name    | Data Type            | Meaning
mt_id          | STRING               | OID of the object
geometry       | ROW                  | The object's geometry; the first field contains the type, the second field contains the coordinates
properties     | MAP                  | The object's properties in reduced format
custom_members | MAP                  | Non-standard top-level fields in reduced format
mt_tags        | ARRAY                | The object's tags
mt_datahub     | ROW<BIGINT, BIGINT>  | Object metadata: createdAt and updatedAt
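To illustrate the nested layout of these columns, the row schema can be modeled with plain Java records. This is an illustrative sketch only: the record names below are hypothetical stand-ins for the generic Row schema, not types from the Data Client Library.

```java
import java.util.List;
import java.util.Map;

public class ImlRowDemo {
    // geometry column: ROW with a type field and a coordinates field
    record Geometry(String type, List<Double> coordinates) {}

    // mt_datahub column: ROW<BIGINT, BIGINT> holding createdAt and updatedAt
    record DataHub(long createdAt, long updatedAt) {}

    // One row of the interactive map layer DataFrame
    record ImlRow(
            String mtId,                       // mt_id
            Geometry geometry,                 // geometry
            Map<String, String> properties,    // properties
            Map<String, String> customMembers, // custom_members
            List<String> mtTags,               // mt_tags
            DataHub mtDatahub) {}              // mt_datahub

    public static void main(String[] args) {
        ImlRow sample = new ImlRow(
                "object-1",
                new Geometry("Point", List.of(8.68, 50.11)),
                Map.of("name", "Frankfurt"),
                Map.of(),
                List.of("city"),
                new DataHub(1_600_000_000_000L, 1_600_000_500_000L));

        // Nested fields are addressed the same way as in Spark SQL,
        // e.g. geometry.type or mt_datahub.createdAt.
        System.out.println(sample.geometry().type());
        System.out.println(sample.mtDatahub().createdAt());
    }
}
```

In an actual DataFrame these values arrive as a generic Row, but the field nesting is the same.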

Project dependencies

If you want to create an application that uses the HERE platform Spark Connector to read data from an interactive map layer, add the required dependencies to your project as described in the chapter Dependencies for Spark Connector.

Read interactive map layer data

The following snippet demonstrates how to create a DataFrame from an interactive map layer of a catalog. Note that interactive map layers use a static row format, so you don't need to specify the data format explicitly.

import org.apache.spark.sql.SparkSession

println("Loading data from IML.")
val readDF = sparkSession
  .readLayer(catalogHrn, layerId)
  .query("mt_geometry=inboundingbox=(85, -85, 180, -180) and p.row=contains=7")
  .load()
println("Data loaded!")

val res = readDF.count()
println("Dataframe contains " + res.toString + " rows.")
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

Dataset<Row> inputDF =
        JavaLayerDataFrameReader.create(sparkSession)
                .readLayer(catalogHrn, layerId)
                .query("mt_geometry=inboundingbox=(85, -85, 180, -180) and p.row=contains=7")
                .load();

// Count the rows in the DF
long res = inputDF.count();
System.out.println("Number of rows in dataframe: " + res);
  • When reading data from an interactive map layer, requests are currently limited to the bounds of the Mercator projection, that is, only objects within -85° to +85° latitude are returned.
  • For information on RSQL, see RSQL.
