
HERE Workspace & Marketplace 2.6 release

Highlights

Location Services API (beta)

With the OLP 2.6 release, beta versions of the Location Services APIs are available in OLP Workspace.

You'll notice the following related updates in OLP Portal:

  • Service Tab: visible to OLP Workspace licensees in the main OLP menu on the same level as Data & Pipelines.
  • Service Overview: includes a listing of all HERE Location Services available in closed beta.
  • Service detail page: provides information on the service API as well as links to documentation and “how to” guides.

The services represented on the Service tab are:

  • Search - leveraging a unified API for search and geocoding, users can search for known places or addresses, bring their own data to be used by the search service, and perform free-form searches of place names and addresses with Autosuggest.
    • Users will have the ability to engage the Search functionality in the following ways:
      • Search - One Search Box provides users the ability to find a known place or address (partial or complete), as well as discover an unknown place.
      • Search - Forward Geocoder returns the geo-coordinates for a single requested address, along with additional address and location details. Multiple results can be returned if the address is ambiguous. The Geocoding service supports submitting place names as additional semantic address information (see the sketch after this list).
      • Search - Reverse Geocoder returns the nearest address to known geo-coordinates, along with additional address and location details.
      • Search - Browse Your Data provides a structured search for places through filtering by name and category ID. It is designed specifically for customers who want to use their own data (BYOD) with Search.
      • Search - Autosuggest improves the user's experience by allowing the submission of free-form, incomplete, or misspelled addresses or place names. The results returned are a mixture of places or addresses relevant to the incomplete query and complete Search - One Search Box query suggestions.
      • Search - Places ID Lookup finds one result based upon its unique location ID. This service is typically used to complement the other Search services listed above.
  • Rendering - allows users to request tiles containing vector data, based on OLP-resident content, via the Vector Tiles service
  • Routing - enables vehicle and pedestrian routing, calculating routes for consumer and enterprise applications
    • Users will have the ability to engage Routing functionality in the following ways:
      • Routing - Pedestrian service calculates pedestrian routes between two or more locations and provides additional route-related information. The Routing API is customizable so that the route calculation and additional information can be adapted to both consumer and enterprise applications.
      • Routing - Vehicle service calculates routes between two or more locations using various transport modes and provides additional route-related information. The Routing API is customizable so that the route calculation and additional information can be adapted to both consumer and enterprise applications.
  • Transit - includes transit departures, routing, and station search, enabling users to explore departures and calculate public transit routes
    • Users will have the ability to engage Transit functionality in the following ways:
      • Transit - Nearby Departures service explores departures of public transit lines, exposing the next stops along each line.
      • Transit - Routing service calculates routes between two geographical points using public transit (train, tram, bus, ferry, ...).
      • Transit - Station Search service provides multiple options for searching transit stations.
      • Transit - Intermodal Routing service calculates intermodal routes using street routing, public transit and external mobility services. This offers an innovative and smart routing experience in urban areas to navigate between a given pair of locations.
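
As a rough illustration of how a forward-geocoding request might look from code, here is a minimal Python sketch. The endpoint URL, parameter names, and response shape below are assumptions made for illustration only; the actual contract is described in the documentation linked from each Service detail page.

```python
# Minimal sketch of calling the Forward Geocoder. The endpoint,
# parameters, and response fields are hypothetical -- the real URL,
# authentication scheme, and payload are documented on the Service
# detail page and may differ in the closed beta.
import requests

BASE_URL = "https://example.api.here.com/geocode/v1"  # hypothetical endpoint
params = {
    "q": "Invalidenstrasse 116, Berlin",  # the address to geocode
    "apiKey": "YOUR_API_KEY",             # placeholder credential
}

response = requests.get(f"{BASE_URL}/geocode", params=params, timeout=10)
response.raise_for_status()

# A geocoder typically returns candidate matches with coordinates;
# multiple results can come back if the address is ambiguous.
for item in response.json().get("items", []):
    print(item.get("title"), item.get("position"))
```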

We are limiting access to these APIs to customers participating in our closed beta program, so that we can gather feedback and refine our offering before launching it to all customers. If you are interested in getting a sneak preview, and giving us feedback, please follow the link on the Service detail page and submit the OLP Information Request form to request access to the closed beta program.

In future releases, we will enable customers to bring their own data, integrate their own algorithms or customize existing ones, and eventually build their own location services on OLP. Our goal is to remove the complexity of managing cloud infrastructure, and enable customers to augment our turnkey services with their unique location data and business logic.

Option to increase data durability with multi-region synchronization

With this release, you can configure new catalogs as multi-region as an additional data durability feature.

If your business is especially averse to the risk of data loss, you can now set new catalogs as multi-region for the following layer types and operations only:

  • Read operations on versioned layers
  • Read and write operations on volatile layers

Additional charges apply as follows:

  • Storage charges double when storing data in a second region.
  • Data I/O charges increase 2.5 to 4 times, depending on the size of the objects you’re uploading: less for fewer large objects, more for many small objects. This is due to the validation HERE performs to ensure successful replication.
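
For a back-of-the-envelope estimate of the surcharge, here is a small Python sketch; the base charges and the chosen I/O multiplier are illustrative assumptions you should replace with your own figures.

```python
# Rough estimate of multi-region charges, based on the multipliers
# above: storage doubles, and data I/O increases by 2.5x to 4x
# depending on object size (closer to 2.5x for fewer large objects,
# closer to 4x for many small objects).
def multi_region_estimate(storage_cost, io_cost, io_multiplier=2.5):
    assert 2.5 <= io_multiplier <= 4.0, "I/O multiplier ranges from 2.5x to 4x"
    return storage_cost * 2 + io_cost * io_multiplier

# Example: $100/month storage and $40/month data I/O, mostly large objects.
print(multi_region_estimate(100.0, 40.0, io_multiplier=2.5))  # 300.0
```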

Find out more about multi-region catalogs and associated costs.

Improved Availability SLA for Data Operations

The OLP standard availability SLA for the following data operations has increased from 98.5% to 99.5% at no additional charge:

  • Read operations on Versioned layers
  • Read and Write operations on Volatile layers

For more information, see OLP’s updated Service Level Agreement.

Develop Stream Pipelines with Apache Flink 1.7.1

A new Stream-2.0.0 run-time environment with Apache Flink 1.7.1 is now available for creating Stream Pipelines. The Stream-2.0.0 run-time environment includes the following libraries:

  • Apache Flink 1.7.1
  • Java 8u191
  • Scala 2.11

The Stream-1.5.x run-time environment (with Apache Flink 1.2.1) is now deprecated. Existing Stream pipelines that use the Stream-1.5.x run-time environment will continue to operate normally until February 1, 2020. During this period, the Stream-1.5.x run-time environment will receive security patches only; to continue developing pipelines against it, use OLP SDK 2.5 or older. After February 1, 2020, we will remove the Stream-1.5.x run-time environment, and any pipelines still using it will be canceled. We recommend migrating your Stream pipelines to the new Stream-2.0.0 run-time environment to take advantage of the latest functionality and improvements. For details about migrating an existing Stream pipeline to the new Stream-2.0.0 run-time environment, see Migrate Pipeline to New Run-time Environment.

We will continue to evaluate the quality and stability of future Apache Flink releases to determine the versions to enable for OLP Pipelines. If you need a specific feature from a version of Apache Flink that is not yet available in OLP, please contact our Support team.

As we enable the newer versions of Flink, we will make best efforts to support the older versions for up to 6 months. For more details about our general support for Apache Flink, please see Stream Pipelines - Apache Flink Support FAQ.

With this release, we are also simplifying the dependency management of the various libraries used in the OLP SDK and the Pipeline Run-time Environment. From now on, only one environment file, sdk-stream-bom.pom, is needed for developing Stream pipelines under Maven. This file combines the run-time environment libraries and all SDK libraries needed for pipeline development, eliminating the need for workarounds such as shading. To learn more about the new sdk-stream-bom.pom file, see OLP SDK Dependency Management.

Develop Batch Pipelines with Apache Spark 2.4.2

A new Batch-2.0.0 run-time environment with Apache Spark 2.4.2 is now available for creating Batch Pipelines. The Batch-2.0.0 run-time environment includes the following libraries:

  • Apache Spark 2.4.2
  • Java 8u191
  • Scala 2.11

The Batch-1.5.x run-time environment (with Apache Spark 2.1.1) is now deprecated. Existing Batch pipelines that use the Batch-1.5.x run-time environment will continue to operate normally until February 1, 2020. During this period, the Batch-1.5.x run-time environment will receive security patches only; to continue developing pipelines against it, use OLP SDK 2.5 or older. After February 1, 2020, we will remove the Batch-1.5.x run-time environment, and any pipelines still using it will be canceled. We recommend migrating your Batch pipelines to the new Batch-2.0.0 run-time environment to take advantage of the latest functionality and improvements. For details about migrating an existing Batch pipeline to the new Batch-2.0.0 run-time environment, see Migrate Pipeline to New Run-time Environment.

We will continue to evaluate the quality and stability of future Apache Spark releases to determine the versions to enable for OLP Pipelines. If you need a specific feature from a version of Apache Spark that is not yet available in OLP, please contact our Support team.

As we enable the newer versions of Spark, we will make best efforts to support the older versions for up to 6 months. For more details about our general support for Apache Spark, please see Batch Pipelines - Apache Spark Support FAQ.

With this release, we are also simplifying the dependency management of the various libraries used in the OLP SDK and the Pipeline Run-time Environment. From now on, only one environment file, sdk-batch-bom.pom, is needed for developing Batch pipelines under Maven. This file combines the run-time environment libraries and all SDK libraries needed for pipeline development, eliminating the need for workarounds such as shading. To learn more about the new sdk-batch-bom.pom file, see OLP SDK Dependency Management.

Notifications for Planned Outages and Auto-recovery of Stream Pipelines

Previously, when your Stream pipeline was affected by a planned outage, we did not give you enough time to perform the requested action (for example, pausing and resuming a Pipeline Version); as a result, we canceled the pipeline, interrupting your data processing. From this release onward, we will notify you in advance about planned outages, with clear details about the incident and the expected user actions. The email notification will be sent to the email address associated with the impacted Stream pipeline and will contain the following details:

  • Incident Summary
  • Realm
  • Pipeline ID
  • Pipeline Name
  • Pipeline Version ID
  • Requested Action with supporting instructions
  • Due Date and Time
  • System Operation to be performed by OLP if the Requested Action is not performed

If you are not able to perform the Requested Action by the due date and time, we will perform the System Operation as highlighted in the outage email. Once we have initiated the System Operation on your Stream Pipeline, a second email will be sent with the following details:

  • Realm
  • Pipeline ID
  • Pipeline Name
  • Pipeline Version ID
  • Incident Summary
  • System Operation Date and Time
  • System Operation Being Performed

When the processing of your Stream pipeline must be interrupted (for example, via Pause and Resume) and you cannot complete the Requested Action in time, we will perform it for you quickly and reliably. During the System Operation, we will attempt to save the pipeline's current state and create a new job from that saved state. If you perform the Requested Action by the due date and time, we will not perform the System Operation. Note that we can only recover the state of Stream pipelines that use checkpointing; for pipelines without checkpointing, we will Cancel and then Activate the pipeline if you have not performed the Requested Action by the due date and time. We therefore recommend using checkpointing in your Stream pipelines.

If an affected Stream Pipeline does not have an email address associated with it, we will not be able to send a notification and will perform the System Operation during the planned outage.

Japanese Text

The text on the OLP Portal is now available in Japanese.

Support Panel

For convenience, the following links have been added to the support panel on platform.here.com: Status page, Knowledge base, and Service level agreement.

New Navigation Launcher

We’ve tweaked our navigation to accommodate the growing list of things you can do on OLP. Use the launcher icon ( ⫶⫶⫶ ) in the top right navigation to get to the items that were previously located along the top of the site.

Account & Permissions

Added

  • Organization Admins

Organization (Org.) Admins are able to better manage users, apps and groups. Additionally, by enabling "Manage all apps", an individual Org Admin is able to delegate ownership of an app to other users. This is especially useful to ensure that a pipeline can continue to be activated even when the pipeline/app owner is no longer a user in OLP.

Known Issues

Issue

A finite number of access tokens (~ 250) are available for each app or user. Depending on the number of resources included, this number may be smaller.

Workaround

Create a new app or user if you reach the limitation.

Issue

Only a finite number of permissions are allowed for each app or user in the system across all services. The effective limit is lower depending on the resources included and the types of permissions granted.

Issue

All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status, but not start and stop a pipeline. Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue

When updating permissions, it can take up to an hour for changes to take effect.

Data

Deprecated

Following up on the deprecation notice released with OLP 2.2.1 at the end of January 2019, OLP will no longer support functionality to get all volatile partitions that changed between versions Vn and Vm.

Starting the first week of September:

  • The only option is to request a list of volatile tiles that have changed since a given time.
  • Volatile layer metadata will no longer be versioned. There is only one version, which is the latest version.
  • Metadata/Query APIs for volatile layers with versions will stop working.

Added

  • Data Archiving Library Error Handling

Data Archiving Library error handling has been improved with this release. You can now control how errors are handled when archiving stream data to an index layer: you decide what happens to messages that cannot be deserialized or otherwise contain bad data, so that you can clean up those messages later.
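
The Data Archiving Library itself is JVM-based, so the following Python snippet is only a conceptual sketch of the pattern it enables: route messages that fail deserialization to a side location for later cleanup instead of failing the whole archiving job.

```python
# Conceptual sketch only -- not the Data Archiving Library API.
# Illustrates routing undeserializable messages to an error sink
# so good data keeps flowing to the index layer.
import json

def archive(messages, index_writer, error_sink):
    for raw in messages:
        try:
            record = json.loads(raw)   # stand-in for your deserializer
        except (ValueError, UnicodeDecodeError):
            error_sink.append(raw)     # keep bad messages for later cleanup
            continue
        index_writer.append(record)    # archive good data to the index layer

good, bad = [], []
archive([b'{"id": 1}', b'not-json'], index_writer=good, error_sink=bad)
print(len(good), len(bad))  # 1 1
```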

  • Spark Connector

An updated release of the Spark Connector provides read support for both index and versioned layers. The Spark Connector helps facilitate your data analysis, consumption, and processing via Spark, so you spend less time writing code to deal with different data formats and more time on your business case workflows.

Known Issues

Issue

Catalogs not associated with a realm are not visible in OLP.

Issue

Visualization of Index Layer data is not yet supported.

Issue

When you use the Data API or Data Library to create a Data Catalog or Layer, the app credentials used do not automatically enable the user who created those credentials to discover, read, write, manage, and share those catalogs and layers.

Workaround

After the catalog is created, use the app credentials to enable sharing with the user who created the app credentials. You can also share the catalog with other users, apps, and groups.

Marketplace

Known Issues

Issue

Users do not receive stream data usage metrics when reading or writing data from Kafka Direct.

Workaround

When writing data into a stream layer, you must use the ingest API to receive usage metrics. When reading data, you must use the Data Client Library, configured to use the HTTP connector type, to receive usage metrics and read data from a stream layer.

Issue

When the Splunk server is busy, the server can lose usage metrics.

Workaround

If you suspect you are losing usage metrics, contact HERE technical support for assistance rerunning queries and validating data.

Notebooks

Deprecated

  • OLP Notebooks has been deprecated; please refer instead to the advantages and enhancements offered in the new OLP SDK for Python.

Download your old notebooks and refer to the Zeppelin Notebooks Migration Guide for further instructions.

Known Issues

Issue

Notebooks cannot be shared with OLP user groups.

Workaround

Notebooks can be shared with one or more individual users by entering each account separately.

Issue

The Notebook Spark connector does not support analysis of stream layers and index layers.

Workaround

The Data Client Library can be used to analyze stream and index layers.

OLP SDK for Python

Added

  • HERE OLP SDK for Python - Beta

Performing analytics on OLP just got easier. Use the new HERE OLP SDK for Python - Beta to analyze and visualize OLP data in your own environment using Jupyter, Python, and Spark.

Features include:

  • Flexibility: Ability to use Python and the Python ecosystem of data science tools. Use and install your own Python libraries
  • Scalability: Ability to scale up your data analysis using larger Spark clusters (e.g., EMR in AWS)
  • Visualization: ipyleaflet and the popular Leaflet framework for interactive data visualization on a HERE base map
  • Data Access: Users can more easily work with their own local data as part of their analysis

Refer to HERE OLP SDK for Python - Beta for documentation, sample notebooks and installation instructions.
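
As a minimal sketch of the visualization workflow, the snippet below renders an interactive map in a Jupyter notebook using ipyleaflet. It uses ipyleaflet's default basemap; wiring up the HERE base map and OLP data sources is covered in the SDK documentation.

```python
# Minimal Jupyter sketch using ipyleaflet (pip install ipyleaflet).
# This uses ipyleaflet's default basemap; see the SDK documentation
# for configuring the HERE base map and connecting OLP data.
from ipyleaflet import Map, Marker

m = Map(center=(52.52, 13.405), zoom=11)           # centered on Berlin
m.add_layer(Marker(location=(52.5308, 13.3847)))   # mark a point of interest
m  # displaying the Map object renders it in the notebook
```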

OLP Notebooks Deprecation

OLP Notebooks has been deprecated; please refer instead to the advantages and enhancements offered in the new HERE OLP SDK for Python - Beta.

Please download your old notebooks and refer to the Zeppelin Notebooks Migration Guide within the OLP SDK for Python Setup Guide for further instructions. For additional questions, please log a support ticket.

Known Issues

Issue

Currently, only macOS and Linux distributions are supported.

Workaround

If you are using Windows OS, we recommend that you use a virtual machine.

Optimized Map for Analytics

Added

  • Optimized Map for Analytics

The Optimized Map for Analytics (OMA) is an alternative representation of HERE Map Content that facilitates data analysis. OMA is a friendly, SQL-like version of HERE Map Content that simplifies tying attributes to their reference layers. For example, a simple lookup using OMA is one line of SparkSQL, while the same lookup against HERE Map Content requires more complicated Scala code.
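
Here is a hedged sketch of that idea in PySpark. The layer and column names are invented for illustration, and a toy DataFrame registered as a temporary view stands in for an OMA layer:

```python
# Illustrative only: "road_attributes" and its columns are hypothetical
# stand-ins for an OMA layer exposed as a Spark view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oma-sketch").getOrCreate()

rows = [("seg-1", 50), ("seg-2", 120)]
spark.createDataFrame(rows, ["segment_id", "speed_limit_kph"]) \
     .createOrReplaceTempView("road_attributes")

# The kind of one-line SparkSQL lookup described above:
spark.sql("SELECT segment_id FROM road_attributes WHERE speed_limit_kph > 100").show()
```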

OMA currently supports analysis of road attributes and includes content from the following HERE Map Content layers:

  • Road Topology and Geometry
  • Road Attributes
  • Navigation Attributes (premium content)

Please refer to OMA documentation for further information.

Known Issues

Issue

Layer navigation_access_permission schema is not accessible; data cannot be decoded from this layer.

Workaround

This layer will be available shortly.

Pipelines

Fixed

  • Fixed an issue where a few Stream pipelines were not able to register the CPU and Memory usage data for accounting and billing.

Known Issues

Issue

A pipeline failure or exception can sometimes take several minutes to be reported.

Issue

The Pipeline Status Dashboard in Grafana can currently be edited by users. Any changes you make will be lost when updates are published, because the dashboard will no longer be editable in a future release.

Workaround

Duplicate the dashboard or create a new dashboard.

Issue

If several pipelines are consuming data from the same stream layer and belong to the same Group (pipeline permissions are managed via a Group), then each of those pipelines will only receive a subset of the messages from the stream. This is because, by default, the pipelines share the same Application ID.

Workaround

Use the Data Client Library to configure your pipelines to consume from a single stream: If your pipelines/applications use the Direct Kafka connector, you can specify a Kafka Consumer Group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/applications will be able to consume all the messages from the stream.

If your pipelines use the HTTP connector, we recommend creating a new Group for each pipeline/application, each with its own Application ID.
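
To illustrate the consumer-group semantics behind this workaround, here is a sketch using the generic kafka-python client with placeholder broker and topic names; it is not the Data Client Library API.

```python
# Generic Kafka illustration (pip install kafka-python); the broker
# address and topic name are placeholders. Consumers sharing a group_id
# split the stream's messages between them, so giving each pipeline its
# own group_id lets each one receive every message.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-stream-topic",                        # placeholder topic
    bootstrap_servers="broker.example:9092",  # placeholder broker
    group_id="pipeline-a",                    # unique per pipeline/application
)
for message in consumer:
    print(message.offset, message.value)
```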

Issue

Pipelines can still be activated after a catalog is deleted.

Workaround

The pipeline will fail when it starts running and will show an error message about the missing catalog. Re-create the missing catalog or use a different catalog.

Web & Portal

Known Issues

Issue

The custom run-time configuration for a Pipeline Version has a limit of 64 characters for the property name and 255 characters for the value.

Workaround

For the property name, you can define a shorter name in the configuration and map it to the actual, longer name within the pipeline code (see the sketch below). For the property value, you must stay within the limit.
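
A minimal sketch of the property-name workaround; all names here are invented for illustration, and the mapping would live in your pipeline code in whatever language it uses.

```python
# Map the short names used in the run-time configuration (<= 64 chars)
# to the longer, descriptive names the pipeline code expects.
# All names below are invented for illustration.
SHORT_TO_LONG = {
    "out.cat": "output.catalog.hrn.for.processed.results",
    "max.rt":  "maximum.retry.attempts.for.failed.partitions",
}

def resolve(config: dict) -> dict:
    """Rewrite short configuration keys to their long equivalents."""
    return {SHORT_TO_LONG.get(key, key): value for key, value in config.items()}

print(resolve({"out.cat": "hrn:here:data::example:processed", "max.rt": "3"}))
```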

Issue

Pipeline Templates can't be deleted from the Portal UI.

Workaround

Use the CLI or API to delete Pipeline Templates.

Issue

In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.

Workaround

Refresh the Jobs and Operations pages to see the latest job or operation in the list.
