Release notes

HERE Workspace & Marketplace 2.17 release


Highlights

Usage Dashboard Changes

Usage reporting in the platform is now shown in currency rather than in HERE Credit Units, which have been retired. This enables a more transparent and granular approach to pricing. We are also shifting billing periods from “anniversary billing” to “calendar monthly billing”.

Please note that on the Usage Dashboard and Rate History pages, rates and amounts are now shown as the platform list price in dollars or euros, depending on the billing currency in your contract.

In summary, the following changes apply to the platform usage dashboards:

  • The platform usage dashboard now shows a direct conversion of your usage per resource to an actual $/EUR value.
  • Usage is reported per calendar month instead of per anniversary billing period. A custom range view is still possible via the drop-down on the platform usage dashboard.
  • The historical rates in the Rate History dashboard have been converted to dollars/euros based on a list rate of $/EUR 65.

For existing customers, contract and pricing terms already in place supersede the rates published in the platform portal. The list rates are published as an aid for customers to compare relative usage between services billed on dissimilar usage metrics (for example, the list rates normalize the comparative cost of service transactions vs. GB of data transfer).
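As a concrete illustration of that normalization, the sketch below converts two dissimilar usage metrics into one currency figure so they can be compared directly. All rates and usage numbers here are invented for the example and are not actual HERE list rates:

```scala
// Illustrative only: the rates below are invented, not actual HERE pricing.
object UsageNormalization extends App {
  val transactionRate  = 0.50 // assumed $ per 1,000 service transactions
  val dataTransferRate = 0.25 // assumed $ per GB of data transfer

  val transactions  = 40000.0 // service transactions used this month
  val gbTransferred = 120.0   // GB of data transferred this month

  // Converting both metrics to currency makes them directly comparable.
  val transactionCost = transactions / 1000 * transactionRate
  val transferCost    = gbTransferred * dataTransferRate

  println(f"Transactions: $$${transactionCost}%.2f vs. data transfer: $$${transferCost}%.2f")
}
```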

 

Develop Stream Pipelines with Apache Flink 1.10.1 for better control over the memory configuration of Workers

A new Stream-3.0.0 run-time environment with Apache Flink 1.10.1 is now available for creating Stream pipelines. With Stream-3.0.0, the memory model of Task Managers has changed to provide more control over the memory configuration of a Stream pipeline's Workers. Include version 2.17 of the HERE Data SDK for Java & Scala in your pipeline project to start developing with this new environment, and choose Stream-3.0.0 as the run-time environment when creating a pipeline version.
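For orientation, here is a minimal sbt sketch of what such a pipeline project might declare. The HERE artifact coordinates are assumptions for illustration (consult the Data SDK 2.17 documentation for the actual dependency setup); the Flink artifact matches the Flink 1.10.1 run-time:

```scala
// build.sbt -- a sketch; the "com.here.platform" coordinates below are
// assumed placeholders, not confirmed artifact names. Check the HERE Data
// SDK for Java & Scala 2.17 docs for the real dependency setup.
ThisBuild / scalaVersion := "2.12.11"

libraryDependencies ++= Seq(
  // Hypothetical SDK dependency targeting the Stream-3.0.0 environment
  "com.here.platform" %% "pipeline-runtime-stream" % "2.17.0" % Provided,
  // Stream-3.0.0 runs Apache Flink 1.10.1, so compile against matching APIs
  "org.apache.flink" %% "flink-streaming-scala" % "1.10.1" % Provided
)
```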

Deprecated: The Stream-2.0.0 (with Apache Flink 1.7.1) run-time environment is now deprecated. The impact on existing Stream pipelines and the process of migrating to the Stream-3.0.0 run-time environment are described in the deprecation table at the bottom of this page.

 

Use Direct Kafka metrics to better monitor and debug streaming data workflows

Underlying Kafka Producer, Kafka Consumer "Fetch", and Direct Kafka connectivity metrics are now accessible via the Data Client Library for data workflows with Flink, and only when using the Direct Kafka connector type. These metrics can be used to create custom dashboards in Grafana. Underlying Kafka Consumer metrics are also available for programmatic retrieval. With these metrics, you have more information to help you monitor and debug any of your streaming data workflows using Direct Kafka. Learn more here.
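As a taste of what these metrics look like, the sketch below lists the consumer "fetch" metric group using the plain Kafka client API (KafkaConsumer.metrics()). This is not the Data Client Library's retrieval API (see its documentation for the actual calls), and the connection settings are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.collection.JavaConverters._

object KafkaFetchMetrics extends App {
  // Placeholder settings -- replace with your stream's Kafka endpoint.
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("group.id", "metrics-demo")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")

  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)

  // metrics() exposes the underlying client metrics, including the consumer
  // fetch group (fetch-rate, fetch-size-avg, records-lag-max, ...).
  consumer.metrics().asScala.foreach { case (name, metric) =>
    if (name.group == "consumer-fetch-manager-metrics")
      println(s"${name.name} = ${metric.metricValue()}")
  }
  consumer.close()
}
```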

 

Add Stream layers to multi-region configured catalogs as an additional data loss mitigation strategy

This release supports adding Stream layers to multi-region catalogs. With this addition, it is possible to include Versioned, Volatile, and Stream storage layers in multi-region catalogs. Catalogs can be configured to be multi-region upon initial catalog configuration/creation. Data stored in multi-region catalogs is replicated to a second region, mitigating data loss in the event of downtime in the primary region.

Note: Additional charges apply:

  • Storage charges double when storing data in a second region.
  • Data I/O charges increase 2.5 to 4 times, depending on the size of the objects you’re uploading: less for fewer large objects, more for many small objects. This is due to the validation HERE performs to ensure successful replication (see the sketch after this list).

    Learn more about multi-region catalogs and associated costs here.
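A back-of-the-envelope sketch of how those multipliers play out; the baseline monthly costs are invented for illustration:

```scala
// Illustrative arithmetic only -- baseline figures are invented.
object MultiRegionCost extends App {
  val singleRegionStorage = 100.0 // assumed $/month storage, single region
  val singleRegionDataIO  = 40.0  // assumed $/month Data I/O, single region

  val multiRegionStorage = singleRegionStorage * 2.0 // storage doubles
  val dataIOLow  = singleRegionDataIO * 2.5          // fewer large objects
  val dataIOHigh = singleRegionDataIO * 4.0          // many small objects

  println(f"Storage: $$${multiRegionStorage}%.2f/month")
  println(f"Data I/O: $$${dataIOLow}%.2f - $$${dataIOHigh}%.2f/month")
}
```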

 

Access additional HERE map attributes via the Location Library for fast random access

Additional attributes have been compiled into the Optimized Map for Location Library so they can be accessed much faster in random-access scenarios via the Location Library: GradeCategory and Scenic from the Advanced Navigation Attributes layer, and Elevation from the ADAS Attributes layer. Note: Elevation is currently restricted by Chinese authorities. Hence, it is not available in the HERE Map Content China catalog, nor via the Location Library using its Optimized Map.
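Conceptually, access looks like the sketch below. All names here (Vertex, AttributeMap, and the parameter names) are illustrative placeholders, not the actual Location Library API; the point is that compiled attributes support per-vertex random access instead of scanning HERE Map Content partitions:

```scala
// Hypothetical sketch -- these types stand in for the real Location Library
// abstractions, which should be taken from its documentation.
final case class Vertex(id: Long) // a road-graph vertex in the Optimized Map

trait AttributeMap[A] {
  def apply(vertex: Vertex): Option[A] // fast random access per vertex
}

object ScenicElevationExample {
  def logScenicSegments(
      vertices: Seq[Vertex],
      scenic: AttributeMap[Boolean],  // from Advanced Navigation Attributes
      elevation: AttributeMap[Double] // from ADAS Attributes (not in China)
  ): Unit =
    vertices.foreach { v =>
      for (isScenic <- scenic(v) if isScenic; elev <- elevation(v))
        println(s"Vertex ${v.id}: scenic road at ${elev}m elevation")
    }
}
```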

We will continue to iteratively compile HERE Map Content attributes into the Optimized Map for Location Library enabling fast and simplified direct access to these attributes via the Location Library.
 

Documentation with updated terminology following our new branding

To move closer to one integrated HERE platform, we have continued the previously announced rebranding efforts. You’ll now see that documentation and most supporting materials reflect the rebranding changes we started several months ago as we seek to ensure a more seamless user experience. Examples of changes include:

Former name → New name

  • Open Location Platform (OLP) → HERE Workspace and HERE Marketplace
  • OLP SDK for Java & Scala → HERE Data SDK for Java & Scala
  • OLP SDK for Python → HERE Data SDK for Python
  • OLP SDK for C++ → HERE Data SDK for C++
  • OLP SDK for TypeScript → HERE Data SDK for TypeScript

 

Changes, Additions and Known Issues

SDKs and tools

Go to the HERE platform changelog to see all detailed changes to our CLI, the Data SDKs for Python, TypeScript, C++, Java and Scala as well as to the Data Inspector Library.

 

Web & Portal

Issue: The custom run-time configuration for a Pipeline Version has a limit of 64 characters for the property name and 255 characters for the value.
Workaround: For the property name, define a shorter name in the configuration and map it to the actual, longer name within the pipeline code (see the sketch below). For the property value, you must stay within the limit.
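A minimal sketch of that mapping, with invented property names:

```scala
// Sketch: keep run-time property names under the 64-character limit and
// expand them inside the pipeline. All names here are invented examples.
object RuntimeConfigAliases {
  // short key (as defined in the pipeline version's run-time configuration)
  // mapped to the longer name the pipeline code actually wants
  private val aliases = Map(
    "geo.idx.depth" -> "com.example.pipeline.geospatial.index.partitioning.depth"
  )

  def expand(props: Map[String, String]): Map[String, String] =
    props.map { case (key, value) => (aliases.getOrElse(key, key), value) }
}

// RuntimeConfigAliases.expand(Map("geo.idx.depth" -> "12"))
//   => Map("com.example.pipeline.geospatial.index.partitioning.depth" -> "12")
```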

Issue: Pipeline Templates can't be deleted from the Portal UI.
Workaround: Use the CLI or API to delete Pipeline Templates.

Issue: In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.
Workaround: Refresh the Jobs and Operations pages to see the latest job or operation in the list.

 

Projects & Access Management

Issue: A finite number of access tokens (~250) is available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limit.

Issue: Only a finite number of permissions is allowed for each app or user in the system across all services. This number is further reduced depending on the resources included and the types of permissions granted.

Issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status, but not start and stop a pipeline.
Workaround: Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue: When updating permissions, it can take up to an hour for changes to take effect.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a Platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in both Workspace and Marketplace.

 

Data

Fixed: Issues with Metadata Storage billing were resolved: Metadata Storage tied to the use of Versioned and Index layers was not being charged or appearing on customer invoices. This fix will result in those charges now correctly appearing on invoices. Note that no storage costs have increased; associated increases in invoices are simply due to these charges now being billed correctly.

Issue: The changes released with 2.9 (RoW) and 2.10 (China) to add OrgID to Catalog HRNs, and with 2.10 (Global) to add OrgID to Schema HRNs, can impact any use case (CI/CD or other) where comparisons are performed between HRNs used by various workflow dependencies. For example, requests to compare the HRNs a pipeline is using against those a Group, User, or App has permissions for will result in errors if the comparison expects results to match the old HRN construct. With this change, Data APIs will return only the new HRN construct, which includes the OrgID (e.g. olp-here…), so a comparison between an old HRN and a new HRN will be unsuccessful.

  • Reading from and writing to catalogs using old HRNs is not broken and will continue to work until July 31, 2020.
  • Referencing old Schema HRNs is not broken and will continue to work in perpetuity.

Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including OrgID (one normalization approach is sketched below).
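During migration, one way to make such comparisons robust is to normalize both sides before comparing, as in this sketch (based on the HRN formats shown in the deprecation summary at the bottom of this page):

```scala
// Sketch: compare catalog HRNs while ignoring the OrgID field, so
//   hrn:here:data:::my-catalog        (old format, empty OrgID)
// matches
//   hrn:here:data::OrgID:my-catalog   (new format)
object HrnCompare {
  private def stripOrgId(hrn: String): String = {
    val parts = hrn.split(":", -1) // -1 keeps empty fields
    // field index 4 is the OrgID/realm position
    if (parts.length >= 6) parts.updated(4, "").mkString(":") else hrn
  }

  def sameCatalog(a: String, b: String): Boolean =
    stripOrgId(a) == stripOrgId(b)
}

// HrnCompare.sameCatalog("hrn:here:data:::my-catalog",
//                        "hrn:here:data::olp-here:my-catalog") // true
```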

Issue: Versions of the Data Client Library prior to 2.9 did not compress or decompress data correctly per the configuration set on Stream layers. We changed this behavior in 2.9 to strictly adhere to the compression setting in the Stream layer configuration, but in doing so we broke backward compatibility: data ingested and consumed via different Data Client Library versions will likely fail. The Data Client Library will throw an exception and, depending on how your application handles this exception, this could lead to an application crash or a downstream processing failure. This adverse behavior is due to inconsistent compression and decompression of the data across the different Data Client Library versions. 2.10 introduces more tolerant behavior that correctly detects whether stream data is compressed and handles it accordingly.

Workaround: If you are using compressed Stream layers and streaming messages smaller than 2 MB, use the 2.8 SDK until you have confirmed that all of your customers are using at least the 2.10 SDK, where this Data Client Library issue is resolved; then upgrade to the 2.10 version for the writing aspects of your workflow.

Issue: Searching for a schema in the Portal using the old HRN construct will return only the latest version of the schema. The Portal will not show older versions tied to the old HRN.

Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas by the old HRN construct using the OLP CLI.

Issue: Visualization of Index layer data is not yet supported.

 

Pipelines

Deprecated: The pipeline_jobs_canceled metric used within the Pipeline Status Dashboard is now deprecated. See the details in the deprecation table at the bottom of this page.

Issue: A pipeline failure or exception can sometimes take several minutes to be reported.

Issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and will show an error message about the missing catalog. Re-check the missing catalog or use a different catalog.

Issue: If several pipelines consume data from the same Stream layer and belong to the same Group (pipeline permissions are managed via a Group), then each of those pipelines will receive only a subset of the messages from the stream. This is because, by default, the pipelines share the same Application ID.
Workaround: Use the Data Client Library to configure your pipelines to consume from a single stream:

  • If your pipelines/applications use the Direct Kafka connector, you can specify a Kafka Consumer Group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/applications will be able to consume all the messages from the stream (see the sketch below).
  • If your pipelines use the HTTP connector, we recommend creating a new Group for each pipeline/application, each with its own Application ID.
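A sketch of the Direct Kafka case: the essential setting is the standard "group.id" Kafka consumer property, with one unique value per pipeline/application. How that property is passed through the Data Client Library configuration should be taken from its documentation:

```scala
import java.util.Properties

// Sketch: a unique consumer group per pipeline means each pipeline receives
// every message from the stream rather than a partition-share of it.
object ConsumerGroupPerPipeline {
  def consumerProps(pipelineName: String): Properties = {
    val props = new Properties()
    // Standard Kafka consumer property; must differ per pipeline/application.
    props.put("group.id", s"stream-consumer-$pipelineName")
    props
  }
}

// consumerProps("traffic-enricher") and consumerProps("archiver") would each
// consume the full stream independently.
```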

Issue: The Pipeline Status Dashboard in Grafana can be edited by users, but any changes will be lost when updates are published in future releases, and the dashboard will no longer be editable in a future release.
Workaround: Duplicate the dashboard or create a new dashboard.

Issue: For Stream pipeline versions running in high-availability mode, in a rare scenario, the selection of the primary Job Manager fails.
Workaround: Restart the stream pipeline.

 

Map Content

Added: Additions to existing layers in the HERE Map Content catalog:

  • Added HighOccupancyVehicleLaneCondition to Lane Attributes partition
  • Added DependentAccessType to AccessPermission

 

Location Services

Issue: Lack of usage reporting for the Location Services released in version 2.10 (Routing, Search, Transit, and Vector Tiles Service).
Workaround: Usage is being tracked at the service level. Following the 2.12 release, in which usage reporting is expected to be in place, customers may request usage summaries for usage incurred between the 2.10 and 2.12 releases.

 

Marketplace (Not available in China)

Issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, service will slow down across the board for all consumers reading from the External Service Gateway.

Workaround: Contact technical support for help.

Issue: Users do not receive stream data usage metrics when reading or writing data via Direct Kafka.
Workaround: When writing data into a Stream layer, you must use the Ingest API to receive usage metrics. When reading data, you must use the Data Client Library, configured to use the HTTP connector type, to receive usage metrics and read data from a Stream layer.

Issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you are losing usage metrics, contact HERE technical support for assistance rerunning queries and validating data.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a Platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in the Marketplace.

 

Summary of active deprecation notices across all components

No. 1: OrgID added to Catalog HRN (RoW)
Deprecation period announced: 2.9 (RoW) / 2.10 (China), November 2019
Deprecation period end: July 31, 2020

 

Deprecation Summary:

Catalog HRNs without OrgID will no longer be supported in any way after July 31, 2020.

  • Referencing catalogs and all other interactions with REST APIs using the old HRN format without OrgID, or by CatalogID, will stop working after July 31, 2020.
    • Please ensure all HRN references in your code are updated to use Catalog HRNs with OrgID before July 31, 2020, so your workflows continue to work.
  • HRN duplication to ensure backward compatibility of Catalog version dependencies resolution will no longer be supported after July 31, 2020.
  • Examples of old and new Catalog HRN formats:
    • Old (without OrgID/realm): hrn:here:data:::my-catalog
    • New (with OrgID/realm): hrn:here:data::OrgID:my-catalog

No. 2: OrgID added to Schema HRN (Global)
Deprecation period announced: 2.10, December 2019
Deprecation period end: July 31, 2020

 

Deprecation Summary:

References to pre-existing schemas created before this feature release will work with either new or old HRNs so those references are not impacted. New schemas created as of this feature release will support both HRN constructs for a period of (6) months or until June 30, 2020. After this time, only the new HRN construct will be supported.

No. 3: Spark-ds-connector replaced by SDK for Java and Scala Spark Connector
Deprecation period announced: 2.12, February 2020
Deprecation period end: August 19, 2020

 

Deprecation Summary:

The spark-ds-connector will be deprecated (6) months from this release, on August 19, 2020. Please upgrade to the latest SDK for Python version before then to get the latest SDK for Java and Scala Spark Connector.

No. 4: Batch-2.0.0 run-time environment for Pipelines
Deprecation period announced: 2.12, February 2020
Deprecation period end: August 19, 2020

 

Deprecation Summary:

The Batch-2.0.0 run-time environment for Batch pipelines is now deprecated. Existing Batch pipelines that use the Batch-2.0.0 run-time environment will continue to operate normally until August 19, 2020. During this period, the Batch-2.0.0 run-time environment will receive security patches only. To continue developing pipelines with the Batch-2.0.0 environment during this period, please use OLP SDK 2.11 or older. After August 19, 2020, we will remove the Batch-2.0.0 run-time environment, and pipelines still using it will be canceled. We recommend that you migrate your Batch pipelines to the Batch-2.1.0 run-time environment to utilize the latest functionality and improvements.

No. 5: Schema validation to be added
Deprecation period announced: 2.13, March 2020
Deprecation period end: September 30, 2020

 

Deprecation Summary:

For security reasons, the platform will start validating schema reference changes in layer configurations as of September 30, 2020. Schema validation will check if the user or application trying to make a layer configuration change indeed has at least read access to the existing schema associated with that layer (i.e. a user or application cannot reference or use a schema they do not have access to). If a non-existing or non-accessible schema is associated with any layer after this date, any attempt to update any configurations of that layer will fail. Please ensure all layers refer only to real, existing schemas, or contain no schema reference at all before September 30, 2020. It is possible to use the Config API to remove or altogether change schemas associated with layers to resolve these invalid schema/layer associations. Also, any CI/CD jobs referencing non-existing or non-accessible schemas will need to be updated by this date or they will fail.

No. 6: Customizable Volatile layer storage capacity and redundancy configurations
Deprecation period announced: 2.14, April 2020
Deprecation period end: October 30, 2020

 

Deprecation Summary:

The Volatile layer configuration option to set storage capacity as a "Package Type" will be deprecated within (6) months, by October 30, 2020. All customers should retire their existing volatile layers and create new volatile layers with the new configurations within (6) months of this feature release, i.e. by October 30, 2020.

No. 7: Stream-2.0.0 run-time environment for Pipelines
Deprecation period announced: 2.17, July 2020
Deprecation period end: February 1, 2021

 

Deprecation Summary:

The Stream-2.0.0 (with Apache Flink 1.7.1) run-time environment is now deprecated. Existing Stream pipelines that use the Stream-2.0.0 run-time environment will continue to operate normally until February 1, 2021. During this period, the Stream-2.0.0 run-time environment will receive security patches only. To continue developing pipelines with the Stream-2.0.0 environment during this period, please use Platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed, and pipelines still using it will be canceled. We recommend that you migrate your Stream pipelines to the new Stream-3.0.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Stream pipeline to the new Stream-3.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment. For more details about our general support for Apache Flink, see Stream Pipelines - Apache Flink Support FAQ.

 

No. 8: ‘pipeline_jobs_canceled’ metric in Pipeline Status Dashboard
Deprecation period announced: 2.17, July 2020
Deprecation period end: February 1, 2021

 

Deprecation Summary:

The ‘pipeline_jobs_canceled’ metric used within the Pipeline Status Dashboard is now deprecated because it was tied to the Pause functionality and caused confusion. The metric and its explanation will remain available until February 1, 2021. After that date, the metric will be removed.

 

 

Torsten Linz
