Upload your data to Object Store layers using Hadoop FS
We have added two new tutorials explaining how to publish data to Object Store layers in a distributed fashion using Hadoop FS. One shows how to do this from Spark; the other shows how to do it in standalone mode. Typically, you would use one approach or the other when writing data to an Object Store layer, for example with data that you have stored in AWS S3 or Microsoft Azure.
Changes, additions and known issues
SDKs and tools
Web and portal
Known issue: Pipeline templates can't be deleted from the platform portal UI.
Workaround: Use the CLI or API to delete pipeline templates.
Known issue: In the platform portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version when the list is open for viewing.
Workaround: Refresh the "Jobs" and "Operations" pages to see the latest job or operation in the list.
Projects and access management
Known issue: A set number of access tokens (~250) is available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limit.
Known issue: A set number of permissions is allowed for each app or user in the system across all services. This number may be lower depending on the resources included and the types of permissions granted.
Known issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There's no support for users or apps with limited permissions. For example, you can't have a role that is limited to viewing pipeline statuses, but not starting and stopping a pipeline.
Workaround: Limit the users in a pipeline group to only those who should have full control over the pipeline.
Known issue: When updating permissions, it can take up to an hour for the changes to take effect.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace, not the Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project that are intended for use in both Workspace and Marketplace.
Known issue: In support of the Object Store layer type, a newer version of the Blob API (Blob v2) is available in production. The availability of this newer Blob API version can impact existing workflows if developers use the Lookup API to get a list of all provided endpoints for a given resource but do not select the right baseUrl for the intended API and API version. Because multiple versions of the same API now exist, Lookup API responses include specific URLs per API version.
Workaround: Always select the right baseUrl from Lookup API responses based on the API and API version that you intend to work with. To support existing workflows until you can correct your API selection logic, the Lookup API will return multiple Blob API v1 baseUrls in various positions in responses for the next 6 months, starting January 2021. Please see the deprecation summary at the end of these release notes for more information.
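The selection logic in this workaround can be sketched as follows. The helper assumes the Lookup API response has already been parsed into a list of objects carrying `api`, `version`, and `baseURL` fields; the sample entries and URLs are illustrative, not real endpoints.

```python
def select_base_url(endpoints, api, version):
    """Return the baseURL for an exact (api, version) match, or None.

    `endpoints` is the parsed Lookup API response: a list of dicts,
    each assumed to carry "api", "version", and "baseURL" fields.
    Matching on both name and version avoids accidentally picking up
    a different version of the same API (e.g. Blob v1 vs. Blob v2).
    """
    for entry in endpoints:
        if entry.get("api") == api and entry.get("version") == version:
            return entry.get("baseURL")
    return None


# Illustrative response: both Blob API versions present, in arbitrary order.
endpoints = [
    {"api": "blob", "version": "v1", "baseURL": "https://blob.example/v1"},
    {"api": "blob", "version": "v2", "baseURL": "https://blob.example/v2"},
    {"api": "metadata", "version": "v1", "baseURL": "https://meta.example/v1"},
]
```

Selecting `("blob", "v2")` then yields the v2 baseUrl regardless of where it appears in the response, which is the behavior the workaround asks automated workflows to adopt.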
Known issue: The "Upload data" button in your Layer UI under "More" is hidden when the "Content encoding" field in the layer is set to "gzip".
Workaround: Files (including .zip files) can still be uploaded and downloaded as long as the "Content encoding" field is set to "Uncompressed".
Known issue: The changes released with 2.9 (RoW) and 2.10 (China) - adding OrgIDs to catalog HRNs - and with 2.10 (Global) - adding OrgIDs to schema HRNs - could impact any use case (CI/CD or other) where comparisons are made between HRNs used by various workflow dependencies. For example, a request to compare the HRNs a pipeline uses with those a group, user, or app has permissions to will result in errors if the comparison expects results to match the old HRN construct. With this change, data APIs return only the new HRN construct, which includes the OrgID (e.g., olp-here…), so a comparison between an old HRN and a new HRN will fail.
- Reading from and writing to catalogs using old HRNs will continue to work until this functionality is deprecated (see deprecation notice summary).
- Referencing old schema HRNs will continue to work indefinitely.
Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including the OrgID.
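As an illustration of why such comparisons break, the sketch below assumes the common `hrn:<partition>:<service>:<region>:<account>:<resource>` layout, in which the new construct fills the account segment with the OrgID. It shows a transitional comparison that treats an empty account segment as a wildcard; once every stored HRN uses the new construct, plain string equality is the right check. All HRN values here are hypothetical.

```python
def same_resource(hrn_a, hrn_b):
    """Compare two HRNs field by field, treating an empty field as a wildcard.

    Assumes the layout hrn:<partition>:<service>:<region>:<account>:<resource>.
    An old-construct HRN has an empty account (OrgID) segment, so plain
    string equality against a new-construct HRN fails even for the same
    catalog; skipping empty segments bridges the two constructs during
    migration.
    """
    a, b = hrn_a.split(":", 5), hrn_b.split(":", 5)
    if len(a) != 6 or len(b) != 6:
        return False  # not a recognizable HRN shape
    for field_a, field_b in zip(a, b):
        if field_a and field_b and field_a != field_b:
            return False
    return True
```

For example, the old-construct `hrn:here:data:::my-catalog` and the new-construct `hrn:here:data::olp-here-1:my-catalog` compare equal under this rule, while HRNs with different OrgIDs or resource IDs do not.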
Known issue: Searching for a schema in the platform portal using the old HRN construct returns only the latest version of the schema. The portal won't show older versions associated with the old HRN.
Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas using the old HRN construct in the CLI.
Known issue: Visualization of index-layer data isn't supported yet.
Deprecation reminder: The Batch-2.0.0 run-time environment will soon be removed, as its deprecation period has ended. Migrate your batch pipelines to the Batch-2.1.0 run-time environment to benefit from the latest functionality and improvements.
Known issue: A pipeline failure or exception can sometimes take several minutes to be reported.
Known issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and show an error message about the missing catalog. Find the missing catalog or use a different one.
Known issue: If several pipelines are consuming data from the same stream layer and belong to the same group (pipeline permissions are managed through a group), then each pipeline will only receive a subset of the messages from the stream. This is because, by default, the pipelines share the same application ID.
Workaround: Use the Data Client Library to configure your pipelines so they consume from a single stream. If your pipelines/apps use the Direct Kafka connector, you can specify a Kafka Consumer group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/apps can consume all the messages from the stream.
If your pipelines use the HTTP connector, create a new group for each pipeline/app, each with its own app ID.
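A minimal sketch of the unique-group-ID idea, assuming a plain Kafka consumer configuration dictionary: the key names follow standard Kafka consumer conventions, but the pipeline names are made up and how the configuration reaches the Direct Kafka connector is deployment-specific.

```python
def kafka_consumer_config(pipeline_id, base_config=None):
    """Build a per-pipeline Kafka consumer configuration.

    Giving each pipeline its own group.id puts it in its own consumer
    group, so it receives the full message stream instead of splitting
    the stream's partitions with sibling pipelines that share an ID.
    """
    config = dict(base_config or {})
    config["group.id"] = f"stream-consumer-{pipeline_id}"  # unique per pipeline
    return config


# Hypothetical pipelines that all read the same stream layer.
configs = [kafka_consumer_config(p) for p in ("enrich", "archive", "alerts")]
group_ids = {c["group.id"] for c in configs}
assert len(group_ids) == len(configs)  # every pipeline gets its own group
```

With shared group IDs, Kafka would instead balance the stream's partitions across all three consumers, which is exactly the subset-of-messages symptom this known issue describes.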
Marketplace (Not available in China)
Added: You can edit or cancel a previously set subscription deactivation due date by going to an active subscription's detail page and clicking "Change expiry date".
Added: You can view a list of your invited customers on your provider listing view until the invitations are accepted.
Known issue: When adding an Interactive Map layer to a Marketplace catalog and offer with a subscription option, the subscription reporting doesn't generate usage metrics.
Workaround: None. Subscription reporting will support the Interactive Map layer type in future releases.
Known issue: When adding an Object Store layer to a Marketplace catalog and offer with a subscription option, the subscription reporting does not generate usage metrics.
Workaround: None. Subscription reporting will support the Object Store layer type in future releases.
Known issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, the service slows down for everyone reading from the External Service Gateway.
Workaround: Contact HERE support for help.
Known issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you're losing usage metrics, contact HERE support to get help with rerunning queries and validating data.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace and not available for use in HERE Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project if intended only for use in the Marketplace.
Summary of active deprecation notices
This list includes only deprecation notices for APIs that are not covered by the HERE platform changelog. For APIs that are covered by the changelog, filter by 'deprecated' to list all deprecation notices.
1. OrgID added to catalog HRN (RoW)
Deprecation period end: June 30, 2021 (extended)
Deprecation summary: Catalog HRNs without OrgID will no longer be supported in any way.
2. Batch-2.0.0 run-time environment for pipelines
Deprecation period end: August 19, 2020 (past due)
Deprecation summary: The deprecation period is over and Batch-2.0.0 will be removed soon. Pipelines still using it will be canceled. Migrate your batch pipelines to the Batch-2.1.0 run-time environment to benefit from the latest functionality and improvements. For more details about migrating a batch pipeline to the new Batch-2.1.0 run-time environment, see Migrate Pipeline to new Run-time Environment.
3. Schema validation to be added
Deprecation period end: June 30, 2021
Deprecation summary: For security reasons, the platform will start validating schema reference changes in layer configurations after this deprecation period. Schema validation will check that the user or application trying to make a layer configuration change has at least read access to the existing schema associated with that layer (i.e., a user or application cannot reference or use a schema they do not have access to).
If the user or application does not have access to a schema associated with a layer after this date, attempts to update configurations of that layer will fail until the schema association or permissions are corrected. Make sure all layers refer only to real, current schemas - or have no schema reference at all - before the deprecation period ends. You can use the Config API to remove or change the schemas associated with layers to resolve invalid schema/layer associations. Also, any CI/CD jobs referencing non-existent or inaccessible schemas must be updated by this date, or they will fail.
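The audit this notice implies can be sketched as pure logic: given each layer's schema reference and the set of schema HRNs a user or app can read, list the layers whose configuration updates would start failing after the deprecation period ends. The data shapes below are illustrative and do not reflect the Config API's actual request or response format.

```python
def layers_with_invalid_schema(layers, readable_schemas):
    """Return the ids of layers whose schema reference would fail validation.

    `layers` maps layer id -> associated schema HRN (or None for no schema);
    `readable_schemas` is the set of schema HRNs the caller has at least
    read access to. Layers without a schema reference always pass.
    """
    return [
        layer_id
        for layer_id, schema in layers.items()
        if schema is not None and schema not in readable_schemas
    ]


# Hypothetical layer configurations and access rights.
layers = {"roads": "hrn:schema:a", "poi": "hrn:schema:b", "raw": None}
accessible = {"hrn:schema:a"}
```

Here only the "poi" layer would need its schema association or permissions fixed before the deadline; "raw" passes because it has no schema reference at all.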
4. Stream-2.0.0 run-time environment for pipelines
Deprecation period announced: 2.17 (July 2020)
Deprecation period end: February 1, 2021
Deprecation summary: The Stream-2.0.0 run-time environment (with Apache Flink 1.7.1) is now deprecated. Existing stream pipelines that use it will continue to operate normally until February 1, 2021; during this time, the Stream-2.0.0 run-time environment will receive security patches only. To continue developing pipelines with the Stream-2.0.0 environment during this period, use platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed and pipelines using it will be canceled. Migrate your stream pipelines to the new Stream-3.0.0 run-time environment to benefit from the latest functionality and improvements. For more details about migrating an existing stream pipeline to the new Stream-3.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment. For general support for Apache Flink, see Stream Pipelines - Apache Flink Support FAQ.
5. pipeline_jobs_canceled metric in pipeline status dashboard
Deprecation period announced: 2.17 (July 2020)
Deprecation period end: February 1, 2021
Deprecation summary: The pipeline_jobs_canceled metric used in the pipeline status dashboard is now deprecated because it was tied to the pause functionality and caused confusion. The metric and its explanation will be available until February 1, 2021. Thereafter, the metric will be removed.
6. Stream throughput configuration changes from MBps to kBps
Deprecation period announced: 2.19 (September 2020)
Deprecation period end: March 31, 2021
Deprecation summary: Support for stream layers configured in MBps is deprecated and ends on March 31, 2021. After that date, only kBps throughput configurations will be supported. This means that Data Client Library and CLI versions included in SDK 2.18 and earlier can no longer be used to create stream layers, because those versions do not support configuring stream layers in kBps.
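For migrating stored configuration values, the conversion itself is simple arithmetic. This sketch assumes decimal units (1 MBps = 1000 kBps); confirm the factor against the Data API documentation before migrating real layer configurations.

```python
def mbps_to_kbps(mbps):
    """Convert a stream-layer throughput value from MBps to kBps.

    Assumes decimal units, i.e. 1 MBps = 1000 kBps. If the platform
    uses binary units for this setting, the factor would differ.
    """
    return mbps * 1000
```

For example, a layer previously configured for 2 MBps inbound throughput would be re-created with a 2000 kBps setting under this assumption.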
7. Monitoring stability improvements
Deprecation period announced: 2.20 (October 2020)
Deprecation period end: April 30, 2021
Deprecation summary:
- The "kubernetes_namespace" metric is deprecated and will be supported until April 30, 2021. Update all Grafana dashboard queries using this metric to use the "namespace" metric instead.
- The label_values(label) function is deprecated and will be supported until April 30, 2021. Update all Grafana dashboard queries using this function to use label_values(metric, label) instead.
- The "<Realm>-master-prometheus-datasource" datasource is deprecated and will be supported until April 30, 2021. Update all Grafana dashboards using this datasource to use the Primary datasource instead.
8. Additional support to help distinguish Blob API versions in Lookup API responses
Deprecation period announced: December 2020
Deprecation period end: June 30, 2021
Deprecation summary: Because multiple versions of different Data APIs exist, automated workflows that request service endpoints from the Lookup API must be updated to select the right baseUrls for the API and API version they work with. Because some existing workflow automation has not yet been updated to do this, the Lookup API will return multiple Blob API v1 baseUrls in various positions in responses for the next 6 months, starting January 2021.
To prevent downtime, update your workflow automation during this deprecation period to select the right baseUrl from Lookup API responses based on the API and API version you intend to work with.