A new Object Store layer type is available to store objects without requiring HERE partitions
HERE Workspace now supports Object Storage as a new layer type. You can read, write, update, delete, and list objects stored in Object Store layers via any of the Data interfaces, with the exception of the Portal. Portal support, including a file system view and data inspection/visualization, will be delivered at a later time.
A Hadoop File System interface-based connector is also provided with this solution as a new component of the Data Client Library. This library enables you to access data stored in this layer via any tooling that supports HDFS. Together, the storage and the library enable you to store objects in HERE Workspace without requiring HERE partitions and reduce the custom code necessary to access your stored data from outside HERE Workspace.
Note: The rollout of this new storage type introduces a new version of the Blob API (data-blob-v2). To use Object Storage directly via the API, you must use this new version. The preexisting API (data-blob-v1) will continue to exist. See more information here.
Share catalogs between projects
This release introduces the capability to share catalogs between projects within the same organization, facilitating collaboration between teams who want to reuse resources. The capability consists of two interactions:
First, users with manage access to a catalog in a project can now make that catalog available to other projects in their organization with read and/or write permissions. They may choose to make the catalog(s) available to specific projects or all projects in their organization. They may also subsequently revoke availability of the catalog as needed.
Second, once a catalog has been made available to a project, users in that project can link to it choosing from the available permissions if more than read access has been granted. Users can also subsequently de-link the linked catalog from their project as needed.
With the addition of this feature, projects become a more flexible way to work with your resources, and we encourage platform users to create new catalogs only in the context of (that is, scoped to) a project.
Data Inspector Improvements
The Data Inspector in Workspace is available as a standalone online tool to load and inspect your locally developed partition files. You can access it through your platform portal launcher, or visit https://platform.here.com/data-inspector.
When visiting the Inspect tab of a versioned layer, you can inspect the data freshness as a heat map on higher zoom levels, just like you would on data density. To activate the freshness heat map, find the Heatmap section on the left control panel of your Inspect tab and set type to Freshness.
Changes, additions and known issues
SDKs and tools
Web and portal
Changed: The Data Inspector, previously available as a small local web server for inspecting data during local development, can now also be used as an online tool to load and inspect your locally developed partition files. You may access it through the platform portal launcher, or simply visit https://platform.here.com/data-inspector.
Changed: When visiting the Inspect tab of a versioned layer, you may now inspect the data freshness as a heat map on higher zoom levels, just like you would on data density. To activate the freshness heat map, find the Heat map section on the left control panel of your Inspect tab, and set type to Freshness.
Known issue: Pipeline templates can't be deleted from the platform portal UI.
Workaround: Use the CLI or API to delete pipeline templates.
Known issue: In the platform portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version when the list is open for viewing.
Workaround: Refresh the "Jobs" and "Operations" pages to see the latest job or operation in the list.
Projects and access management
Known issue: A set number of access tokens (~250) is available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limitation.
Known issue: A set number of permissions is allowed for each app or user in the system across all services. This limit may be lower depending on the resources included and the types of permissions granted.
Known issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There's no support for users or apps with limited permissions. For example, you can't have a role that is limited to viewing pipeline statuses, but not starting and stopping a pipeline.
Workaround: Limit the users in a pipeline group only to those who should have full control over the pipeline.
Known issue: When updating permissions, it can take up to an hour for the changes to take effect.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace, not the Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project that are intended for use in both Workspace and Marketplace.
Changed: The Data Stream API and the Data Client Library are now integrated with OAuth2. OAuth2 removes the need for Kafka data producers and consumers to restart whenever authentication tokens are refreshed, as was required with the prior version of authentication. Because such restarts can cause data duplication in streaming data workflows, this update addresses that issue.
Known issue: In support of the new Object Store layer type, a newer version of the Blob API (blob v2) is also now available in production. The availability of this new Blob API version can impact existing workflows if developers use the Lookup API to get a list of all provided endpoints for a given resource but do not select the right baseUrl based on the intended API and API version. Because multiple versions of the same API exist, Lookup API responses include specific URLs per API version.
Workaround: Always select the right baseUrl from Lookup API responses based on the API and API version that you are intending to work with. To support existing workflows until you are able to correct your API selection logic, the Lookup API will return multiple Blob API v1 baseUrls in various positions in responses for the next 6 months only, starting January 2021. Please see the deprecation summary at the end of these release notes for more information.
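As a sketch of that selection logic, the snippet below picks a baseUrl by matching both the API name and the API version instead of taking the first entry for an API. The entry field names (`api`, `version`, `baseUrl`) and the example URLs are assumptions based on the wording above; verify them against the actual Lookup API response schema.

```python
def select_base_url(entries, api, version):
    """Pick the baseUrl matching both the API name and version.

    `entries` stands in for a parsed Lookup API response: a list of
    dicts with (assumed) keys "api", "version", and "baseUrl".
    """
    for entry in entries:
        if entry.get("api") == api and entry.get("version") == version:
            return entry["baseUrl"]
    raise LookupError(f"no endpoint found for {api} {version}")

# With both Blob API versions present, ask for v2 explicitly rather
# than taking the first "blob" entry that appears in the response.
entries = [
    {"api": "blob", "version": "v1", "baseUrl": "https://example.com/blob/v1"},
    {"api": "blob", "version": "v2", "baseUrl": "https://example.com/blob/v2"},
]
print(select_base_url(entries, "blob", "v2"))
```

Matching on the version as well as the API name is what keeps such automation stable when additional API versions appear in Lookup API responses.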
Known issue: When using spark-sql to read a single file from an Object Store layer using the blobfs library, blobfs returns an empty result because it will only list directories, not files.
Workaround: When using spark-sql to read a single file from an Object Store layer using the blobfs library, include the respective directory path in the request.
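A minimal sketch of the path adjustment, assuming a hypothetical object path inside an Object Store layer: hand Spark the object's parent directory rather than the file itself, since blobfs only lists directories.

```python
import posixpath

def listing_path(object_path):
    """Return the parent directory of a single object, because blobfs
    lists directories rather than individual files."""
    return posixpath.dirname(object_path)

# Hypothetical object path; the directory below is what you would
# pass to spark-sql instead of the single file.
path = "/my-layer/sensor-data/2020/12/readings.parquet"
print(listing_path(path))  # → /my-layer/sensor-data/2020/12
```

You can then filter the resulting dataset down to the single object if only one file is needed.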
Known issue: The "Upload data" button in your Layer UI under "More" is hidden when the "Content encoding" field in the layer is set to "gzip".
Workaround: Files (including .zip files) can still be uploaded and downloaded as long as the "Content encoding" field is set to "Uncompressed".
Known issue: The changes released with 2.9 (RoW) and with 2.10 (China) - for adding OrgIDs to catalog HRNs - and with 2.10 (Global) - for adding OrgIDs to schema HRNs - could impact any use case (CI/CD or other) where comparisons are made between HRNs used by various workflow dependencies. For example, requests to compare HRNs that a pipeline is using with those to which a group, user or app has permissions will result in errors if the comparison is expecting results to match the old HRN construct. With this change, data APIs will return only the new HRN construct, which includes the OrgID, e.g. olp-here…, so a comparison between the old HRN and the new HRN will fail.
- Reading from and writing to catalogs using old HRNs will continue to work until this functionality is deprecated (see deprecation notice summary).
- Referencing old schema HRNs will continue to work indefinitely.
Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including the OrgID.
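As an illustration of that comparison fix, the sketch below upgrades an old-construct HRN by filling in its empty OrgID field before comparing. The colon-separated layout and the `olp-here` OrgID are assumptions based on the example above; check them against your realm's actual HRNs.

```python
def with_org_id(hrn, org_id):
    """Insert the OrgID into an old-construct HRN whose OrgID field is empty.

    Assumed layout: hrn:<partition>:<service>:<region>:<org-id>:<resource>
    """
    parts = hrn.split(":")
    if len(parts) >= 6 and parts[4] == "":
        parts[4] = org_id  # old construct: fill in the missing OrgID
    return ":".join(parts)

def same_catalog(hrn_a, hrn_b, org_id):
    # Compare both HRNs in the new construct, so old and new spellings
    # of the same catalog no longer produce a mismatch.
    return with_org_id(hrn_a, org_id) == with_org_id(hrn_b, org_id)

old = "hrn:here:data:::my-catalog"           # old construct, no OrgID
new = "hrn:here:data::olp-here:my-catalog"   # new construct with OrgID
print(same_catalog(old, new, "olp-here"))    # → True
```

Normalizing both sides to the new construct is safer than stripping the OrgID, since the data APIs now return only the new form.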
Known issue: Searching for a schema in the platform portal using the old HRN construct returns only the latest version of the schema. The portal won't show older versions associated with the old HRN.
Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas using the old HRN construct in the CLI.
Known issue: Visualization of index-layer data isn't supported yet.
Deprecation reminder: The batch-2.0.0 environment will soon be removed, as its deprecation period has ended. Migrate your batch pipelines to the batch-2.1.0 run-time environment to benefit from the latest functionality and improvements.
Known issue: A pipeline failure or exception can sometimes take several minutes to be reported.
Known issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and show an error message about the missing catalog. Find the missing catalog or use a different one.
Known issue: If several pipelines are consuming data from the same stream layer and belong to the same group (pipeline permissions are managed through a group), then each pipeline will only receive a subset of the messages from the stream. This is because, by default, the pipelines share the same application ID.
Workaround: Use the Data Client Library to configure your pipelines so they consume from a single stream. If your pipelines/apps use the Direct Kafka connector, you can specify a Kafka Consumer group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/apps can consume all the messages from the stream.
If your pipelines use the HTTP connector, create a new group for each pipeline/app, each with its own app ID.
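A minimal sketch of the Direct Kafka case: derive a unique `group.id` per pipeline from shared base settings. `group.id` is the standard Kafka consumer property; the server address and naming scheme below are illustrative.

```python
def consumer_config(base_config, pipeline_id):
    """Build per-pipeline Kafka consumer settings with a unique group ID,
    so each pipeline receives every message from the stream instead of
    splitting the stream with its siblings."""
    config = dict(base_config)  # copy, so the shared base stays untouched
    config["group.id"] = f"my-app-{pipeline_id}"  # unique per pipeline/app
    return config

base = {
    "bootstrap.servers": "kafka.example.com:9092",  # illustrative address
    "auto.offset.reset": "earliest",
}

configs = [consumer_config(base, pid) for pid in ("pipeline-a", "pipeline-b")]
# Distinct group IDs mean Kafka will deliver the full stream to each.
print([c["group.id"] for c in configs])
```

With identical group IDs, Kafka treats the pipelines as one consumer group and partitions the messages between them, which is exactly the known issue described above.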
Changed: Additions to existing layers in the HERE Map Content catalog:
- Added ConnectorTypeId to Port in "Places" layer
Marketplace (Not available in China)
Known issue: When adding an Object Store layer to a Marketplace catalog and offer with a subscription option, subscription reporting currently does not generate usage metrics.
Workaround: None at this time. Subscription reporting will support the Object Store layer type in the next release.
Known issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, the service slows down for everyone reading from the External Service Gateway.
Workaround: Contact HERE support for help.
Known issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you're losing usage metrics, contact HERE support to get help with rerunning queries and validating data.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace and not available for use in HERE Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project if intended only for use in the Marketplace.
Summary of active deprecation notices
This list includes only deprecation notices for APIs that are not part of the HERE platform changelog. For APIs that are covered by the changelog, you can filter by 'deprecated' to list all deprecation notices.
1. OrgID added to catalog HRN (RoW)
Deprecation period end: February 26, 2021 (extended)
Deprecation summary: Catalog HRNs without OrgID will no longer be supported in any way after February 26, 2021.
2. Batch-2.0.0 run-time environment for pipelines
Deprecation period end: August 19, 2020 (past due)
Deprecation summary: The deprecation period is over and Batch-2.0.0 will be removed soon. Pipelines still using it will be canceled. Migrate your batch pipelines to the Batch-2.1.0 run-time environment to benefit from the latest functionality and improvements. For more details about migrating a batch pipeline to the new Batch-2.1.0 run-time environment, see Migrate Pipeline to new Run-time Environment.
3. Schema validation to be added
Deprecation period end: June 30, 2021
Deprecation summary: For security reasons, the platform will start validating schema reference changes in layer configurations as of November 30, 2020. Schema validation will check whether the user or application trying to make a layer configuration change has at least read access to the existing schema associated with that layer (i.e., a user or application cannot reference or use a schema they do not have access to). If the user or application does not have access to a schema associated with a layer after this date, attempts to update that layer's configuration will fail until the schema association or permissions are corrected. Make sure all layers refer only to real, current schemas - or have no schema reference at all - before November 30, 2020. You can use the Config API to remove or change schemas associated with layers to resolve invalid schema/layer associations. Also, any CI/CD jobs referencing non-existent or inaccessible schemas must be updated by this date, or they will fail.
4. Stream-2.0.0 run-time environment for pipelines
Deprecation period announced: 2.17 (July 2020)
Deprecation period end: February 1, 2021
Deprecation summary: The Stream-2.0.0 (with Apache Flink 1.7.1) run-time environment is now deprecated. Existing stream pipelines that use the Stream-2.0.0 run-time environment will continue to operate normally until February 1, 2021. During this time, the Stream-2.0.0 run-time environment will receive security patches only. To continue developing pipelines with the Stream-2.0.0 environment during this period, use platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed and pipelines using it will be canceled. Migrate your stream pipelines to the new Stream-3.0.0 run-time environment to benefit from the latest functionality and improvements. For more details about migrating an existing stream pipeline to the new Stream-3.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment. For general support for Apache Flink, see Stream Pipelines - Apache Flink Support FAQ.
5. pipeline_jobs_canceled metric in pipeline status dashboard
Deprecation period announced: 2.17 (July 2020)
Deprecation period end: February 1, 2021
Deprecation summary: The pipeline_jobs_canceled metric used in the pipeline status dashboard is now deprecated because it was tied to the pause functionality and caused confusion. The metric and its explanation will remain available until February 1, 2021. Thereafter, the metric will be removed.
6. Stream throughput configuration changes from MBps to kBps
Deprecation period announced: 2.19 (September 2020)
Deprecation period end: March 31, 2021
Deprecation summary: Support for stream layers with configurations in MBps is deprecated and will no longer be supported after March 31, 2021. After that date, only kBps throughput configurations will be supported. This means that the Data Client Library and CLI versions included in SDK 2.18 and earlier can no longer be used to create stream layers, because these versions do not support configuring stream layers in kBps.
7. Monitoring stability improvements
Deprecation period announced: 2.20 (October 2020)
Deprecation period end: April 30, 2021
Deprecation summary:
- The "kubernetes_namespace" metric has been deprecated and will be supported until April 30, 2021. Update all Grafana dashboard queries using this metric to use the "namespace" metric.
- The label_values(label) function has been deprecated and will be supported until April 30, 2021. Update all Grafana dashboard queries using this function to use label_values(metric, label).
- The "<Realm>-master-prometheus-datasource" datasource has been deprecated and will be supported until April 30, 2021. Update all Grafana dashboards using this datasource to use the Primary datasource.
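The first two renames can be applied mechanically to dashboard query strings; the datasource change is a dashboard setting rather than a query edit. A sketch of that rewrite is below; the query strings are illustrative, and `my_metric` is a placeholder for whichever metric carries the label in your dashboard.

```python
import re

def migrate_query(query, metric_for_label="my_metric"):
    """Apply the metric and label_values renames to a dashboard query string."""
    # Rename the deprecated metric.
    query = query.replace("kubernetes_namespace", "namespace")
    # Rewrite one-argument label_values(label) to label_values(metric, label);
    # two-argument calls are already in the new form and are left untouched.
    query = re.sub(
        r"label_values\(\s*([A-Za-z_]\w*)\s*\)",
        rf"label_values({metric_for_label}, \1)",
        query,
    )
    return query

print(migrate_query("label_values(pod)"))
print(migrate_query("sum by (kubernetes_namespace) (up)"))
```

A migration script along these lines can sweep exported dashboard JSON before the April 30, 2021 cutoff.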
8. Additional support to help distinguish Blob API versions in Lookup API responses
Deprecation period announced: December 2020
Deprecation period end: June 30, 2021
Deprecation summary: Given that multiple versions of different Data APIs exist, it is important that your automated workflows requesting service endpoints from the Lookup API are updated to select the right baseUrls for the API and API version you are working with. Because some existing customer workflow automation has not yet been updated to select the right baseUrls from Lookup API responses, the Lookup API will return multiple Blob API v1 baseUrls in various positions within its responses for the next 6 months only, starting January 2021.
To prevent downtime, please update your workflow automation within this deprecation period to select the right baseUrl from Lookup API responses based on the right API and API version.