This short guide describes how to upgrade the HERE Data SDK for Python packages to newer versions. Apart from pip and conda technicalities, it summarizes changes from previous versions to highlight deprecations and changes in the Python APIs and the general behavior of the SDK.

## Updating Packages with Conda

### Caution

Always upgrade all the installed SDK packages together: a mix of different versions of the SDK packages is not supported. Conda should prevent installing a mix of versions; heed all warning and error messages you may encounter and don't force conda to install an incompatible mix of versions.

Use the `conda list` command to inspect your active conda environment, before and after the upgrade.

You can upgrade all the SDK packages using conda like this:

```shell
conda update -c conda-forge -c https://repo.platform.here.com/artifactory/api/conda/olp_analytics/analytics_sdk here-platform=2.15.0 here-geotiles=2.15.0 here-geopandas-adapter=2.15.0 here-content=2.15.0 here-inspector=2.15.0
```

If you don't have all the packages installed, remove the packages you don't need from the command above and list only those that are actually installed.

It's always preferable to upgrade all the installed packages at once, with one single command. To upgrade to the latest version, omit the version numbers.

If your working environment is described by an `environment.yml` file, amend the version numbers inside the file and then use the following command to upgrade the active environment to the new versions:

```shell
conda env update --file environment.yml -c conda-forge -c https://repo.platform.here.com/artifactory/api/conda/olp_analytics/analytics_sdk
```
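
Such an `environment.yml` file could look like the sketch below, built from the channels and packages of the commands above; the environment name is a hypothetical example and any other dependencies of your project would be listed alongside the SDK packages:

```yaml
name: here-sdk  # hypothetical environment name
channels:
  - conda-forge
  - https://repo.platform.here.com/artifactory/api/conda/olp_analytics/analytics_sdk
dependencies:
  # Keep all SDK packages pinned to the same version
  - here-platform=2.15.0
  - here-geotiles=2.15.0
  - here-geopandas-adapter=2.15.0
  - here-content=2.15.0
  - here-inspector=2.15.0
```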

## Updating Packages with Pip

### Caution

Always upgrade all the installed SDK packages together: a mix of different versions of the SDK packages is not supported. Pip should warn you if only some packages are upgraded and you end up with an incompatible mix of versions; heed all warning messages and address them.

Use the `pip list` command to inspect your active pip environment, before and after the upgrade.

You can upgrade all the SDK packages using the same command used to originally install them, specifying updated version numbers:

```shell
pip install --extra-index-url https://repo.platform.here.com/artifactory/api/pypi/analytics-pypi/simple/ here-platform==2.15.0 here-geotiles==2.15.0 here-geopandas-adapter==2.15.0 here-content==2.15.0 here-inspector==2.15.0
```

If you don't have all the packages installed, remove the packages you don't need from the command above and list only those that are actually installed.

It's always preferable to upgrade all the installed packages at once, with one single command. To upgrade to the latest version, use the `--upgrade` parameter and omit the version numbers.

If your working environment is described by a `requirements.txt` file, amend the version numbers inside the file and then use the following command to upgrade the active environment to the new versions:

```shell
pip install -r requirements.txt --extra-index-url https://repo.platform.here.com/artifactory/api/pypi/analytics-pypi/simple/
```
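
Such a `requirements.txt` file could look like the sketch below, built from the index URL and packages of the commands above; pip also accepts the `--extra-index-url` option directly inside the file, so it doesn't have to be repeated on the command line:

```text
--extra-index-url https://repo.platform.here.com/artifactory/api/pypi/analytics-pypi/simple/
# Keep all SDK packages pinned to the same version
here-platform==2.15.0
here-geotiles==2.15.0
here-geopandas-adapter==2.15.0
here-content==2.15.0
here-inspector==2.15.0
```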

## Changes from Previous Versions

This is a list of important changes and deprecations to consider when upgrading from a previous version of the HERE Data SDK for Python. As a migration/upgrade guide, it doesn't cover new features added in each release; for those details, please consult the release notes.

No significant changes required.

Minor change in behavior when encoding dataframes to JSON:

• When using the GeoPandasAdapter to write a DataFrame or GeoDataFrame to a layer as partitioned JSON, the resulting JSON representation might differ from previous SDK versions. This is due to a bugfix: the `orient` parameter, which controls how a dataframe is represented when encoded as JSON, was previously ignored and its value was effectively always `records`. Users can now control that parameter: to obtain the behavior that was previously hardcoded, pass `orient="records"` to the writing functions; you can also specify a different value if you wish. For more information, please consult the reference of the GeoPandasEncoder, which forwards the parameter to the pandas `to_json` function.
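
The effect of `orient` can be previewed with plain pandas, since the encoder forwards the parameter to `DataFrame.to_json`; this sketch uses a toy dataframe rather than an actual layer write:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

# orient="records": a JSON array with one object per row
# (the behavior that was previously hardcoded by the SDK)
print(df.to_json(orient="records"))
# [{"id":1,"name":"a"},{"id":2,"name":"b"}]

# orient="columns" (the pandas default): one object per column
print(df.to_json(orient="columns"))
# {"id":{"0":1,"1":2},"name":{"0":"a","1":"b"}}
```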

Change in behavior when writing to stream layer:

• Data that is small enough (below a configurable threshold of 1 MB) is not saved through the Blob API of a stream layer but is inlined directly in the data field of the corresponding Kafka message.

Due to the major overhaul of protobuf support in this release:

• When reading protobuf data via the GeoPandasAdapter, the returned DataFrame or GeoDataFrame may have a different format than in previous SDK versions: the columns of the dataframe may differ, as data is expanded into columns in slightly different ways. In particular, the `paths` parameter used when reading data from a layer has been removed and replaced with `record_path`, whose behavior is not exactly the same. For more information and examples, please consult the general GeoPandasAdapter guide and, for more details, the reference documentation of the GeoPandasDecoder to learn more about `record_path` and the other new parameters that fine-tune the decoding of protobuf into a dataframe.
• Enumerated fields in protobuf messages are decoded to their string values when using the GeoPandasAdapter, instead of being represented as integer values.
• Default values are returned for missing fields when using the GeoPandasAdapter. This behavior can be toggled via the `including_default_value_fields` parameter of the GeoPandasAdapter constructor.
• Field names are no longer automatically converted to camelCase when using the GeoPandasAdapter but retain the names they have in the original protobuf schema. This behavior can be toggled via the `preserving_proto_field_name` parameter of the GeoPandasAdapter constructor.
• Deprecated `here.platform.schema.schema.Schema.decode_and_parse_blob`; please use the protobuf function `MessageToDict` directly to parse a Message into a Python dictionary. It supports many options to customize the parsing. Please continue using `here.platform.schema.schema.Schema.decode_blob` to decode a blob into a Message type and `MessageToDict` for further parsing, if and when this is needed.
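
As an illustration of the `MessageToDict` options mentioned above, the sketch below uses `FileDescriptorProto`, a message type that ships with the protobuf library, only because no layer schema is available here; any compiled Message decoded via `Schema.decode_blob` works the same way:

```python
from google.protobuf import descriptor_pb2
from google.protobuf.json_format import MessageToDict

# A stand-in Message with a snake_case field (public_dependency)
msg = descriptor_pb2.FileDescriptorProto(name="demo.proto")
msg.public_dependency.append(0)

# Default: field names are converted to camelCase
print(MessageToDict(msg))
# {'name': 'demo.proto', 'publicDependency': [0]}

# Keep the names from the original protobuf schema,
# matching the new GeoPandasAdapter behavior
print(MessageToDict(msg, preserving_proto_field_name=True))
# {'name': 'demo.proto', 'public_dependency': [0]}
```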

To avoid confusion when dealing with properties of StreamPartition:

• Deprecated `here.platform.partition.StreamPartition.offset` and `offset_partition`; please use the equivalent `kafka_offset` and `kafka_partition`.

To establish a more uniform naming convention of functions:

• Deprecated `here.platform.layer.InteractiveMapLayer.list_features`; please use the equivalent `here.platform.layer.InteractiveMapLayer.iter_features`.

No significant changes required.

• Deprecated `here.platform.layer.StreamLayer.read_stream_data`; please use the equivalent `here.platform.layer.StreamLayer.read_stream`.
• Deprecated `here.platform.layer.IndexLayer.update_partitions_index`; please adapt the code to use `here.platform.layer.IndexLayer.set_partitions_metadata`.