# Configure versioned layer settings

This topic describes the configuration settings for a versioned layer. You configure these settings when you create a layer in a catalog.

## Partitioning

The partitioning scheme determines how partitions in the layer are named. Use HERE Tile partitioning for map data and use generic partitioning for other kinds of data. For more information, see Partitions.

## Content type

The content type specifies the media type that identifies the kind of data in the layer.

## Content encoding

The content encoding setting determines whether to use compression to reduce the size of data stored in the layer. To enable compression, specify gzip.

Compressing data reduces storage size, transfer I/O, and read costs. However, compression consumes extra CPU cycles on every write and read. Weigh these benefits and costs when deciding whether to enable compression for a layer.

Some formats, especially textual formats such as plain text, XML, JSON, and GeoJSON, compress very well. Other formats, such as JPEG or PNG images, are already compressed, so compressing them again with gzip does not reduce their size; often it even increases the size of the payload. For general-purpose binary formats like Protobuf, compression rates depend on the actual content and message size, so test the compression rate on real data to verify whether compression is beneficial.
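As a quick way to gauge whether compression pays off, you can gzip a sample of your real payloads and compare sizes. The following sketch is illustrative only (it is not part of the platform tooling) and contrasts a textual payload with incompressible binary data standing in for already-compressed formats:

```python
import gzip
import json
import os

# A textual payload (JSON) typically compresses well.
text_payload = json.dumps(
    [{"id": i, "name": f"partition-{i}"} for i in range(200)]
).encode("utf-8")

# Random bytes stand in for already-compressed data (JPEG, PNG):
# gzip cannot shrink them and adds its own header/trailer overhead.
binary_payload = os.urandom(4096)

for label, payload in [("json", text_payload), ("binary", binary_payload)]:
    compressed = gzip.compress(payload)
    ratio = len(compressed) / len(payload)
    print(f"{label}: {len(payload)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

Running this against representative samples of your own partitions gives a realistic estimate of the storage and transfer savings you can expect.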

Do not use compression for Parquet. Compression breaks random access to blob data, which Parquet requires to read data efficiently.

If the layer contains SDII data, note that the ingest API's `/layers/<layerID>/sdiimessagelist` endpoint does not support compression. If you enable compression for a layer containing SDII data, you must use the ingest API's generic endpoint (`/layers/<layerID>`), and your application must handle all compression and decompression.

If you are using the Data Client Library to read or write data from a compressed layer, compression and decompression are handled automatically.

If you are using the Data API with a compressed layer, you must compress data before writing it to the layer. When reading, the data you receive is gzip-compressed and you are responsible for decompressing it.
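The gzip handling itself is standard. A minimal Python sketch of the two responsibilities, with the actual HTTP calls omitted (the header shown is illustrative, not an exact Data API request):

```python
import gzip

payload = b'{"speed": 42, "heading": 180}'

# Before writing: compress the blob yourself.
body = gzip.compress(payload)
headers = {"Content-Encoding": "gzip"}  # illustrative header for the upload request

# After reading: the blob arrives gzip-compressed; decompress it yourself.
received = body  # stands in for the bytes returned by the Data API
decoded = gzip.decompress(received)
assert decoded == payload
```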

## Schema

Specifying a schema enables you to share data with others by defining how to consume the data. For more information, see Schemas.

## Digest

The digest property specifies the algorithm used by the data publisher to generate a hash for each partition in the layer. By specifying a digest algorithm for the layer, you communicate to data consumers the algorithm to use to verify the integrity of the data they retrieve from the layer.

You can set a digest algorithm when creating or updating a layer. If you set it to "undefined", you can choose an algorithm later, after the layer is created. Once you set a specific algorithm, you cannot change it.

When choosing a digest algorithm, consider the following:

• SHA-256 is recommended for applications where strong data security is required.
• MD5 and SHA-1 are acceptable when the purpose of the hash is to verify data integrity during transit.

Including a hash is optional, but if you intend to provide hashes for partitions in this layer, you should specify the algorithm you will use.
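For example, a publisher using SHA-256 hashes each partition's bytes and attaches the digest when publishing; a consumer recomputes the hash over the bytes it downloads and compares the two. An illustrative Python sketch (the variable names are hypothetical, not platform APIs):

```python
import hashlib

partition_data = b"example partition payload"

# Publisher side: compute the digest that accompanies the partition metadata.
published_digest = hashlib.sha256(partition_data).hexdigest()

# Consumer side: recompute over the downloaded bytes and verify integrity.
downloaded = partition_data  # stands in for data retrieved from the layer
assert hashlib.sha256(downloaded).hexdigest() == published_digest
```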

### Note

The HERE platform does not verify that the algorithm you specify here is the one used to generate the actual hashes, so it is up to the data publisher to ensure that the algorithm specified here is the one used in the publishing process.

### Note

Digest and CRC are two different fields. Digest is used for security to prevent human tampering. CRC is used for safety to prevent bit flips by computer hardware or network transportation. You can use both fields.

## CRC

The crc property specifies the CRC algorithm used by the data publisher to generate a checksum for each partition in the layer. When you specify a CRC algorithm for the layer, you tell data consumers which algorithm to use so they can verify the integrity of the data they retrieve from the layer.

You can set a CRC algorithm when creating or updating a layer. If you set it to "undefined", you can choose an algorithm later, after the layer is created. Once you set a specific algorithm, you cannot change it.

### Note

This CRC has the following properties:

• Padded with zeros to a fixed length of 8 characters
• Stored as a string

For example, if your calculated CRC is the uint32 value 0x1234af, the CRC actually stored for the partition is the string 001234af.

Currently only one CRC algorithm is supported:

• CRC-32C (Castagnoli)

Including a checksum is optional, but if you intend to provide checksums for partitions in this layer, you should specify the algorithm you will use.
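To illustrate the stored format, here is a bitwise CRC-32C (Castagnoli) sketch in pure Python; it is for illustration only, and in practice you would use a hardware-accelerated library (for example, the third-party google-crc32c package). The zero-padded lowercase hex string is what would be stored for the partition:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Format as the 8-character, zero-padded string stored for the partition.
checksum = crc32c(b"example partition payload")
stored = f"{checksum:08x}"
print(stored)
```

The standard CRC-32C check value confirms the algorithm: `crc32c(b"123456789")` yields 0xE3069283.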

### Warning

The HERE Workspace does not verify that the algorithm you specify here is the one used to generate the actual checksums, so it is up to the data publisher to ensure that the algorithm specified here is the one used in the publishing process.

### Note

Digest and CRC are two different fields. Digest is used for security reasons to prevent human tampering. CRC is used for safety reasons to prevent bit flips caused by computer hardware or network transportation. You can use both fields.

## Coverage

The coverage setting specifies the geographic area that this layer covers. This setting controls which areas of the world are highlighted in the layer's coverage map in the platform portal.

Specify a list of countries and regions using two-character ISO 3166-1 alpha-2 codes. Optionally, you can append a two-character country subdivision code using the ISO 3166-2 codes for country subdivisions. For example, you can specify 'DE' for Germany, 'BR' for Brazil, or 'CN-HK' for Hong Kong.
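In a layer configuration, the coverage appears as a list of these codes. The fragment below is illustrative only; the `coverage`/`adminAreas` field names are an assumption here, so check the Data API reference for the exact configuration schema:

```json
{
  "coverage": {
    "adminAreas": ["DE", "BR", "CN-HK"]
  }
}
```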