Compress Data or Use an Efficient Binary Format

In most applications, the largest volume of data passed over the network is HTTP responses to client requests. Minimizing response size reduces network load and optimizes both storage size and transfer I/O. Enabling layer compression can reduce response size considerably.

Keep in mind that once a layer is created, you can no longer update its compression attribute.

If you use the HERE Data SDK for Java & Scala to read or write data from a compressed layer in the Data API, compression and decompression are handled automatically.

Some formats, especially textual formats such as plain text, XML, JSON, and GeoJSON, compress very well. Other formats, such as JPEG or PNG images, are already compressed, so compressing them again with gzip will not reduce their size; often, it will even increase the size of the payload. For general-purpose binary formats like Protobuf, compression rates depend on the actual content and message size. Do not use layer compression with Parquet, as it breaks the random access to blob data that is needed to read Parquet efficiently; use Parquet's internal compression instead.
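The difference between compressible and already-compressed payloads can be seen with a small experiment. The sketch below (standalone Java using only `java.util.zip`, not a HERE SDK API) gzips a repetitive JSON-like string and a high-entropy byte array, which stands in for already-compressed data such as a JPEG:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class CompressionDemo {
    // Gzip a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(data);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Textual payload: repetitive JSON compresses very well.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("{\"id\":").append(i).append(",\"type\":\"node\"},");
        }
        byte[] json = sb.toString().getBytes(StandardCharsets.UTF_8);

        // High-entropy payload: behaves like already-compressed data,
        // and gzip cannot shrink it further.
        byte[] random = new byte[json.length];
        new Random(42).nextBytes(random);

        System.out.printf("JSON:   %d -> %d bytes%n", json.length, gzip(json).length);
        System.out.printf("Random: %d -> %d bytes%n", random.length, gzip(random).length);
    }
}
```

Running this shows the JSON payload shrinking to a small fraction of its original size, while the random payload comes out slightly larger than the input because of the gzip header and framing overhead.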

Data compression can reduce the volume of transmitted data and minimize transfer time and costs, but compression and decompression incur CPU overhead. Use compression only when there is a demonstrable performance gain.
