Performance Figures

Path Matching in a Streaming Environment

The following describes a performance assessment of path matching GPS traces in a Flink environment using the Location Library.

Test Data

The input data for the performance test is a stream of SDII messages, each of which contains 7 position estimates (also called GPS trace points). The traces cover all of Germany, and their distribution makes it very likely that there is at least one trace in every level 10 HERE Tile. This means that the map matcher has to load map data for every tile in Germany.

The input stream delivers around 4K messages per second (around 28K points per second). This is sufficient to ensure that the map matcher never has to wait for input.
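For orientation, the sketch below models only what the throughput figures depend on; the type and field names are hypothetical and do not reflect the actual SDII schema.

```scala
// Hypothetical, simplified view of one test message: the real SDII schema is
// much richer, but the throughput figures only depend on the trace points.
final case class PositionEstimate(latitudeDeg: Double, longitudeDeg: Double, timestampMs: Long)
final case class TraceMessage(positionEstimates: Seq[PositionEstimate])

object InputRate {
  val MessagesPerSecond = 4000                                  // ~4K messages per second
  val PointsPerMessage  = 7                                     // position estimates per message
  val PointsPerSecond   = MessagesPerSecond * PointsPerMessage  // = 28,000 points per second
}
```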

Map Matching

The carPathMatcher algorithm is part of the High-Level API. The application is essentially the Stream Processing Application example. For this performance test, the application runs in an OLP stream pipeline with one supervisor and one worker, each with the smallest possible setup of 1 CPU, 7 GB RAM, and 8 GB disk space.
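The read-match-write structure of such a job is sketched below. This is not the actual Stream Processing Application example: the OLP stream connectors, the carPathMatcher call, and the message types are all replaced by hypothetical stand-ins, so the sketch only illustrates the shape of the pipeline.

```scala
import org.apache.flink.streaming.api.scala._

object PathMatchingJobSketch {

  // Hypothetical, simplified message and result types.
  final case class TraceMessage(id: String, points: Seq[(Double, Double)])
  final case class MatchedTrace(id: String, matchedPointCount: Int)

  // Stand-in for the carPathMatcher call of the High-Level API; the real
  // matcher loads HERE map tiles and matches the trace points to the road network.
  def matchPath(msg: TraceMessage): MatchedTrace =
    MatchedTrace(msg.id, msg.points.size)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Stand-in source; the real pipeline reads ~4K SDII messages per second
    // from an OLP stream layer.
    val messages: DataStream[TraceMessage] = env.fromElements(
      TraceMessage("trace-1", Seq((52.520, 13.405), (52.521, 13.406))),
      TraceMessage("trace-2", Seq((48.137, 11.575), (48.138, 11.576)))
    )

    val matched: DataStream[MatchedTrace] = messages.map(msg => matchPath(msg))

    // Stand-in sink; the real pipeline writes the matched paths to an OLP stream layer.
    matched.print()

    env.execute("Path matching performance sketch")
  }
}
```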

Measurement

The test counts the number of GPS points processed per second. This includes reading from the input stream, path matching, and writing to the output stream. The measurement represents the steady processing speed after the initial warm-up phase, during which all map tiles are loaded.
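One way to obtain a comparable points-per-second figure in your own job is to feed a Flink meter with the number of GPS points in each processed message, for example with a rich function like the sketch below; the TraceMessage type is a hypothetical stand-in for the real input type.

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.metrics.{Meter, MeterView}

final case class TraceMessage(id: String, points: Seq[(Double, Double)])

// Pass-through function that counts the GPS points flowing through the stream.
class PointRateMeter extends RichMapFunction[TraceMessage, TraceMessage] {

  @transient private var pointsPerSecond: Meter = _

  override def open(parameters: Configuration): Unit = {
    // 60-second moving average of GPS points processed per second.
    pointsPerSecond = getRuntimeContext
      .getMetricGroup
      .meter("gpsPointsPerSecond", new MeterView(60))
  }

  override def map(msg: TraceMessage): TraceMessage = {
    pointsPerSecond.markEvent(msg.points.size.toLong)
    msg
  }
}
```

Inserted into the stream, for example with messages.map(new PointRateMeter), the resulting metric can be graphed on the pipeline's Grafana dashboard mentioned below.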

In addition, the test reports the number of HERE Credit Units (HCU) consumed by this streaming application in one month.

Note

While this HCU cost includes reading from the input stream and writing to the output stream, it does not include the cost of the input and output streaming catalogs themselves.

Test Results

The steady map matching speed after warm-up is around 9K GPS points per second.

The approximate cost to operate such a minimal pipeline for one month:

Description                   HERE Credits
OLP Compute Core              2.7
OLP Compute RAM               2.6
OLP Transfer Pipeline IO      3.5
OLP Transfer Data IO          0.1

These numbers are based on measurements on our continuously running test pipeline.
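Assuming the four items simply add up, the total for the minimal pipeline works out to roughly 8.9 HERE Credits per month:

```scala
// Monthly HERE Credits for the minimal pipeline, summed from the table above.
val computeCore      = 2.7
val computeRam       = 2.6
val transferPipeline = 3.5
val transferData     = 0.1
val totalCredits     = computeCore + computeRam + transferPipeline + transferData // = 8.9
```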

Further Considerations

The performance scales linearly with the number of CPUs applied in an OLP pipeline. However, for many practical use cases the choice of pipeline setup is driven by the memory needed for caching the map rather than by throughput requirements. The data needed for map matching the traces in Germany just fits into the 7 GB RAM provided by the minimum pipeline. As this test involves only serial processing, the time to process one message is inversely proportional to the throughput. At around 9K points per second, this amounts to roughly 110 µs per GPS point, or about 0.8 ms per message with 7 GPS points.
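The quoted per-point and per-message times follow directly from the measured throughput, as the following back-of-the-envelope calculation shows.

```scala
// Per-point and per-message processing time derived from the measured steady
// throughput, assuming purely serial processing.
val pointsPerSecond  = 9000.0  // measured steady throughput after warm-up
val pointsPerMessage = 7.0

val microsPerPoint   = 1e6 / pointsPerSecond                     // ≈ 110 µs per GPS point
val millisPerMessage = pointsPerMessage * 1e3 / pointsPerSecond  // ≈ 0.78 ms per message
```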

You can reproduce the test with your own test data using the above-mentioned Stream Processing Application, monitoring the performance on your custom Grafana dashboard.
