Path Matching in a Streaming Environment
The following describes a throughput assessment of path matching GPS traces in a Flink environment using the Location Library.
The input data for the performance test is a stream of SDII Messages, each of which contains 7 position estimates (also called GPS trace points). The traces cover all of Germany, distributed such that there is very likely at least one trace in every level 10 HERE Tile. This means that the map matcher has to load map data for every tile in Germany.
The input stream produces around 4K messages per second (around 28K points per second), which is sufficient to ensure that the processing pipeline never starves for input.
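As a quick sanity check of the stated rates, the point rate follows directly from the message rate (the variable names below are illustrative, not part of any API):

```python
# Derive the aggregate point rate from the message rate.
messages_per_second = 4_000   # input stream rate from the test setup
points_per_message = 7        # position estimates per SDII Message
points_per_second = messages_per_second * points_per_message
print(points_per_second)      # 28000, i.e. "around 28K points per second"
```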
The carPathMatcher algorithm is part of the High-Level API. The test application is essentially the Stream Processing Application example. For this performance test, the application runs in a stream pipeline with one supervisor and a single worker, both with the smallest possible setup of 1 CPU, 7 GB RAM, and 8 GB disk space.
The test counts the number of GPS points processed per second. This includes reading from the input stream, path matching, and writing to the output stream. The reported value is the steady processing speed after the initial warm-up phase, during which all the map tiles are loaded.
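A measurement of this kind can be sketched as a counter that discards the warm-up phase and then averages over the steady state. The `ThroughputMeter` class below is a hypothetical illustration, not part of the Location Library or Flink:

```python
import time

class ThroughputMeter:
    """Tracks steady-state processing speed, ignoring a warm-up phase."""

    def __init__(self, warmup_seconds=60.0):
        self.warmup_seconds = warmup_seconds
        self.start = time.monotonic()
        self.steady_start = None
        self.points = 0

    def record(self, num_points):
        """Call once per processed message with its number of GPS points."""
        now = time.monotonic()
        if now - self.start < self.warmup_seconds:
            return  # still loading map tiles; discard warm-up samples
        if self.steady_start is None:
            self.steady_start = now
        self.points += num_points

    def points_per_second(self):
        """Average steady-state throughput in GPS points per second."""
        if self.steady_start is None:
            return 0.0
        elapsed = time.monotonic() - self.steady_start
        return self.points / elapsed if elapsed > 0 else 0.0
```

In a real pipeline the same idea is typically expressed with the framework's own metrics (e.g. a Flink meter) rather than a hand-rolled class.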
In addition, the test estimates the monthly cost ($/€) of running such a streaming application. While this cost includes reading from the input stream and writing to the output stream, it does not include the cost of the input and output streaming catalogs themselves.
The steady map matching speed after warm-up is around 9K GPS points per second.
The approximate cost to operate such a minimal pipeline for one month:
|Description |$/€ |
|---|---|
|Platform Compute Core |187.2 |
|Platform Compute RAM |201.6 |
|Platform Transfer Pipeline IO |234.5 |
|Platform Transfer Data IO |1.5 |
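Summing the table gives the approximate total monthly cost (the dictionary below simply restates the table values):

```python
# Monthly cost items from the table above, in $/€.
monthly_costs = {
    "Platform Compute Core": 187.2,
    "Platform Compute RAM": 201.6,
    "Platform Transfer Pipeline IO": 234.5,
    "Platform Transfer Data IO": 1.5,
}
total = sum(monthly_costs.values())
print(round(total, 1))  # 624.8 $/€ per month
```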
These numbers are based on measurements on our continuously running test pipeline.
The Platform Transfer Data IO cost is negligible, since the map data is downloaded only once and then kept in the in-memory cache.
The performance scales linearly with the number of CPUs in a pipeline. However, for many practical use cases the choice of pipeline setup is driven by the memory needed for caching the map rather than by throughput requirements: the data needed for map matching the traces in Germany just fits into the 7 GB RAM provided by the minimum pipeline. As this test involves only serial processing, the time to process one message is the inverse of the throughput. At roughly 9K points per second, this is about 110 µs per GPS point, or about 0.8 ms per message of 7 GPS points.
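The inverse relation between throughput and per-item latency can be checked directly. Note that the per-point figure depends on the unrounded measured throughput, so the numbers below, derived from the rounded ~9K figure, are approximate:

```python
# Per-item latency as the inverse of serial throughput.
steady_points_per_second = 9_000  # rounded steady-state measurement after warm-up
points_per_message = 7
us_per_point = 1e6 / steady_points_per_second
ms_per_message = us_per_point * points_per_message / 1_000
print(f"~{us_per_point:.0f} us per point, ~{ms_per_message:.2f} ms per message")
```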
You can reproduce the test with your own test data using the above-mentioned Stream Processing Application, monitoring the performance on your custom Grafana dashboard.