Optimal Alert Locater takes a Versioned layer of items of interest as input and creates a Versioned catalog layer of optimal alerts in GeoJSON, each a specified distance from its item of interest. Items of interest can include signs, people, potholes, points of interest, or other items. The alerts can be sent to vehicles or displayed on variable message signs as notifications a configurable number of meters from the item of interest. A most probable path algorithm computes the transition probability from the angle between the roads and the difference in functional class. Two incoming alerts are calculated, one from each direction of approach to the item of interest. Note: if the alert radius extends past one or more intersections, all incoming paths must be traversed.
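The transition model itself is internal to the pipeline, but the idea can be sketched as follows. The weighting function and constants below are illustrative assumptions, not the pipeline's actual implementation: straighter continuations and smaller functional-class differences receive higher transition probabilities.

```python
import math

def transition_probability(angle_deg, fc_from, fc_to):
    """Illustrative score for continuing from one road link to another.

    angle_deg: turn angle between the two links (0 = straight ahead).
    fc_from, fc_to: functional classes (e.g. 1 = major road ... 5 = minor road).
    The straighter the continuation and the smaller the functional-class
    difference, the more probable the transition.
    """
    # Straight-ahead transitions score highest (hypothetical cosine weighting).
    angle_score = max(0.0, math.cos(math.radians(angle_deg)))
    # Staying on the same functional class scores highest.
    fc_score = 1.0 / (1.0 + abs(fc_from - fc_to))
    return angle_score * fc_score

# Going straight on the same functional class beats a sharp turn onto
# a road of a different class.
straight = transition_probability(0, 2, 2)     # 1.0
sharp_turn = transition_probability(80, 2, 4)  # ~0.058
assert straight > sharp_turn
```

On the most probable path, the traversal from the item of interest keeps following the highest-scoring incoming link in each direction until the configured alert distance is reached, which is why every incoming path must be explored when the radius spans an intersection.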
This pipeline template operates as a Batch pipeline that reads items of interest from a Versioned layer and publishes GeoJSON data to a Versioned layer of a catalog using the Hazard Detection schema.
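The Hazard Detection schema itself is not reproduced here, but a minimal GeoJSON Feature for one computed alert point might look like the sketch below. The property names are illustrative assumptions; the actual properties are defined by the schema.

```python
import json

# A minimal GeoJSON Feature for one computed alert location (illustrative;
# the real attribute set comes from the Hazard Detection schema).
alert = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [13.3777, 52.5163],  # [longitude, latitude]
    },
    "properties": {
        "itemOfInterest": "pothole",     # hypothetical property name
        "distanceToItemMeters": 500,     # hypothetical property name
    },
}

print(json.dumps(alert, indent=2))
```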
In order to deploy and run this pipeline template, you will need the Wizard Deployer. The Wizard runs interactively, asking questions about the application and expecting you to provide the answers. Assuming you have followed the Wizard's documentation and set up the needed parameters beforehand, follow these steps:
You can use your existing output layer or let the Wizard create a new catalog/layer for you. If using an existing catalog, make sure it is shared with the GROUP_ID that will be used for this deployment.
PLEASE NOTE: During deployment with the Wizard script, you will be asked to provide a bounding box of the area you wish to process by supplying four coordinates. You will also be asked to describe the size of the area by picking one of the presented options; pick the one that best approximates your area. The answer to this question helps determine the number of workers (cores) required for processing and will affect the processing time.
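If you are unsure which size option to pick, a quick back-of-the-envelope estimate of the bounding box area can help. This calculation is not part of the Wizard; it is only a rough equirectangular approximation.

```python
import math

def bbox_area_sq_km(min_lon, min_lat, max_lon, max_lat):
    """Approximate area of a lon/lat bounding box in square kilometers.

    Uses a simple equirectangular approximation: one degree of latitude
    is ~111.32 km, and one degree of longitude shrinks with the cosine
    of the latitude. Good enough for choosing a size option.
    """
    km_per_deg = 111.32
    mid_lat = math.radians((min_lat + max_lat) / 2)
    width_km = (max_lon - min_lon) * km_per_deg * math.cos(mid_lat)
    height_km = (max_lat - min_lat) * km_per_deg
    return width_km * height_km

# Roughly central Berlin (illustrative coordinates); the result is on the
# order of 100 sq km, so the smallest size option would apply.
print(round(bbox_area_sq_km(13.30, 52.48, 13.46, 52.56)))
```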
Select the Pipelines tab in the Platform Portal and verify that your pipeline was deployed and is running. Note the version of your output catalog. Once your pipeline has finished running:

1. Check the log for any exceptions or errors.
2. Verify that data was written to your output catalog; the output catalog version should have been incremented.
3. In the Platform Portal, select your output layer in the catalog, select the Inspect tab, select the "Partitions list" icon in the left-hand corner, and select a partition id. The decoded data is shown on the right-hand side of the user interface.
For a given area, the following processing times have been observed for this pipeline. When using the Wizard, you are asked to describe the size of the area to process, with one of four answers to choose from. Based on your answer, supervisor and worker units, the number of workers, and `spark.default.parallelism` are allocated programmatically.

|Area size|Observed processing time|
|---|---|
|Less than or around 100 sq km|~5 minutes|
|Less than or around 1,000 sq km (a city, large/dense metropolitan area)|~20 minutes|
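The exact allocation logic belongs to the Wizard, but the mechanism can be sketched as a simple lookup from the chosen size option to cluster settings. The option names and numbers below are illustrative assumptions, not the Wizard's actual values.

```python
# Hypothetical mapping from the chosen area-size option to cluster
# resources; the real values are chosen by the Wizard based on your answer.
ALLOCATION_BY_SIZE = {
    "up_to_100_sq_km":  {"worker_units": 2, "workers": 2, "parallelism": 8},
    "up_to_1000_sq_km": {"worker_units": 4, "workers": 4, "parallelism": 16},
}

def spark_conf_for(size_option):
    """Return illustrative Spark settings for a given size option."""
    alloc = ALLOCATION_BY_SIZE[size_option]
    return {"spark.default.parallelism": alloc["parallelism"]}

print(spark_conf_for("up_to_100_sq_km"))
```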
If you need support with this pipeline template, please contact us.