Alert-Lib

Alert-lib contains classes that provide functionality related to distance estimation and alerts.

Distance Estimation

The distance filter estimates the distance of an object from the camera.

The StaticDistanceEstimator uses an assumed width/height and the camera's properties to determine an object's distance from the camera. The estimator works with cars and pedestrians by default, but may be extended if other objects are needed for your use case. Distances for the default object sizes are reported in meters. For custom sizes, the distance unit matches the unit of the provided size.


// DISCLAIMER: These camera properties are from the camera of a 
// Pixel 1 (G-2PW4100) Android phone, so may cause inaccurate 
// distance values if the capture camera of the input 
// video has different properties.
const ls::SizeF sensorSize{6.2743998f, 4.7181997f};
const float focalLength(4.67f);
const ls::Size frameSize{640, 480};

std::vector<std::shared_ptr<ls::TrackedRecognition>> masterObjectRecognitions;

std::shared_ptr<ls::TrackedRecognition> trackedRecognitionPtr = std::make_shared<ls::TrackedRecognition>();
trackedRecognitionPtr->id = "example_id";
trackedRecognitionPtr->detectionLabel = "car";
trackedRecognitionPtr->trackedLocation = ls::BoundingBox(15, 15, 100, 100);
masterObjectRecognitions.push_back(trackedRecognitionPtr);

// Static distance estimator
ls::StaticDistanceEstimator distanceEstimator;
distanceEstimator.estimateDistance(masterObjectRecognitions, frameSize, 0, focalLength, sensorSize);

The following list describes the method parameters:

  • recognitions - List of recognitions to process.
  • frameSize - Size of the image that recognitions were derived from at 0 rotation.
  • sensorRotation - Clockwise rotation relative to the frame's natural orientation, as a multiple of 90 degrees.
  • focalLength - Focal length of the capture camera in millimeters.
  • sensorSize - Size of the physical camera sensor in millimeters.
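
For reference, distance estimation from these parameters typically follows the standard pinhole camera model. The following standalone sketch shows how the parameters above relate in that model; the assumed object width, the bounding-box width, and the formula itself are illustrative assumptions and not necessarily the SDK's internal computation.

// DISCLAIMER: This is a standalone illustration of the standard pinhole
// camera model, not the SDK's internal computation. The assumed object
// width and bounding-box width below are hypothetical example values.
#include <iostream>

int main() {
    // Camera properties (same Pixel 1 example values as above).
    const float sensorWidthMm = 6.2743998f;  // Physical sensor width in millimeters
    const float focalLengthMm = 4.67f;       // Focal length in millimeters
    const float frameWidthPx = 640.0f;       // Frame width in pixels at 0 rotation

    // Assumed real-world width of the detected object (for example, a car), in meters.
    const float assumedObjectWidthM = 1.8f;

    // Width of the object's bounding box in the frame, in pixels.
    const float boundingBoxWidthPx = 100.0f;

    // Pinhole model:
    // distance = focalLength * realWidth * frameWidth / (boundingBoxWidth * sensorWidth)
    // Units: mm * m * px / (px * mm) = m
    const float distanceM = (focalLengthMm * assumedObjectWidthM * frameWidthPx)
                            / (boundingBoxWidthPx * sensorWidthMm);

    std::cout << "Estimated distance: " << distanceM << " m" << std::endl;  // ~8.6 m
    return 0;
}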

Note

The distance value for a Recognition may already be available from the ML model.

RoI - Region of interest

A Region of Interest (RoI) defines a region of the source image to enable filtering of detections based on their location in the image. For example, when only detections in front of the vehicle are relevant, detections on the far side can be removed.

The SDK defines the following types of regions:

  • Primary: This region type typically represents the ego lane
  • Secondary: This region type typically represents a region including adjacent lanes on either side of the ego lane

The SDK can generate an RoI via the following methods:

  • makeStaticRoi: A static RoI is generated based on vertices defined by the application

    Note

    The static RoI vertices must meet the following constraints (see the sketch after this list):

    • The vertices must form a trapezoid with points ordered clockwise starting with the bottom-left point
    • The x and y of each vertex must have a value between 0.0 and 1.0, representing a percentage of the image's width and height respectively
  • makeLaneRoi: The lane RoI is generated based on the output of the Road Lanes model
    • The primary region is defined by the 2 ego lane markings
    • The secondary region is defined by the adjacent lane markings, if present
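
The following standalone sketch illustrates a set of vertices that satisfies the static RoI constraints above. The NormalizedPoint type and the image coordinate convention (origin at the top-left, y increasing downward) are assumptions for illustration only; the actual vertex type and the signature of makeStaticRoi are defined by the SDK.

// DISCLAIMER: NormalizedPoint and the image coordinate convention (origin at
// the top-left, y increasing downward) are assumptions for illustration.
// The actual vertex type and the signature of makeStaticRoi are defined by
// the SDK; check the API reference before passing these values.
#include <array>

// Hypothetical normalized 2D point.
struct NormalizedPoint {
    float x;  // 0.0 - 1.0, fraction of the image width
    float y;  // 0.0 - 1.0, fraction of the image height
};

int main() {
    // Trapezoid vertices ordered clockwise, starting with the bottom-left
    // point, all within the 0.0 - 1.0 range as required above.
    const std::array<NormalizedPoint, 4> primaryRegionVertices{{
        {0.30f, 1.00f},  // bottom-left
        {0.45f, 0.60f},  // top-left
        {0.55f, 0.60f},  // top-right
        {0.70f, 1.00f},  // bottom-right
    }};

    // These vertices would then be handed to makeStaticRoi.
    (void)primaryRegionVertices;
    return 0;
}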

Alerts

Live Sense SDK for Linux supports several types of alerts via the LSDAlertManager class.

Entry

Entry alerts signal to the application when a detection first enters the primary RoI region.

These alerts apply to both pedestrians and vehicles: the driver is alerted when a pedestrian has entered the road or when another vehicle is merging into the ego lane.

Lane departure

Note

This feature is in Beta.

A lane departure alert indicates the subject vehicle is leaving the ego lane and approaching the adjacent region.

There are two severities of lane departure alerts:

  • WARNING: The subject vehicle is likely drifting from its lane rather than intentionally changing lanes.
  • INFO: The subject vehicle has departed the lane.

Note

For lane departure alerts, the SDK assumes the following:

  • The camera is reasonably well centered within the vehicle and facing straight ahead.
    • If the camera is placed too far to one side of the vehicle, the SDK may generate false positive alerts.
  • The camera has a field of view similar to a phone's wide angle camera
    • Ultra wide lenses cause false positive alerts.

TTC - Time to Collision

Live Sense SDK for Linux can alert the user/driver based on Time to Collision (TTC). This feature helps avoid accidents and ensure driver safety. It applies to car, car-brake-light-on, pedestrian, and bicycle detections. Time to Collision is calculated as follows:

Time to Collision (T) = -1 * d / ( Δd / Δt ) when Δd < 0

  • d = distance to the object
  • Δd = change in distance to the object between processed frames
  • Δt = change in time between processed frames

ls::LSDAlertManager contains extra heuristics for alerting the user of possible hazards in their path. This includes time-to-collision with the leading vehicle and people entering the vehicle's direct path.

The following list describes the different alert severities based on the TTC value:

  • INFO - If T > 2.5 seconds
  • WARNING - If 1.8 seconds < T <= 2.5 seconds
  • ALERT - If T <= 1.8 seconds
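
For illustration only (not SDK code), the following standalone sketch evaluates the TTC formula for hypothetical distance samples and maps the result to the severity levels above.

// DISCLAIMER: This is a standalone illustration of the TTC formula and
// severity thresholds above, not SDK code. The distance and timing values
// are hypothetical.
#include <iostream>

int main() {
    // Hypothetical measurements for a leading vehicle across two processed frames.
    const float previousDistanceM = 21.0f;  // d at the previous frame, in meters
    const float currentDistanceM = 20.0f;   // d at the current frame, in meters
    const float deltaTimeS = 0.1f;          // Δt between the frames, in seconds

    const float deltaDistanceM = currentDistanceM - previousDistanceM;  // Δd = -1.0 m

    // T = -1 * d / (Δd / Δt), defined only when Δd < 0 (the object is closing in).
    if (deltaDistanceM < 0.0f) {
        const float ttcS = -1.0f * currentDistanceM / (deltaDistanceM / deltaTimeS);
        // ttcS = -1 * 20.0 / (-1.0 / 0.1) = 2.0 seconds

        if (ttcS <= 1.8f) {
            std::cout << "ALERT: TTC = " << ttcS << " s" << std::endl;
        } else if (ttcS <= 2.5f) {
            std::cout << "WARNING: TTC = " << ttcS << " s" << std::endl;  // Printed for this example
        } else {
            std::cout << "INFO: TTC = " << ttcS << " s" << std::endl;
        }
    }
    return 0;
}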

For ls::LSDAlertManager to fully function, it requires the following inputs:

  • Certain properties of the camera being used. For more information, see ls::LSDCameraProperties.
    • This only needs to be set once per camera setup. This does not include device rotation.
  • The vehicle's current speed continuously fed in via setCurrentSpeed().
  • The latest collection of ls::TrackedRecognition continuously fed in via determineAlerts().

std::vector<std::shared_ptr<ls::TrackedRecognition>> masterObjectRecognitions;

std::shared_ptr<ls::TrackedRecognition> trackedRecognitionPtr = std::make_shared<ls::TrackedRecognition>();
trackedRecognitionPtr->id = "example_id";
trackedRecognitionPtr->detectionLabel = "car";
trackedRecognitionPtr->trackedLocation = ls::BoundingBox(280, 240, 360, 300);
masterObjectRecognitions.push_back(trackedRecognitionPtr);

int frameWidth = 640;
int frameHeight = 480;
auto frameTimestamp = ls::getCurrentTimestamp();

auto managerPtr = ls::LSDAlertManager::make();
// Configure ls::LSDAlertManager.
addAlertSettings(managerPtr);

// Add callback to receive alerts
auto callbackHandle = managerPtr->addCallback([](const std::vector<ls::LSDAlert>& alerts) {
    // Alerts available via callback and as return of `ls::LSDAlertManager::determineAlerts`
});

managerPtr->determineAlerts(masterObjectRecognitions, frameWidth, frameHeight, frameTimestamp);

void addAlertSettings(const std::shared_ptr<ls::LSDAlertManager> &managerPtr) {
    ls::LSDAlertSettings alertSettings;
    alertSettings.useProximity = true;
    alertSettings.useOnEntry = true;
    alertSettings.enableInfoAlerts = true;
    //Change the speed if you want to see proximity alerts.
    managerPtr->setCurrentSpeed(0);
    managerPtr->setAlertSettings(alertSettings);

    //Camera settings
    ls::LSDCameraProperties cameraProperties;
    // DISCLAIMER: These camera properties are from the camera of a
    // Pixel 1 (G-2PW4100) Android phone, so may cause inaccurate or
    // incorrect alerts if the capture camera of the input
    // video has different properties.
    const ls::SizeF sensorSize{6.2743998f, 4.7181997f};
    cameraProperties.setFocalLength(4.67f);
    cameraProperties.setSensorSize(sensorSize);
    managerPtr->setCameraProperties(cameraProperties);
}
