
How to Render a Map of San Diego's Smart City Streetlights with HERE XYZ, Python and Tangram

By Jayson DeLancey | 14 May 2019

It can be hard to get your head around how immense an IoT network deployment is without a map, especially for Smart Cities.  Using HERE XYZ to visualize infrastructure at a city level makes a world of difference.  Let's look at how to do that, first fetching and scrubbing data with Python and then using HERE XYZ with Tangram and a bit of JavaScript to render a web map like the one shown.

This sample project and others like it can be completed entirely using the freemium account if you sign up and start experimenting for yourself.

CityIQ IoT Platform

The City of San Diego deployed the world’s largest smart city platform where thousands of streetlights around the city have been equipped with IoT sensors. These sensors collect metadata that can be fetched from a set of public access web services providing traffic, pedestrian flow, parking, and environmental data.

An asset is a physical thing and in the context of CityIQ refers to streetlights. Each asset has sensor nodes, including a camera to capture video and audio along with environmental sensors to capture things like temperature, humidity, and pressure. To maintain privacy, the assets also include an edge device with compute capabilities to process these node feeds with computer vision and upload metadata about what was captured to the cloud. It is this metadata to which the city provides free public access. You can learn more from the San Diego Sustainability website.

There are two initial steps to accessing this valuable asset data:

  1. Get an OAuth token
  2. Get the Asset metadata

This is a good lesson that some data sources are not ready for geospatial analysis out of the box, so a tool like Python is helpful for cleaning the data.

Get an OAuth token

Here’s an example function in Python for authenticating and retrieving the token.

import json
import base64
import requests

def get_client_token(client, secret):
    uri = 'https://auth.aa.cityiq.io/oauth/token'
    # b64encode works on bytes, so encode the credentials first
    credentials = base64.b64encode((client + ':' + secret).encode()).decode()
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cache-Control': 'no-cache',
        'Authorization': 'Basic ' + credentials
    }
    params = {
        'grant_type': 'client_credentials'
    }

    response = requests.post(uri, headers=headers, params=params)
    return json.loads(response.text)['access_token']

def main():
    token = get_client_token('PublicAccess', 'uVeeMuiue4k=')
    print(token)

main()

Get the Asset metadata

So what do you do with the token? You use it in subsequent requests such as fetching the locations of assets. Here’s a bit more Python to demonstrate the second step.

def get_assets(token):
    uri = 'https://sandiego.cityiq.io/api/v2/metadata/assets/search'
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Predix-Zone-Id': 'SD-IE-TRAFFIC',
        'Authorization': 'Bearer ' + token
    }

    # Bounding box for San Diego camera locations because these are the
    # nodes that have traffic, parking, and pedestrian data
    params = {
        'bbox': '33.077762:-117.663817,32.559574:-116.584410',
        'page': 0,
        'size': 200,
        'q': 'assetType:CAMERA'
    }
    response = requests.get(uri, headers=headers, params=params)
    return json.loads(response.text)

def main():
    token = get_client_token('PublicAccess', 'uVeeMuiue4k=')
    assets = get_assets(token)
    print(json.dumps(assets))

main()

The first thing to note is that the response only includes 200 assets out of more than 11 thousand. You’ll need to make multiple calls to fetch the entire list of assets or reduce the bounding box of the search area (a pagination sketch follows the sample record below). The second thing is the content itself, which is an array of dictionaries that look like this:

        {
            "assetType": "CAMERA",
            "mediaType": "IMAGE",
            "coordinates": "32.71573892:-117.133679",
            "parentAssetUid": "0b0e643f-473d-482b-927e-b1e7e7a6ec2c",
            "eventTypes": [
                "PKOUT",
                "PKIN"
            ],
            "assetUid": "049e6af3-6865-4c80-a0a3-46218c859fde"
        },

The event types indicate the types of data available from this particular node, in this case PKOUT and PKIN for parking out and parking in, respectively, at a particular location. There are also traffic (TFEVT) and pedestrian (PEDEVT) events that can be interesting.
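Before moving on, it’s worth dealing with that 200-asset page limit. Here’s a minimal sketch of paging through the search results, reusing the headers and params from get_assets above; the 'content' key I read the records from is an assumption, so confirm it against the actual response body:

def get_all_assets(token, page_size=200):
    """Collect every camera asset by paging through the search endpoint."""
    uri = 'https://sandiego.cityiq.io/api/v2/metadata/assets/search'
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Predix-Zone-Id': 'SD-IE-TRAFFIC',
        'Authorization': 'Bearer ' + token
    }

    assets = []
    page = 0
    while True:
        params = {
            'bbox': '33.077762:-117.663817,32.559574:-116.584410',
            'page': page,
            'size': page_size,
            'q': 'assetType:CAMERA'
        }
        response = requests.get(uri, headers=headers, params=params)
        # Assumption: the asset records come back under a 'content' key
        records = response.json().get('content', [])
        assets.extend(records)
        if len(records) < page_size:   # last (short) page reached
            break
        page += 1
    return assets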

Now that we have access to the assets we can do a bit more cleanup.

GeoJSON and Reverse Geocoding

One of the reasons for choosing Python in this exercise so far is the ease of editing and transforming data. To display this data on a map, it’s easier to transform it into something like GeoJSON, a standardized specification for representing geospatial objects in JSON. It would look something like this:

{
    "geometry": {
        "type": "Point",
        "coordinates": [
            -117.2642206,
            32.81047216,
            0
        ]
    },
    "type": "Feature",
    "properties": {
        "assetType": "CityIQ Camera Node",
        "street": "La Jolla Hermosa Ave",
        "coords": "32.81047216:-117.2642206",
        "assetUid": "000b2365-5309-422a-9be6-1b7127ca18db",
    }
},

As you can see from this example, an object in GeoJSON has geometry associated with it, in this case a point (longitude comes first). It also has a set of properties, which is just a dictionary of key-value pairs.

This was a small exercise to take the JSON response from the CityIQ services above and split the location into its corresponding latitude and longitude parts. I also did one additional bit of data enrichment with an operation called reverse geocoding. Taking the latitude and longitude, I used the HERE Reverse Geocoder to identify the name of the street. The intention is to be able to create a tag in HERE XYZ so that streetlights can be filtered by street name when queried with the XYZ Hub API (i.e. show only streetlights on La Jolla Hermosa Ave).

Here’s an example:

import geojson

# Replace these with your own HERE credentials
APP_ID_HERE = 'your-app-id'
APP_CODE_HERE = 'your-app-code'

def reverse_geocode(lat, lon, app_id_here, app_code_here):
    uri = 'https://reverse.geocoder.api.here.com/6.2/reversegeocode.json'
    params = {
        'app_id': app_id_here,
        'app_code': app_code_here,
        'prox': ','.join([lat, lon]),
        'mode': 'retrieveAddresses',
        'maxresults': 10,
    }

    response = requests.get(uri, params=params)
    return json.loads(response.text)

def get_street_name(lat, lon):
    results = reverse_geocode(lat, lon, APP_ID_HERE, APP_CODE_HERE)
    for result in results['Response']['View'][0]['Result']:
        if 'Street' in result['Location']['Address']:
            return result['Location']['Address']['Street']

    print("No address found for %s,%s" % (lat, lon))
    return None

def enrich_feature(lat, lon, asset_uid):
    # GeoJSON geometry is ordered (longitude, latitude)
    point = geojson.Point((float(lon), float(lat)))

    props = {
        'coords': lat + ':' + lon,
        'street': get_street_name(lat, lon),
        'assetUid': asset_uid,
        'assetType': 'CityIQ Camera Node',
    }

    return geojson.Feature(geometry=point, properties=props)

# Exercise left for the reader: loop over the assets to build a list of
# enriched features, then write them to disk with geojson.dump();
# see the sketch below.

An additional step of aggregating all those features and calling geojson.dump() to write them to a file gives me my data set as a GeoJSON file.
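Roughly, that aggregation might look like the following sketch; it assumes the assets list fetched from the earlier search and splits the lat:lon coordinates string seen in the sample record:

# Sketch of the aggregation step; 'assets' is the list of asset records
# fetched from the CityIQ search earlier.
features = []
for asset in assets:
    lat, lon = asset['coordinates'].split(':')   # "lat:lon" string
    features.append(enrich_feature(lat, lon, asset['assetUid']))

with open('assets.geojson', 'w') as output:
    geojson.dump(geojson.FeatureCollection(features), output)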


HERE XYZ

As datasets get larger it can be challenging to find a way to render them with client-side JavaScript. There are two tools I used for this project: Tangram and HERE XYZ.

CLI

The HERE XYZ CLI is easy to install and get started with.  There’s a tutorial on Using the HERE XYZ CLI which explains how it works, so I won’t go into more detail here.

For this project, that boils down to creating a space for the streetlight data and then uploading it while tagging the name of the street.

That looks something like this:

here xyz create -t 'san-diego-streetlights'
here xyz upload xyz-space-id -f assets.geojson -t street

That's it: the geospatial data is now safely stored in the cloud and accessible in Studio, from the CLI, or through the underlying APIs, which is what we'll use in the next section.
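Earlier I mentioned filtering streetlights by street name through the XYZ Hub API. As a quick sanity check that the tags made it into the space, you can query it directly. Here's a hedged sketch in Python; the search endpoint and the tags/access_token parameters reflect my reading of the XYZ Hub API, and the exact tag value the CLI stored is something to confirm (for example in XYZ Studio), so treat the names below as placeholders:

import requests

# Substitute your own values; these are placeholders.
XYZ_SPACE_ID = 'xyz-space-id'
XYZ_TOKEN = 'xyz-token'

def features_by_tag(tag):
    """Fetch features from the XYZ space that carry a given tag."""
    uri = 'https://xyz.api.here.com/hub/spaces/%s/search' % XYZ_SPACE_ID
    params = {
        'access_token': XYZ_TOKEN,
        'tags': tag,
    }
    response = requests.get(uri, params=params)
    # The search response is a FeatureCollection
    return response.json().get('features', [])

# e.g. only the streetlights tagged for one street; check how the CLI
# normalized the street name and use that exact tag value here
print(len(features_by_tag('la_jolla_hermosa_ave')))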


Tangram

Tangram is a real-time WebGL map rendering engine for vector data. Instead of splitting geospatial data up into raster images (like PNGs) for each region, a vector tile gives far more flexibility to customize the style and presentation. In combination with Leaflet you can create the interactive experience many expect from web maps today.

How do you get vector tile data? That’s where HERE XYZ is valuable. With the free tier you can upload GeoJSON data and fetch vector tile data with the API from what is called a space.

To get started quickly with Tangram, our friend Dylan Babbs wrote a helpful Node.js package you can run with npx.  Try running:

npx tangram-make mymap xyz-space-id xyz-token

This bootstraps you with a skeleton Tangram web map.  You should see the following files in the project directory.

mymap
├── index.css
├── index.html
├── index.js
└── scene.yaml

If you used a valid space id and token, simply firing up a web server should let you view a web map.  A simple way to do this is with python -m SimpleHTTPServer (or python3 -m http.server) if you have Python installed, or use the Node.js http-server package.  The initial map is centered on Seattle, but you can customize that in index.js.


const map = L.map('map', {
   center: [32.714, -117.170],   // Try San Diego
   zoom: 7.5,
   layers: [tangram],
   zoomControl: false
});

What's next? A browser-based tool called Tangram Play can read the scene.yaml from this folder.  This is a useful resource because you get immediate feedback as you learn how to customize the Tangram rendering: make changes to scene.yaml and view the resulting map.

Tangram Play

With some color changes and editing you can end up with a map that looks like this:

Smart Streetlights of San Diego

Take some time playing with this project, and if you want to recreate it, the source code is available on GitHub in the HERE XYZ Showcase.