Seeing the signs (and locating them) with Google Street View and deep learning

Google API and machine learning combine to help councils improve street signage datasets, in RMIT proof of concept

Street signs are everywhere, but the local government authorities that manage them do not always know precisely where they are.

Councils and governments keep datasets of all signs in an area – a record of location data is mandatory – but as roads are redeveloped these datasets become increasingly incomplete, and errors by staff conducting field surveys often make them inaccurate.

The practice of sending staff out on the road to update signage datasets is also expensive and comes with safety risks, so councils tend to avoid it.

“Road transport spatial databases are therefore regularly underrepresented by street signage information,” explains Andrew Campbell from RMIT’s geospatial science school.

What councils need is a cheap and easy way to update their signage datasets that doesn’t require sending staff out on the road – a fix Campbell and his RMIT colleagues have created using Google’s Street View API and machine learning models.

“Councils have requirements to monitor this infrastructure but currently no cheap or efficient way to do so. Recent advancements in the form of Google’s comprehensive and high-resolution street view imagery database, and machine learning pre-trained object detection models, have provided technological countermeasures,” Campbell said.

“By using free and open source tools, we’ve now developed a fully automated system for doing that job, and doing it more accurately,” he added.

Sign of the times

To build their tool, Campbell with fellow researchers Alan Both and Chayn Sun first put together a training dataset of street sign imagery. They took a relatively complete street sign dataset – provided by the City of Greater Geelong – and filtered it for only Stop and Give Way signs.

The locations of signs were found on Street View, and the largest face-on view of each was extracted. The final training dataset consisted of 500 Give Way signs and 500 Stop signs, which were annotated using the RectLabel software. A deep learning model for detecting Stop and Give Way signs was then trained and tested using the open-source TensorFlow framework.
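The paper does not publish its extraction code, but the "face-on view" step can be sketched with Google's public Street View Static API: given a camera point and a known sign location, compute the bearing from camera to sign and request an image aimed along that heading. The coordinates, image size, and field of view below are illustrative assumptions, not the researchers' actual parameters.

```python
import math

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def streetview_request(camera, sign, api_key, size="640x640", fov=60):
    """Build a Street View Static API URL aimed from the camera point at the sign.

    `camera` and `sign` are (lat, lon) tuples; the heading parameter turns the
    panorama toward the sign so the detector sees it as face-on as possible.
    """
    heading = bearing(camera[0], camera[1], sign[0], sign[1])
    params = {
        "size": size,
        "location": f"{camera[0]},{camera[1]}",
        "heading": f"{heading:.1f}",
        "fov": str(fov),
        "key": api_key,
    }
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"{STREETVIEW_URL}?{query}"

# Example: a hypothetical sign due east of the camera yields a 90-degree heading.
url = streetview_request((-37.90, 144.58), (-37.90, 144.59), api_key="YOUR_KEY")
```

Fetching each URL (e.g. with `requests.get`) would then yield the candidate images fed to the detection model.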

The researchers then applied the model to two areas, the metropolitan Melbourne localities of Melton and Melton South. The areas were selected because Street View imagery had been captured there since 2014 – “meaning that Google’s modern fleet captured the imagery and the resolution would be appropriate”, the researchers explained – and because the area’s signage dataset was “noted as being considerably incomplete”.

The results of the recognition and classification model, published in the journal Computers, Environment and Urban Systems, show the system detected signs with near 96 per cent accuracy, and identified their type with near 98 per cent accuracy.

The tricky part came in determining the precise location of the signs – mapping their location in a 2D image to their true physical location. Images featuring a sign were ‘photogrammetrically processed’.

“By calculating the ratio between the street sign classes in physical space, and the image space, and incorporating Google's street view camera lens specifications, the distance of the sign from the vehicle can be calculated,” the researchers said. This in turn allowed them to generate a longitude and latitude for each sign.
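The ratio-based calculation the researchers describe is, in essence, the pinhole-camera relation: distance = (real sign width × focal length in pixels) / sign width in pixels, with the focal length recovered from the camera's field of view. The sketch below illustrates the idea; the sign dimensions, image size, and field-of-view values are assumed for illustration and are not taken from the paper.

```python
import math

def focal_length_px(image_width_px, fov_deg):
    """Approximate focal length in pixels from the horizontal field of view."""
    return (image_width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def distance_to_sign(real_width_m, pixel_width, image_width_px, fov_deg):
    """Pinhole-camera distance estimate: d = W_real * f_px / W_px."""
    return real_width_m * focal_length_px(image_width_px, fov_deg) / pixel_width

def offset_position(lat, lon, distance_m, heading_deg):
    """Project the camera position out by distance_m along heading_deg.

    Uses a flat-earth approximation (~111,320 m per degree of latitude),
    which is fine at street-sign scales of a few metres.
    """
    dlat = distance_m * math.cos(math.radians(heading_deg)) / 111_320.0
    dlon = (distance_m * math.sin(math.radians(heading_deg))
            / (111_320.0 * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# Illustration: a sign assumed to be 0.9 m wide that spans 60 px in a
# 640 px-wide image taken with a 60-degree field of view.
d = distance_to_sign(real_width_m=0.9, pixel_width=60,
                     image_width_px=640, fov_deg=60)
sign_lat, sign_lon = offset_position(-37.90, 144.58, d, heading_deg=45.0)
```

Combining the estimated distance with the camera's geotagged position and heading gives the longitude and latitude of the sign itself, as the researchers describe.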

The work “uncovered a variety of gross errors” with location data in the city’s signage dataset, with signs “sometimes up to 10 metres off” where they were thought to be.

Campbell said the proof-of-concept model could fairly easily be trained to identify many other road signs and street furniture, and was easily scalable for use by local governments and traffic authorities.

“Our system, once set up, can be used by any spatial analyst – you just tell the system which area you want to monitor and it looks after it for you,” he said.

The where and now

Local authorities are increasingly adopting technology to help them track and locate their assets. Councils in Australia have for some time been putting RFID tags in bins, with their location recorded each time they are collected by a garbage truck. Library books are also commonly microchipped.

GPS devices have been fitted to garbage trucks and street sweeping machines in Sydney and elsewhere to track their location.

“Local government authorities are viewing inexpensive location technology as an effective way to manage low-cost high-frequency asset types,” said project co-lead Sun.

Imagery and camera footage were also increasingly being collected and stored by councils, Sun added.

“This imagery is critical for local governments in monitoring and managing assets and with the huge amount of geospatial applications flourishing, this information will only become more valuable,” she said.

“Where footage is already being gathered, our research can provide councils with an economical tool to drive insights and data from this existing resource. Ours is one of several early applications for this to meet a specific industry need but a whole lot more will emerge in coming years,” Sun said.

