Traffic signal detection and classification in street views using an attention model

Abstract

Detecting small objects is a challenging task. We focus on a special case: the detection and classification of traffic signals in street views. We present a novel framework that utilizes a visual attention model to make detection more efficient, without loss of accuracy, and which generalizes. The attention model is designed to generate a small set of candidate regions at a suitable scale so that small targets can be better located and classified. In order to evaluate our method in the context of traffic signal detection, we have built a traffic light benchmark with over 15,000 traffic light instances, based on Tencent street view panoramas. We have tested our method both on the dataset we have built and the Tsinghua–Tencent 100K (TT100K) traffic sign benchmark. Experiments show that our method has superior detection performance and is quicker than the general Faster R-CNN object detection framework on both datasets. It is competitive with state-of-the-art specialist traffic sign detectors on TT100K, but is an order of magnitude faster. To show generality, we tested it on the LISA dataset without tuning, and obtained an average precision in excess of 90%.

Citation

bibtex

Dataset

You can download the training set here.

The JSON file is a list of annotations for the image files in the dataset. The structure of each entry is shown below; each key is followed by an explanation of its meaning.

	Path: image path
	Objects: list of annotations
		Category: an integer 1-6 for the six classes described in our paper
		BBox: bounding box in pixel coordinates

The category integers 1-6 in the JSON file represent, in order, the six categories: Red, Green, Red left turn, Green forward, Red pedestrian, Other.
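The structure described above can be read with a few lines of Python. This is a minimal sketch: the exact key casing (`Path`, `Objects`, `Category`, `BBox`) follows the description above, but the precise bounding-box format (e.g. corner order) is an assumption, so adjust to the actual file as needed.

```python
import json

# Category integers 1-6 mapped to the class names listed above.
CATEGORY_NAMES = {
    1: "Red",
    2: "Green",
    3: "Red left turn",
    4: "Green forward",
    5: "Red pedestrian",
    6: "Other",
}


def load_annotations(json_path):
    """Load the annotation list from the dataset's JSON file."""
    with open(json_path) as f:
        return json.load(f)


def iter_objects(annotations):
    """Yield (image_path, category_name, bbox) for every annotated object.

    Assumes each entry has a "Path" string and an "Objects" list whose
    items carry an integer "Category" and a pixel-based "BBox".
    """
    for entry in annotations:
        for obj in entry["Objects"]:
            yield entry["Path"], CATEGORY_NAMES[obj["Category"]], obj["BBox"]
```

For example, an entry such as `{"Path": "images/000001.jpg", "Objects": [{"Category": 1, "BBox": [100, 200, 130, 260]}]}` (a hypothetical file name and box) would yield `("images/000001.jpg", "Red", [100, 200, 130, 260])`.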

Contact

Jiaming Lu (loyaveforever@gmail.com)