Abstract
This paper presents a method for the detection and recognition of traffic
signs based on information extracted from an event camera. The solution
uses the FireNet deep convolutional neural network to reconstruct events
into greyscale frames. Two YOLOv4 network models were trained: one on
greyscale images and the other on colour images. The best result was
achieved by the model trained on greyscale images, which reached an
efficiency of 87.03%.