RoadData Systems is a video recognition platform for automated detection and classification of road objects from video streams and files. Here are the main concepts of the RoadData Systems architecture.
The platform has two main parts: video processing units called "Servers", which do all the recognition work, and a "Manager" that controls the processing units.
RD Manager
A user accesses the manager via a browser. Through the manager, users view the status of the whole system, perform all configuration, and access reports. One system may have multiple video processing units distributed over different locations, but only one manager.
RD Server
The servers are the video processing units: separate compute units that receive video streams, process them, and return extracted structured data. The servers can also be called "workers". They don't hold any pre-loaded configurations or processing modules; everything is received from the manager at startup.
Neural Model
The neural model is the trained neural network that performs the actual detection and classification of road objects. Like the rest of the processing configuration, it is delivered to the servers by the manager when they start.
Source
RoadData allows you to process both ‘live’ video streams and video files. We call them ‘Sources’. You can define and process video from multiple sources in parallel.
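As a sketch of the Source concept, the following hypothetical Python snippet (the `Source` class and its fields are illustrative, not part of the RoadData API) shows how a live stream and a video file can both be described as sources and handled uniformly:

```python
from dataclasses import dataclass

# Hypothetical sketch: a Source is either a live video stream or a video file.
@dataclass
class Source:
    name: str
    uri: str  # e.g. "rtsp://..." for live streams, or a file path for video files

    @property
    def is_live(self) -> bool:
        # Live streams are typically addressed by a streaming-protocol URI.
        return self.uri.startswith(("rtsp://", "rtmp://", "http://", "https://"))

# Multiple sources can be defined and processed in parallel.
sources = [
    Source("intersection-cam", "rtsp://10.0.0.5/stream1"),
    Source("archive-clip", "/data/2024-05-01.mp4"),
]
```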
Pipeline configuration
A pipeline is a set of instructions that tells the system how to process the video: the detection area, counting lines, sensitivity of the neural net, filters, and so on. Usually, you create one pipeline configuration per scene.
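To make the idea concrete, here is a minimal, purely hypothetical sketch of what a pipeline configuration for one scene might contain (all field names and values are illustrative assumptions, not the actual RoadData schema):

```python
# Hypothetical pipeline configuration for a single scene.
pipeline = {
    # Polygon (in pixels) inside which objects are detected.
    "detection_area": [(0, 200), (1280, 200), (1280, 720), (0, 720)],
    # Virtual lines used to register line_cross events.
    "counting_lines": {
        "entry": ((100, 400), (1180, 400)),
    },
    # Minimum confidence for the neural net to report an object.
    "sensitivity": 0.6,
    # Object classes to keep; everything else is filtered out.
    "filters": ["car", "truck", "bus"],
}
```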
Output
Output is the destination of the recognized data. By default, the system saves data to its SystemDB, but it can also send data to other destinations at the same time, such as a third-party SQL database or HTTP endpoints.
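The fan-out behaviour described above can be sketched as follows; this is a minimal illustration assuming each destination exposes a simple write interface, with `MemoryOutput` standing in for SystemDB, SQL, or HTTP destinations (none of these names come from the RoadData API):

```python
# Hypothetical sketch: every recognized record is delivered to all outputs.
class MemoryOutput:
    """Stands in for SystemDB / SQL / HTTP destinations in this sketch."""
    def __init__(self):
        self.rows = []

    def write(self, record):
        self.rows.append(record)

def dispatch(record, outputs):
    # The same record goes to every configured destination at the same time.
    for out in outputs:
        out.write(record)

system_db, sql_db = MemoryOutput(), MemoryOutput()
dispatch({"object": "car", "event": "line_cross"}, [system_db, sql_db])
```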
Event
The Event is a key concept of the RoadData object recognition platform. The system does not save the detected objects themselves; instead, it registers and saves the Events associated with those objects. For example, when an object is detected in a frame for the first time, a newly_detected event is registered. The last frame that contains an object before it disappears is saved as a last_shot event. Another option is the best_shot event, registered when an object is detected with the highest confidence. The most common example is the line_cross event, registered when an object crosses a virtual line.
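The relationship between a tracked object and its events can be sketched in a few lines; this hypothetical function (its name and the track format are assumptions for illustration) derives newly_detected, best_shot, and last_shot events from the per-frame detections of one object:

```python
# Hypothetical sketch: deriving events from one object's per-frame detections.
def derive_events(track):
    """track: list of (frame_no, confidence) pairs for one object, in frame order."""
    if not track:
        return []
    events = [("newly_detected", track[0][0])]   # first frame the object appears in
    best = max(track, key=lambda fc: fc[1])      # frame with the highest confidence
    events.append(("best_shot", best[0]))
    events.append(("last_shot", track[-1][0]))   # last frame before it disappears
    return events

# Example: an object seen in frames 10..13 with varying confidence.
derive_events([(10, 0.55), (11, 0.91), (12, 0.80), (13, 0.60)])
# -> [("newly_detected", 10), ("best_shot", 11), ("last_shot", 13)]
```

line_cross events are omitted here since they also depend on the counting-line geometry, not just the track.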