Sensors have proliferated rapidly and are now so common that the mobile phones we use every day contain several different types of them. Some of these sensors detect simple changes in pressure, temperature, acceleration, and gravity, while more advanced sensors include GPS, RADAR, LIDAR, and image sensors.

Sensor fusion refers to combining data from several different sensors to produce information that no single sensor could provide on its own. That information can then be further processed and analyzed and, depending on the final application, used to make decisions. Sensor fusion falls into two categories:
Real-time sensor fusion - sensor data is extracted and fused, and decisions are made in real time based on the resulting information.
Offline sensor fusion - sensor data is extracted and fused, but decisions are made at some later point in time.

For embedded vision systems and sensor fusion applications, most use cases call for real-time sensor fusion. Embedded vision applications are growing rapidly across a wide range of fields, from robotics and advanced driver assistance systems (ADAS) to augmented reality. Fusing the information provided by the embedded vision system with information from one or more other sensors helps the system better understand its environment and improves the performance of the final application.

Many embedded vision applications use only one image sensor to monitor a single direction, such as the area in front of a car. With one image sensor we can detect, classify, and track objects. However, because only one sensor is used, the distance to an object in the image cannot be measured. In other words, we can detect and track another vehicle or a pedestrian, but without another sensor we cannot determine whether there is a risk of collision. In this case we need an additional sensor, such as RADAR or LIDAR, that provides the distance to the detected object. Because this approach fuses information from several different types of sensors, it is called heterogeneous sensor fusion.
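As a hedged illustration of how a range measurement can be attached to a camera detection, the sketch below projects a LIDAR point into the image plane with a pinhole camera model and checks whether it falls inside a detected bounding box. The intrinsics, structures, and function names are illustrative assumptions, not part of the original design.

```cpp
// Pinhole projection of a LIDAR point (expressed in the camera frame) into
// pixel coordinates, so its range can be associated with a detected object.
// Camera intrinsics (fx, fy, cx, cy) below are illustrative assumptions.
struct Box { int x, y, w, h; };          // bounding box from the image pipeline
struct LidarPoint { double x, y, z; };   // metres, camera frame, z pointing forward

bool point_hits_box(const LidarPoint& p, const Box& b,
                    double fx = 1000.0, double fy = 1000.0,
                    double cx = 640.0, double cy = 360.0)
{
    if (p.z <= 0.0) return false;        // point is behind the camera
    double u = fx * p.x / p.z + cx;      // pixel column
    double v = fy * p.y / p.z + cy;      // pixel row
    return u >= b.x && u < b.x + b.w && v >= b.y && v < b.y + b.h;
}
// If the point falls inside the box, p.z gives the distance to the detected
// object that the camera alone cannot measure.
```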


Another solution is to add a second image sensor to achieve stereoscopic vision. The two image sensors face the same direction but are separated by a short distance, like a person's two eyes, so the depth of an object in the field of view can be determined from parallax. Using multiple image sensors of the same type in this way is known as homogeneous sensor fusion. Of course, the architecture and sensor type still need to be chosen according to the operating conditions, including the required depth range, measurement accuracy, ambient light and weather conditions, cost, and complexity. Embedded vision can be used not only for object detection and collision avoidance, but also as part of a navigation system that collects traffic sign information. Beyond automotive use, we can fuse different medical images such as X-ray, MRI, and CT, or combine visible and infrared images in security and surveillance equipment.
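As a hedged illustration of how depth is recovered from parallax, the sketch below computes depth from the disparity between matched pixels in two rectified images, assuming a known focal length (in pixels) and baseline (the separation between the two sensors). The parameter values are illustrative only.

```cpp
#include <cstdio>

// Depth from stereo disparity for a rectified image pair:
//   Z = f * B / d
// where f is the focal length in pixels, B the baseline between the two
// sensors, and d the disparity (pixel offset of the same feature between
// the left and right images). Values below are illustrative assumptions.
double depth_from_disparity(double focal_px, double baseline_m, double disparity_px)
{
    if (disparity_px <= 0.0) {
        return -1.0; // no valid match, depth undefined
    }
    return focal_px * baseline_m / disparity_px;
}

int main()
{
    const double f = 1200.0;  // focal length in pixels (assumed)
    const double B = 0.12;    // 12 cm baseline (assumed)
    // A feature at x = 640 in the left image and x = 616 in the right image
    // has a disparity of 24 pixels.
    double z = depth_from_disparity(f, B, 24.0);
    std::printf("Estimated depth: %.2f m\n", z); // ~6.00 m
    return 0;
}
```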

Even before any fusion takes place, processing the images alone takes considerable computing power, because the system must perform a series of preprocessing functions. For example, when using a color image sensor, these tasks include color filter array interpolation, color space conversion / resampling, and image correction. On top of this, we must execute the sensor fusion algorithm itself. In the object detection example used earlier, the simplest scheme locates objects using background subtraction, thresholding, and contour detection, while a more robust scheme may require a HOG/SVM classifier. As the frame rate and image size increase, so does the processing power required to preprocess the images and extract information.
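A minimal sketch of that simplest detection path is shown below, written here with OpenCV in C++ purely for illustration; the original pipeline targets programmable logic, and the threshold and area values are assumptions.

```cpp
#include <opencv2/opencv.hpp>

// Simplest object-localization path described above:
// background subtraction -> threshold -> contour detection.
std::vector<cv::Rect> detect_objects(const cv::Mat& frame, const cv::Mat& background)
{
    cv::Mat gray, bg_gray, diff, mask;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(background, bg_gray, cv::COLOR_BGR2GRAY);

    // Background subtraction: absolute difference against a reference frame.
    cv::absdiff(gray, bg_gray, diff);

    // Threshold: keep only pixels that changed significantly (value assumed).
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);

    // Contour detection: each contour's bounding box localizes one object.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> boxes;
    for (const auto& c : contours) {
        if (cv::contourArea(c) > 100.0) {  // ignore small noise blobs (assumed)
            boxes.push_back(cv::boundingRect(c));
        }
    }
    return boxes;
}
```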

However, extracting information from the image is only part of the task. If we use heterogeneous fusion, we also need to configure and drive the second sensor, then receive and extract its data. If we choose the homogeneous system, the same image processing chain used for the first image sensor must be executed again for the second one. Either way this produces two data sets, and those two data sets must be processed together to determine the actual distance to the object; that combination is the real fusion.


  In embedded vision systems, the image processing pipeline is generally implemented with an All Programmable FPGA or All Programmable SoC. Since these devices serve traditional embedded vision applications well, they are equally suited to embedded vision fusion applications.

  Whether you choose an FPGA or a SoC, embedded vision applications typically use a processor for monitoring, control, and communication. If you choose an All Programmable SoC, a hard processor core is available, along with many supporting peripherals and interface standards. If you use an All Programmable FPGA, you will use a soft core such as MicroBlaze™, together with more customized peripheral and interface support.

For embedded vision sensor fusion applications, we can further use the processor to provide a simple interface to many of the other sensors. For example, accelerometers, pressure sensors, gyroscopes, and GPS receivers typically offer a Serial Peripheral Interface (SPI) or Inter-Integrated Circuit (I2C) interface, both of which are supported by the All Programmable Zynq®-7000 and by MicroBlaze soft-core processors. This allows the software to quickly and easily obtain the required information from different types of sensors and provide it to the wider architecture.
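As a hedged sketch of how the processing system might read such a sensor under Linux, the fragment below uses the standard Linux i2c-dev interface to read a two-byte register from a hypothetical device; the bus, device address, register number, and byte order are assumptions made purely for illustration.

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

// Read a 16-bit value from a register of an I2C sensor using the Linux
// i2c-dev interface. Bus, address, and register below are assumptions.
int read_sensor_word(const char* bus, uint8_t addr, uint8_t reg, int16_t* value)
{
    int fd = open(bus, O_RDWR);
    if (fd < 0) return -1;

    // Select the slave device we want to talk to.
    if (ioctl(fd, I2C_SLAVE, addr) < 0) { close(fd); return -1; }

    // Write the register address, then read two bytes back.
    uint8_t buf[2] = { reg, 0 };
    if (write(fd, buf, 1) != 1 || read(fd, buf, 2) != 2) { close(fd); return -1; }

    *value = static_cast<int16_t>((buf[0] << 8) | buf[1]); // big-endian register (assumed)
    close(fd);
    return 0;
}

int main()
{
    int16_t accel_x = 0;
    // Hypothetical accelerometer at address 0x68 on /dev/i2c-0, X-axis register 0x3B.
    if (read_sensor_word("/dev/i2c-0", 0x68, 0x3B, &accel_x) == 0) {
        std::printf("accel X raw: %d\n", accel_x);
    }
    return 0;
}
```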

  When using the All Programmable Zynq-7000 or Zynq UltraScale+ MPSoC, the tightly coupled architecture between the processor memory and the programmable logic allows the application software to access the resulting data sets for further processing and decision making. The independent sensor chains can be implemented in programmable logic and run in parallel, which is very favorable for stereo vision and other operations with similar demands. To accelerate development of fusion applications implemented in programmable logic, we can use high-level synthesis (HLS) to develop algorithms that can be implemented directly in the programmable logic fabric.
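As a hedged example of the HLS style, the function below converts an RGB pixel stream to grayscale with Vivado HLS-style pragmas so it can be synthesized into the programmable logic. The interface pragmas, data widths, and coefficients are assumptions for illustration, not taken from the article.

```cpp
#include <ap_int.h>
#include <hls_stream.h>

// A simple HLS-synthesizable stage: convert an RGB pixel stream to grayscale.
// Stream widths, pixel packing, and pragma choices are assumptions.
void rgb_to_gray(hls::stream<ap_uint<24>>& rgb_in,
                 hls::stream<ap_uint<8>>& gray_out,
                 int pixel_count)
{
#pragma HLS INTERFACE axis port=rgb_in
#pragma HLS INTERFACE axis port=gray_out
#pragma HLS INTERFACE s_axilite port=pixel_count
#pragma HLS INTERFACE s_axilite port=return

    for (int i = 0; i < pixel_count; i++) {
#pragma HLS PIPELINE II=1
        ap_uint<24> px = rgb_in.read();
        ap_uint<8> r = px.range(23, 16);
        ap_uint<8> g = px.range(15, 8);
        ap_uint<8> b = px.range(7, 0);
        // Integer approximation of luma: (77R + 150G + 29B) / 256.
        ap_uint<8> y = (77 * r + 150 * g + 29 * b) >> 8;
        gray_out.write(y);
    }
}
```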

  To demonstrate the homogeneous and heterogeneous schemes, we can develop the object detection and distance measurement algorithm described earlier on an All Programmable SoC. Although the two schemes use different sensor types, the goal of both architectures is the same: place the two data sets in the processing system's DDR memory while using the programmable logic fabric to maximize performance. Implementing the homogeneous object detection system requires two sensors of the same type, in this case CMOS image sensors. The advantage is that only one image processing chain needs to be developed; it can then be instantiated twice in the programmable logic, once for each image sensor.

One requirement of the homogeneous architecture for a stereoscopic vision system is that the two image sensors must be synchronized. Implementing the two image processing chains in parallel in the programmable logic and clocking them from the same clock with appropriate constraints helps meet this demanding requirement. Although the parallax calculation itself requires intensive processing, being able to reuse the same image processing chain twice significantly reduces development cost.

The RADAR architecture can be divided into two parts: signal generation and signal reception. The signal generation part produces the continuous-wave or pulsed signal to be transmitted; either scheme requires a signal generation IP block interfaced to a high-speed digital-to-analog converter. The signal reception part likewise requires a high-speed analog-to-digital converter to capture the received continuous-wave or pulsed signal. As for signal processing, both schemes rely on FFT analysis implemented in the programmable logic fabric; similarly, we can use DMA to transfer the resulting data set to the PS DDR.
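To make the FFT-based analysis step concrete, here is a hedged sketch of how, for a continuous-wave (FMCW) radar, the beat-frequency bin of an FFT over the mixed-down receive signal maps to target range. The chirp parameters are illustrative assumptions, and in the actual system the FFT itself would be an IP core in the programmable logic rather than software.

```cpp
#include <cstdio>

// For an FMCW radar, range is recovered from the beat frequency seen in the
// FFT of the mixed-down receive signal:
//   f_beat = bin * fs / N                    (frequency of the FFT peak)
//   R      = c * f_beat * T_chirp / (2 * B)  (B = chirp sweep bandwidth)
// Chirp parameters below are illustrative assumptions.
double range_from_fft_bin(int bin, int fft_len, double sample_rate_hz,
                          double chirp_time_s, double sweep_bw_hz)
{
    const double c = 3.0e8;                                  // speed of light, m/s
    double f_beat = bin * sample_rate_hz / fft_len;          // beat frequency, Hz
    return c * f_beat * chirp_time_s / (2.0 * sweep_bw_hz);  // range, m
}

int main()
{
    // Assumed chirp: 1 ms duration, 150 MHz sweep, 2 MHz sample rate, 2048-point FFT.
    double r = range_from_fft_bin(/*bin=*/41, 2048, 2.0e6, 1.0e-3, 150.0e6);
    std::printf("Target range: %.1f m\n", r); // ~40 m for these parameters
    return 0;
}
```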

Regardless of which architecture is chosen, the fusion algorithm that combines the two data sets is executed in software on the PS. These fusion algorithms must also handle higher bandwidth requirements, and one way to achieve higher performance is to use existing tool-set functionality, in particular the SDSoC™ design environment.
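To illustrate the decision-making step, here is a hedged sketch of software that might run on the PS once the two data sets have been fused: it estimates time to collision from the fused range and closing speed and flags a risk when that time falls below a threshold. The structure and threshold values are assumptions, not the article's algorithm.

```cpp
#include <cstdio>

// Decision step run in software on the PS after fusion: estimate time to
// collision from the fused range and closing speed, and flag a risk when it
// drops below a braking-time threshold. Threshold value is an assumption.
bool collision_risk(double range_m, double closing_speed_mps,
                    double ttc_threshold_s = 2.0)
{
    if (closing_speed_mps <= 0.0) return false;   // object not approaching
    double ttc = range_m / closing_speed_mps;     // seconds until impact
    return ttc < ttc_threshold_s;
}

int main()
{
    // Fused example: detected vehicle at 30 m, closing at 18 m/s (~65 km/h).
    std::printf("risk: %s\n", collision_risk(30.0, 18.0) ? "yes" : "no"); // ttc ~1.7 s -> yes
    return 0;
}
```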

SDSoC™ uses Vivado HLS and a connectivity framework, both of which are transparent to the software developer, to seamlessly move software functions between the processor and the SoC's programmable logic. Of course, high-level synthesis can be used to develop the processing-chain functions for both the homogeneous and the heterogeneous implementations. We can go further by creating a custom SDSoC platform for the chosen solution, and then use SDSoC to exploit unused logic resources to further accelerate the performance of the entire embedded vision system.

  Sensor fusion is taking root at the same time as embedded vision systems are growing rapidly and sensors themselves are proliferating. All Programmable FPGAs and SoCs provide the ability to run multiple types of sensors in parallel, synchronizing them as required, and to implement data fusion and decision making using the SoC processing system or a soft-core processor.