Sensors have undergone rapid development and are now so common that the mobile phones each of us uses daily contain several different types of them. Some of these sensors detect simple changes in pressure, temperature, acceleration, and gravity, while more advanced examples include GPS, RADAR, LIDAR, and image sensors.

  Sensor fusion refers to extracting data from several different sensors to produce information that no single sensor could provide on its own. That information can then be further processed and analyzed and, depending on the final application, used to make decisions. Sensor fusion falls into two categories:

Real-time sensor fusion - sensor data are extracted and fused, and decisions are made in real time based on the information obtained.

Offline sensor fusion - sensor data are extracted and fused in the same way, but decisions are made at some later point in time (see the sketch below).
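
  A toy sketch of the distinction, with invented placeholder functions: the real-time path fuses and decides inside the acquisition loop, while the offline path only records data and defers the decision to a later batch pass.

```python
import time

def read_sensors():
    """Placeholder: return one synchronized sample from all sensors."""
    return {"camera": None, "radar": None}

def fuse(sample):
    """Placeholder fusion step combining the individual readings."""
    return sample

def run_realtime(decide, seconds=1.0):
    """Real-time fusion: decide on every sample as it arrives."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        decide(fuse(read_sensors()))

def run_offline(decide, n_samples=100):
    """Offline fusion: record now, fuse and decide later in a batch."""
    log = [read_sensors() for _ in range(n_samples)]
    for sample in log:  # ... at some later point in time
        decide(fuse(sample))
```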

  For embedded vision systems and sensor fusion applications, most use cases call for real-time sensor fusion. Embedded vision applications are growing rapidly across a wide range of fields, from robotics and advanced driver assistance systems to augmented reality and beyond. In all of these, sensor fusion contributes greatly to the success of the final application. Fusing the information provided by the embedded vision system with information from one or more other sensors helps build a better understanding of the environment and improves the performance of the chosen application. Many embedded vision applications use only a single image sensor monitoring one direction, such as watching only the area in front of a car. Such an image sensor can detect, classify, and track objects. However, because only one sensor is used, the distance to an object in the image cannot be measured. In other words, we can detect and track another vehicle or a pedestrian, but without an additional sensor we cannot determine whether there is a risk of collision. In this case we need another sensor, such as RADAR or LIDAR, to provide the distance to the detected object. Because this approach fuses information from several different types of sensors, it is called heterogeneous sensor fusion.
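
  As an illustration of heterogeneous fusion, the hypothetical sketch below matches camera detections (bounding boxes) to radar returns (bearing and range) by angle, so each tracked object gains a distance estimate. The pinhole-model bearing conversion and all names, parameters, and thresholds are invented for the example.

```python
import math

# Hypothetical camera parameters (simple pinhole model).
IMAGE_WIDTH_PX = 1280
HFOV_DEG = 90.0  # horizontal field of view

def pixel_to_bearing(x_center_px):
    """Convert a bounding-box centre column to a bearing angle (degrees)."""
    frac = (x_center_px - IMAGE_WIDTH_PX / 2) / (IMAGE_WIDTH_PX / 2)
    return frac * (HFOV_DEG / 2)

def fuse(camera_boxes, radar_returns, max_angle_err_deg=3.0):
    """Attach a range to each camera detection by matching radar bearings.

    camera_boxes: list of (x_center_px, label)
    radar_returns: list of (bearing_deg, range_m)
    """
    fused = []
    for x_center, label in camera_boxes:
        bearing = pixel_to_bearing(x_center)
        # Nearest radar return in angle, if close enough to be the same object.
        best = min(radar_returns,
                   key=lambda r: abs(r[0] - bearing),
                   default=None)
        if best is not None and abs(best[0] - bearing) <= max_angle_err_deg:
            fused.append((label, bearing, best[1]))  # label, bearing, range
        else:
            fused.append((label, bearing, None))     # no range available
    return fused

# Example: a pedestrian near the image centre, a car to the right.
boxes = [(650, "pedestrian"), (1100, "car")]
returns = [(1.5, 22.0), (30.0, 55.0)]  # (bearing deg, range m)
for label, bearing, rng in fuse(boxes, returns):
    print(f"{label}: bearing {bearing:.1f} deg, range {rng} m")
```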

  Another solution is to add a second image sensor to achieve stereoscopic vision. Here, the two image sensors point in the same direction but are separated by a short distance, like a person's two eyes, and the depth of an object in the field of view is determined from the parallax between the two views. Using multiple image sensors of the same type in this way is known as homogeneous sensor fusion. The choice of architecture and sensor type must be made according to the operating conditions, including the required depth range, measurement accuracy, ambient light and weather conditions, cost, and complexity. Embedded vision can be used not only for object detection and collision avoidance, but also as part of a navigation system, for example to gather traffic-sign information. In addition, different medical images such as X-ray, MRI, and CT can be fused, and security and surveillance equipment can combine visible and infrared images. It is commonly assumed that embedded vision applications use only the visible electromagnetic spectrum, but in fact many embedded vision applications can fuse data from outside the visible spectrum.
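
  The parallax-to-depth relationship is the standard result of stereo geometry. For two parallel cameras with focal length f (in pixels) and baseline B, an object imaged with disparity d (in pixels) lies at depth

$$Z = \frac{f \, B}{d}$$

Because the depth error for a fixed disparity error grows as Z^2 / (f B), the baseline and focal length must be chosen together with the depth range and measurement accuracy mentioned above.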

  Processing requirements: even before any fusion takes place, considerable computing power is needed to process the images, because the system performs a series of preprocessing functions. For example, when a color image sensor is used, these tasks include color filter array interpolation, color space conversion and resampling, and image correction. On top of this, the processing for the sensor fusion algorithm itself must be performed. In the object detection example used earlier, the simplest scheme locates objects using background subtraction, thresholding, and contour detection, while a more robust approach may require a HOG/SVM classifier. As the frame rate and image size increase, the processing power required to preprocess the images and extract information increases accordingly.
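
  As a software illustration of that simplest scheme, here is a minimal OpenCV sketch (Python, OpenCV 4.x) chaining background subtraction, thresholding, and contour detection; the input file name and all thresholds are placeholders to be tuned for real footage.

```python
import cv2

cap = cv2.VideoCapture("camera.mp4")  # placeholder input source
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)  # background subtraction (foreground mask)
    # MOG2 marks shadows as 127, so a high threshold keeps only solid foreground.
    _, mask = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # contour detection
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore tiny noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```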

  In embedded vision systems, an All Programmable FPGA or All Programmable SoC is generally used to implement the image processing pipeline. Devices that serve traditional embedded vision applications are equally suitable for embedded vision fusion applications. For embedded vision sensor fusion, we can go further and use the processor to provide a simple interface to the many other sensors in the system. For example, accelerometers, pressure sensors, gyroscopes, and GPS receivers typically provide Serial Peripheral Interface (SPI) or Inter-Integrated Circuit (I2C) interfaces, which connect easily to the All Programmable Zynq-7000 SoC processing system or to a MicroBlaze soft processor. When using the All Programmable Zynq-7000 or the Zynq UltraScale+ MPSoC, the tightly coupled architecture between the processor memory and the programmable logic allows the application software to access the resulting data sets for further processing and decision making. Independent sensor chains can be implemented in programmable logic and run in parallel, which is very favorable for stereo vision and other cases that demand concurrent operation.
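
  On a Zynq processing system running Linux, reading such a sensor over I2C can be as simple as the sketch below. The bus number, device address, and register map are hypothetical stand-ins for whatever the actual sensor's datasheet specifies, and the third-party smbus2 package is assumed.

```python
from smbus2 import SMBus  # pip install smbus2

I2C_BUS = 1         # hypothetical: the sensor sits on /dev/i2c-1
ACCEL_ADDR = 0x53   # hypothetical accelerometer I2C address
REG_DATA_X0 = 0x32  # hypothetical first data register (X axis, low byte)

def read_axis(bus, reg):
    """Read one 16-bit little-endian signed axis value from two registers."""
    lo = bus.read_byte_data(ACCEL_ADDR, reg)
    hi = bus.read_byte_data(ACCEL_ADDR, reg + 1)
    value = (hi << 8) | lo
    return value - 65536 if value & 0x8000 else value  # sign-extend

with SMBus(I2C_BUS) as bus:
    x = read_axis(bus, REG_DATA_X0)
    y = read_axis(bus, REG_DATA_X0 + 2)
    z = read_axis(bus, REG_DATA_X0 + 4)
    print(f"accelerometer raw counts: x={x} y={y} z={z}")
```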

  Implementing a homogeneous object detection system requires sensors of the same type, in this case CMOS image sensors. The advantage is that only one image processing chain needs to be developed; it can then be instantiated twice in the programmable logic fabric, once for each image sensor. One requirement of a stereoscopic vision system is that the two image sensors be synchronized. Implementing the two image processing chains in parallel in the programmable logic fabric, driven by the same clock with appropriate constraints, helps to meet this demanding requirement. Although the parallax calculation itself is processing-intensive, the ability to instantiate the same image processing chain twice significantly reduces development costs.
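
  As a software-level sketch of the disparity computation (using OpenCV's block matcher rather than a programmable-logic pipeline), assuming a synchronized, rectified grayscale pair; file names and calibration values are placeholders.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder pair,
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified and synchronized

# Block-matching correspondence; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth from disparity: Z = f * B / d (hypothetical calibration values).
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.12   # separation between the two sensors, metres
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
print("median depth over valid pixels:", float(np.median(depth[valid])), "m")
```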

  A RADAR architecture can be divided into two parts: signal generation and signal reception. The signal generation part is responsible for producing the continuous-wave or pulsed signal to be transmitted; either scheme requires a signal generation IP module connected to a high-speed digital-to-analog converter interface. SDSoC uses Vivado HLS and a connectivity framework, both transparent to the software developer, to move software functions seamlessly between the processor and the SoC's programmable logic. Naturally, high-level synthesis can be used to develop the processing chain functions for both the homogeneous and heterogeneous implementations. We can go further and customize the chosen solution by creating a custom SDSoC platform, then use SDSoC's capabilities to exploit unused logic resources and further accelerate performance as the entire embedded vision system evolves.
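
  As an illustration of the signal generation side, here is a minimal numpy sketch producing one linear FMCW chirp of the kind a high-speed DAC might play out; all parameters are invented for the example, and a real design would implement this as an IP block feeding the DAC.

```python
import numpy as np

FS = 100e6        # hypothetical DAC sample rate, Hz
T_CHIRP = 100e-6  # chirp duration, s
F_START = 1e6     # start frequency, Hz
BANDWIDTH = 40e6  # swept bandwidth, Hz

t = np.arange(int(FS * T_CHIRP)) / FS
k = BANDWIDTH / T_CHIRP                      # sweep rate, Hz/s
phase = 2 * np.pi * (F_START * t + 0.5 * k * t**2)
chirp = np.cos(phase)                        # instantaneous freq: F_START + k*t

# Quantize to the 12-bit signed range a typical high-speed DAC expects.
dac_words = np.round(chirp * 2047).astype(np.int16)
print(dac_words[:8])
```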

  Sensor fusion has taken root at the same time that embedded vision systems are growing rapidly and sensors are proliferating. All Programmable FPGAs and SoCs provide the ability to run multiple types of sensors in parallel, synchronizing them as required, and to implement the data fusion and decision-making activities in the SoC processing system or in soft-core processors.