Since General Motors unveiled the world's first self-driving concept car in 1939, major technology companies and automakers have spent more than 80 years packing sensors, cameras, computing chips, and memory into cars, repeatedly testing autonomous driving in specific scenarios such as test tracks and deserts. Today, manufacturers are gradually bringing autonomous driving into the family-car market and are committed to achieving higher levels of autonomy in this segment.

However, as the technology and solutions advance, an obvious "factional war" has emerged over how autonomous driving should be implemented. One camp believes in fusing vision with radar; the other holds that a pure-vision approach is sufficient. The root of the dispute is that each manufacturer currently has its own understanding of autonomous driving, and each has achieved real results. That said, looking at the various components for autonomous driving sold on the Mouser Electronics website, we can also find commonalities between the two camps, such as flexible design and performance redundancy. Let's take a closer look.

Pure Vision vs. Multi-Sensor Fusion

Autonomous driving goes by many other names, such as driverless cars, computer-driven cars, or wheeled robots. The core meaning of all of them is the same: replace the human driver with integrated computer and artificial intelligence technology, allowing the car to complete a full, safe, and effective driving task on its own. This kind of intelligent driving experience will undoubtedly become a major selling point of new-era cars, attracting global technology giants and automakers to invest. According to forecast data from market research firm IHS Markit, the global self-driving car market will reach US$162.9 billion in 2022, a year-on-year increase of about 45.21%.

Figure 1: Global Autonomous Vehicle Market Size

(Data source: IHS Markit)

In the vision of autonomous driving, once a destination is selected, the car uses its navigation system to plan the best route and drives itself there. Along the way, human drivers and passengers no longer need to pay attention to the road and can fully enjoy the convenience and comfort the technology brings. When a human drives, we praise an alert driver as having "eyes on every road and ears in every direction," so a self-driving car needs perception even stronger than that. Environmental perception, precise positioning, path planning, and drive-by-wire execution are known as the four core technologies of autonomous driving, and safe, reliable perception is indispensable once the car is placed in real-world social scenarios.

In terms of environmental perception, the pure-vision solution relies only on cameras to collect environmental information, then passes the images to a computing chip for analysis. This is much like a human driving a vehicle: capturing the surroundings with the eyes and sending them to the brain for decision-making. Of course, the pure vision + AI solution has a wider field of view than the human eye; from the mass of image data, the in-car computing system builds a "god's-eye view" centered on the car. The main advantages of pure vision are relatively low implementation cost, closeness to how humans actually drive, and the rich environmental information delivered by high-resolution, high-frame-rate imaging. The main disadvantages are that camera input is easily disturbed by ambient light, and that image processing depends heavily on training, which inevitably leaves "blind spots" in environmental recognition. Companies pursuing autonomous driving with pure-vision solutions currently include Tesla, Baidu, and Zeekr.
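
To make this data flow concrete, here is a minimal sketch of the capture-perceive-decide loop described above. It is purely illustrative and not any vendor's actual stack: the camera, the detector, and the decision rule are all placeholder functions.

```python
import numpy as np

# --- Placeholders standing in for real hardware and a trained model ---

def capture_frame(height=720, width=1280):
    """Simulate grabbing one RGB frame from a camera (random pixels here)."""
    return np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

def detect_objects(frame):
    """Stand-in for a neural network: returns (label, confidence, bbox) tuples.

    A real pure-vision stack would run a trained detector on a GPU/NPU;
    this placeholder ignores the frame and returns fixed detections."""
    return [("vehicle", 0.91, (400, 300, 560, 420)),
            ("pedestrian", 0.78, (800, 350, 860, 500))]

def plan_action(detections):
    """Trivial decision rule: brake if any high-confidence object is present."""
    urgent = [d for d in detections if d[1] > 0.8]
    return "brake" if urgent else "cruise"

# Capture -> perceive -> decide, as in the loop described above.
frame = capture_frame()
detections = detect_objects(frame)
print(plan_action(detections))  # -> "brake"
```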

The "multi-sensor fusion" solution collects information around the vehicle through cameras, millimeter-wave radar, lidar, and other equipment. Adding lidar yields deeper spatial information and more accurate perception of an object's position, distance, and size, and because lidar provides its own illumination, it is not affected by ambient light. The scheme has its own limitations, however: lidar is easily degraded by rain, snow, and fog, the lidar hardware itself costs thousands of dollars, and fusion places higher computing-power demands on the processing chips, so there is no cost advantage. Companies pursuing autonomous driving with "multi-sensor fusion" currently include XPeng, NIO, and Arcfox.
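
To give a feel for why the fusion itself is a technical challenge, the sketch below implements one naive late-fusion rule: camera and lidar detections (hypothetical values) are associated by distance and their confidences combined. Production systems use far more sophisticated association, tracking, and filtering.

```python
import math

# Hypothetical detections: (x, y) position in meters plus a confidence score.
camera_dets = [(10.2, 3.1, 0.85), (25.0, -2.0, 0.60)]
lidar_dets  = [(10.0, 3.0, 0.95), (40.0, 5.0, 0.90)]

def fuse(cam, lid, gate=1.5):
    """Associate camera and lidar detections within `gate` meters and
    average their confidences; unmatched detections pass through alone."""
    fused, used = [], set()
    for cx, cy, cconf in cam:
        best, best_d = None, gate
        for i, (lx, ly, lconf) in enumerate(lid):
            d = math.hypot(cx - lx, cy - ly)
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            lx, ly, lconf = lid[best]
            used.add(best)
            # Lidar position is trusted more; confidences are averaged.
            fused.append((lx, ly, (cconf + lconf) / 2))
        else:
            fused.append((cx, cy, cconf))
    # Lidar-only detections (e.g., camera blinded by glare) still survive.
    fused += [l for i, l in enumerate(lid) if i not in used]
    return fused

print(fuse(camera_dets, lidar_dets))
```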

In terms of technical implementation, both camps rely on hardware to help the car build a 3D picture of its surroundings, then let the computing chip extract the key information needed to make driving decisions. Going forward, "pure vision" must solve problems such as glare blindness and recognition errors, while for "multi-sensor fusion" the fusion itself is a major technical challenge: the algorithms must be optimized, and lower-cost lidar products are needed. Both technical routes still have plenty of potential to be tapped.

Of course, high-level autonomous driving has not yet cracked the mass home-car market. The battle of routes concerns not only environmental perception but also the dispute between single-vehicle intelligence and vehicle-road coordination. As noted above, however, component manufacturers have chosen high flexibility and performance redundancy to support autonomous driving, providing as much support as possible for every implementation scheme. Below, we highlight several components and products available on the Mouser Electronics website that can be applied to autonomous driving and help you complete a variety of implementation solutions.

FPGAs Powering Next-Generation ADAS

Whichever technical route a manufacturer leans toward, cameras are a necessity for autonomous driving, acting as the "eyes" of the car. Our first recommendation is therefore an FPGA evaluation kit for next-generation ADAS development that excels at vision processing: the UltraScale+™ MPSoC ZCU102 development tool from AMD Xilinx, available on the Mouser Electronics website under manufacturer part number EK-U1-ZCU102-GJ.

Figure 2: MPSoC ZCU102 Evaluation Kit

(Image source: AMD Xilinx)

The MPSoC ZCU102 development tool is built around a Zynq UltraScale+ MPSoC device, which offers industry-leading features in core configuration, process architecture, device size, system power consumption, software support, and graphics processing.

Figure 3: Zynq UltraScale+ MPSoC device system block diagram

(Image source: AMD Xilinx)

As the system block diagram above shows, the Zynq UltraScale+ MPSoC device is based on a quad-core Arm® Cortex®-A53 application processor and a dual-core Cortex-R5 real-time processor with a flexible memory hierarchy. This heterogeneous processing design, combined with the device's programmable logic, lets engineers optimize their applications from multiple angles while extracting better performance.
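
The heterogeneous division of labor is easier to picture with a toy pipeline. The sketch below uses two Python threads and a queue to mimic a "real-time" stage (the typical Cortex-R5 role) feeding an "application" stage (the typical Cortex-A53 role); on real silicon this hand-off would go through inter-processor communication and shared memory, not Python queues.

```python
import threading, queue

frames = queue.Queue(maxsize=4)   # bounded, like a shared-memory ring buffer

def realtime_stage():
    """Mimics the R5 side: fast, deterministic pre-processing per 'frame'."""
    for frame_id in range(5):
        preprocessed = {"id": frame_id, "mean_brightness": 0.5}
        frames.put(preprocessed)   # hand off to the application cores
    frames.put(None)               # sentinel: no more frames

def application_stage():
    """Mimics the A53 side: heavier analysis on each pre-processed frame."""
    while (item := frames.get()) is not None:
        print(f"frame {item['id']}: analyzed (brightness={item['mean_brightness']})")

r5 = threading.Thread(target=realtime_stage)
a53 = threading.Thread(target=application_stage)
r5.start(); a53.start()
r5.join(); a53.join()
```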

The combination of the 16nm FinFET process and an ultra-compact package gives Zynq UltraScale+ MPSoC devices outstanding compute density under all application conditions, along with better signal integrity.

A deep learning processing unit for AI/ML and a Mali-400 MP2 graphics processing unit enable Zynq UltraScale+ MPSoC devices to meet the demanding vision-system and algorithm requirements of next-generation ADAS, and the H.264/H.265 video codec strengthens this capability further. At the same time, the rich set of peripheral interfaces lets Zynq UltraScale+ MPSoC devices support "multi-sensor fusion" with ease.

It can be said that the heterogeneous, high-performance, ultra-compact, compute-dense Zynq UltraScale+ MPSoC is an ideal device for building next-generation ADAS applications, and the 64-bit processor scalability it provides opens endless possibilities for solution design. The MPSoC ZCU102 evaluation kit is an excellent platform for experiencing what this device can do.

Figure 4: MPSoC ZCU102 Evaluation Kit System Block Diagram

(Image source: AMD Xilinx)

The MPSoC ZCU102 evaluation kit is optimized for prototyping applications on the Zynq UltraScale+ MPSoC, including next-generation ADAS. As shown in the figure above, the kit supports all major peripherals and interfaces of the device, such as a PCIe Gen2 x4 root port, USB 3.0, DisplayPort, and SATA, supporting the design and implementation of a wide variety of solutions.

FPGAs available for ADAS prototyping

In the advance of autonomous driving, manufacturers are still essentially battling over the leading technology route. Even within the same route, because manufacturers define each ECU functional unit differently, system architectures and the functions of the core chips also differ. At the component level, these differing understandings of autonomous driving translate into very different product design philosophies. So how can you efficiently find a more competitive design solution? The next product can help.

Here we introduce another solution from AMD Xilinx: the Versal™ AI Core series VCK190 evaluation kit, available on Mouser Electronics under manufacturer part number EK-VCK190-G-ED.

Figure 5: Versal™ AI Core Series VCK190 Evaluation Kit

(Image source: AMD Xilinx)

According to the Xilinx Wiki, the VCK190 is the first Versal™ AI Core series evaluation kit, helping designers develop solutions using the AI and DSP engines. The VC1902 device at the heart of the kit can deliver more than 100 times the compute performance of server-class CPUs.

The VCK190 evaluation kit is an "all-rounder" that lets engineers quickly develop applications with a wide range of connectivity options, spanning data-center compute, 5G radio and beamforming (DFE), wireless test equipment, and more. It is equally well suited to ADAS prototyping.

For ADAS prototyping, AMD Xilinx offers a dedicated target reference design that enables multi-sensor video analytics for automotive applications. In the reference design, the image sensor module attached to the VCK190 evaluation board feeds information in real time and the board accelerates the camera's image-processing pipeline, while also supporting other AI/ML workloads common in automotive applications.
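
As a rough illustration of the multi-sensor video analytics pattern (not the reference design itself), the sketch below time-shares one analytics loop across several simulated camera streams in round-robin fashion:

```python
import itertools

# Hypothetical set-up: four camera streams, each yielding frame IDs forever.
def camera(stream_id):
    for n in itertools.count():
        yield f"cam{stream_id}-frame{n}"

streams = [camera(i) for i in range(4)]

# Round-robin multiplexing: one analytics engine services all sensors,
# the pattern a multi-sensor video-analytics design time-shares hardware with.
for tick in range(8):
    frame = next(streams[tick % len(streams)])
    print(f"analyzing {frame}")
```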

S32G2 processors help build next-generation gateways

As mentioned above, environmental perception, precise positioning, path planning, and drive-by-wire execution are the four core technologies of autonomous driving. In concrete implementations, "pure vision" pairs multiple ECUs with multiple cameras, each extracting useful information that is then aggregated on the computing chip for decision-making; "multi-sensor fusion" maps different sensors to different ECU modules, whose extracted information must likewise be sent to the computing chip for processing. Outside the vehicle, the Internet of Vehicles requires some information to be uploaded to the cloud, making the car the edge node in edge-to-cloud communication. Throughout this process, the gateway is particularly important for improving real-time processing and security.
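
To illustrate the gateway's routing role, here is a minimal, purely hypothetical sketch: safety-critical signals stay on the low-latency in-vehicle path, while the rest are batched into a cloud-upload payload. Real gateways, such as those built on the S32G2 introduced below, do this in accelerated hardware with automotive protocols, not Python.

```python
import json, time

# Hypothetical in-vehicle messages: (source ECU, signal name, value, safety-critical?)
inbound = [
    ("camera_ecu", "object_count", 3, True),
    ("lidar_ecu", "point_density", 120000, True),
    ("body_ecu", "cabin_temp_c", 22.5, False),
]

def gateway_route(messages):
    """Toy gateway policy: safety-critical traffic stays on the low-latency
    in-vehicle path; everything else is batched for upload to the cloud."""
    in_vehicle, to_cloud = [], []
    for src, name, value, critical in messages:
        record = {"src": src, "signal": name, "value": value, "ts": time.time()}
        (in_vehicle if critical else to_cloud).append(record)
    return in_vehicle, to_cloud

local, cloud_batch = gateway_route(inbound)
print(f"{len(local)} messages kept on the in-vehicle network")
print("cloud upload payload:", json.dumps(cloud_batch))
```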

Next, the component we want to highlight is the S32G2 automotive network processor from NXP Semiconductors, which can be used to implement next-generation automotive gateways and architectures. You can find it on the Mouser Electronics website by searching for manufacturer part number S32G274AABK0VUCT.

The S32G2 processor is an upgrade of NXP's family of automotive gateway devices. Built around quad Arm® Cortex®-A53 application cores, it offers more than 10 times the performance and network acceleration of the company's previous products. This performance leap comes from the device's advanced design: the Cortex-A53 cores support an optional pairwise lockstep mode for high-performance applications and services, complemented by three dual-core lockstep Cortex-M7 clusters for real-time applications. Three hardware accelerators, the Ethernet Packet Forwarding Engine (PFE), the Low Latency Communication Engine (LLCE), and the Hardware Security Engine (HSE), make in-vehicle network traffic faster and more secure.
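
Lockstep is easier to grasp with a toy model: the same computation runs twice and a checker compares the outputs, flagging any divergence as a fault. The software sketch below is purely illustrative; real lockstep cores are compared cycle by cycle in silicon.

```python
def compute(core_id, x):
    """The duplicated workload; both 'cores' should produce identical output."""
    return x * x + 1

def lockstep_run(x, inject_fault=False):
    a = compute("core_a", x)
    b = compute("core_b", x)
    if inject_fault:
        b ^= 1  # flip a bit to simulate a transient hardware error
    # The lockstep checker: any mismatch means one core misbehaved.
    if a != b:
        raise RuntimeError(f"lockstep mismatch: {a} != {b}")
    return a

print(lockstep_run(7))            # 50, both cores agree
try:
    lockstep_run(7, inject_fault=True)
except RuntimeError as e:
    print("fault detected:", e)
```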

Figure 6: S32G2 Automotive Network Microprocessor System Block Diagram

(Image source: NXP)

Safety is a major feature of the S32G2 processor. Beyond the hardware accelerators just mentioned, on-chip hardware modules and software libraries support ASIL D functional safety. Optional advanced software includes the S32 safety software framework for functional-safety implementations and a structural core self-test (SCST) for the Cortex-A53 cores, bringing a more advanced information-security and functional-safety experience.

Overall, the versatile S32G2 processor is an excellent choice for domain controllers, automotive networking, and security processing.

Image Sensors for Various Automotive Applications

In discussing the technical routes, we saw that whichever development route is taken, cameras are a necessary part of an autonomous driving system, collectively referred to as the vision system. The next device we introduce is an image sensor suitable for many automotive applications such as ADAS, autonomous driving, surround view, reversing cameras, and in-cabin monitoring. It comes from manufacturer onsemi (ON Semiconductor), with manufacturer part number ASX340AT2C00XPED0-DPBR2 on Mouser Electronics.

The ASX340AT image sensor is part of ON Semiconductor's image sensor lineup, aimed mainly at the automotive market and suited to applications such as rear-view cameras, blind-spot monitoring, and surround view.

The ASX340AT series is a single-chip CMOS active-pixel digital image sensor in VGA format. It has the inherent strengths of a high-quality image sensor, such as low noise, low power consumption, and a high level of integration, and provides a variety of camera functions including auto focus, auto white balance, and auto exposure. Notably, the ASX340AT series is a complete camera system on a chip: sophisticated camera functions are integrated on the die, allowing integrators to design rear-view camera systems without additional processing chips.
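
To give a flavor of what auto white balance and auto exposure do on-chip, here is a minimal gray-world white balance plus mean-brightness exposure estimate in NumPy; the sensor's actual pipeline is far more elaborate, but the underlying idea is the same.

```python
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float64)  # VGA RGB

# Gray-world auto white balance: scale each channel so its mean matches
# the overall mean, assuming the scene averages out to neutral gray.
channel_means = frame.reshape(-1, 3).mean(axis=0)
gains = channel_means.mean() / channel_means
balanced = np.clip(frame * gains, 0, 255)

# Auto exposure: compare mean luminance to a mid-gray target and derive
# a multiplicative exposure correction for the next frame.
luminance = balanced.mean()
exposure_gain = 128.0 / luminance
print(f"AWB gains (R,G,B): {np.round(gains, 3)}, next-frame exposure x{exposure_gain:.2f}")
```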

Figure 7: Internal block diagram of ASX340AT series

(Image source: ON Semiconductor)

Beyond the ASX340AT series, ON Semiconductor's full image sensor portfolio is comprehensive, supporting resolutions from VGA to over 50 MP across both CMOS and CCD technologies and covering a wide range of automotive, industrial, and consumer applications. Five new series of image sensors in the portfolio are currently distributed by Mouser Electronics, helping engineers achieve cutting-edge application innovation.

The route battle continues, but there is only one goal

Undoubtedly, factional disputes over how to realize autonomous driving, in environmental perception and in overall system design alike, will persist for a long time to come. The pure-vision solution, however, may gradually lose its first-mover advantage; given its evident limitations, more and more manufacturers are expected to embrace "multi-sensor fusion," especially at higher levels of autonomous driving. Of course, however it is achieved, the ultimate goal is the same: making cars safer to drive. To reach that goal, manufacturers need better hardware solutions to match ever more intelligent software algorithms, and that is where Mouser Electronics can help engineers implement the hardware side of autonomous or assisted driving with high quality.

Reviewing Editor: Tang Zihong
