The three pillars of artificial intelligence are hardware, algorithms, and data. Hardware refers to the chips that run AI algorithms and the corresponding computing platforms. At present, GPUs running neural networks in parallel are the main hardware in use, while FPGAs and ASICs also have the potential to emerge in the future. The GPU (graphics processing unit) is the “heart” of the graphics card; like the CPU, it is a microprocessor, but one specialized in graphics computation.

The GPU is designed to perform the complex mathematical and geometric calculations required for graphics rendering. In floating-point operations, parallel computation, and similar workloads, it can deliver dozens or even hundreds of times the performance of a CPU. NVIDIA has been launching relevant hardware products and software development tools since the second half of 2006, and it is currently the leader in the artificial-intelligence hardware market.
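To make that parallelism concrete, here is a minimal CUDA sketch (illustrative only, not tied to any particular product mentioned above): instead of looping over a large array on one core as a CPU would, the GPU launches one lightweight thread per array element, and the hardware keeps thousands of those threads in flight at once.

// vector_add.cu — a minimal sketch of GPU data parallelism (illustrative only).
// Each of the N threads handles exactly one array element; the GPU schedules
// thousands of them concurrently, which is where its throughput comes from.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main() {
    const int n = 1 << 20;                          // about one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);                   // unified memory, for brevity
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The pattern of one thread per data element is what lets the GPU apply its many floating-point units to an entire array at the same time.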

The GPU’s ability to perform parallel computation over massive amounts of data matches the needs of deep learning, so it was the first of these chips to be adopted for it. In 2011, Professor Andrew Ng (Wu Enda) took the lead in applying GPUs to Google Brain and achieved striking results: 12 NVIDIA GPUs could deliver deep-learning performance roughly equivalent to 2,000 CPUs.
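The fit with deep learning comes from the fact that training is dominated by large matrix multiplications, and every element of a matrix product can be computed independently. The naive kernel below is only a sketch of that idea (production libraries such as cuBLAS use far more sophisticated implementations); the host-side setup would mirror the vector-add sketch above.

// matmul.cu — naive matrix multiplication kernel, illustrative only.
// Each thread computes one element C[row][col] of C = A * B, so an
// n x n product launches n*n threads for the GPU to run in parallel.
__global__ void matmul(const float *A, const float *B, float *C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];  // dot product of one row and one column
        C[row * n + col] = sum;
    }
}

// Launched roughly as:
//   dim3 threads(16, 16);
//   dim3 blocks((n + 15) / 16, (n + 15) / 16);
//   matmul<<<blocks, threads>>>(A, B, C, n);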

As an image processor, the GPU was designed to meet the demand for large-scale parallel computation in graphics processing, so it has three limitations when applied to deep-learning algorithms: 1. it cannot fully exploit its parallel-computing advantages in these applications; 2. its hardware structure is fixed and not reprogrammable; 3. its energy efficiency when running deep-learning algorithms is far lower than that of the ASIC and FPGA.

The three specialized AI chips: GPU, FPGA and ASIC

The FPGA (field-programmable gate array) can be reprogrammed repeatedly by users according to their own needs. Compared with the GPU and CPU, it offers high performance, low energy consumption, and hardware programmability.

The FPGA consumes less power than the GPU, and has a shorter development time and lower cost than the ASIC. Since Xilinx invented the FPGA in 1984, it has secured a place in communications, healthcare, industrial control, and security, and has grown rapidly in recent years. In the past two years, driven by the boom in cloud computing, high-performance computing, and artificial intelligence, attention to the FPGA and its inherent advantages has reached an unprecedented height.

In the current market, giants such as Intel, IBM, Texas Instruments, Motorola, Philips, Toshiba, and Samsung have all entered the FPGA business, but the most successful players are Xilinx and Altera, which together hold nearly 90% of the market and more than 6,000 patents. Intel acquired Altera for US $16.1 billion in 2015, a move that also targets FPGA-based dedicated computing power for artificial intelligence. From the actions of these industry giants it can be seen that, because the FPGA largely compensates for the CPU’s shortcomings in computing power and flexibility, the CPU + FPGA combination will become an important direction for deep learning in the future.

The FPGA also has three limitations: 1. the computing power of its basic logic units is limited; 2. its speed and power consumption still need improvement; 3. it is expensive. The ASIC (application-specific integrated circuit) is an integrated circuit designed for a specific purpose: it cannot be reprogrammed, and it offers high efficiency and low power consumption, but it is expensive to develop.

In recent years, the various dazzling chips such as the TPU, NPU, VPU, and BPU are all essentially ASICs. The ASIC lacks the flexibility of the GPU and FPGA: a customized ASIC cannot be changed once it is manufactured, so the barrier to entry is high due to the large initial cost and long development cycle. At present, most of the players are giants that own AI algorithms and are skilled at chip development, such as Google with its TPU. Because it is tailored precisely to neural-network algorithms, the ASIC outperforms the GPU and FPGA in both performance and power consumption: the first-generation TPU is reported to deliver 14 to 16 times the performance of a traditional GPU, and the NPU 118 times that of a GPU. Cambricon has released its instruction set for external applications, and the ASIC is expected to become the core of AI chips in the future.

Another future direction for the ASIC is the brain-like chip. A brain-like chip is an ultra-low-power chip based on neuromorphic engineering that borrows from the way the human brain processes information; it is suited to real-time processing of unstructured data and has the ability to learn, bringing it closer to the goal of artificial intelligence. It tries to imitate the principles of the human brain in its basic architecture, replacing the traditional von Neumann architecture with neurons and synapses, so that the chip gains asynchronous, parallel, low-speed, distributed processing capabilities as well as independent perception, recognition, and learning abilities. IBM’s TrueNorth is such a brain-like chip. At present, brain-like chips are still in their infancy and remain some distance from commercialization, which is also where many countries are actively positioning themselves.
