Chips define the basic computing architecture of an industrial chain and its ecosystem. Just as the CPU is the core of the IT industry, the chip is the core of the artificial intelligence industry. To date, the mainstream AI chips recognized by the industry are the GPU, FPGA and ASIC, in addition to the CPU. Anyone familiar with the chip industry knows that the underlying architecture of the various so-called AI chips falls into one of these categories, and within those architectures the competitive landscape is already settled.
In CPUs, needless to say, Intel has an absolute lead, and there is little chance of breaking through on that architecture. As for GPUs, NVIDIA holds more than 70% of the global GPU market, and the general-purpose GPU computing market for artificial intelligence is essentially an NVIDIA monopoly. Reportedly, of the more than 3,000 AI start-ups worldwide, most use hardware platforms supplied by NVIDIA.
Looking at FPGAs: although the market prospect is attractive, the barrier to entry is among the highest in the chip industry. More than 60 companies worldwide have spent billions of dollars attempting to scale the FPGA heights, including industry giants such as Intel, IBM, Texas Instruments, Motorola, Philips, Toshiba and Samsung. Yet only four companies, all in Silicon Valley, have succeeded: Xilinx, Altera, Lattice and Microsemi. Of these, Xilinx and Altera together hold nearly 90% of the market share and more than 6,000 patents; the technical barrier those patents constitute is all but insurmountable, and Xilinx has long maintained the dominant position in FPGAs worldwide.
Precisely because the chip infrastructure landscape is already settled, the so-called domestic AI chip companies (including start-ups) in fact only do secondary development or optimization on top of the architectures above. Take Shenjian Technology (known internationally as DeePhi Tech), which Xilinx has just acquired, as an example. Since its founding in 2016, Shenjian Technology has developed machine-learning solutions on Xilinx's technology platform, and the two companies have cooperated closely. Both of Shenjian Technology's deep-learning processor architectures, the Aristotle architecture and the Cartesian architecture DPU products, are built on the Xilinx FPGA platform.
Moreover, since Xilinx was previously one of Shenjian Technology's investors, Shenjian looks less like an independent chipmaker than a partner optimizing Xilinx FPGAs. The reason is simple: cut off from the Xilinx FPGA platform, Shenjian Technology would be a tree without roots, a stream without a source. Beyond Shenjian Technology, the so-called BPU AI chip from Horizon Robotics, another well-known Chinese AI chip start-up, is also said to be a secondary development based on FPGAs. And since it is FPGA-based, its core underlying architecture cannot be separated from the platforms of the four vendors mentioned above: Xilinx, Altera, Lattice and Microsemi. Even a genuinely disruptive core-architecture innovation would struggle to find a foothold, since the FPGA market has already been carved up by these four companies.
Now look at ASICs. With large foreign manufacturers all but monopolizing the CPU, GPU and FPGA markets, and with technical barriers so high, Chinese AI chip makers have long lacked key core technologies of their own in the chip field. Market forces and individual companies alone can hardly break through in CPUs, GPUs or FPGAs, so they have sought another path. At present, Chinese AI chip makers are mainly small and medium-sized companies that, guided by practical application requirements, focus on developing AI ASICs for end devices, optimize for a single vertical field, and compete on low power consumption and low cost. Cambricon, a well-known Chinese AI chip start-up, is one such case.
This is not to say that ASICs have no prospects in the AI chip field. On the contrary, Google's famous TPU is based on an ASIC. Note, however, that Google developed the TPU for the application scale of its own data centers, and scale is the key factor determining whether an ASIC pays off. What makes the ASIC outlook harder to predict is an industry view that FPGAs benefit from the economics of exponentially rising chip NRE (non-recurring engineering) costs: as process technology advances and NRE costs climb exponentially, more and more ASIC projects will be abandoned because they cannot reach economies of scale, pushing designers toward direct development and design on FPGAs instead.
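The economics behind that claim can be sketched with a simple break-even calculation. All dollar figures below are hypothetical, chosen only to illustrate the trade-off: an ASIC front-loads a large one-time NRE cost but has a low per-unit cost, while an FPGA avoids NRE at the price of a higher unit cost.

```python
# Hypothetical cost model comparing ASIC vs. FPGA total cost at a given volume.
# All dollar figures are illustrative assumptions, not real quotes.

def total_cost(nre, unit_cost, volume):
    """Total cost = one-time NRE + per-unit cost * production volume."""
    return nre + unit_cost * volume

def break_even_volume(asic_nre, asic_unit, fpga_unit):
    """Volume above which the ASIC becomes cheaper than the FPGA.

    Solves: asic_nre + asic_unit * v = fpga_unit * v
    """
    return asic_nre / (fpga_unit - asic_unit)

# Illustrative figures: $20M NRE for an advanced-node ASIC,
# $10 per ASIC unit vs. $150 per FPGA unit.
v = break_even_volume(asic_nre=20e6, asic_unit=10.0, fpga_unit=150.0)
print(f"break-even volume: {v:,.0f} units")  # ~142,857 units

# Doubling the NRE (e.g. moving to a finer process node) doubles the
# volume needed to justify the ASIC, which is exactly the squeeze
# the industry view above describes.
v2 = break_even_volume(asic_nre=40e6, asic_unit=10.0, fpga_unit=150.0)
print(f"break-even volume at 2x NRE: {v2:,.0f} units")
```

A product that cannot credibly ship past the break-even volume is, under this model, better served by an FPGA, and rising NRE pushes that threshold ever higher.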
According to Tractica's estimates, FPGAs were barely to be found in deep-learning applications as recently as last year, yet by 2025 their deployments will rival, if not exceed, those of CPUs. As a result, by 2025 FPGAs will take a significant share of a deep-learning chipset market worth a total of US$12.2 billion. As the saying goes, however much things change, they never depart from their origin: whatever AI chips are called, they remain inseparable from the cores of the CPU, GPU, FPGA and ASIC, and those cores are plainly still the business of traditional chipmakers, foreign manufacturers such as Intel, NVIDIA and Xilinx.
Xilinx's acquisition of Shenjian Technology shows that a considerable number of so-called Chinese AI chip companies merely do secondary development, optimization and application on top of other people's architectures, dressed up under a novel name. Just as in the traditional chip industry, China's seemingly bustling AI chip sector still survives in a dependent mode.