You haven’t heard of FPGA? Then you must not have updated your knowledge of enterprise IT for quite a while. Today I want to talk about what an FPGA is and what its main application scenarios are. Some people say FPGA is the replacement for traditional CPUs and GPUs. Do you believe that is the future?
FPGA stands for Field Programmable Gate Array. It originated as a semi-custom circuit in the application-specific integrated circuit (ASIC) domain. It is programmable after manufacture, and it can perform data-parallel and task-parallel computing at the same time.
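To make the two kinds of parallelism mentioned above concrete, here is a minimal Python sketch (the function names are illustrative, not from any FPGA toolchain): data parallelism applies the same operation to many data elements, while task parallelism runs different, independent tasks concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Data parallelism: the same operation applied to every element.
# On an FPGA this maps to replicated hardware units working side by side.
def scale(values, factor):
    return [v * factor for v in values]

# Task parallelism: a different, independent job that can run at the
# same time. On an FPGA this maps to a separate circuit block.
def checksum(values):
    return sum(values) % 256

data = list(range(8))
with ThreadPoolExecutor(max_workers=2) as pool:
    scaled = pool.submit(scale, data, 3)     # task 1
    check = pool.submit(checksum, data)      # task 2, running concurrently
    print(scaled.result())  # [0, 3, 6, 9, 12, 15, 18, 21]
    print(check.result())   # 28
```

An FPGA can exploit both forms at once because its "program" is a hardware configuration, not a sequence of instructions.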
Actually, Intel, Ziguang (Tsinghua Unigroup), Inspur and other companies have already begun to lay out FPGA strategies. As early as the SC15 conference, Inspur, together with Altera and iFLYTEK, China’s largest provider of intelligent voice technology, jointly released a deep learning platform based on Altera’s Arria 10 FPGA.
Of course, the most famous move is Intel’s $16.7 billion acquisition of Altera, the largest acquisition in Intel’s history, made mainly for the FPGA business. Judging from the Xeon E5-2600 v4 processor with an integrated FPGA that Intel later showed at IDF, the $16.7 billion was worth it: with the help of the FPGA, the Xeon’s performance per watt increased by 70%.
Ziguang is another company hoping to get direct access to the latest FPGA technology through acquisition. After its failed bids for Micron and then for Western Digital, Ziguang may acquire Lattice Semiconductor of the United States to position itself in the FPGA market.
From the perspective of application scenarios, since Google’s AlphaGo beat the human Go champion, deep learning has stepped down from the altar: more and more people realize that deep learning may change their future lives and is the direction of future technology. Meanwhile, FPGA design tools have become more compatible with the upper-layer software commonly used in deep learning, making FPGA a major technological boost for the field.
However, claiming that FPGA is the future replacement of the traditional CPU and GPU is a bit exaggerated. For one thing, CPU and GPU technology is mature and backed by a complete ecosystem; for another, the CPU and the FPGA are structurally different. A CPU goes through control stages such as instruction fetch and decode, and can handle all kinds of arbitrary instructions.
By contrast, an FPGA cannot handle unknown instructions as flexibly as a CPU; it can only process input data and produce output according to a fixed, pre-configured pattern, which is why FPGA is often regarded as an architecture for experts. Unlike the CPU, both FPGAs and GPUs have a large number of computing units, so their computing power is very strong: in neural network computing, both are much faster than a CPU. But a GPU’s architecture is fixed, while an FPGA is reconfigurable.
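A software analogy for the contrast above may help (this is purely illustrative Python, not real hardware code): a CPU behaves like an interpreter that decodes and dispatches arbitrary instructions, while an FPGA behaves like a pipeline "wired" once at configuration time, which then transforms input to output in that one fixed pattern.

```python
# CPU-style: a general interpreter that fetches, decodes, and executes
# whatever instruction sequence it is given.
def cpu_run(program, x):
    ops = {"add": lambda a, b: a + b,
           "mul": lambda a, b: a * b}
    for op, arg in program:
        x = ops[op](x, arg)  # decode + dispatch every step at runtime
    return x

# FPGA-style: a dataflow pipeline fixed at configuration time.
# It computes only this one pattern, but the stages are "hardware",
# with no per-instruction decode overhead.
def make_fpga_pipeline():
    stages = [lambda v: v + 1, lambda v: v * 2]  # configured once
    def run(v):
        for stage in stages:
            v = stage(v)
        return v
    return run

program = [("add", 1), ("mul", 2)]
pipeline = make_fpga_pipeline()
print(cpu_run(program, 5))  # 12
print(pipeline(5))          # 12
```

Both compute the same result here, but the CPU version can run any program you hand it, while the "FPGA" version must be reconfigured (rebuilt) to compute anything else.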
We can see that FPGA’s main application fields are deep learning and neural network algorithms, while the traditional CPU emphasizes being “general purpose” and the GPU emphasizes computing speed but still runs a fixed instruction set. FPGA has become popular worldwide because of its programmability, which gives it a unique advantage in deep learning. Seen this way, it is not surprising that Google developed its own chip, the TPU, to advance deep learning. As Urs Hölzle, the head of Google’s data centers, put it: Google develops its own chips to solve its own problems.