To call the FPGA the future of the traditional CPU and GPU would be an exaggeration. Setting aside the maturity of CPU and GPU technology and their well-established ecosystems, the architecture of the CPU differs fundamentally from that of the FPGA. The CPU controls instruction fetch, decode, and related pipeline stages, and can handle all manner of arbitrary, unpredictable instruction sequences.
The FPGA, in contrast, cannot respond to unforeseen instructions as flexibly as a CPU. Once configured, it can only process input data according to a fixed pattern and produce output. This is why the FPGA is often regarded as an architecture reserved for specialists.
Unlike the CPU, both the FPGA and the GPU contain large numbers of compute units, which gives them very strong computational throughput. When running neural network workloads, both are far faster than the CPU. The key difference is that the GPU's architecture is fixed, so the instructions its hardware supports are fixed, whereas the FPGA is reprogrammable.
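The reason many compute units help so much is that neural network math is naturally data-parallel: in a fully connected layer, each output neuron's dot product is independent of the others, so each one can, in principle, be assigned to its own compute unit on a GPU or FPGA. A minimal pure-Python sketch (illustrative only; real deployments use optimized hardware kernels, and the function name here is hypothetical):

```python
def dense_layer(weights, inputs):
    """Compute one fully connected layer: one dot product per output neuron.

    Each row's dot product is independent of the others, which is exactly
    the parallelism that hardware with many compute units (GPU or FPGA)
    exploits, while a CPU works through the rows largely sequentially.
    """
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# Tiny example: 2 output neurons, 2 inputs.
W = [[1.0, 2.0],
     [3.0, 4.0]]
x = [1.0, 1.0]
print(dense_layer(W, x))  # -> [3.0, 7.0]
```

Each list element above could be computed concurrently without any coordination, which is why throughput scales with the number of available compute units for this kind of workload.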
As we can see, the FPGA's main application areas are deep learning and neural network algorithms, whereas the traditional CPU emphasizes being general-purpose. The GPU emphasizes computing speed, but its instruction set is still fixed. The FPGA has caught on worldwide precisely because of its programmability, which gives it unique advantages in the field of deep learning. It is not surprising that Google developed its own chip, the TPU, to advance deep learning. As Urs Hölzle, who oversees Google's data centers, put it, Google develops its own chips in order to cut costs.
When market demand changes, technology develops to match. Now that deep learning has become a hot field, the FPGA, which suits it so well, has become a focus for manufacturers.