Beijing, July 15, 2020 – Graphcore today officially released its second-generation IPU and the IPU Machine: M2000 (IPU-M2000), a large-scale, system-level product. The new generation delivers greater processing power, more memory, and built-in scalability, enabling it to handle extremely large machine intelligence workloads.
The IPU-M2000 is a plug-and-play machine intelligence blade compute unit, powered by Graphcore's new 7nm Colossus™ Mk2 GC200 IPU and fully supported by the Poplar™ software stack. It is designed for easy deployment and supports systems that can scale to very large sizes. This slim 1U blade delivers one petaflop of machine intelligence compute and integrates networking technology optimized for AI scale-out.
Graphcore's second-generation Colossus™ IPU processor: GC200
The IPU-M2000 can be built into the IPU-POD64, Graphcore's new modular rack-scale solution for the horizontal scale-out of large machine intelligence workloads, offering unprecedented AI compute possibilities, complete flexibility, and easy deployment. It can scale from a single rack-mounted on-premises system to more than 1,000 IPU-POD64 systems in highly interconnected, ultra-high-performance AI compute facilities.
"With the launch of the IPU-M2000 and IPU-POD64, Graphcore has further extended our competitive advantage in machine intelligence," said Nigel Toon, CEO of Graphcore. "Through technological innovation, Graphcore has built a stronger product line that delivers the industry-leading performance customers expect. For customers looking to add machine intelligence compute to their data centers, the newly launched IPU-M2000, with its powerful compute, easily scalable flexibility, and outstanding ease of use, offers strong feasibility and considerable potential for added value."
Users of Mk1 IPU products can be confident that their existing models and systems will run seamlessly on the new Mk2 IPU systems. Although the first-generation Graphcore IPU already leads the market, the second generation delivers up to 8x the performance.
Performance comparison between the Mk1 IPU and the Mk2 IPU
The design of the IPU-M2000 enables customers to build IPU-POD™ configurations up to data-center scale with as many as 64,000 IPUs, delivering 16 exaflops of machine intelligence compute. The new IPU-M2000 can handle even the toughest machine intelligence training or large-scale deployment workloads.
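As a sanity check on the figures above (one petaflop per IPU-M2000; 16 exaflops from 64,000 IPUs), a few lines of arithmetic recover the implied machine count, IPUs per machine, and per-IPU rating. The derived values are inferences from the stated numbers, not figures quoted in the announcement:

```python
# Figures stated in the announcement.
PFLOPS_PER_M2000 = 1.0   # one petaflop per IPU-M2000
TOTAL_EXAFLOPS = 16.0    # compute at maximum stated scale
TOTAL_IPUS = 64_000      # IPU count at that scale

total_pflops = TOTAL_EXAFLOPS * 1000            # 1 exaflop = 1,000 petaflops
num_m2000s = total_pflops / PFLOPS_PER_M2000    # machines needed for 16 EFLOPS
ipus_per_m2000 = TOTAL_IPUS / num_m2000s        # implied IPUs per machine
tflops_per_ipu = total_pflops * 1000 / TOTAL_IPUS  # implied per-IPU rating

print(num_m2000s, ipus_per_m2000, tflops_per_ipu)
# → 16000.0 machines, 4.0 IPUs per machine, 250.0 TFLOPS per IPU
```

So the stated figures are internally consistent: 16,000 IPU-M2000s of four GC200s each.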
Graphcore's new IPU-Fabric™ technology makes it possible to connect IPU-M2000s and IPU-PODs at scale. Designed from the ground up for machine intelligence communication, it provides a dedicated low-latency fabric that can connect IPUs across an entire data center.
Graphcore IPU-Fabric™ technology
Graphcore's Virtual-IPU software integrates with workload management and orchestration software to easily serve training and inference for many different users, and allows available resources to be adjusted and reconfigured to suit the workload.
Whether you want to run a machine intelligence workload on a single IPU or across thousands of IPUs, Graphcore's Poplar SDK makes the process simple. You can use your preferred AI framework, such as TensorFlow or PyTorch; from that high-level description, Poplar builds a complete compute graph capturing the computation, the data, and the communication. It then compiles this compute graph to make full use of the available IPU hardware, building a runtime program that manages compute, memory, and network communication.
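To make the capture-then-compile flow concrete, here is a deliberately simplified, framework-agnostic Python sketch of the idea: operations are first recorded into a graph, and the whole graph is then "compiled" into a single runnable program. This is a toy tracer for illustration only, not Poplar's actual API:

```python
class Graph:
    """Toy compute graph: records operations at build time, runs them later."""

    def __init__(self):
        self.nodes = []  # each node is (kind, fn, input node ids)

    def placeholder(self):
        """Declare a graph input; returns its node id."""
        self.nodes.append(("input", None, ()))
        return len(self.nodes) - 1

    def op(self, fn, *inputs):
        """Record an operation on earlier nodes; returns its node id."""
        self.nodes.append(("op", fn, inputs))
        return len(self.nodes) - 1

    def compile(self, output):
        """'Compile' the captured graph into a callable program."""
        nodes = list(self.nodes)  # freeze the graph at compile time

        def program(*feeds):
            feeds = iter(feeds)
            values = []
            for kind, fn, inputs in nodes:
                if kind == "input":
                    values.append(next(feeds))     # bind the next fed value
                else:
                    values.append(fn(*(values[i] for i in inputs)))
            return values[output]

        return program


# Build once, then run the compiled program with different inputs.
g = Graph()
x = g.placeholder()
y = g.placeholder()
s = g.op(lambda a, b: a + b, x, y)   # captured, not executed yet
out = g.op(lambda a: a * 2, s)
prog = g.compile(out)
print(prog(3, 4))  # → 14
```

A real compiler would additionally schedule the graph across tiles and insert the data movement and communication steps; the point here is only the two-phase structure the text describes.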
Graphcore's latest product line is built on three disruptive technology innovations that deliver the industry-leading performance customers expect:
·Compute: at the heart of each IPU-M2000 is Graphcore's new Colossus™ Mk2 GC200 IPU. The chip is built on TSMC's latest 7-nanometer process technology. Each chip packs 59.4 billion transistors onto an 823 mm² die, making it the most complex processor ever made.
·Data: each IPU carries an unprecedented amount of In-Processor-Memory™. Graphcore's new Mk2 GC200 packs 900MB of ultra-high-speed SRAM into the processor, with large amounts of RAM placed next to each processor core so that every core can access it at the lowest possible energy cost. Graphcore's Poplar software also lets the IPU reach streaming memory through Graphcore's unique Exchange-Memory™ communication. This supports even the largest models, with hundreds of billions of parameters. Each IPU-M2000 supports Exchange-Memory™ with a density of up to 450GB and an unprecedented bandwidth of 180TB/s.
·Communication: the IPU-M2000 has built-in IPU-Fabric™, dedicated AI networking. Graphcore has created the new GC4000 IPU-Gateway chip, which provides incredibly low latency and high bandwidth; each IPU-M2000 delivers 2.8Tbps. As systems scale from dozens of IPUs to tens of thousands, IPU-Fabric technology keeps communication latency nearly constant.
"By combining powerful compute with networking capability, we can handle the world's most advanced and complex algorithm models," said Lu Tao, Senior Vice President and General Manager of Graphcore China. "Such models will drive the deployment of local AI algorithms in scenarios across China, such as cloud computing, the internet, and communications, and will bring great value to the AI industry."
In the Chinese market, Graphcore has engaged in close early cooperation with leading local commercial users. Its IPU-based Developer Cloud officially launched in early July, and its IPU-POD product technology is already accessible to users there. China is therefore likely to be among the first regions where Graphcore's latest second-generation processor technology is commercialized.
These efforts are only part of Graphcore's considerable investment in the Chinese market. Graphcore has also built a strong local engineering team, and hopes to work closely with China's AI industry and AI innovators to boost the country's AI innovation with advanced technology.