Bristol, UK, January 26, 2021 - Today, Graphcore announced that it has taken a new step in helping customers accelerate innovation and the large-scale use of artificial intelligence technology.
Graphcore recognizes that artificial intelligence technology brings great opportunities to the market, but also a set of unique computing challenges: model sizes are growing rapidly, and accuracy standards keep rising. To take full advantage of the latest innovations, customers need tightly integrated hardware and software systems built specifically for AI.
Graphcloud is a secure, reliable cloud service built on the IPU-POD family, giving customers access to the power of Graphcore IPUs as they scale from experiments, proofs of concept, and pilot projects to larger production systems.
At launch, Graphcore is offering two products, with larger scale-out systems to follow in the coming months:
·IPU-POD16: delivers 4 petaflops of AI compute (4 IPU-M2000s, i.e. 16 Colossus MK2 GC200 IPUs)
·IPU-POD64: delivers 16 petaflops of AI compute (16 IPU-M2000s, i.e. 64 Colossus MK2 GC200 IPUs)
Compared with the latest GPU systems, IPU-POD systems can reduce total training cost and shorten time to solution.
Poplar and system software come pre-installed on Graphcloud instances. Sample code and application examples are available locally for the advanced models used in Graphcore's benchmarks, including BERT and EfficientNet. Users can also access comprehensive documentation to help them quickly get started with multiple frameworks, including PyTorch and TensorFlow.
Headquartered in the UK, Healx is one of the first Graphcore customers to use Graphcloud; its AI drug-discovery platform searches for new treatments for rare diseases. The company won "Best Use of AI in Health and Medicine" at the 2019 AI Awards.
Dan O'Donovan, head of machine learning engineering at Healx, said: "We started using an IPU-POD16 on Graphcloud in late December 2020 to migrate our existing MK1 IPU code to the MK2 system. The process was seamless and delivered huge performance gains. Having more memory available for the model means we no longer need to partition the model and can focus instead on partitioning the data. This makes the code simpler and model training more efficient."
He also noted: "Throughout our collaboration, Graphcore has always given us access to the latest hardware, SDKs, and tools. In addition, we have an ongoing dialogue with Graphcore's hardware and software engineers through direct meetings and support services."
On the launch of Graphcloud, Graphcore co-founder and CEO Nigel Toon said: "Whether users are evaluating our hardware and Poplar software stack for the first time or scaling up their AI compute resources, getting started with IPUs through Graphcloud has never been easier. We are delighted to work with Cirrascale to bring Graphcloud to the world. It is an important part of our global sales and support program, alongside the products and services being built with Graphcore."
Cirrascale CEO PJ Go said: "Cirrascale is very proud of its strategic partnership with Graphcore, which further advances the era of cloud-based machine learning solutions and enables new, large-scale commercial deployments with Fortune 500 companies."
Pricing and specifications
Available system types
·IPU-POD16: four IPU-M2000 systems
·IPU-POD64: 16 IPU-M2000 systems
·Both systems use Graphcore's unique IPU-Fabric™ interconnect architecture. IPU-Fabric™ is designed to eliminate communication bottlenecks and allow thousands of IPUs to operate on machine intelligence workloads as a single, ultra-fast aggregated unit.
·Each IPU-POD64 instance is supported by four Dell R6525 host servers equipped with the most powerful dual-socket AMD EPYC2 CPUs used in on-premise AI datacenter systems; each IPU-POD16 has one dedicated server of the same specification.
·The IPU-POD64 and IPU-POD16 provide 16TB and 4TB of secure local NVMe storage, respectively.
·Each IPU-POD64 provides 57.6GB of In-Processor Memory and 2,048GB of Streaming Memory (32 x 64GB DIMMs).
·Each IPU-POD16 provides 14.4GB of In-Processor Memory and 512GB of Streaming Memory (8 x 64GB DIMMs).