Data-intensive workloads such as machine learning and data analytics have become commonplace. To handle these compute-intensive tasks, enterprises need servers optimized for high-performance accelerated computing.
Intel yesterday released the 3rd Gen Intel Xeon Scalable processors (code-named "Ice Lake"). The processors are built on a new architecture that significantly improves performance and scalability. Paired with NVIDIA GPUs and networking, the new systems make an ideal platform for enterprise accelerated computing and GPU-accelerated applications.
Advantages of the Ice Lake platform for accelerated computing
Ice Lake adopts PCIe Gen 4, doubling the data transfer rate of the previous generation and matching the native speed of NVIDIA Ampere architecture-based GPUs, such as the NVIDIA A100 Tensor Core GPU. The resulting increase in throughput to GPUs is particularly important for machine learning workloads that involve large amounts of training data. It also speeds data-intensive tasks such as 3D design in NVIDIA RTX Virtual Workstation instances accelerated by the high-performance NVIDIA A40 data center GPU.
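As a rough illustration of why Gen 4 doubles throughput: PCIe Gen 3 signals at 8 GT/s per lane and Gen 4 at 16 GT/s, both with 128b/130b encoding. A minimal sketch of the per-direction bandwidth of a x16 slot (theoretical peak figures only, ignoring protocol overhead):

```python
# Approximate theoretical per-direction PCIe bandwidth for a x16 slot.
# Gen 3 runs at 8 GT/s per lane, Gen 4 at 16 GT/s, both with 128b/130b encoding.
def pcie_bandwidth_gbs(transfer_rate_gt: float, lanes: int = 16) -> float:
    """Usable bandwidth in GB/s for one direction, after encoding overhead."""
    encoding = 128 / 130  # 128b/130b line coding efficiency
    return transfer_rate_gt * encoding * lanes / 8  # 8 bits per byte

gen3 = pcie_bandwidth_gbs(8)   # Gen 3 x16: ~15.8 GB/s
gen4 = pcie_bandwidth_gbs(16)  # Gen 4 x16: ~31.5 GB/s
print(f"Gen3 x16: {gen3:.1f} GB/s, Gen4 x16: {gen4:.1f} GB/s")
```

The doubled transfer rate at the same encoding efficiency is what yields the straight 2x gain in host-to-GPU bandwidth.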
The higher data rate also enables 200Gb/s networking, such as the NVIDIA ConnectX family of HDR 200Gb/s InfiniBand and 200Gb/s Ethernet adapters, as well as the upcoming NDR 400Gb/s InfiniBand technology.
The Ice Lake platform supports 64 PCIe lanes, so more hardware accelerators, including GPUs and network adapters, can be installed in the same server, increasing the acceleration density per host. It also means that multimedia-rich VDI environments accelerated by the latest NVIDIA GPUs and NVIDIA Virtual PC software can achieve higher user density.
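To make the density point concrete, here is a minimal lane-budget sketch. It assumes every accelerator takes a full x16 connection and ignores PCIe switches and chipset-attached lanes, which real server designs use to attach more devices:

```python
# Simple PCIe lane budgeting: how many full-width x16 accelerators
# (GPUs or 200Gb/s NICs) fit in a given lane budget, with no PCIe switches.
def max_x16_devices(lane_budget: int, lanes_per_device: int = 16) -> int:
    return lane_budget // lanes_per_device

print(max_x16_devices(64))   # one Ice Lake socket: 4 x16 devices
print(max_x16_devices(128))  # a dual-socket system: 8 x16 devices
```

Against the previous generation's 48 lanes per socket (three x16 devices), the extra 16 lanes allow one more full-width accelerator per socket without a switch.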
These enhancements enable GPU-accelerated scaling at unprecedented levels. Enterprises can tackle the largest workloads by using more GPUs per host and by connecting the GPUs of multiple hosts more efficiently.
Intel also improved the performance of the Ice Lake memory subsystem, increasing the number of DDR4 memory channels from six to eight and raising the maximum memory speed to 3200 MT/s. This widens the data path from main memory to GPUs and the network, improving throughput for data-intensive workloads.
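A back-of-the-envelope sketch of what the extra channels and higher speed mean for peak memory bandwidth per socket (the 2933 MT/s figure for the prior generation is an assumption based on DDR4-2933 support; DDR4 moves 8 bytes per transfer per channel):

```python
# Peak theoretical DRAM bandwidth per socket:
# channels * transfers-per-second * 8 bytes per 64-bit transfer.
def dram_bandwidth_gbs(channels: int, mts: int) -> float:
    """Peak bandwidth in GB/s for the given channel count and MT/s rate."""
    return channels * mts * 8 / 1000

prev_gen = dram_bandwidth_gbs(6, 2933)  # assumed prior gen: 6 ch @ 2933 MT/s
ice_lake = dram_bandwidth_gbs(8, 3200)  # Ice Lake: 8 ch @ 3200 MT/s
print(f"{prev_gen:.1f} GB/s -> {ice_lake:.1f} GB/s")
```

Under these assumptions, peak per-socket bandwidth rises from roughly 141 GB/s to about 205 GB/s, a gain of more than 45%.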
Finally, improvements in the processor itself further speed accelerated computing workloads. A 10-15% increase in instructions per clock can improve overall performance of the CPU portion of accelerated workloads by up to 40%. Core counts have also increased, reaching as many as 40 cores in the 8xxx series. This raises the virtual desktop session density per host, further increasing the return on a server's GPU investment.
NVIDIA is pleased to see partners releasing new Ice Lake systems accelerated by NVIDIA GPUs, including the Dell EMC PowerEdge R750xa, built by Dell Technologies for GPU acceleration, and the new Lenovo ThinkSystem servers based on the 3rd Gen Intel Xeon Scalable processors and PCIe Gen 4, many of which are equipped with NVIDIA GPUs.
Intel's new Ice Lake platform and accompanying accelerator hardware are ideal for enterprise customers preparing to update their data centers. The new architecture's enhancements enable enterprises to run data-center-scale accelerated applications with better performance, and NVIDIA and Intel's mutual customers will be able to realize these benefits quickly.