Ninety percent of the world's data has been generated in the past two years. IDC predicts that by 2025, global data will grow exponentially to 163 ZB, a volume that, if it were water, would fill roughly two Atlantic Oceans. Yet only about 1% of this data is actually used. Governments and enterprises alike hope to improve their competitiveness through digital transformation, and digitization cannot bypass two dimensions: applications and data.

There is still a long way to go in learning how to use data well, and in achieving storage performance and capacity at the same time. Intel's approach is to keep advancing the technologies for recording, transmitting, storing, and processing data, so that users can store more data more economically and process all of it, from the cloud to the edge. According to IDC, by 2025, real-time data will account for 29% of all data. Autonomous driving is a good example of where more data will come from: a single car collects 4 TB of data every hour. How can that data be combined with traffic information, turned into "new oil" for vehicles, and used to reduce accidents and casualties? Doing so requires computer vision, edge computing, map data acquisition, and sharing data to the cloud to train AI models.

However, making full use of this data raises many difficulties, and the first challenge is how to store it. As a rule, the more data used for AI training, the more accurate the resulting models. Emerging applications such as autonomous driving are producing more data in less time, and as much of it as possible needs to be kept, which calls for storage with both better performance and larger capacity.

From the perspective of the overall system, better performance and larger capacity can be achieved by building a larger server cluster with distributed technology. However, the larger the cluster, the more complex the architecture design, deployment, and operation of the whole system become. A simpler alternative is to use faster, higher-capacity storage media, which can deliver several times the performance and capacity at the same cluster scale.

To a certain extent, storage performance and capacity pull against each other, but modern IT technology can address both dimensions. The capacity problem can be solved by deploying a new generation of QLC SSDs.

For example, the latest generation of QLC SSDs from Intel can store 16 TB in a 2.5-inch drive. With Intel's "ruler" form-factor SSDs, a single drive reaches 32 TB, and thanks to this long, thin shape, 32 such 32 TB drives can be deployed in a single 1U server, giving one server up to 1 PB of storage capacity.
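
As a quick sanity check on those figures, here is a back-of-envelope calculation using only the numbers quoted above; the 42U rack figure is a hypothetical extrapolation that ignores slots needed for networking and power.

```python
# Density math from the figures above:
# 32 TB per ruler-form-factor QLC SSD, 32 drives per 1U server.
DRIVE_TB = 32
DRIVES_PER_SERVER = 32

server_tb = DRIVE_TB * DRIVES_PER_SERVER   # 1024 TB, i.e. roughly 1 PB per 1U
print(f"Per 1U server: {server_tb} TB (~{server_tb / 1000:.1f} PB)")

# Hypothetical full rack (42U, ignoring space for switches and power):
print(f"Per 42U rack:  {42 * server_tb / 1000:.0f} PB")
```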

QLC SSDs offer larger capacity and good sequential read/write performance, but their random read/write performance is lower than that of TLC SSDs, so they suit workloads dominated by sequential access. At the system level, better write performance can be achieved by distributing writes more evenly, or by using Intel Optane as a cache layer in front of the system to accelerate data writes.
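
To make that caching pattern concrete, below is a minimal, hypothetical sketch: a fast tier (standing in for Optane) absorbs small random writes and periodically flushes them to a capacity tier (standing in for QLC) in large, roughly sequential batches. Both tiers are plain dictionaries here; a real implementation would sit on block devices and handle crash consistency.

```python
# Conceptual sketch only: all names are illustrative.
class TieredWriteCache:
    def __init__(self, flush_threshold=1024):
        self.fast_tier = {}              # Optane stand-in: absorbs random writes
        self.capacity_tier = {}          # QLC stand-in: prefers sequential I/O
        self.flush_threshold = flush_threshold

    def write(self, block_id, data):
        self.fast_tier[block_id] = data
        if len(self.fast_tier) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Writing blocks in sorted order approximates one sequential pass,
        # which is the access pattern QLC handles best.
        for block_id in sorted(self.fast_tier):
            self.capacity_tier[block_id] = self.fast_tier[block_id]
        self.fast_tier.clear()

    def read(self, block_id):
        # Serve hot data from the fast tier first.
        if block_id in self.fast_tier:
            return self.fast_tier[block_id]
        return self.capacity_tier.get(block_id)
```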

Intel Optane SSDs deliver more than 500,000 IOPS per drive for both reads and writes. Unlike traditional SSDs based on NAND flash, Optane is built on a different storage material and technology: it offers extremely high and balanced read/write performance, a write endurance far beyond NAND flash, and latency low enough that it can even serve as a supplement to memory.

Test data shows that as the write load increases, the latency of a traditional 3D NAND SSD keeps rising, while Optane's latency stays flat. This stable latency makes Optane better suited to improving the QoS of a storage system.
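
A minimal sketch of how such a latency measurement can be run is shown below: it issues random 4 KiB reads with the page cache bypassed and reports latency percentiles. The device path is hypothetical (and reading it may require root); a production test would normally use a dedicated tool such as fio.

```python
# Linux-only sketch: O_DIRECT requires aligned offsets and buffers.
import mmap, os, random, time

PATH = "/dev/nvme0n1"        # hypothetical test device (read-only access)
BLOCK = 4096                 # 4 KiB, a typical random-I/O block size
SAMPLES = 10_000
SPAN = 1 << 30               # sample offsets within the first 1 GiB

buf = mmap.mmap(-1, BLOCK)   # anonymous mmap gives a page-aligned buffer
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, SPAN // BLOCK) * BLOCK
    t0 = time.perf_counter()
    os.preadv(fd, [buf], offset)
    latencies.append((time.perf_counter() - t0) * 1e6)  # microseconds
os.close(fd)

latencies.sort()
for q in (0.50, 0.99, 0.999):
    print(f"p{q * 100:g}: {latencies[int(q * (SAMPLES - 1))]:.1f} us")
```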

In fact, many applications, such as artificial intelligence, data analytics, and high-performance computing (HPC), need memory capacity even more than memory performance; in practice, more capacity translates directly into better application performance. For example, published data shows Apache Spark performance improving by more than five times when combined with Optane DC Persistent Memory.
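
As an illustration of why capacity matters, the PySpark snippet below caches an entire dataset in memory so that repeated queries avoid storage I/O. The memory size and dataset path are placeholders for illustration, not tuning advice or the configuration behind the cited result.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("memory-bound-analytics")
    .config("spark.executor.memory", "96g")   # larger memory -> larger cache
    .getOrCreate()
)

df = spark.read.parquet("hdfs:///data/events")  # hypothetical dataset
df.persist(StorageLevel.MEMORY_ONLY)            # keep the working set in RAM
df.count()                                      # materialize the cache

# Subsequent queries hit memory instead of storage.
df.groupBy("event_type").count().show()
```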

Optane DC Persistent Memory not only provides greater memory capacity but is also more economical. In addition, it is non-volatile, something ordinary memory cannot match: in App Direct mode, data survives even a power failure, so there is no need to reload data after a system restart as with traditional memory.
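
The sketch below illustrates the App Direct idea in simplified form: memory-mapped data that is explicitly flushed survives a restart. Real App Direct applications typically use Intel's PMDK libraries and CPU cache-flush instructions; this stand-in just maps a file on a hypothetical DAX-mounted filesystem and flushes it with msync.

```python
import mmap, os

PATH = "/mnt/pmem0/state.bin"   # hypothetical fsdax mount point
SIZE = 4096

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)
mm = mmap.mmap(fd, SIZE)

mm[:16] = b"persistent-state"   # update the mapped region directly
mm.flush()                      # msync: make the update durable

mm.close()
os.close(fd)
# After a restart, the state is read straight back from the same
# mapping -- no reload from slower storage, unlike ordinary DRAM.
```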

Together, Intel Optane and QLC SSD technology are bringing new changes to the data center and new ideas for its architecture: they can multiply system performance while lowering cost. Data will become the new basis of measurement; process optimization, product iteration, and business-model innovation will all be driven by it. As traditional businesses turn digital, they will need intelligent infrastructure to support new applications, and that is the trend of the future.

Combined with QLC SSDs, Optane brings new ideas to data center architecture design: Optane DC Persistent Memory expands memory capacity and improves the performance of memory-bound applications; Optane SSDs accelerate data access and raise storage-system performance; and QLC SSDs provide larger capacity, storing more data at performance far beyond that of hard disks.
