In 2018, a total of 32 ZB of data was generated worldwide (1 ZB is 10^12 GB). Within four years we will enter the zettabyte era in earnest, with more than 100 ZB of data generated every year. With the development of 5G, IoT, AI, deep learning and ultra-high-definition technologies, data will grow explosively and pose new challenges to data storage. On September 11, 2019, Zhu Haixiang, vice president of product marketing at Western Digital, delivered a keynote speech on next-generation storage innovation at the 2019 World Computer Conference.

Zhu Haixiang, vice president of product marketing at Western Digital Corporation, said: “Data is quietly generated in every corner of our daily lives. With the development of the Internet of Everything, we have entered an era of data explosion. The data generated by streaming media makes this concrete for everyone: a 40-episode TV series needs about 69 GB of storage in HD, but in 4K Ultra HD it needs about 480 GB, roughly seven times as much. Today, most of the data we see is generated by people; however, it is estimated that by 2023 more than 90% of the world’s data will be generated automatically by machines. In the face of this explosive growth, we need to re-examine whether today’s storage can meet the rapidly growing needs of the data center. How to meet the storage demands of this data growth is the question the times have put to Western Digital.”

From standard definition to ultra-high definition, users get a better visual experience, while the data behind it grows exponentially: ultra-high definition requires roughly 16 times the storage capacity of standard definition. With 90% of data expected to be generated automatically by machines in the future, demand for data storage will grow explosively. Western Digital believes that exploiting the changes in how data is generated and in its characteristics, so that data can be stored effectively, at low cost and at high density, is an effective way to respond to this shift in the structure of data storage.
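A quick back-of-the-envelope check of the figures quoted above: the per-series sizes are the article's own numbers, while the pixel counts assume 4K UHD at 3840×2160 against a 960×540 "standard definition" frame, which is one plausible reading of the 16x claim.

```python
# Back-of-the-envelope check of the capacity ratios quoted above.
hd_series_gb = 69        # 40-episode series in HD, per the keynote
uhd_series_gb = 480      # the same series in 4K Ultra HD
print(f"4K vs HD storage: ~{uhd_series_gb / hd_series_gb:.1f}x")  # ~7.0x

# Pixel-count view of the "16x" claim, assuming 4K UHD (3840x2160)
# against a 960x540 "standard definition" frame.
uhd_pixels = 3840 * 2160
sd_pixels = 960 * 540
print(f"4K vs SD pixel count: {uhd_pixels // sd_pixels}x")        # 16x
```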

With current storage technology, only 15% of the data generated worldwide is retained. By 2023, with technology unchanged, only about 10% of data will be retained, and the remaining 90% will be discarded on cost-benefit grounds. In the era of big data, every piece of data has potential value; data is the new currency of the times. The implication is that you may not see the value of data when it is generated, but you will mine that value in later analysis. Combining effective data storage with low cost is therefore a major trend for the future.

In discussions of NAND flash processes, you often hear of 24 layers, 48 layers and 96 layers, and for companies chasing cost it might seem that the more layers, the better. Zhu Haixiang offered a vivid analogy: “There is a 61-story building in downtown San Francisco, which you can compare to 64-layer 3D NAND. Every additional floor adds cost, which is why you will never see a 200-story building.”

The layers added vertically are reflected in cost, and combining the horizontal and vertical dimensions yields a third, logical dimension. Storing 4 bits per flash cell pushes this logical scaling toward its boundary while keeping access in efficient 4 KB units. QLC keeps the cost of logical scaling under control and delivers good access performance, but at the same time it introduces limitations around over-provisioning (OP).
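As an illustration of that trade-off (not Western Digital's own sizing model), the sketch below shows how raw capacity grows with bits per cell and how an over-provisioning reserve trims what the host actually sees; the cell count and OP ratios are assumptions chosen only to make the numbers concrete.

```python
# Illustrative only: capacity per die versus bits per cell, minus an
# over-provisioning (OP) reserve. Cell count and OP ratios are assumptions.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def usable_capacity_gb(cells: float, bits_per_cell: int, op_ratio: float) -> float:
    """Raw capacity from cell count and bits/cell, minus the OP reserve."""
    raw_gb = cells * bits_per_cell / 8 / 1e9
    return raw_gb * (1 - op_ratio)

cells = 250e9  # hypothetical number of cells on one die
for name, bits in BITS_PER_CELL.items():
    # QLC parts typically set aside more OP to absorb their lower endurance.
    op = 0.28 if name == "QLC" else 0.07
    print(f"{name}: ~{usable_capacity_gb(cells, bits, op):.0f} GB usable (OP {op:.0%})")
```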

Western Digital also has very mature HDD technology reserves, and HDD evolution is driven by system architecture. Over the past ten years, Western Digital has kept pushing up single-drive HDD capacity. With conventional perpendicular magnetic recording reaching its limits, the newer shingled magnetic recording (SMR) technology has taken over, and through logical scaling it has repeatedly set new records for single-drive areal density.

It is estimated that by 2023, half of the HDDs in global data centers will have shifted to shingled magnetic recording. At the same time, Western Digital will continue to advance flash technology. With the explosive growth in data volume, data centers will face very heavy workloads, bringing huge resource demands and cost outlays. Machine-generated data is largely written sequentially, and zoned storage devices optimize for this workload, improving performance and efficiency and reducing TCO faster.

In the zoned storage architecture, the application layer, the host and the storage device cooperate on data placement: SMR HDDs provide the greatest storage capacity, while NVMe SSDs built on the emerging Zoned Namespaces (ZNS) standard deliver endurance, low latency and QoS, making up for the limitations of over-provisioning. A zoned storage architecture combining SMR HDDs and ZNS SSDs is a new, future-oriented storage architecture that optimizes the infrastructure and achieves greater economic benefit. Based on an open-source model, Western Digital has formed an alliance with major global manufacturers and OEMs to jointly promote the standardization of zoned storage in October this year.
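A minimal sketch of the zone model that SMR HDDs and NVMe ZNS SSDs expose, based only on the behaviour described above: each zone keeps a write pointer, accepts writes only at that pointer, and is reclaimed by resetting the whole zone. The zone size and block granularity here are arbitrary illustrative choices, not values from any particular device.

```python
# Minimal model of a zoned storage device's zone: sequential-only writes
# at a write pointer, with space reclaimed by resetting the whole zone.
class Zone:
    def __init__(self, zone_id: int, capacity_blocks: int = 65536):
        self.zone_id = zone_id
        self.capacity = capacity_blocks
        self.write_pointer = 0  # next block that may be written

    def write(self, start_block: int, num_blocks: int) -> None:
        # A zoned device rejects writes that do not start at the write pointer.
        if start_block != self.write_pointer:
            raise ValueError(
                f"zone {self.zone_id}: write at {start_block}, "
                f"expected write pointer {self.write_pointer}"
            )
        if self.write_pointer + num_blocks > self.capacity:
            raise ValueError(f"zone {self.zone_id}: zone full")
        self.write_pointer += num_blocks

    def reset(self) -> None:
        # Reclaiming space means resetting the whole zone, not rewriting in place.
        self.write_pointer = 0

zone = Zone(zone_id=0)
zone.write(0, 128)       # sequential append: accepted
zone.write(128, 256)     # continues at the write pointer: accepted
try:
    zone.write(64, 8)    # in-place overwrite: rejected by the device
except ValueError as err:
    print(err)
```

The point of the model is that the host and application layer take over data placement, which is why sequentially written, machine-generated workloads map onto SMR and ZNS media so efficiently.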

From the perspective of data architecture, zoned storage is an evolution that genuinely changes storage efficiency, especially for the large volumes of data generated by emerging technologies. Zoned storage technology can better support the development of different industries.
