Storage for virtualization has changed the data center in many ways. Beyond reclaiming unused capacity, it can also reduce power consumption and enable centralized management.

Although virtualization supports several storage approaches, many enterprises choose to migrate most of their data from local disks to some form of shared storage when they virtualize. This dependence on shared storage may be the biggest change virtualization has brought to the data center.

By migrating storage from a large number of underutilized local disks to central devices such as SANs, virtual storage lets enterprises consolidate storage resources and centralize management. These apparent advantages also put great pressure on many aspects of the traditional data center, especially as storage consumption grows rapidly.

For many enterprises new to virtualization, this is also their first exposure to shared storage. Deploying virtualized storage on a SAN presents system architects and administrators with a number of challenges.

Challenges in Virtual Storage Investment

For a virtual storage deployment, the ideal approach is to migrate all storage from individual servers to the SAN, but doing so comes with significant cost increases. In many SAN environments, virtual servers are among the biggest consumers of storage resources. SAN storage is already expensive on its own, and the additional investment in dedicated connectivity equipment drives the cost of a virtual storage architecture even higher. With a Fibre Channel fabric in particular, the combined cost of Fibre Channel storage, dedicated Fibre Channel switches, and the HBAs that must be installed in every server is very high.

The storage controller that provides virtual storage capacity also adds to the initial investment, although it can greatly reduce the storage consumed by a virtualization installation. Taken together, these factors mean a substantial up-front investment in storage equipment when first deploying virtual shared storage.

What Are the Challenges of Virtual Storage?

Virtual Storage and Its Backup

Virtualization consolidates servers at scale and, in the process, migrates data storage from a large number of independent, unconnected servers to centralized storage devices. This change gives data protection strategies greater flexibility. Traditional backup and recovery strategies can still be used with a virtual architecture, but more efficient models are now available.

In short, the traditional backup-agent model can be replaced by SAN-based backup. Many virtualization-friendly storage products now provide a wide range of data protection options for centralized data. NetApp's SnapVault, for example, offers a disk-based SAN backup solution.

This disk-based backup method monitors the contents of data blocks on disk; in a virtual server architecture, those blocks may span multiple virtual machines. The SnapVault engine tracks which blocks have changed and skips the large number of unchanged blocks when protecting data.
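
To make the mechanism concrete, here is a minimal Python sketch of block-level change tracking in the spirit described above. The block size, function names, and use of SHA-256 digests are illustrative assumptions, not NetApp's actual SnapVault implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real arrays vary

def block_hashes(path: str) -> list[bytes]:
    """Hash every fixed-size block of a volume image."""
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).digest())
    return hashes

def changed_blocks(old: list[bytes], new: list[bytes]) -> list[int]:
    """Indices of blocks that differ since the last backup; everything
    else is skipped, which is where the time and space savings come from."""
    changed = [i for i, (a, b) in enumerate(zip(old, new)) if a != b]
    changed.extend(range(len(old), len(new)))  # blocks appended since then
    return changed
```

Only the indices returned by changed_blocks need to be copied to the backup target; the unchanged majority of a virtual disk is never read twice.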

Compared with simpler, consolidated data protection schemes, the biggest advantage of SAN-based backup for virtual storage is its extremely short recovery time: restoring from disk is far faster than restoring from tape.

Many enterprises rely on the SAN storage controller to handle all data protection work, but this model is not right for everyone. SAN-based data backup is constrained by bandwidth, remote-site maintenance, and existing investments.

Other Considerations for Centralized Storage

SAN storage is, of course, expensive. However, the features of some classes of virtual storage device can save customers money on the storage itself. For example, a SAN storage controller can detect when the same block of data exists on multiple disks and retain only one instance of the duplicate block, avoiding repeated writes of identical data.
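
The idea can be illustrated with a short, hypothetical Python sketch of hash-based block deduplication. The DedupStore class and its structure are assumptions for illustration, not any vendor's implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: one physical copy per unique block."""

    def __init__(self) -> None:
        self.blocks: dict[bytes, bytes] = {}       # digest -> stored block
        self.volumes: dict[str, list[bytes]] = {}  # volume -> digest list

    def write(self, volume: str, data: bytes) -> None:
        digest = hashlib.sha256(data).digest()
        self.blocks.setdefault(digest, data)       # physical write only if new
        self.volumes.setdefault(volume, []).append(digest)

    def savings(self) -> float:
        """Fraction of logical blocks that never had to be stored twice."""
        logical = sum(len(refs) for refs in self.volumes.values())
        return 1 - len(self.blocks) / logical if logical else 0.0
```

When 10 virtual machines are cloned from one template, most of their blocks hash to the same digests, so a store like this keeps roughly one physical copy plus each machine's small set of unique blocks.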

When the virtual machines consolidated onto the SAN are created from the same source, they are an ideal fit for data deduplication. Most virtual environments today are deployed from virtual machine templates. If, say, 10 virtual machines are created from one template, in most cases those 10 machines differ very little at the data block level. The operating system of a virtual machine, in particular, remains largely unchanged over its entire life cycle, even after patches and upgrades.

Through block-by-block comparison, virtual storage devices reduce storage requirements toward the size of the unique raw data. Some storage products even back this with a deduplication guarantee: NetApp, an industry leader here, promises at least 50% space savings for virtualization deployments.

Data deduplication is one of the key factors to weigh when choosing a virtualization storage device. It also feeds into the cost models used to analyze which products suit which requirements.

Most SAN storage has a front-end controller, which accounts for a significant share of the initial storage investment virtualization requires. Before choosing a storage platform, it is important to determine how many terabytes of data you actually have. If only 3 TB of virtual storage capacity is required, there is no need for a large dual-controller SAN array with space-saving deduplication. If the virtual storage holds more than 15 TB of data, investing in a SAN device with these additional features makes sense.
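
As a rough illustration of such a cost model, the following Python sketch compares cost per usable terabyte with and without deduplication. All prices, the license fee, and the 2:1 deduplication ratio are placeholder assumptions, not vendor quotes.

```python
def cost_per_usable_tb(raw_tb: float, base_cost: float, cost_per_tb: float,
                       dedup_ratio: float = 1.0,
                       dedup_license: float = 0.0) -> float:
    """Total cost divided by usable capacity; dedup stretches each raw TB."""
    usable_tb = raw_tb * dedup_ratio
    return (base_cost + raw_tb * cost_per_tb + dedup_license) / usable_tb

# With these placeholder prices, dedup only pays off at larger capacities:
print(cost_per_usable_tb(3, base_cost=20_000, cost_per_tb=1_000))   # ~7667/TB
print(cost_per_usable_tb(3, base_cost=20_000, cost_per_tb=1_000,
                         dedup_ratio=2.0, dedup_license=30_000))    # ~8833/TB
print(cost_per_usable_tb(15, base_cost=20_000, cost_per_tb=1_000))  # ~2333/TB
print(cost_per_usable_tb(15, base_cost=20_000, cost_per_tb=1_000,
                         dedup_ratio=2.0, dedup_license=30_000))    # ~2167/TB
```

Under these assumptions the deduplication license is a net loss at 3 TB but pays for itself at 15 TB, which mirrors the sizing rule above.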
