Accommodating this exponential growth in data while reducing the overall power profile is fast becoming one of the prime IT concerns that today’s enterprises face.
Adding to the complexity of this challenge is the operational requirement for highly available access to applications and data. Mission-critical applications demand more powerful processors and redundancy at every level, including network connectivity, servers, fabric pathing and data storage. Such requirements are crucial for top-tier business applications, and the corresponding increase in energy consumption can seem inevitable. The green storage techniques and best practices below, however, can help enterprises achieve high availability of applications and data while actually reducing total energy requirements.
Fabrics provide the interconnect between servers and storage systems. For larger data centers, fabrics can be quite extensive, with thousands of ports in a single configuration. Because each switch or director in the fabric contributes to the data center power bill, an efficient fabric design should account for energy and cooling impact as well as a rational distribution of ports to service the storage network. Consolidating the fabric onto higher-port-count, more energy efficient director chassis in a core-edge design can simplify the fabric and potentially lower the overall energy impact of the interconnect.
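The energy case for consolidation can be sketched with simple arithmetic. All port counts and wattage figures below are illustrative assumptions for the sake of the comparison, not vendor specifications:

```python
# Illustrative comparison of fabric power draw before and after
# consolidation. Port counts and per-unit wattages are assumed
# figures, not measurements from any particular product.

def fabric_power(switch_count, watts_per_switch):
    """Total power draw of a fabric built from identical units."""
    return switch_count * watts_per_switch

# Assumed baseline: 24 edge switches, 48 ports each, ~250 W apiece.
edge_ports = 24 * 48                     # 1152 ports
edge_watts = fabric_power(24, 250)       # 6000 W

# Assumed consolidated design: 3 director chassis, 384 ports each, ~1500 W.
director_ports = 3 * 384                 # same 1152 ports
director_watts = fabric_power(3, 1500)   # 4500 W

saving = 100 * (edge_watts - director_watts) / edge_watts
print(f"{edge_ports} ports: {edge_watts} W -> {director_watts} W "
      f"({saving:.0f}% less)")
```

The point of the sketch is that fewer, denser chassis can serve the same port count at a lower watts-per-port figure, before any cooling savings are counted.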
Storage virtualization refers to a suite of technologies that create a logical abstraction layer above the physical storage layer. Instead of managing individual physical storage arrays, virtualization enables administrators to manage multiple storage systems as a single logical pool of capacity.
It should be noted, though, that on its own storage virtualization is not inherently more energy efficient than conventional storage management; rather, it can be used to maximize capacity utilization and thus slow the growth of hardware acquisition. By combining dispersed capacity into a single logical pool, it is possible to allocate additional storage to resource-starved applications without having to deploy new energy-consuming hardware. Storage virtualization is also an enabling foundation technology for thin provisioning, resizeable volumes, snapshots and other solutions that contribute to more energy efficient storage operations.
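The pooling idea can be sketched in a few lines of code. The class and method names below (`StoragePool`, `allocate`) are illustrative, not any vendor's API; the point is that a request is satisfied from whichever arrays have stranded free space before any new hardware is purchased:

```python
# Minimal sketch of a logical storage pool aggregating free capacity
# from several physical arrays. Names are illustrative only.

class StoragePool:
    def __init__(self, arrays):
        # arrays: mapping of array name -> free capacity in GB
        self.free = dict(arrays)

    def total_free(self):
        return sum(self.free.values())

    def allocate(self, gb):
        """Satisfy a request from the arrays with the most spare
        capacity, so stranded space is consumed before new hardware
        has to be deployed."""
        if gb > self.total_free():
            raise ValueError("pool exhausted - time to add hardware")
        placement = {}
        for name in sorted(self.free, key=self.free.get, reverse=True):
            take = min(gb, self.free[name])
            if take:
                self.free[name] -= take
                placement[name] = take
                gb -= take
            if gb == 0:
                break
        return placement

pool = StoragePool({"array-a": 500, "array-b": 300, "array-c": 200})
print(pool.allocate(600))   # spread across the least-utilized arrays
print(pool.total_free())    # capacity still available in the pool
```

An administrator sees one pool of 1,000 GB rather than three partially empty arrays, which is precisely the abstraction that makes the later techniques (thin provisioning, resizeable volumes) practical.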
By some industry estimates, 75 percent of corporate data resides outside of the data center, dispersed across remote offices and regional centers. This presents a number of issues: the inability to comply with regulatory requirements for data security and backup, duplication of server and storage resources across the enterprise, the management and maintenance of geographically distributed systems, and increased energy consumption for corporate-wide IT assets. File system virtualization encompasses several technologies for centralizing and consolidating remote file data, incorporating that data into data center best practices for security and backup while maintaining local response times for remote users. From a green perspective, reducing dispersed energy inefficiencies via consolidation helps lower the overall IT energy footprint.
Depending on the implementation, compression can impose a performance penalty because the data must be encoded when written and decoded (decompressed) when read. Simply minimizing redundant or recurring bit patterns via compression, however, can reduce the amount of processed data that is stored by one half or more and thus reduce the amount of total storage capacity and hardware required.
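The encode-on-write, decode-on-read trade-off can be demonstrated with Python's standard zlib module. The sample record below is synthetic and deliberately repetitive; real-world ratios depend heavily on the data:

```python
import zlib

# Sketch of write-path compression: recurring bit patterns in the
# data are encoded once, shrinking what actually lands on disk.
record = b"status=OK;sensor=42;" * 500      # 10,000 bytes, highly repetitive
compressed = zlib.compress(record, level=6)  # cost paid on the write path

ratio = len(record) / len(compressed)
print(f"{len(record)} B -> {len(compressed)} B (ratio {ratio:.1f}:1)")

# Reads pay the corresponding decode cost:
restored = zlib.decompress(compressed)
assert restored == record
```

Even modest, general-purpose compression like this routinely halves repetitive data, which translates directly into fewer spindles that need to be purchased and powered.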
Data deduplication is a powerful technique that can reduce storage requirements by ratios of up to 20:1. It may be carried out either in-band, as data is transmitted to the storage medium, or in-place, on existing stored data. In-band techniques have the obvious advantage that multiple copies of data are never written in the first place, and therefore never have to be hunted down and removed. In-place techniques, however, are needed to address the immense volume of already-stored data that data center managers must deal with.
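A minimal sketch of in-band deduplication follows, using fixed-size chunks keyed by their SHA-256 digest. Production systems typically use variable-size chunking and far more sophisticated indexes; this illustrates only the core idea that a duplicate chunk is stored as a reference rather than a second copy:

```python
import hashlib

CHUNK = 4096
store = {}     # digest -> chunk bytes, each unique chunk stored once

def dedup_write(data):
    """Split data into chunks; store only chunks not seen before.
    Returns the 'recipe' of digests needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only new chunks consume space
        recipe.append(digest)
    return recipe

def dedup_read(recipe):
    return b"".join(store[d] for d in recipe)

# Ten nightly backups of the same 40 KB file:
payload = b"x" * 40960
recipes = [dedup_write(payload) for _ in range(10)]

logical = 10 * len(payload)                         # what was written
physical = sum(len(c) for c in store.values())      # what was stored
print(f"logical {logical} B, physical {physical} B")
assert dedup_read(recipes[0]) == payload
```

Because the duplicate copies are detected as the data arrives, they never consume capacity, which is the in-band advantage described above.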
Another approach to increasing capacity utilization and thus reducing the overall disk storage footprint is to implement variable size volumes. Typically, storage volumes are of a fixed size, configured by the administrator and assigned to specific servers. Dynamic volumes, by contrast, can expand or contract depending on the amount of data generated by an application. Resizeable volumes require support from the host operating system and relevant applications, but can increase efficient capacity utilization to 70% or more. From a green perspective, more efficient use of existing disk capacity means fewer hardware resources over time and a much better energy profile.
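The grow-and-shrink behavior can be sketched as follows. The 70% utilization target mirrors the figure quoted above, but the specific resize policy, class name and units are illustrative assumptions, not a description of any particular product:

```python
# Sketch of a dynamically resized volume: capacity tracks actual
# usage with fixed headroom, instead of a worst-case fixed size.

class DynamicVolume:
    TARGET_UTILIZATION = 0.70   # aim to keep the volume ~70% full

    def __init__(self, initial_gb):
        self.capacity = initial_gb
        self.used = 0

    def _resize(self):
        # Keep capacity just large enough that used/capacity ~= 70%.
        self.capacity = max(1, round(self.used / self.TARGET_UTILIZATION))

    def write(self, gb):
        self.used += gb
        self._resize()

    def delete(self, gb):
        self.used = max(0, self.used - gb)
        self._resize()

vol = DynamicVolume(100)
vol.write(140)           # the volume grows as the application writes...
print(vol.capacity)      # 200 GB held for 140 GB used (70% utilized)
vol.delete(70)
print(vol.capacity)      # ...and shrinks back, returning capacity to the pool
```

Contrast this with a fixed-size volume, which would have to be provisioned at its lifetime peak on day one and would hold that capacity, and its energy cost, even when mostly empty.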
Thin provisioning is a means to satisfy the application server’s expectation of a certain volume size while actually allocating less physical capacity on the storage array or virtualized storage pool. This eliminates the under-utilization issues typical of most applications, provides storage on demand and reduces the total disk capacity required for operations. Fewer disks equate to lower energy consumption and cost and by monitoring storage usage the storage administrator can add capacity only as required.
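The mechanism can be sketched in code: the host is promised the full volume size, but physical extents are drawn from the shared pool only on first write. All names and the 1 GB extent size below are illustrative assumptions:

```python
# Sketch of thin provisioning: promised (provisioned) capacity far
# exceeds physical capacity, because extents are backed only on write.

EXTENT_GB = 1

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_free = physical_gb
        self.provisioned = 0          # total size promised to hosts

    def create_volume(self, size_gb):
        self.provisioned += size_gb   # promise capacity, allocate nothing
        return ThinVolume(self, size_gb)

class ThinVolume:
    def __init__(self, pool, size_gb):
        self.pool = pool
        self.size = size_gb           # what the server's OS sees
        self.extents = set()          # extents physically backed so far

    def write(self, extent):
        if extent not in self.extents:
            if self.pool.physical_free < EXTENT_GB:
                raise RuntimeError("pool exhausted - add disks now")
            self.pool.physical_free -= EXTENT_GB   # allocate on first write
            self.extents.add(extent)

pool = ThinPool(physical_gb=100)
vols = [pool.create_volume(500) for _ in range(4)]  # 2000 GB promised
for v in vols:
    for extent in range(20):        # but each application touches only 20 GB
        v.write(extent)
print(pool.provisioned, 100 - pool.physical_free)   # promised vs actually used
```

Here four servers each believe they own 500 GB, yet only 80 GB of physical disk is consumed; the administrator monitors `physical_free` and adds real capacity only as writes approach it, which is exactly the buy-as-you-grow energy saving described above.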
A combination of some or all of these techniques can drastically reduce the amount of data being stored, which in turn means fewer hardware resources and therefore lower energy requirements. Data growth is inevitable; leveraging the latest technologies, however, can help organizations stay one step ahead.
Monday, April 23, 2012 @ 14:21 UAE local time (GMT+4). Replication or redistribution in whole or in part is expressly prohibited without the prior written consent of Mediaquest FZ LLC.