Over the past few decades, rapid advancements in hard disk drive technology have driven year-over-year price erosion of about 10% to 15% in disk media costs.
Cheap hard disk prices have made it easier for many enterprises to buy new disks rather than maintain old ones.
However, due to manufacturing and supply shortages following the catastrophic floods in Thailand last year, hard disk prices have now risen for the first time ever, by as much as 5% to 15%.
Industry researcher Gartner predicts that hard disk prices will continue to climb this year, by as much as 20%, while rival IDC anticipates the shortage will affect the industry into 2013.
Whether this price surge is just a short-term anomaly or a long-term reality as manufacturers rebuild their production facilities and invest in next-generation disk technologies, one thing is for certain: It is creating new cost burdens for datacentres.
Indeed, even when hard disk prices were going down, IT managers faced a long list of cost challenges due to the rapid growth of data and the need to acquire new storage devices.
These managers have also had to struggle constantly to maintain rapidly growing storage islands of multivendor hardware and software, control the demands of expanding floor space, and reduce power and cooling costs.
Capitalise storage assets for lower TCO
Although storage costs are spiralling upwards for many organisations, the scale of an organisation's storage often doesn't match its actual consumption.
In the majority of cases, organisations use only 30% of their storage capacity while the remaining 70% sits idle.
What are the implications of this phenomenon? For a start, it means there's still plenty of room for growth and resource utilisation in existing storage capacity. More importantly, it means the organisation needn't suffer from rising prices.
Instead of acquiring expensive new media, these organisations can instead focus on capitalising their existing storage assets to enhance storage utilisation, enable higher Capacity Efficiency (CE) and achieve a lower Total Cost of Ownership (TCO).
Achieving a high level of CE requires maximising storage in two key aspects: allocation efficiency and utilisation efficiency.
Allocation efficiency involves eliminating the waste of over-allocation. Utilisation efficiency, on the other hand, is about using the capacity in an efficient manner so as to reduce costs and increase performance/availability.
The over-allocation of hard disk resources is common practice among IT personnel. They usually allocate capacity beyond users' requests and keep 10 to 15 copies of all data in order to ensure service levels and avoid unexpectedly running out of capacity.
However, this over-allocation can be eliminated by using thin provisioning, where you provide virtual space for the requested allocation and only provision the capacity that is actually being used.
This approach works with file-system APIs, such as those of VMFS and Symantec file systems, which can notify the storage system when files are deleted so that the allocation for those files can be reclaimed.
By eliminating the allocation of unused space, the capacity and time needed to make copies are reduced. The number of copies can itself be cut with copy-on-write, which replicates only the new changes.
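The thin-provisioning behaviour described above can be illustrated with a minimal Python sketch. It is purely illustrative: the class name, page size and method names are invented here, not taken from any vendor's product. The volume reports its full virtual size to the host, allocates backing pages only on first write, and releases them when a file-system notification reports a deletion.

```python
# Illustrative sketch of thin provisioning. All names and the page size
# are invented for illustration; real systems also copy the written data
# into the page, which is elided here.

PAGE_SIZE = 32 * 1024 * 1024  # hypothetical 32 MB allocation page


class ThinVolume:
    def __init__(self, virtual_bytes):
        self.virtual_bytes = virtual_bytes  # the full size the host sees
        self.pages = {}                     # page index -> backing store, allocated lazily

    def write(self, offset, data):
        # First write to a region allocates its backing page; later
        # writes to the same page allocate nothing new.
        page = offset // PAGE_SIZE
        self.pages.setdefault(page, bytearray(PAGE_SIZE))

    def reclaim(self, offset):
        # Called when the file system reports that the blocks at this
        # offset were deleted, returning the page to the shared pool.
        self.pages.pop(offset // PAGE_SIZE, None)

    @property
    def allocated_bytes(self):
        return len(self.pages) * PAGE_SIZE


vol = ThinVolume(virtual_bytes=10 * 1024**4)  # host sees 10 TB
vol.write(0, b"app data")                     # one page actually allocated
print(vol.allocated_bytes)                    # far less than 10 TB
vol.reclaim(0)                                # deletion notification
print(vol.allocated_bytes)                    # → 0
```

The gap between `virtual_bytes` and `allocated_bytes` is exactly the over-allocation that thin provisioning eliminates.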
Utilisation efficiency is about the placement of data on the appropriate tier of storage, based on frequency of access, business value and cost of the storage.
This can be achieved by automated tiering based on policies that are triggered by specific times or events.
There are two levels of tiering automation, volume level and page level, but utilisation efficiency can only be achieved at the page level.
This is because volume-level tiering must move an entire volume between tiers, which requires significantly more space than page-level tiering, where only the hot pages, typically just 5% to 10% of a volume, move between tiers.
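The page-level approach can be sketched in a few lines of Python. This is a toy model, not any vendor's algorithm: the tier names, page count and 10% threshold are invented for illustration. Pages are ranked by access frequency and only the hottest fraction is promoted to the fast tier.

```python
# Illustrative sketch of page-level automated tiering: rank pages by
# access frequency and promote only the hottest ~10%, instead of moving
# whole volumes. Tier names and thresholds are invented for illustration.
import random

random.seed(1)
NUM_PAGES = 1000
access_counts = {page: random.randint(0, 100) for page in range(NUM_PAGES)}

# Rank pages from hottest to coldest, then promote only the top 10%.
ranked = sorted(access_counts, key=access_counts.get, reverse=True)
hot = set(ranked[: NUM_PAGES // 10])

tiers = {
    "fast": [p for p in range(NUM_PAGES) if p in hot],
    "capacity": [p for p in range(NUM_PAGES) if p not in hot],
}

print(len(tiers["fast"]), len(tiers["capacity"]))  # → 100 900
```

Only a tenth of the pages occupy the premium tier, which is why page-level tiering needs far less high-cost capacity than moving whole volumes would.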
Storage virtualisation: The cornerstone of CE
Storage virtualisation is by far the most important solution to enable capacity efficiency. It extends storage efficiency tools like automated tiering and thin provisioning to existing storage systems that do not have those capabilities.
With storage virtualisation, you can consolidate all data types - files, content and block storage, for both structured and unstructured data - from internal, external and multivendor storage onto a single storage platform.
As all storage assets become a single pool of resources, the function of automated tiering and thin provisioning is amplified to the entire storage infrastructure.
This makes it easy to reclaim capacity and maximise utilisation, and even repurpose existing assets to extend usage, significantly enhancing your IT agility and capacity efficiency to meet unstructured data growth.
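The pooling idea can be made concrete with a short sketch, again purely illustrative: the vendor names, capacities and least-utilised placement policy are assumptions made up for this example, not a description of any product. Heterogeneous arrays are presented as one pool, so utilisation is measured and new capacity placed across vendors rather than per array.

```python
# Illustrative sketch of storage virtualisation: multivendor arrays are
# consolidated into a single pool. Vendor names, sizes and the placement
# policy are invented for illustration.

class Array:
    def __init__(self, vendor, capacity_tb, used_tb=0.0):
        self.vendor = vendor
        self.capacity_tb = capacity_tb
        self.used_tb = used_tb


class VirtualPool:
    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def capacity_tb(self):
        return sum(a.capacity_tb for a in self.arrays)

    @property
    def utilisation(self):
        return sum(a.used_tb for a in self.arrays) / self.capacity_tb

    def allocate(self, tb):
        # Place new capacity on the least-utilised array in the pool,
        # regardless of which vendor supplied it.
        target = min(self.arrays, key=lambda a: a.used_tb / a.capacity_tb)
        target.used_tb += tb
        return target.vendor


pool = VirtualPool([Array("VendorA", 100, 70), Array("VendorB", 200, 20)])
print(round(pool.utilisation, 2))  # → 0.3, the kind of figure cited above
print(pool.allocate(10))           # lands on the emptier VendorB array
```

Because the pool hides which vendor backs each array, efficiency tools such as tiering and thin provisioning can act on the whole estate at once.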
Storage virtualisation also offers the benefit of competitive, multivendor storage purchasing strategies. With the freedom to virtualise, external storage effectively becomes commoditised.
This allows you to design different price-range storage systems for different tiers of storage in order to achieve the maximum return on your investment - and choose the lowest bid where appropriate.
More competitively priced media can be assigned to mid- and low-tier storage, while high-end disk is reserved for the top tier.
This means organisations no longer need to worry about upcoming hard disk shortages and price increases, and ultimately they can flexibly design their long-term storage purchasing strategies according to their specific needs and actual consumption patterns.
Independent storage control console
To optimise storage performance with the highest efficiency, the ideal storage infrastructure separates disk capacity from an independent storage controller.
This is because dynamic page-level tiering requires the handling of more metadata and more processing power within the storage system.
By implementing separate pools of processors to support this expanding function, the efficiency of data mobility can be maximised without impacting the basic I/O performance and throughput.
Another advantage of separating disk capacity from the storage system controllers is the freedom it creates to manage storage media. Disk capacity no longer needs to be refreshed at the same pace as the storage system controllers, which are normally kept current with systems technologies on a three-year cycle.
This means you can prolong the useful life of existing disk capacity on a five- to seven-year cycle, according to your needs. Since storage media still accounts for the bulk of the cost of a new storage system, this longer depreciation cycle will significantly reduce your capital costs.
In addition, you can adopt multivendor purchasing strategies for external storage and secure much more competitive prices on mid- and low-tier storage.
Rather than buying additional disks to tackle Big Data in this period of rising storage costs, IT decision makers should choose more cost-efficient alternatives to maximise capacity efficiency with storage virtualisation.
The right storage solution will help to simplify infrastructure, ensure quality of service, reduce risk, and align the right storage tier to the right application, thereby reducing Opex and Capex.
It is now critical that you not only free up the capacity you need today from your existing storage assets, but also position your business for sustainable growth long into the future.
(Johnson Khoo is managing director of Hitachi Data Systems Malaysia, which is in the business of helping organisations transform raw data into valuable information)