Analysts are forecasting that hyperconvergence will be the fastest-growing sector of the market for integrated systems. According to Gartner, the market will reach $5 billion by 2019, equivalent to 24 percent of the total integrated systems market. That's up from the firm's forecast of $2 billion for 2016, which itself represents 79 percent growth, only four years after the technology first appeared in 2012.
Commentators believe that hyperconvergence is gaining popularity because it simplifies datacenter infrastructure and helps IT teams meet the demand for ever-faster delivery of new applications and services while driving down costs.
Despite its growing acceptance, there is still a lack of clarity around the definition of hyperconvergence and the use cases that will prove to be of greatest benefit.
In simple terms, hyperconvergence brings together the key IT compute, storage, networking and virtualization resources, converging them in a tightly integrated single system managed through a software layer.
Advocates claim that this software-centric solution eliminates the time and cost of sourcing, deploying, managing and scaling legacy infrastructures that create inflexible silos and require specialized skills. Hyperconvergence, they believe, offers cloud-like speed and agility, with significant improvements in performance, reliability and cost effectiveness.
These benefits are important as organizations increasingly see the creation of new software and services as a valuable revenue stream. They therefore want to accelerate delivery by adopting agile development and frequent deployment, which puts the onus on IT to create a more flexible infrastructure that supports rapid, continuous delivery.
From an operational perspective, hyperconvergence aims to eliminate the complexity of legacy datacenter infrastructure, which might have as many as 10 or 12 different elements to manage. Instead, it is managed through a single interface. Hyperconvergence also utilizes modular, commodity hardware systems, which reduces the cost, complexity and delay of scaling the infrastructure.
In the wider context of datacenter evolution, hyperconvergence builds on the widespread adoption of virtualization and succeeds its predecessor, converged infrastructure, as part of the ongoing move towards greater automation of datacenter processes. Converged infrastructure shares similar aims and benefits, but hyperconvergence claims to add more value by focusing on software to control solution management.
Despite its promise, critics argue that the emphasis on single sourcing and tight integration can result in vendor lock-in. They also believe that, so far, use cases for hyperconverged infrastructure are limited.
Feedback from the market indicates that small and medium businesses with smaller IT teams represent the main adopters, with the most frequent deployments used for virtual desktop infrastructure (VDI). However, there are increasing reports of hyperconvergence being used for mission-critical and Hadoop-based applications, as well as for disaster recovery and branch infrastructure deployments.
There is also interest in hyperconvergence as an alternative to public cloud among organizations that want the efficiency and agility of cloud but are reluctant to migrate because of security or compliance concerns.
As the technology matures, commentators believe that it will become a natural part of the datacenter environment, and new use cases will evolve as confidence and adoption grow.
Virtual Tech Gurus believes that hyperconvergence represents an important development with many benefits for the datacenter. However, organizations should not rush their decision to invest in hyperconvergence; they must consider carefully which applications and services they will deploy to ensure the best return on investment.
For more information, please check out the articles and infographics on our website.