The world is getting connected at a breakneck pace. According to data from Stock Apps, 67% of the world's population (roughly 5.3 billion people) owned a mobile phone in July this year, a year-on-year increase of 117 million users. Modern networks have little choice but to bear this workload to thrive in a competitive, digital-transformation-driven global market. On top of that, the need to support new advances in big data, artificial intelligence (AI), and the hybrid cloud, alongside the demands of traditional workloads, poses a massive challenge to the existing IT infrastructure model. Data centers face recurring downtime caused by malfunctioning components, which keeps them from meeting the quick-turnaround demands of the business and the cloud.
Hyperconverged infrastructure (HCI) centralizes IT resources and management with the promise of significant cost efficiency, simplicity, and high performance. According to Allied Market Research, the hyperconverged infrastructure market was valued at $3.84 billion in 2018 and is projected to reach $33.16 billion by 2026, growing at a CAGR of 30.7% from 2019 to 2026.
What is Hyperconverged Infrastructure?
A traditional data center is built on a three-tier infrastructure of network, server, and storage, with each tier requiring separate management and resources. This can lead to operational inefficiencies, especially during upgrades of the siloed tiers. HCI combines computation, virtualization, storage, and networking in a single system. It replaces stand-alone storage arrays with servers (called nodes) packed with disks that are managed by software-defined storage (SDS), making it easy to scale out as compute and storage requirements grow across data centers, remote branches, and edge locations.
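The scale-out idea can be illustrated with a minimal sketch (hypothetical classes, not any vendor's API): each node contributes both compute and storage to one software-managed pool, so cluster capacity grows simply by adding nodes.

```python
# Hypothetical model of HCI scale-out: each node adds CPU cores and
# disk capacity to a single software-defined pool managed as one cluster.

class Node:
    def __init__(self, cpu_cores: int, storage_tb: float):
        self.cpu_cores = cpu_cores
        self.storage_tb = storage_tb

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node):
        # Scaling out is just adding another node; the software layer
        # absorbs its resources into the shared pool.
        self.nodes.append(node)

    @property
    def total_cpu(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

cluster = Cluster()
for _ in range(3):
    cluster.add_node(Node(cpu_cores=32, storage_tb=20.0))

print(cluster.total_cpu)         # 96
print(cluster.total_storage_tb)  # 60.0
```

Contrast this with the three-tier model, where growing storage and growing compute are separate procurement and management exercises.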
Pluses of a hyperconverged setup
Although traditional HCI continues to evolve, improved versions of the model are also worth considering. The pay-as-you-go model of HCI architecture scales readily, which is fueling the market's growth. Experts predict that business-critical applications currently deployed on three-tier IT infrastructure will eventually transition to HCI, which offers integrated stack systems, infrastructure systems, and reference architectures.
Essentially, HCI simplifies administration by providing a single point of management through a single user interface (UI), eliminates many of the hardware components of the three tiers, and enables delivery of on-demand infrastructure for data-centric workloads. Enterprises can also benefit from the HCI model in the following areas:
- Operational economy: By bringing together components into one platform, HCI cuts down storage footprint, energy consumption, and total cost of ownership (TCO) and allows data centers to scale optimally in manageable phases.
- Agility and performance: HCI deployment takes far less time than traditional IT infrastructure because it does not require dedicated IT engineers for each resource silo. Moreover, automation simplifies network management and lets organizations run even the most intensive workloads, including enterprise apps and SQL Server, with far superior performance. New resources are automatically discovered and integrated into the cluster.
- Multicloud interoperability: HCI complements hybrid cloud environments and reduces the time and cost of transitioning to a hybrid cloud, making it easy to move data and applications back and forth between on-premises servers and the public cloud.
- Data protection and security: HCI supports self-encrypting drives and tools that provide a high degree of visibility and security. Backup and disaster recovery can be performed without dedicated WAN optimization appliances, using specialized file systems that compress and optimize data so it can be sent over long distances.
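The compression step behind that last point can be sketched generically (using Python's standard `zlib` here purely for illustration, not any HCI product's actual file system): data is compressed before replication to a remote site and decompressed on arrival, shrinking what must cross the WAN.

```python
import zlib

def prepare_for_replication(data: bytes) -> bytes:
    # Compress before sending over the WAN to cut transfer time and cost.
    return zlib.compress(data, level=9)

def restore_at_remote_site(payload: bytes) -> bytes:
    # Decompress at the disaster-recovery site.
    return zlib.decompress(payload)

backup = b"block of repetitive backup data " * 1000
payload = prepare_for_replication(backup)

# Round trip preserves the data; the payload on the wire is far smaller.
assert restore_at_remote_site(payload) == backup
print(len(backup), len(payload))
```

Real HCI platforms apply this idea (often with deduplication as well) inside the storage layer, transparently to the workloads.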
Some issues with HCI
Vendor lock-in concerns: Sourcing both hardware and software for the infrastructure from one supplier can create vendor lock-in. For instance, experts note that organizations taking the software route for HCI on their own hardware would still be buying proprietary storage software from a specific vendor. An alternative is open source HCI; however, some organizations find that this option sacrifices HCI's hallmark simplicity by introducing additional complexity.
Hypervisor selection restriction: Some HCI storage nodes are tied to a specific server hypervisor (the software that creates and runs virtual machines) and can only be used with workloads virtualized and operated by that hypervisor. Other storage nodes can be deployed with different hypervisors while still providing a common management console for all instances. Planners of HCI solutions therefore need to weigh the short- and long-term technical fit of hypervisors to get the full benefit of the solution they select.
In conclusion, ongoing data center modernization and growing demand for better data security and disaster recovery solutions are driving HCI market growth. Many big players, such as Cisco and Huawei, offer advanced HCI solutions. However, building a results-oriented HCI depends on two key requirements: selecting infrastructure products and services best suited to workload requirements, and choosing an HCI model that can adapt and scale with dynamic storage demands without costing the organization an arm and a leg. And given the above-mentioned hurdles of vendor lock-in and hypervisor compatibility that hinder swift adoption of the HCI model, the words of American computer scientist Alan Kay seem to hold an element of truth after all:
“People who are really serious about software should make their own hardware.”