A decade or more of data centre vested interests delivered a network-centric power base in which core business applications and their attendant servers were marginalised: an unwanted but tolerated business distraction. But in an increasingly agile business environment the application is the king in waiting, and data centre designs need to move with the times.
These vested interests created a slow, relationship-based IT procurement process, which initially drove the DevOps world towards the on-demand computational world of the public cloud, opening the door for more widespread cloud migration. For their part, the public cloud providers realised traditional data centre infrastructure designs failed to address basic business needs. Gone were the complex routing and leaf-spine connectivity, replaced by flat optical planes dynamically configured by individual applications.
Today, that mantra drives a cloud-native business model, focused totally on the continuous delivery of applications and services, supported by third-party platform-as-a-service that scales seamlessly against business needs. The network has become irrelevant. You wonder now why any company would own its own infrastructure.
However, many factors, such as regulation, security and compliance, continue to drive the on-premise model forward, with organisations accommodating public cloud through hybrid solutions and overlays. Indeed, third-party private clouds, predominantly vendor driven, offer additional Internet or on-net service options.
To counter an application-focused public cloud, on-premise solution sets moved away from the network-centric leaf-spine architecture. The shift began with converged infrastructures: a rack combination of network, compute and storage focused on core application needs. These often involved siloed management domains and clunky service delivery, so on the back of emerging Software Defined Networks, Hyper-Converged Infrastructure (HCI) arrived with a common IaaS delivery interface. But HCI does not scale for vanilla compute requirements, and business once again looked to the public cloud to fulfil the need.
At this stage one could believe that public cloud service growth is inevitable and the rest should pack up and go home. Not so. The glimmer of light first emerged from the tunnel of computationally complex applications, namely Machine Learning (ML) and Artificial Intelligence (AI).
Whilst containerisation with, say, micro-services offers developers flexible methods of managing code production, upgrades and access to additional cloud services, the underlying public cloud bare-metal infrastructure offers only generic computational functionality. For optimal performance, ML and AI applications needed specific computational boosters: hardware in the form of GPUs, TPUs and FPGAs. Replace generic hardware with bespoke and you introduce additional support, maintenance and depreciation costs, on top of overlaid orchestration, creating obvious scaling issues. Just as with compute, storage migrated from HDDs to SSDs and in turn created the Dark SSD syndrome: expensive components over-provisioned to specific applications, lying idle as demand never exceeds capacity.
Enter Composable Infrastructure.
If you consider a traditional compute platform as a large glass of orange juice with a straw, then virtualisation is the ability to create dozens of smaller glasses of orange juice, each with its own straw.
Composable Infrastructure (CI) would instead have separate groups of glasses, orange juice and straws, with the application dynamically creating the required resource, say a large glass half full of juice and with two straws. In essence, CI allows the operator to craft many pools of resource, including specific hardware like GPUs, and hence improve operational control. Likewise, the use of a generic pool of SSDs removes the Dark SSD problem and its associated costs. These pools can sit within a data centre, a campus, or even be geographically dispersed, restricted only by latency and application-response considerations.
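The pooling idea above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's API: the pool names, sizes and the `compose` helper are all hypothetical, but they show the core CI move of claiming compute, accelerators and storage from shared pools rather than binding them to individual servers.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """A shared, disaggregated resource pool (hypothetical sketch)."""
    name: str
    capacity: int
    allocated: int = 0

    def claim(self, amount: int) -> int:
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount
        return amount

@dataclass
class ComposedNode:
    cpus: int
    gpus: int
    ssd_gb: int

def compose(cpu_pool: Pool, gpu_pool: Pool, ssd_pool: Pool,
            cpus: int, gpus: int, ssd_gb: int) -> ComposedNode:
    """Dynamically assemble a node from the shared pools: the CI model."""
    return ComposedNode(cpu_pool.claim(cpus),
                        gpu_pool.claim(gpus),
                        ssd_pool.claim(ssd_gb))

# Illustrative pool sizes; SSDs are shared rather than dark per-server units.
cpu_pool = Pool("cpu", 512)
gpu_pool = Pool("gpu", 16)
ssd_pool = Pool("ssd", 100_000)  # GB

node = compose(cpu_pool, gpu_pool, ssd_pool, cpus=32, gpus=2, ssd_gb=2_000)
```

Because every claim draws down a shared pool, idle capacity remains visible and reusable, which is exactly what the per-server Dark SSD model loses.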
Our journey into CI began when an international MSP requested a design for application centric data centres, where each application had dedicated redundant connectivity. In business terms, give your key applications enough network resource to operate consistently at maximum capacity.
Plexxi https://www.plexxi.com (now part of HPE’s Synergy offering) had its origins in the data centre designs of Google, where complex routed leaf-spine architectures were replaced by flat, dumb optical switches capable of delivering an aggregated resource view (end-points, bandwidth, latency, routes) northbound into an application interface. Here, specific application requirements were mapped to optical lambdas, creating for the first time dynamic, secure, latency-aware VLANs on a per-service basis. By integrating this management functionality into the likes of vCenter and Nutanix, the power had shifted: gone were the complex routing structures, replaced by application-driven, secure, latency-sensitive pathways.
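The mapping of application requirements to optical lambdas can be pictured with a short sketch. Nothing here is Plexxi's actual API; the wavelength table, SLA fields and `provision` function are invented for illustration, but they capture the idea of a controller matching an application's latency and bandwidth needs to a dedicated wavelength.

```python
# Illustrative aggregated resource view: free wavelengths with their
# measured latency and capacity (all values hypothetical).
lambdas = [
    {"id": 1, "latency_us": 5,  "bandwidth_gbps": 100, "in_use": False},
    {"id": 2, "latency_us": 12, "bandwidth_gbps": 40,  "in_use": False},
    {"id": 3, "latency_us": 30, "bandwidth_gbps": 10,  "in_use": False},
]

def provision(app: str, max_latency_us: int, min_gbps: int) -> dict:
    """Dedicate the first free wavelength meeting the app's SLA,
    yielding an isolated per-service pathway."""
    for lam in lambdas:
        if (not lam["in_use"]
                and lam["latency_us"] <= max_latency_us
                and lam["bandwidth_gbps"] >= min_gbps):
            lam["in_use"] = True
            return {"app": app, "lambda": lam["id"]}
    raise RuntimeError("no wavelength satisfies the SLA")

# A latency-sensitive application requests its own pathway.
path = provision("trading-db", max_latency_us=10, min_gbps=50)
```

The point is the direction of control: the application states its requirements northbound, and the fabric answers with a dedicated path, rather than the application inheriting whatever a routed leaf-spine happens to deliver.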
Next came a request from a Disaster Recovery company, built around the concept of mirrored writes into geographically diverse data centres for business continuity.
Kazan Networks https://kazan-networks.com offer NVMe-oF bridges. Take dispersed pools of SSDs, each with its own Kazan Networks adapter in situ, and, within the latency constraints of the overlay software, craft a cost-effective, seamless offering. Fold in a Plexxi latency-aware optical network with DR application-specific pathways, and geographically dispersed solutions can be constructed.
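A minimal sketch of the mirrored-write concept, assuming two SSD pools reachable over NVMe-oF and a synchronous-replication latency budget. The pool names, round-trip figures and `mirrored_write` helper are all hypothetical; the sketch only shows the constraint the text describes: every write must commit to both sites, and each site must sit within the latency envelope.

```python
# Hypothetical latency budget within which synchronous mirroring is viable.
LATENCY_BUDGET_MS = 5.0

# Two geographically diverse SSD pools, each behind an NVMe-oF bridge
# (round-trip times are illustrative).
pools = {
    "dc-local":  {"rtt_ms": 0.2, "blocks": {}},
    "dc-remote": {"rtt_ms": 3.5, "blocks": {}},
}

def mirrored_write(block_id: int, data: bytes) -> bool:
    """Commit a block to every pool; refuse if any site breaks the budget."""
    for name, pool in pools.items():
        if pool["rtt_ms"] > LATENCY_BUDGET_MS:
            raise RuntimeError(f"{name} exceeds the latency budget")
    for pool in pools.values():
        pool["blocks"][block_id] = data  # both copies must land
    return True

mirrored_write(42, b"payload")
```

A latency-aware fabric matters here precisely because the remote round-trip, not capacity, is what bounds how far apart the mirrored pools can sit.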
The same approach applies equally to Content Delivery Networks.
As CI orchestrators evolve, they fit into existing bare-metal architectures: the application invokes, say, OpenStack Ironic or Kubernetes, which invokes a supported CI orchestrator, which in turn drives dynamic resource allocation to meet the application’s needs.
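That chain of delegation can be sketched as three layers of plain functions. This is not the Ironic or Kubernetes API, merely a stand-in showing who calls whom; every name and field is illustrative.

```python
def ci_orchestrator(spec: dict) -> dict:
    """Bottom layer: compose bare-metal resources matching the spec
    (stand-in for a real CI orchestrator)."""
    return {"node": "composed-node-1", **spec}

def platform_layer(spec: dict) -> dict:
    """Middle layer: stand-in for OpenStack Ironic or Kubernetes,
    which validates the request and delegates to the CI orchestrator."""
    if spec.get("cpus", 0) <= 0:
        raise ValueError("spec must request some compute")
    return ci_orchestrator(spec)

def application_request() -> dict:
    """Top layer: the application states what it needs and nothing more."""
    return platform_layer({"cpus": 16, "gpus": 4, "ssd_gb": 500})

node = application_request()
```

The application never touches hardware directly; it only describes requirements, and each lower layer translates them, which is what lets CI slot beneath existing bare-metal tooling.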
The emergence of CI further enhances application agility whilst reducing underlying operational costs, especially for bespoke AI and ML workloads. Indeed, CI could enable private clouds to flourish and bring the public cloud on-premise.