It was a strange, cloudy week, which began with AWS announcing its Wavelength service for 5G carriers, a move that tossed a hand grenade into the Cloud Edge debate and among all the associated infrastructure providers. How long before AWS starts buying up mobile carriers, one asks?
Later I attended a Cisco presentation which, to the amazement of the audience, suddenly declared the Application is King, only to revert to type and spend most of the session talking intent-based networking before going suddenly quiet when the subject of Public Cloud access and services reared its large bulbous head.
Which got me thinking. If I can brew my own beer then can I craft my own public cloud?
Let’s look back before we gaze forward.
Ten years ago, bypassing slow, tedious IT procurement became the new challenge for software developers, and a combination of innovative open source services and pay-as-you-go AWS compute emerged to differentiate the dynamic few from the laggard majority. Distil the laggard majority down to an IT level and you get network silos constructed over time through well-meaning business initiatives. The network was king, right? So compute and storage were afterthoughts, tagged on without any cohesive infrastructure policy. Hence when Joe wants 100Mbps, GPU-based compute and a TB of storage, the only answer was 'To the cloud, young man'.
As Public Cloud usage grew, new executive cries could be heard echoing through the corridors of power. First came the CFO, presented with a cloud-based bill: 'How much, and for what?' Then the CSO, under his GDPR umbrella: 'Where is my data, and under what jurisdiction?' Finally, a more recent addition, the CEO: 'I want my Cloud on-premise!'
So let us look at this private, public-cloud-like request in more detail.
Private Cloud can only flourish when those existing on-premise technology silos are morphed into a coherent, automated, single-pane service, thereby releasing the corporate coders to innovate in peace.
The first barrier is politics: who controls what, and under what budgets?
To develop on-premise public cloud, someone senior must make two critical calls:
- Identify the critical applications that drive the business and who owns them. Then make that person responsible for the entire IT budget.
- Build top-down control; basically, the applications drive the underlying compute, storage and network resources.
Historically, traditional network vendors produced proprietary operating systems as part of their poorly disguised lock-in process. The counter-attack began with Software Defined Networking, a two-fingered salute to the major network vendors that gave third parties the ability to program and manage multi-vendor equipment.
Over the years the term has blurred, with SD-WAN now the current hot topic. But always ask 'Can I program it?', and if the answer is no, you have SESD-WAN: Someone Else's Software Defined WAN.
Any top-down on-premise cloud solution MUST be programmable, typically (but not exclusively) via APIs, allowing businesses to craft in-house applications capable of commanding the underlying infrastructure resources.
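To make that concrete, here is a minimal sketch of what 'commanding the infrastructure via an API' might look like from an in-house application. Everything here is illustrative: the endpoint URL, the payload fields and the auth scheme are assumptions, not any real vendor's API.

```python
import json

# Hypothetical endpoint for the on-premise fabric controller (assumption).
FABRIC_API = "https://fabric.example.local/api/v1/resources"

def build_provision_request(app_name, bandwidth_mbps, storage_tb, gpus):
    """Build the JSON body an in-house app might POST to an
    infrastructure API to claim the resources it needs."""
    return {
        "application": app_name,
        "network": {"bandwidth_mbps": bandwidth_mbps},
        "storage": {"capacity_tb": storage_tb},
        "compute": {"gpus": gpus},
    }

# Joe's 100Mbps / GPU / 1TB request from earlier, expressed as a payload.
payload = build_provision_request("joe-analytics", 100, 1, 2)
print(json.dumps(payload, indent=2))

# An in-house tool would then POST this to the controller, e.g.:
# requests.post(FABRIC_API, json=payload,
#               headers={"Authorization": f"Bearer {token}"})
```

The point is not the payload shape but that the application, not a network team ticket, drives the provisioning.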
Any top-down on-premise cloud solution MUST integrate with existing applications. Makes sense, right? If VMware runs my primary business applications, then I need VMware to directly control, say, a dynamic vSAN deployment.
The same goes for ticketing software, analytics and, of course, security; they all must integrate one way or another.
Now you have two more questions. What hardware can I use to drive this service, and how do I migrate over from the existing siloed infrastructure?
Answer One: composable fabrics. Remove the traditional networking models, with their siloed compute and storage, and craft a single visible top-of-rack Ethernet fabric with its associated storage and compute. Pure, original Software Defined abstraction then presents all routes to all attached devices, along with latencies and available bandwidth. Push this up to the applications and let them define which resources they need at any point in time.
This is dedicated but dynamic application resource. The buck now stops at the application owner's desk, as it is their application that owns everything. No more silos, no more barriers to innovation, just a single pool of resources with an analytics package that lets you know if you're about to run out.
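The idea of 'pushing the fabric view up to the application' can be sketched as follows. The telemetry shape and field names below are assumptions for illustration, not a real fabric API: the application filters endpoints that meet its bandwidth and GPU needs, then picks the lowest-latency one.

```python
# Hypothetical fabric view: routes, latencies and free bandwidth exposed
# upward to the application (field names are illustrative assumptions).
FABRIC_VIEW = [
    {"endpoint": "gpu-node-1", "latency_us": 12, "free_bw_mbps": 400, "gpus_free": 2},
    {"endpoint": "gpu-node-2", "latency_us": 45, "free_bw_mbps": 900, "gpus_free": 4},
    {"endpoint": "gpu-node-3", "latency_us": 8,  "free_bw_mbps": 150, "gpus_free": 1},
]

def pick_resource(view, min_bw_mbps, gpus_needed):
    """Let the application choose: keep endpoints that satisfy the
    bandwidth and GPU requirements, then take the lowest latency."""
    candidates = [e for e in view
                  if e["free_bw_mbps"] >= min_bw_mbps
                  and e["gpus_free"] >= gpus_needed]
    return min(candidates, key=lambda e: e["latency_us"]) if candidates else None

best = pick_resource(FABRIC_VIEW, min_bw_mbps=300, gpus_needed=2)
print(best["endpoint"])  # gpu-node-1: meets both constraints at lowest latency
```

The selection logic lives in the application, which is exactly the inversion the top-down model demands.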
Answer Two: your shiny new Ethernet fabric already contains end points such as servers and storage, so simply assign router ports connecting to your old infrastructure. Give those ports routing-protocol support for end-point delivery or fabric transit, and gradually migrate across, application by application, user by user.
If I can dedicate resource and know end-to-end latency, then I can liaise with, say, Kubernetes to geo-locate containers for efficient microservice deployments. Try doing that on a traditional siloed leaf-spine routed network architecture!
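One plausible way to wire this up is via standard Kubernetes node affinity: label nodes with a fabric-derived latency zone, then have the application pin its pods to the zone it selected. The label key `fabric.example/latency-zone` is a made-up example; the affinity structure itself is the standard Kubernetes pod-spec shape.

```python
def pod_spec_with_latency_affinity(image, zone):
    """Build a minimal Kubernetes pod spec pinning the container to
    nodes labelled with the fabric latency zone the application chose.
    (The label key is a hypothetical, fabric-derived node label.)"""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "latency-sensitive-service"},
        "spec": {
            "affinity": {
                "nodeAffinity": {
                    "requiredDuringSchedulingIgnoredDuringExecution": {
                        "nodeSelectorTerms": [{
                            "matchExpressions": [{
                                "key": "fabric.example/latency-zone",
                                "operator": "In",
                                "values": [zone],
                            }]
                        }]
                    }
                }
            },
            "containers": [{"name": "svc", "image": image}],
        },
    }

spec = pod_spec_with_latency_affinity("registry.local/svc:1.0", "low-latency-a")
```

Feed the zone chosen from the fabric's latency view into this spec and the scheduler does the geo-location for you.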
I guess this only goes so far, and accessing public cloud compute for exceptional resource requirements makes perfect sense, but for the very first time the CFO, CIO and CEO have a manageable on-premise cloud solution that behaves just like its large public brother.