Multicloud has been a dominant architectural topic for almost a decade. Enterprises adopt it to reduce dependency on a single vendor, negotiate better pricing, satisfy regulatory constraints, or access specialized services that only exist on specific providers. In many companies, multicloud isn’t a strategy as much as an outcome of history: acquisitions, independent teams making local decisions, or legacy workloads that never moved.
Despite all of these legitimate drivers, multicloud remains difficult to implement in a consistent and operationally sane way. At the same time, a newer model — Sky Computing — offers a perspective that avoids many of multicloud’s long-standing challenges without dismissing the reasons enterprises adopted it in the first place.
This post explains why multicloud tends to stall in practice, why Sky Computing approaches the problem differently, and how the two models coexist rather than compete.
Why Multicloud Appeals in Theory
The motivation behind multicloud is straightforward: don’t place all your bets on one vendor.
Reasons vary:
- Some teams want leverage during contract renewals.
- Others must respect data residency or regulatory constraints.
- Some want to access specialized hardware (GPUs, TPUs, accelerators).
- Others look to reduce the blast radius of major cloud outages.
These motivations are real. The gap lies between the intention and the ability to operate consistently across providers, and that gap is where multicloud starts to struggle.
Why Multicloud Struggles in Practice
Most multicloud programs discover quickly that running two or three clouds does not mean running one coherent platform across them. Instead, it often becomes two or three separate platforms that share a slide deck but not much else.
Environments drift apart
- Identity and access models differ across providers.
- Networking primitives, routing semantics, and VPC structures vary.
- Storage behaves differently, especially around durability guarantees and throughput.
Even if Kubernetes is used everywhere, it does not eliminate underlying differences:
CNI plugins, node types, storage drivers, and autoscalers rarely behave the same way.
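One way to see this drift concretely is to ask each cluster what storage it actually offers. A minimal sketch, assuming the official `kubernetes` Python client and two hypothetical kubeconfig contexts:

```python
# Compare StorageClass provisioners across two clusters.
# Context names ("eks-prod", "gke-prod") are hypothetical.
from kubernetes import client, config

for context in ["eks-prod", "gke-prod"]:
    config.load_kube_config(context=context)
    storage_api = client.StorageV1Api()
    for sc in storage_api.list_storage_class().items:
        # The same Kubernetes API returns cloud-specific drivers,
        # e.g. ebs.csi.aws.com on EKS vs pd.csi.storage.gke.io on GKE.
        print(context, sc.metadata.name, sc.provisioner)
```

The API surface is identical, but the provisioners, default parameters, and performance characteristics behind it are not.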
Managed services are fundamentally vendor-specific
Teams assume they can switch from Amazon SQS to Google Pub/Sub, or from Amazon RDS to Cloud SQL.
In reality, the APIs, latency behavior, failover characteristics, cost models, and operational semantics differ significantly.
Abstracting these differences away typically creates leaky abstraction layers that break at the first real workload.
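To make the leak concrete, consider a deliberately simplified, hypothetical wrapper over two queue services. The option names mirror real SQS and Pub/Sub concepts, but the wrapper itself is illustrative, not a real library:

```python
# Hypothetical "portable queue" wrapper showing where the abstraction leaks.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReceiveOptions:
    visibility_timeout_s: int = 30        # SQS concept: hide a message while it is processed
    ordering_key: Optional[str] = None    # Pub/Sub concept: per-key ordering, no direct SQS equivalent

class PortableQueue:
    def __init__(self, backend: str):
        self.backend = backend            # "sqs" or "pubsub"

    def receive(self, opts: ReceiveOptions):
        if self.backend == "sqs":
            # SQS honors visibility timeouts but offers ordering only via
            # FIFO queues, which change throughput limits and pricing.
            ...
        elif self.backend == "pubsub":
            # Pub/Sub uses subscriptions and ack deadlines instead of
            # polling and visibility timeouts; redelivery semantics differ.
            ...
```

Every option that matters in production forces the wrapper to expose, or silently ignore, a provider-specific behavior.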
Operational inconsistency increases complexity
Monitoring pipelines, observability stacks, logging, on-call workflows, cost attribution, and IAM policies must all be duplicated and kept in sync.
This increases operational load and introduces subtle divergence that is difficult to eliminate.
Failover is rarely automatic
Multicloud is often sold as a resilience strategy.
But true cross-cloud failover is extremely hard to achieve—data replication, DNS propagation, cold-standby clusters, and incompatible storage semantics turn “automatic failover” into an aspirational slide rather than a reality.
Costs are harder to control
Each cloud has different egress rules, reservation models, spot market behavior, and pricing structures.
Instead of getting the best price everywhere, teams often pay more due to fragmentation.
None of these issues invalidate multicloud — they simply illustrate why it rarely achieves the clean abstraction originally imagined.
What Sky Computing Introduces
Sky Computing starts with a different assumption:
cloud providers will never behave the same way, and we should stop pretending they will.
Rather than trying to unify clouds into a single platform, Sky Computing treats each cloud as an independent region in a larger, loosely connected “sky” of compute.
The key idea is the intercloud broker.
Instead of teams deciding where a workload runs, the workload is submitted to the broker along with constraints:
- required hardware (e.g., A100, H100, TPU)
- latency bounds
- cost limits or bidding strategy
- region or data residency requirements
- fallback or retry policies
The broker evaluates options across clouds and provisions the workload accordingly.
This turns multicloud from a manual abstraction problem into a placement and orchestration problem.
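As a rough illustration of what such a submission could look like, here is a hypothetical constraint spec in Python. It is loosely inspired by how Sky Computing frameworks describe workloads, but every name and value below is illustrative:

```python
# Hypothetical workload spec handed to an intercloud broker.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorkloadSpec:
    image: str                                # portable unit: a container image
    command: str
    accelerators: Optional[str] = None        # e.g. "A100:8"
    max_hourly_cost: Optional[float] = None   # cost ceiling or bidding input
    allowed_regions: list[str] = field(default_factory=list)  # residency constraints
    use_spot: bool = True
    retry_on_preemption: bool = True          # fallback policy

spec = WorkloadSpec(
    image="ghcr.io/acme/trainer:1.4",         # hypothetical image
    command="python train.py --epochs 10",
    accelerators="A100:8",
    max_hourly_cost=25.0,
    allowed_regions=["eu-west-1", "europe-west4"],
)
# broker.submit(spec) would then pick whichever cloud currently satisfies
# the constraints, rather than a human choosing the provider up front.
```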
What Makes Sky Computing More Practical Than Traditional Multicloud
Sky Computing works because it leans on interfaces that are already portable:
- container images
- Kubernetes APIs
- object storage interfaces (typically S3-compatible)
- workflow engines (Airflow, Flyte, Ray)
- ML frameworks
- POSIX-like filesystem abstractions
Instead of asking cloud providers to standardize, Sky Computing builds on the fact that much of modern compute is already portable enough.
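The S3 point is worth spelling out: the same client code can target different object stores by swapping the endpoint and credentials. A minimal sketch using boto3, with a hypothetical endpoint and bucket:

```python
# Upload an artifact to any S3-compatible object store.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="...",                          # credentials elided
    aws_secret_access_key="...",
)
s3.upload_file("checkpoint.pt", "training-artifacts", "run-42/checkpoint.pt")
```

The application does not care which provider sits behind the endpoint, which is exactly the kind of interface a broker can rely on.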
Workload mobility, not platform portability
Traditional multicloud tries to make entire platforms portable — same networking, same ops model, same tools everywhere.
Sky Computing focuses on moving workloads, not entire stacks.
This is an order of magnitude easier.
Automated placement
Rather than engineers deciding where to deploy, the broker continuously evaluates:
- pricing fluctuations
- GPU availability
- spot market volatility
- regional outages
- compliance constraints
This leads to dynamic placement: a training run might start on GCP due to TPU availability, then switch to AWS spot GPUs if conditions change.
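Conceptually, the placement decision is a scoring problem. A toy sketch, with illustrative data structures that are not taken from any real broker:

```python
# Toy placement: pick the cheapest feasible offer for a workload.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    cloud: str
    region: str
    accelerator: str
    hourly_price: float
    available: bool

def place(offers: list[Offer], accelerator: str,
          max_price: float, allowed_regions: set[str]) -> Optional[Offer]:
    feasible = [
        o for o in offers
        if o.available
        and o.accelerator == accelerator
        and o.hourly_price <= max_price
        and o.region in allowed_regions
    ]
    # Re-running this loop as prices and availability shift is what turns
    # a static deployment choice into dynamic placement.
    return min(feasible, key=lambda o: o.hourly_price, default=None)
```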
Fault tolerance across clouds
If a region or provider experiences a major outage, the broker can reschedule elsewhere.
This is meaningful resilience without engineering multi-cloud failover by hand.
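In broker terms, that rescheduling can be as simple as walking an ordered list of candidate placements. A sketch, where `launch` stands in for a hypothetical provisioning call:

```python
# Try preferred placements in order; fall through to the next on failure.
def run_with_failover(spec, placements, launch):
    last_error = None
    for placement in placements:             # e.g. [("gcp", "europe-west4"), ("aws", "eu-west-1")]
        try:
            return launch(spec, placement)    # provision and run on this cloud/region
        except Exception as err:              # outage, quota, or capacity failure
            last_error = err
    raise RuntimeError("no placement succeeded") from last_error
```

The hard parts (data locality, checkpointing, restart cost) still exist, but they move into the broker layer instead of being re-solved by every team.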
Optimizing for cost and hardware
Different providers excel in different niches.
Sky Computing allows workloads to exploit this variety without developers learning each provider’s API.
Supports specialized clouds
HPC providers, GPU startups, sovereign clouds — all can participate as independent “regions” in the sky, broadening available compute resources beyond the “Big Three.”
A Balanced View: Sky Computing Isn’t a Silver Bullet
Sky Computing is still early. It doesn’t solve everything:
- Managed databases and proprietary services remain cloud-specific.
- Data transfer costs between clouds are still high.
- Identity and networking remain fragmented.
- Not every workload benefits from cross-cloud execution.
- Brokering logic becomes a new operational dependency.
And many enterprises will continue to use traditional multicloud for regulatory reasons that Sky Computing does not automatically simplify.
The right perspective is that Sky Computing builds on multicloud’s motivations while avoiding its operational pitfalls.
How the Two Models Coexist
Multicloud remains relevant because organizations are complex.
Sky Computing is relevant because engineering teams want mobility and flexibility without doubling operational overhead.
A realistic future looks like this:
- Enterprises still use multiple providers for policy, cost, and procurement reasons.
- Sky Computing tools provide a portability layer for specific workloads (ML, batch, HPC, analytics).
- Platform teams keep core services on a primary cloud while allowing workloads to burst into others when needed.
- Specialized GPU providers and sovereign clouds become first-class citizens in the compute ecosystem.
Rather than replacing multicloud, Sky Computing rides on top of it.
Summary
Multicloud promises portability and resilience but often leads to operational divergence: duplicated platforms, inconsistent services, fragile failover, and higher complexity. Sky Computing reframes the problem by accepting cloud differences and orchestrating workloads across them through an intercloud broker.
By focusing on workload mobility instead of platform portability, Sky Computing offers a more realistic path to using multiple clouds effectively — without pretending they are all the same.