The Bill Has to Stop
Presenting a Serious Alternative for VM Workloads
Published by: Pathgate
Date: April 2026
Author: Technical Team
Executive Summary
Exiting your well-known virtualization provider is usually a complex, long-term project rather than a simple switch; executed properly, it can take 12 to 36 months. For many organizations, the challenge is no longer whether they should modernize their infrastructure, but whether they can do so without rewriting every virtual-machine-based workload or committing to another costly licensing cycle, whether that cycle is tied to virtualization provider contracts or to cloud-dependent systems.
A growing alternative is to use self-hosted Kubernetes as the control plane and KubeVirt as the virtualization layer, allowing virtual machines and containers to run side by side on the same self-managed hardware platform.
In practical terms, KubeVirt extends Kubernetes with virtualization-specific APIs and controllers, while Kubernetes continues to handle scheduling, networking and storage. This creates a unified operational model for both legacy and cloud-native workloads.
The result is not “VMs inside containers” in the simplistic sense. It is a model in which KVM remains the virtualization foundation, while Kubernetes becomes the system of record for lifecycle, placement and infrastructure automation.
For organizations with their own hardware, this opens a pragmatic path away from expensive virtualization and cloud providers, towards an open, GitOps-friendly platform that can be operated without mandatory cloud licensing.
1. Why this model matters now
› KubeVirt = K8s + KVM
A useful shorthand is that KubeVirt = Kubernetes + KVM. KVM provides the hypervisor capability in Linux, and KubeVirt bridges that capability into Kubernetes so that VMs can be orchestrated using Kubernetes-native patterns. That matters because it allows teams to preserve VM-based applications while modernizing operations around them, instead of forcing an immediate refactor into containers.
KubeVirt is designed to sit on top of Kubernetes, not beside it.
KubeVirt delegates scheduling, networking and storage to Kubernetes, while providing the virtualization-specific functionality itself. That means a single cluster can run VM-based workloads next to container-native workloads, under one API, one automation model and one operational discipline.
When your infrastructure is already Kubernetes-based, VMs stop being a separate operational island; they become another class of workload that can participate in the same platform: policy, storage classes, CNI-based networking, observability, backup flows and automation pipelines.
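To make this concrete, a VM in KubeVirt is just another Kubernetes object. The following is a minimal, illustrative VirtualMachine manifest; the name and container-disk image are placeholders, and resources and storage should be adjusted to your environment:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm          # placeholder name
spec:
  running: true          # start the VM as soon as the object is created
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi  # illustrative sizing
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example container disk
```

Applied with `kubectl apply -f`, this definition is scheduled, networked and stored through the same Kubernetes machinery that serves pods.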
More info at: kubevirt.io
2. Why KubeVirt is operationally credible
› GitOps turns VM management into infra as code
One of the strongest reasons to run VMs under Kubernetes is not just consolidation; it is repeatability.
Applications, projects and settings can be defined declaratively as Kubernetes manifests, and Argo CD (or many other CI/CD tools) continuously compares the live state against the desired state in Git, syncing any drift back to the target definition.
This is extremely relevant for KubeVirt: a VM can be expressed as code, stored in Git, templated with Helm values, and deployed idempotently through Argo CD the same way teams already deploy applications, services and policies.
In practice, that means images, DataVolumes, networking, cloud-init, services and even environment-specific VM sizes can be versioned and promoted through Git-driven workflows instead of manual clicks in a proprietary VM GUI.
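As a sketch of what that Git-driven workflow can look like, the following Argo CD Application points at a hypothetical Git repository containing VM manifests; the repository URL, path and namespace are assumptions for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vm-fleet
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/vm-manifests.git  # placeholder repository
    targetRevision: main
    path: environments/prod      # hypothetical per-environment path
  destination:
    server: https://kubernetes.default.svc
    namespace: vms
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git-defined state
```

With `selfHeal` enabled, manual changes made outside Git are reverted automatically, which is exactly the drift correction described above.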
› A pragmatic migration pattern
The best migration pattern is usually rehost first, optimize second, refactor later.
Start with VM workloads that are hard to containerize but operationally stable: move them onto self-hosted Kubernetes through KubeVirt, keep them as VMs, and immediately gain a unified platform, Git-backed lifecycle management and policy-driven operations.
Then, over time, refactor only the workloads that truly benefit from becoming container-native. This phased model is exactly what makes KubeVirt strategically useful: it allows modernization without demanding disruption.
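In the rehost-first phase, KubeVirt's Containerized Data Importer (CDI) can pull an existing disk image into the cluster as a DataVolume, so the VM boots from its original disk rather than being rebuilt. A hedged sketch follows; the URL, size and storage class are placeholders, and CDI also supports other sources such as PVC cloning or registry images:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: legacy-app-root
spec:
  source:
    http:
      url: https://images.example.com/legacy-app.qcow2  # placeholder image location
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi              # illustrative sizing
    storageClassName: fast-ssd     # hypothetical storage class
```

The resulting PVC can then be referenced as the root disk of a VirtualMachine, completing the rehost without touching the guest.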
3. The Business Case
› Efficiency, agility and cost control
Cloud-native virtualization is attractive because it improves both the day-1 and day-2 operating models. Most savings fall into one of the following categories:
1. Standardized deployment patterns reduce manual work and coordination overhead (process removal, cost control).
2. A common control plane makes it easier to scale to new sites, add workloads, and respond faster to changing business requirements (system efficiency).
3. Automation reduces pressure on IT teams, especially where small platform teams are expected to run increasingly complex estates (agility growth).
Most importantly, savings are not limited to one license line item: better resource utilization, fewer manual interventions and reduced overall complexity all improve total cost of ownership and operation. That is why the most compelling KubeVirt story is not "VMs on Kubernetes" but "cost-efficient modernization".
› Networking is part of the savings story too
The cost conversation should not stop at hypervisors. In many environments, there is a second layer of accumulated spend around service exposure and load balancing. With Cilium, Kubernetes can operate without kube-proxy, and Cilium provides eBPF-based service load balancing as part of the platform.
That does not mean every advanced F5 or ADC use case disappears. But it does mean many internal applications, standard L4 exposure patterns, and a meaningful subset of north-south traffic delivery can be handled inside the same software-defined platform that already runs the workloads. The practical result is fewer moving parts, less appliance footprint, and a clearer path to cost savings across compute and networking together.
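As an illustration of that consolidation, Cilium's kube-proxy replacement and L2 announcements are typically enabled through Helm values along these lines. Option names have varied between Cilium releases, so treat this as a sketch to verify against your version's documentation:

```yaml
# Illustrative values for the Cilium Helm chart
kubeProxyReplacement: true     # eBPF service load balancing instead of kube-proxy
k8sServiceHost: 10.0.0.10      # placeholder API server address
k8sServicePort: 6443
l2announcements:
  enabled: true                # announce LoadBalancer IPs on the local L2 segment
```

With a CiliumLoadBalancerIPPool defined, ordinary `Service` objects of type LoadBalancer then receive addresses from the platform itself, with no external appliance in the path.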
4. Our Conclusion
For organizations looking to move away from costly virtualization estates, self-hosted Kubernetes with KubeVirt offers a pragmatic middle path. It preserves VM compatibility, introduces declarative operations, aligns VM and container lifecycle management under one control plane, and opens the door to real cost reduction across compute, operations and even parts of the networking stack.
The strongest version of this strategy is not ideological. It is practical: keep the workloads that still need VMs, run them on a cloud-native control plane, deploy them through GitOps, expose services through Cilium where appropriate, and benchmark the platform with production-grade criteria. Done well, that approach can deliver exactly what many teams need most right now: a cheaper, safer and more operationally efficient way forward.
Our Strategic Recommendations:
- Use KubeVirt to consolidate control planes, not just to replace a hypervisor. The real value is not only running VMs on Kubernetes, but operating virtual machines and cloud-native workloads through one consistent platform.
- Treat virtual machines as code from day one. Use Git, Helm and Argo CD to define, version and deploy workloads idempotently, reducing drift and manual operational overhead.
- Design for cost reduction across both compute and networking. Savings should come not only from avoiding expensive virtualization licensing, but also from simplifying service exposure and reducing dependence on proprietary load-balancer appliances where Cilium can cover the requirement.
Moving from traditional virtualization to a Kubernetes-based operating model is as much an architectural and operational transition as it is a technical one. Working with a partner like Pathgate that understands Kubernetes, KubeVirt, Cilium and VM migration patterns can significantly reduce risk, accelerate delivery and help teams avoid costly design mistakes during the transition.
About Pathgate
At Pathgate, we specialize in cutting-edge cloud-native technologies and have extensive experience implementing self-hosted Kubernetes and a range of virtualization solutions across diverse environments. Our team has worked with telecommunications providers and cloud-first enterprises to optimize network and application performance and to implement innovative, efficient solutions.