Nirvana Kubernetes Service (NKS)
Nirvana Kubernetes Service (NKS) is a fully managed Kubernetes platform purpose-built on Nirvana Cloud. It is the optimal way to deploy Kubernetes-native services on Nirvana, giving you the full power of the underlying infrastructure — high-performance compute, Accelerated Block Storage (ABS), and Nirvana Connect — without the operational burden of managing Kubernetes yourself.
Built on the Nirvana Cloud Stack
NKS sits at L3 (Orchestration) in the Nirvana Cloud stack, integrating directly with every layer beneath it:
- L0 · Compute — Worker nodes run on dedicated Nirvana Cloud VMs, delivering deterministic performance with no noisy neighbors.
- L1 · Storage — Persistent volumes are backed by ABS, providing 20K baseline IOPS, sub-millisecond latency, and sustained throughput with no throttling.
- L2 · Networking — Clusters connect to external clouds through Nirvana Connect, a private fiber interconnect with sub-millisecond latency and no egress fees.
- L3 · Orchestration — NKS manages the full Kubernetes lifecycle: provisioning, upgrades, scaling, and monitoring.
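These layers surface in a cluster through ordinary Kubernetes primitives. For example, an ABS-backed persistent volume is requested with a standard PersistentVolumeClaim. This is a sketch only: the storage class name `abs` is an assumed identifier for illustration, not a confirmed NKS value — check the classes available in your cluster with `kubectl get storageclass`.

```yaml
# Sketch: request an ABS-backed persistent volume.
# The storageClassName "abs" is an assumed name, not a confirmed
# NKS identifier; verify with `kubectl get storageclass`.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: abs
```

Because ABS delivers its baseline IOPS without throttling, a single claim like this can back latency-sensitive stateful workloads without burst-credit accounting.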
Availability
NKS is currently available in Silicon Valley (us-sva-2).
Why NKS?
- Fully Managed: The Nirvana team handles cluster creation, control plane operations, upgrades, and maintenance. The management layer is provided at no additional cost.
- Infrastructure-Native: NKS is not a bolt-on — it is built directly on Nirvana’s compute, storage, and private networking.
- Scalable: Add or remove worker nodes and node pools to match your workload. Auto-scaling powered by Karpenter is coming soon.
- Cost-Effective: No additional cost for the management layer; all nodes are billed at standard VM rates with transparent, linear pricing.
Use Cases
- AI/ML inference and serving — Run model inference endpoints, embedding services, and RAG pipelines on Nirvana Cloud with consistent low latency and no resource contention from shared tenants.
- AI agents and orchestration — Deploy autonomous agent frameworks, multi-agent systems, and LLM-powered backends that require reliable compute and fast access to persistent state via ABS.
- Data-intensive workloads — Process and analyze large datasets with ABS-backed persistent storage that sustains high IOPS without throttling. Ideal for vector databases, feature stores, and training data pipelines.
- Hybrid and multi-cloud architectures — Connect NKS clusters to AWS, GCP, or Azure through Nirvana Connect for private, low-latency cross-cloud communication — useful for pulling models or data from other clouds without egress penalties.
- Blockchain infrastructure — Run validators, sequencers, indexers, RPC clusters, and data pipelines on dedicated hardware with predictable performance.
- DeFi and trading platforms — Deploy latency-sensitive services like execution engines, order routers, and market-making bots close to chain hubs.
NKS Components
- Clusters — Groups of nodes running containerized applications, deployed into a VPC of your choice.
- Controller Nodes — Run the Kubernetes control plane. Fully managed by Nirvana with no additional cost for the management layer.
- Node Pools — Groups of worker nodes with identical configurations for organizing workloads by resource requirements.
- Worker Nodes — Execute containerized applications on high-performance infrastructure. Billed based on resource consumption.
- Networking — VPC isolation, load balancing, and private connectivity via Nirvana Connect.
- Storage — High-performance persistent volumes powered by ABS.
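The components above compose through standard Kubernetes manifests. The sketch below schedules a Deployment onto a particular node pool with a nodeSelector and mounts an ABS-backed volume. Note the hedges: the pool label key `nks.nirvana.cloud/node-pool`, the pool name `gpu-pool`, and the claim name `model-cache-pvc` are assumed names for illustration, not confirmed NKS identifiers — inspect your worker nodes' actual labels with `kubectl get nodes --show-labels`.

```yaml
# Sketch: pin a workload to a node pool and mount persistent storage.
# The label key "nks.nirvana.cloud/node-pool" and the pool name
# "gpu-pool" are assumed values, not confirmed NKS identifiers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-api
  template:
    metadata:
      labels:
        app: inference-api
    spec:
      nodeSelector:
        nks.nirvana.cloud/node-pool: gpu-pool  # assumed node-pool label
      containers:
        - name: server
          image: registry.example.com/inference-api:latest  # placeholder image
          volumeMounts:
            - name: model-cache
              mountPath: /models
      volumes:
        - name: model-cache
          persistentVolumeClaim:
            claimName: model-cache-pvc  # an existing ABS-backed claim
```

Because node pools group workers with identical configurations, a nodeSelector on the pool label is enough to keep resource-hungry workloads off general-purpose nodes without taints or affinity rules.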