
AI & High-Performance Computing

Artificial Intelligence workloads have unique infrastructure demands: massive parallel processing power, high-bandwidth memory, and sustained throughput. Hyperscalers treat these resources as luxury items, billing by the second and subjecting users to spot-instance preemption.

Cyberun Cloud treats compute as a utility. Our Aegis Cluster is a dedicated environment designed specifically for the heavy lifting of AI training and inference.

The Aegis Architecture

graph TD
    %% Style Definitions
    classDef gpu fill:#ffebee,stroke:#c62828,stroke-width:2px,color:#000;
    classDef storage fill:#fff8e1,stroke:#fbc02d,stroke-width:2px,color:#000;
    classDef network fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000;

    subgraph ComputeNode ["Aegis Compute Node (Nuremberg)"]
        direction TB
        Pod["AI Training Pod (PyTorch)"]:::gpu
        Driver["NVIDIA GPU Driver"]:::gpu
        NIC["100GbE NIC"]:::network

        Pod == "NVIDIA Container Toolkit" ==> Driver
        Pod -- "Storage I/O (CSI Mount)" --> NIC
    end

    subgraph StorageCluster ["Auxiliary Storage Cluster (New York)"]
        direction TB
        Ceph["Ceph OSD Cluster"]:::storage
    end

    %% Trans-Atlantic Link
    NIC == "WireGuard Fabric" ==> Ceph

    %% Annotation
    Note["Data Prefetch: Direct to VRAM"]
    Note -.-> Pod

Located in Nuremberg, the Aegis cluster is physically isolated from our general-purpose compute nodes.

  • Hardware Isolation: AI workloads are noisy neighbors. By segregating them onto dedicated bare-metal GPU nodes, we ensure that CPU-bound microservices (in Destroyer) never contend with GPU-bound training jobs.
  • Direct Hardware Access: We use the NVIDIA Container Toolkit to pass GPU capabilities directly to Kubernetes Pods, bypassing virtualization overhead (a sketch of how this is wired up follows below).
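
On containerd-based nodes, the toolkit is typically exposed to Kubernetes through a RuntimeClass. A minimal sketch, assuming the conventional handler name "nvidia" (not a confirmed Cyberun detail):

# Sketch: exposing the NVIDIA container runtime to Pods via a RuntimeClass
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia   # containerd runtime handler installed by the NVIDIA Container Toolkit

Pods then request GPUs through the device plugin's nvidia.com/gpu resource, as in the scheduling example further below.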

Fixed-Cost Compute Slots

We disrupt the traditional pricing model of AI compute.

  • The Problem: Public clouds charge by the "GPU-hour." This discourages experimentation: engineers are afraid to leave training jobs running overnight for fear of cost overruns.
  • The Cyberun Solution: Monthly Slots.
      • You reserve a GPU slice (e.g., 1/2 GPU or a full GPU) for a flat monthly fee.
      • Unlimited usage: run your models 24/7. Fine-tune LLMs, generate images, or run batch inference pipelines. The cost never changes.
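
For illustration only (these are invented numbers, not real Cyberun or hyperscaler rates): at a metered $2.00 per GPU-hour, one GPU running around the clock for a 30-day month costs $2.00 × 24 × 30 = $1,440, and the bill grows with every extra hour; a flat monthly slot costs the same whether that GPU idles or trains 24/7.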

Workload Scheduling

We leverage Kubernetes Taints and Tolerations to ensure precise workload placement.

# Example: scheduling a PyTorch job on Aegis (image and GPU request are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-trainer
spec:
  tolerations:
    - key: "sku"
      operator: "Equal"
      value: "gpu-h100"
      effect: "NoSchedule"
  nodeSelector:
    accelerator: nvidia-gpu
  containers:
    - name: trainer
      image: pytorch/pytorch:latest
      resources:
        limits: { "nvidia.com/gpu": 1 }  # GPU exposed by the NVIDIA device plugin
  • Priority Classes: Preemption is scoped to your own organization. Your critical inference API can automatically preempt your background training jobs if demand spikes (see the sketch below).
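
A minimal sketch of such a pair of classes, assuming illustrative names and priority values (not confirmed Cyberun defaults):

# Sketch: organization-scoped preemption via Kubernetes PriorityClasses
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: inference-critical
value: 1000000
description: "Latency-sensitive inference; may preempt lower-priority Pods."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: training-background
value: 1000
preemptionPolicy: Never   # never preempts others, but can itself be preempted
description: "Batch training; evicted first when inference needs the capacity."

Pods opt in via priorityClassName; when GPU capacity runs out, the scheduler evicts the lowest-priority Pods first to make room.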

Sovereign AI

In an age of data privacy concerns, Sovereign AI is critical.

  • Private Training: Your training data never leaves your private VPC mesh. It is pulled from your private Ceph bucket, processed on Aegis, and the resulting model weights are saved back to Ceph (see the sketch below).
  • No Leaks: Public AI APIs may use your data to train their models; running your own open-source models (Llama 3, Mistral, Stable Diffusion) on Cyberun guarantees that your IP remains yours.
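
A minimal sketch of that loop, assuming a Ceph CSI StorageClass named ceph-rbd (illustrative, not a confirmed Cyberun name):

# Sketch: dataset and checkpoints kept on your private Ceph cluster via CSI
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: finetune-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 500Gi

The training Pod mounts this claim through volumes/volumeMounts, so datasets are read and weights written back entirely over the WireGuard fabric shown in the diagram above, never over the public internet.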