# Architecture
This document explains how Kipper works internally. It is aimed at contributors and anyone who wants to understand the system before diving into the code.
## System overview

### Two modes of operation
- **During install:** the CLI connects to the server via SSH, runs commands remotely to install k3s and all components, then fetches the kubeconfig. SSH is used only during installation.
- **After install:** all operations go through the Kubernetes API using the kubeconfig stored locally. The CLI never uses SSH again.
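To make the install-time mode concrete, here is a minimal sketch of the SSH leg using `golang.org/x/crypto/ssh`: run a remote command, then fetch the kubeconfig. The host, key path, and exact commands are placeholders; the real logic lives in `kip/internal/ssh` and `kip/internal/installer`.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: key-based auth to the target server.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_ed25519")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "203.0.113.10:22", &ssh.ClientConfig{
		User:            "root",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch; verify host keys in real code
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	run := func(cmd string) string {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		if err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
		return string(out)
	}

	// Install k3s, then fetch the kubeconfig; after this point SSH is no longer needed.
	run("curl -sfL https://get.k3s.io | sh -")
	kubeconfig := run("cat /etc/rancher/k3s/k3s.yaml")
	fmt.Printf("fetched %d bytes of kubeconfig\n", len(kubeconfig))
}
```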
## Repository structure

```
kipper/
├── kip/ # CLI tool (Go)
│ ├── cmd/ # Cobra commands
│ └── internal/
│ ├── ssh/ # SSH client for remote execution
│ ├── k8s/ # Kubernetes API client
│ ├── installer/ # Cluster bootstrap logic
│ ├── deployer/ # App deployment (Deployment + Service + Ingress)
│ ├── service/ # Stateful service management (StatefulSet + PVC)
│ ├── infra/ # InfraProvider interface + BareMetalProvider
│ ├── git/ # GitProvider interface (GitHub, GitLab)
│ ├── domain/ # Gateway client (subdomain registration)
│ ├── config/ # Config file management
│ └── ai/ # AI provider interface (future)
│
├── console/ # Web console (Vue 3 + TypeScript)
│ └── src/
│ ├── api/ # Typed Axios client
│ ├── stores/ # Pinia stores (auth, cluster, apps, projects)
│ ├── composables/ # useDarkMode, useLogStream, useToast
│ ├── components/ # AppDetail panel, ToastContainer
│ ├── views/ # Dashboard, Projects, Apps, Services, Routes, Users, Login
│ └── layouts/ # Sidebar layout with dark mode toggle
│
├── console-api/ # Console backend (Go + Chi)
│ ├── api/v1alpha1/ # CRD type definitions (kipper.run/v1alpha1)
│ ├── controllers/ # CRD reconcilers (controller-runtime)
│ ├── controller/ # Resource auto-tuning controller
│ ├── handlers/ # REST endpoints
│ ├── middleware/ # JWT auth, logging
│ └── ws/ # WebSocket log streaming
│
├── gateway/ # Subdomain gateway (Go)
│ └── registry/ # In-memory + file-backed subdomain store
│
└── docs/ # This documentation (VitePress)
```

## Gateway architecture
The gateway is a lightweight reverse proxy that manages `*.kipper.run` subdomain routing:

- A wildcard DNS record (`*.kipper.run`) points all subdomains to the gateway
- Caddy terminates TLS using a Let's Encrypt wildcard certificate
- The proxy extracts the cluster identifier from the subdomain (e.g. `hello-203-0-113-10` → cluster `203-0-113-10`)
- It looks up the cluster IP in the registry and proxies the request
- The original `Host` header is preserved so Traefik on the cluster can route to the correct app
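A minimal sketch of the lookup-and-proxy step, using the standard library's `httputil.ReverseProxy`. The registry type and the `clusterID` helper are stand-ins for illustration; the real store lives in `gateway/registry`.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"strings"
	"sync"
)

// registry is a stand-in for gateway/registry: cluster ID -> cluster IP.
type registry struct {
	mu sync.RWMutex
	m  map[string]string
}

func (r *registry) lookup(cluster string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ip, ok := r.m[cluster]
	return ip, ok
}

// clusterID extracts the dash-encoded IPv4 at the end of a subdomain,
// e.g. "hello-203-0-113-10" -> "203-0-113-10". Assumption: cluster IDs
// are always the last four dash-separated tokens.
func clusterID(sub string) string {
	parts := strings.Split(sub, "-")
	if len(parts) < 4 {
		return sub
	}
	return strings.Join(parts[len(parts)-4:], "-")
}

func main() {
	reg := &registry{m: map[string]string{"203-0-113-10": "203.0.113.10"}}

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			sub := strings.TrimSuffix(req.Host, ".kipper.run")
			if ip, ok := reg.lookup(clusterID(sub)); ok {
				req.URL.Scheme = "http"
				req.URL.Host = ip
			}
			// req.Host is deliberately left untouched, so Traefik on the
			// cluster sees the original Host header and routes by it.
			// An unknown cluster leaves req.URL.Host empty, and the proxy
			// answers 502.
		},
	}
	// Caddy sits in front of this listener and terminates TLS.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```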
### Subdomain scheme

All subdomains are single-level so they work with a wildcard certificate: a certificate for `*.kipper.run` covers `hello-203-0-113-10.kipper.run` but not a nested name like `hello.203-0-113-10.kipper.run`, since a wildcard matches only one DNS label.
| URL | Cluster | App |
|---|---|---|
| `203-0-113-10.kipper.run` | `203-0-113-10` | (cluster itself) |
| `console-203-0-113-10.kipper.run` | `203-0-113-10` | `console` |
| `hello-203-0-113-10.kipper.run` | `203-0-113-10` | `hello` |
| `api-203-0-113-10.kipper.run` | `203-0-113-10` | `api` |
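To make the scheme concrete, here is a sketch of splitting a subdomain into its app and cluster parts, under the same assumption as above: the cluster ID is always a dash-encoded IPv4, i.e. the last four dash-separated tokens.

```go
package main

import (
	"fmt"
	"strings"
)

// parseSubdomain splits a single-level subdomain into app and cluster parts.
// An empty app means the subdomain addresses the cluster itself.
func parseSubdomain(sub string) (app, cluster string) {
	parts := strings.Split(sub, "-")
	if len(parts) < 4 {
		return "", sub // not a cluster subdomain; handling left open in this sketch
	}
	cluster = strings.Join(parts[len(parts)-4:], "-")
	app = strings.Join(parts[:len(parts)-4], "-")
	return app, cluster
}

func main() {
	for _, sub := range []string{"203-0-113-10", "console-203-0-113-10", "hello-203-0-113-10"} {
		app, cluster := parseSubdomain(sub)
		fmt.Printf("%-22s app=%q cluster=%q\n", sub, app, cluster)
	}
}
```

Note that this also handles app names that themselves contain dashes, since only the trailing four tokens are claimed by the cluster ID.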
## Authentication flow
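The tree above shows the pieces involved: a Login view and an auth Pinia store in the console, and a JWT middleware in `console-api/middleware/`. A minimal sketch of what such a bearer-token middleware for Chi can look like (the claim handling is an assumption, and `verifyToken` is a hypothetical helper):

```go
package middleware

import (
	"context"
	"net/http"
	"strings"
)

type ctxKey string

const userKey ctxKey = "user"

// RequireJWT is a sketch of a Chi-compatible auth middleware: it expects an
// "Authorization: Bearer <token>" header, verifies the token, and stores the
// subject on the request context. verifyToken is a hypothetical helper; the
// real implementation lives in console-api/middleware.
func RequireJWT(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token, ok := strings.CutPrefix(r.Header.Get("Authorization"), "Bearer ")
		if !ok {
			http.Error(w, "missing bearer token", http.StatusUnauthorized)
			return
		}
		subject, err := verifyToken(token) // hypothetical: parse and validate the JWT
		if err != nil {
			http.Error(w, "invalid token", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), userKey, subject)))
	})
}
```

With Chi, middleware of this shape is mounted on the protected route group via `r.Use(...)`.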
## App deployment internals

When you run `kip app deploy`, Kipper creates an App Custom Resource. A controller-runtime reconciler watches App CRs and ensures the underlying Kubernetes resources (the Deployment, Service, and Ingress) exist and match the spec.
This pattern applies to all Kipper resource types: the CLI and console API create CRs, and reconcilers handle the native Kubernetes resources. This enables GitOps: you can apply CRs directly with `kubectl apply` or sync them via ArgoCD/Flux.
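For orientation, here is a minimal sketch of such a reconcile loop with controller-runtime. The `v1alpha1.App` field names (`Spec.Image`, `Spec.Port`) and the import path are assumptions; the real types live in `console-api/api/v1alpha1` and the reconcilers in `console-api/controllers`.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	v1alpha1 "example.com/kipper/console-api/api/v1alpha1" // assumed import path
)

type AppReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *AppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var app v1alpha1.App // kipper.run/v1alpha1
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		// CR deleted: owned resources are garbage-collected via owner references.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	deploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: app.Name, Namespace: app.Namespace}}
	_, err := controllerutil.CreateOrUpdate(ctx, r.Client, deploy, func() error {
		labels := map[string]string{"app": app.Name}
		deploy.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
		deploy.Spec.Template.Labels = labels
		deploy.Spec.Template.Spec.Containers = []corev1.Container{{
			Name:  app.Name,
			Image: app.Spec.Image, // assumed field
			Ports: []corev1.ContainerPort{{ContainerPort: app.Spec.Port}}, // assumed int32 field
		}}
		// Owner reference ties the Deployment's lifecycle to the App CR.
		return controllerutil.SetControllerReference(&app, deploy, r.Scheme)
	})
	if err != nil {
		return ctrl.Result{}, err
	}
	// The real reconciler does the same for the Service and Ingress.
	return ctrl.Result{}, nil
}
```

`CreateOrUpdate` keeps the loop idempotent: it creates the Deployment if missing and patches it back into shape if it drifts, which is what lets `kubectl apply`d or GitOps-synced CRs converge.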
## Infrastructure provider interface

Kipper is designed to support multiple infrastructure providers through the `InfraProvider` interface:

```go
type InfraProvider interface {
Provision(ctx context.Context, spec MachineSpec) ([]Machine, error)
Destroy(ctx context.Context, machineIDs []string) error
GetLoadBalancer(ctx context.Context, spec LBSpec) (*LoadBalancer, error)
StorageClass() string
Name() string
}
```

Kipper currently ships `BareMetalProvider`, which targets any Linux server reachable over SSH. The interface is provider-agnostic, so additional providers can be added without changing core install or deploy logic.
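To show the shape of an implementation, here is a sketch of a bare-metal provider. Everything beyond the interface is an assumption: the supporting types are stubbed with guessed fields, and the method bodies are illustrative only.

```go
package infra

import "context"

// Assumed minimal shapes for the types the interface references.
type (
	MachineSpec  struct{ Count int }
	Machine      struct{ ID, Address string }
	LBSpec       struct{ Port int }
	LoadBalancer struct{ IP string }
)

// BareMetalProvider sketch: the machines already exist, so "provisioning"
// mostly means handing back the preconfigured hosts.
type BareMetalProvider struct {
	hosts []string // SSH-reachable Linux servers
}

func (p *BareMetalProvider) Provision(ctx context.Context, spec MachineSpec) ([]Machine, error) {
	machines := make([]Machine, 0, len(p.hosts))
	for _, h := range p.hosts {
		machines = append(machines, Machine{ID: h, Address: h})
	}
	return machines, nil
}

func (p *BareMetalProvider) Destroy(ctx context.Context, machineIDs []string) error {
	return nil // nothing to tear down on bare metal
}

func (p *BareMetalProvider) GetLoadBalancer(ctx context.Context, spec LBSpec) (*LoadBalancer, error) {
	// Assumption: no managed LB on bare metal; k3s's built-in ServiceLB covers this.
	return nil, nil
}

func (p *BareMetalProvider) StorageClass() string { return "local-path" } // k3s default provisioner
func (p *BareMetalProvider) Name() string         { return "baremetal" }
```

A cloud provider would implement the same five methods against its own machine, load-balancer, and storage APIs, which is what keeps the install and deploy paths provider-agnostic.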