Installation Reference

kip install

Installs a production-ready Kubernetes cluster on a remote Linux server.

bash
kip install --host <ip> [flags]

Flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --host | Yes | | IP address or hostname of the target server |
| --ssh-key | No | see below | Path to SSH private key. Saved to ~/.kip/config.yaml so subsequent kip commands inherit it. If unset, kip reads KIP_SSH_KEY, then cluster.ssh_key from config; if still unset, ssh consults your ssh-agent and ~/.ssh/config as normal |
| --domain | No | <ip>.kipper.run | Custom domain for the cluster |
| --admin-email | No | admin@<domain> | Email for Let's Encrypt certificates and the admin account. Defaults to admin@<domain> when --domain is set, otherwise admin@kipper.local |
| --org | No | | Organisation short code (e.g. labbc), used as namespace prefix |
| --org-display-name | No | | Human-readable organisation name (e.g. Labb Consulting) |
| --harden | No | true | Disable surplus host services exposed on public interfaces (e.g. rpcbind). Pass --harden=false only when you manage host security yourself |
| --firewall | No | true | Install and configure UFW with k3s-correct rules. Skipped automatically if another firewall is already active. Pass --firewall=false only when you manage host security yourself |
| --backup-storage-bucket | No | | S3-compatible bucket name for Velero backups. When set, backups live off-cluster and survive a wipe. See External backup storage below |
| --backup-storage-region | If bucket set | | AWS region or provider equivalent. Use the actual region for AWS S3, auto for Cloudflare R2, and whatever your provider expects elsewhere |
| --backup-storage-endpoint | No | | S3 endpoint URL. Omit for native AWS S3 (Velero derives it from the region). Required for R2, self-hosted MinIO, B2, Wasabi, DigitalOcean Spaces |
| --backup-storage-credentials | No | ~/.aws/credentials | Path to an AWS-style INI credentials file. Read only at install time and never stored back on disk |
| --backup-storage-profile | No | default | Profile name inside the credentials file. Lets you reuse an existing AWS CLI profile like labb without copying it to a separate file |

What it installs

Preflight checks

Before installing, Kipper verifies:

  • OS: Ubuntu 20.04, 22.04, 24.04, or Debian 11, 12
  • CPU: 2 vCPU minimum (4 vCPU is the realistic floor for anything beyond a hello-world app)
  • RAM: 2 GB minimum (sub-4 GB installs run the nano profile with monitoring disabled). 8 GB is the realistic floor for a real workload. 16 GB+ for production.
  • Disk: 30 GB minimum (80 GB realistic floor, 200 GB+ if you turn on backups or the AI bundle)
  • Ports: 80 (HTTP), 443 (HTTPS), and 6443 (Kubernetes API) must be available

Kipper picks a system sizing profile at install time based on detected RAM: nano below 4 GB, small for 4-8 GB, medium for 8-16 GB, large for 16-32 GB, xlarge above 32 GB. The profile controls how much memory Prometheus, Loki, and other system components get. The nano profile turns metrics off entirely so a small VPS has room for apps.
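
The threshold logic above can be sketched as a small shell function. pick_profile is illustrative, not a kip command, and the behaviour at the exact 8/16/32 GB boundaries is an assumption:

```shell
# Sketch of the RAM-based profile selection described above.
# pick_profile is a hypothetical helper, not part of kip; the exact
# boundary handling at 8/16/32 GB is an assumption.
pick_profile() {
  ram_gb=$1
  if   [ "$ram_gb" -lt 4 ];  then echo nano    # metrics disabled
  elif [ "$ram_gb" -lt 8 ];  then echo small
  elif [ "$ram_gb" -lt 16 ]; then echo medium
  elif [ "$ram_gb" -le 32 ]; then echo large
  else                            echo xlarge
  fi
}

pick_profile 2    # nano
pick_profile 12   # medium
```

A 3 GB VPS lands in nano, which is why monitoring is off there by default.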

Be generous on every dimension

"Minimum" here means "Kipper installs and runs". It does not mean "Kipper is pleasant to use". On a 4 GB / 2 vCPU box, the install works, but a single backup, a routine OS update, or a busy app can push the node into memory pressure and start evicting workloads.

Be generous on every axis at once. Memory matters because Kubernetes evicts pods on pressure. CPU matters because Velero's Kopia backup is essentially single-threaded per volume, so a 5 GB PVC backup on a 2 vCPU box takes 10-15 minutes and a restore can take longer. Disk matters because Longhorn, MinIO (Velero's object store), monitoring metrics, and log retention all share it; the AI bundle's model cache alone is 2-10 GB depending on the model.
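
As a back-of-envelope check, those figures imply single-digit MB/s throughput, which you can use to estimate backup time for bigger volumes. The 50 GB extrapolation below is illustrative, not a documented number:

```shell
# Implied Kopia throughput from "5 GB PVC in 10-15 minutes"
# (MB/s, integer arithmetic):
echo $(( 5 * 1024 / (15 * 60) ))   # 5  (slow end)
echo $(( 5 * 1024 / (10 * 60) ))   # 8  (fast end)

# Rough time for a hypothetical 50 GB PVC at ~7 MB/s, in minutes:
echo $(( 50 * 1024 / 7 / 60 ))     # 121, i.e. about two hours
```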

If you skimp on one axis and not the others, you'll hit a cascade. Disk fills, MinIO refuses writes, backups fail, cleanup needs more disk to make space. Pick a server with headroom on every dimension from the start; resizing later is more work than just paying for a bigger box up front.

Recommended sizing in practice

| Use case | Spec | What you get |
|---|---|---|
| Demo or dev | 2-3 GB RAM, 2 vCPU, 40 GB SSD | The nano profile. Kipper runs without Prometheus, Loki, or Grafana. Enough for a hello-world app or two; no monitoring, no breathing room. |
| Evaluation | 4 GB RAM, 2 vCPU, 40 GB SSD | The small profile. Kipper installs and runs with monitoring enabled but tight. Velero's scheduled backups will fight with your apps for disk; the AI bundle won't fit. |
| Small team / production starter | 8 GB RAM, 4 vCPU, 80 GB SSD | Kipper plus a handful of apps. Backups work but are slow. Not enough free RAM for the AI bundle. kip ai install needs 8 GiB free on a single node, which an 8 GB box cannot provide after k3s and system pods. |
| Production | 16 GB RAM, 8 vCPU, 200 GB NVMe | Several production apps with monitoring on. Backups complete in minutes. Comfortable headroom. |
| Production + AI | 32 GB RAM, 8+ vCPU (or NVIDIA GPU), 500 GB NVMe | The above plus the AI bundle running a 7B+ model with response times users won't complain about. |

For comparison, enterprise Kubernetes distributions typically require 3+ nodes with 16 GB each (48 GB+ total) just for the control plane.

If you're on the small profile (4-8 GB) and need more room for apps, disabling the monitoring stack frees ~1-2 GB:

bash
kip platform disable prometheus
kip platform disable loki

The nano profile (sub-4 GB) already does this for you. See Platform Resources for the full picture.

Components installed

| Component | Purpose |
|---|---|
| k3s | Lightweight Kubernetes distribution |
| Traefik | Ingress controller and reverse proxy |
| cert-manager | Automatic TLS certificates via Let's Encrypt |
| Longhorn | Distributed persistent storage |
| Dex | Identity provider (OAuth2/OIDC) |
| Prometheus + Grafana | Metrics and dashboards (can be disabled) |
| Loki | Log aggregation (can be disabled) |
| Velero | Backup and restore |
| KEDA | Event-driven autoscaling |
| Kipper Console | Web dashboard for cluster management |

Idempotent

The install command is safe to re-run. If a component is already installed, it will be updated rather than duplicated.

External backup storage

By default Kipper installs an in-cluster MinIO and points Velero at it. That gives every new cluster automatic backups with zero configuration, but those backups live on the cluster's own storage. A kip cluster uninstall, a hardware failure, or anything that wipes Longhorn data also wipes the backup bucket.

For a cluster you care about, point Velero at off-cluster object storage instead. The bucket must already exist; kip does not create it.

Native AWS S3

bash
kip install \
  --host 203.0.113.10 \
  --domain example.com \
  --backup-storage-bucket example-kipper-backups \
  --backup-storage-region eu-west-1 \
  --backup-storage-profile labb

--backup-storage-credentials defaults to ~/.aws/credentials. --backup-storage-profile defaults to default. Both [NAME] (the credentials file convention) and [profile NAME] (the config file convention) section headers are accepted.
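
For illustration, a credentials file carrying both a default and a labb profile might look like the one below. The keys and the /tmp path are placeholders:

```shell
# Write an illustrative AWS-style INI credentials file. The key values
# are placeholders; either the [labb] or the [profile labb] header
# convention would be accepted.
cat > /tmp/kip-credentials <<'EOF'
[default]
aws_access_key_id = PLACEHOLDER_KEY_ID
aws_secret_access_key = PLACEHOLDER_SECRET

[labb]
aws_access_key_id = PLACEHOLDER_KEY_ID
aws_secret_access_key = PLACEHOLDER_SECRET
EOF

grep -c '^\[' /tmp/kip-credentials   # 2 profile sections
```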

Cloudflare R2 or other S3-compatible providers

bash
kip install \
  --host 203.0.113.10 \
  --domain example.com \
  --backup-storage-bucket example-kipper-backups \
  --backup-storage-region auto \
  --backup-storage-endpoint https://<accountid>.r2.cloudflarestorage.com \
  --backup-storage-credentials ~/r2-credentials

Same shape works for self-hosted MinIO, Backblaze B2, Wasabi, and DigitalOcean Spaces. Set --backup-storage-endpoint to your provider's S3 URL.

Credentials handling

The credentials file is read once at install time and never written to disk again. Kipper creates a Kubernetes Secret (cloud-credentials in the velero namespace) on the cluster, and the Velero HelmChart references that Secret by name. Your local kip config records the bucket, region, and endpoint but never the keys.

kip upgrade re-applies the Velero chart without rotating the secret, so you do not need to keep the credentials file around after install. To rotate keys, run kip install again with the updated credentials file pointing at the same host — the install path is idempotent and replaces the Secret in place.

One-shot decision

Backup storage is chosen at install time and cannot be changed afterwards. To switch a cluster from in-cluster MinIO to external storage (or between providers), uninstall and reinstall. The plan is: live backups → wipe → fresh install with new flags → restore.

Sharing access with your team

After installing, you can give other developers access to the cluster without sharing SSH keys or server passwords. Export the cluster credentials and send the file to your team:

bash
kip cluster export > my-cluster.kip

Team members import the file and start working immediately:

bash
kip cluster add my-cluster.kip --set-current
kip status

See Team Access for the full workflow, including managing multiple clusters, database tunnels, and shell access.

kip cluster

Manage cluster configurations on your local machine.

bash
kip cluster export > file.kip        # export credentials for sharing
kip cluster add file.kip              # import a cluster
kip cluster add file.kip --set-current # import and switch to it
kip cluster list                      # list all clusters
kip cluster use <name>                # switch active cluster
kip cluster remove <name>             # remove local config
kip cluster uninstall <name>          # wipe the cluster off the remote host
kip cluster domain <domain>           # set a custom domain
kip cluster restart <component>       # restart a cluster component

kip cluster uninstall

Wipes Kipper from a remote Linux server. Runs k3s's own uninstall script over SSH, sweeps the data directories Kipper writes outside k3s (Longhorn volumes, Zot blobs, AI bundle data), and removes the cluster from your local kip config.

bash
kip cluster uninstall apprunr                  # interactive (prompts for cluster name)
kip cluster uninstall apprunr --yes            # skip the confirmation prompt
kip cluster uninstall apprunr --keep-local-config  # wipe host, keep local entry
kip cluster uninstall apprunr --ssh-key ~/.ssh/kipper_ed25519

The command prompts you to type the cluster name to confirm before anything is wiped. Pass --yes only for automation.

This is destructive. All cluster state and persistent volume data on the host is removed. The command does not revert host firewall rules or OS hardening (rpcbind disabled, etc.), because those are general OS security improvements unrelated to k3s.

Use --keep-local-config when you plan to reinstall immediately, so the existing cluster name and kubeconfig path stay in ~/.kip/config.yaml for the new install to refresh.

kip cluster restart

Restarts a Kipper cluster component by triggering a rolling restart of its deployment. Useful when a component has stale configuration or needs to pick up changes.

bash
kip cluster restart dex           # restart identity provider
kip cluster restart console       # restart web console
kip cluster restart console-api   # restart console API
kip cluster restart traefik       # restart ingress controller

kip cluster env

Sets environment variables on a cluster component and restarts it to pick up the changes.

bash
kip cluster env console-api DEV_MODE=true
kip cluster env console-api LOG_LEVEL=debug FEATURE_X=enabled

See Team Access for full documentation.

kip tunnel

Opens a secure tunnel from your machine to a service running in the cluster. Use this to connect desktop database clients (DBeaver, TablePlus, pgAdmin) to databases that are not exposed to the internet.

bash
kip tunnel mydb                        # PostgreSQL on localhost:5432
kip tunnel cache                       # Redis on localhost:6379
kip tunnel mydb --local-port 15432     # custom local port

| Flag | Required | Default | Description |
|---|---|---|---|
| --local-port | No | Same as service port | Local port to listen on |
| --project | No | default | Project name |
| --environment | No | | Target environment |

See Team Access for full documentation.

kip exec

Opens an interactive shell or runs a command inside a running container.

bash
kip exec myapp                         # interactive shell
kip exec myapp -- cat /app/config.yaml # run a single command
kip exec mydb -- psql -U kipper app   # SQL session in a database pod

See Team Access for full documentation.

kip status

Shows cluster health, node status, and component availability.

bash
kip status

kip node add

Joins a worker node to an existing cluster.

bash
kip node add --host <ip> [--ssh-key <path>]

kip node list

Lists all nodes in the cluster with role, status, version, and IP.

bash
kip node list

kip auth reset-password

Generates a new admin password, updates Dex, and displays the new credentials.

bash
kip auth reset-password

This command requires the kubeconfig stored in ~/.kip/clusters/. Only someone with cluster admin access can run it.

kip discover

Find Kipper-labelled workloads on the cluster that have no owning Kipper CR. Read-only.

bash
kip discover

Kipper considers a Service, App, Volume, or Function to "exist" when its CR exists. A Deployment, StatefulSet, or PVC carrying app.kubernetes.io/managed-by=kipper without a matching CR is drift: it will not show up in kip service list or in the console, even though it occupies cluster resources.

kip discover lists each orphan and prints a suggested kip command that recreates it as a proper CR with the workload's current settings. Run the suggested command to bring the orphan under management; the controller adopts the existing workload to match the new CR, so no deletion is needed. Edit the suggested command first if you want to change anything.

For Deployments, the suggestion uses kip app deploy with --image, --port, --memory, --cpu, --env, --replicas. For StatefulSets, kip service add plus the service type (postgres, redis, and so on), with --storage, --memory, --cpu. For PVCs, kip volume create with --size. Functions print a comment instead, because the source code is not on the workload and has to come from kip function create.

Workloads in the kipper-system namespace are skipped. The console, console-api, and zot legitimately have no owning CR.

kip cert email

Shows or updates the Let's Encrypt email used for TLS certificates. This is the email cert-manager uses when registering with Let's Encrypt for automatic certificate issuance.

bash
kip cert email                    # show current email
kip cert email admin@example.com  # update email and renew stuck certs

When updating, the command re-registers with Let's Encrypt using the new email and triggers renewal for any certificates that are stuck or failed. Certificates usually come through within 1-2 minutes.

See Domains & SSL: Troubleshooting certificates for common certificate issues.

kip ai

kip ai covers two related things: choosing which AI provider Kipper itself uses (for log analysis, Dockerfile generation, diagnostics), and installing a private LLM stack inside the cluster that your apps can call.

kip ai configure / kip ai status

Configure which AI provider Kipper uses. Supports Claude (Anthropic), OpenAI, and Ollama (self-hosted).

bash
kip ai configure                                     # interactive setup
kip ai configure --provider claude --key sk-ant-...  # non-interactive
kip ai configure --provider ollama                   # self-hosted (no key needed)
kip ai status                                        # show current config and bundle health

| Flag | Required | Default | Description |
|---|---|---|---|
| --provider | No | | AI provider: claude, openai, ollama |
| --key | No | | API key (not needed for Ollama) |
| --model | No | | Model override |
| --ollama-url | No | http://localhost:11434 | Ollama server URL |

kip ai admin create

Seed the first LibreChat admin account after kip ai install. The bundle ships with open registration off, so an admin must be created once before anyone can log in. Runs npm run create-user inside the running librechat pod via the Kubernetes API. No kubectl required.

bash
kip ai admin create --email you@example.com --name 'Your Name' --password 'a-strong-password'
kip ai admin create --email you@example.com --name 'Your Name' --username alice --password '...'

| Flag | Required | Default | Description |
|---|---|---|---|
| --email | Yes | | Admin email address |
| --password | Yes | | At least 8 characters |
| --name | Yes | | Display name shown in the chat UI |
| --username | No | local part of --email | LibreChat username |

kip ai install / kip ai uninstall

Install or remove the in-cluster AI bundle (Ollama and LibreChat). The full walkthrough is on the AI Bundle page.

bash
kip ai install                            # detect tier, pick a model, install
kip ai install --host chat.example.com    # override the chat hostname
kip ai install --model qwen2.5:7b-instruct-q4_K_M
kip ai uninstall                          # remove the bundle and wipe its data

kip ai uninstall is destructive: it deletes the workload, the PVCs (model cache, chat history, MongoDB), credentials, and the kipper-ai namespace. Take a blocking snapshot first with kip ai backup --name pre-uninstall --wait if you want to preserve any of it. Uninstall refuses by default while a Kipper AI backup is still in flight; pass --force to override (you'll get an unrestorable snapshot if you do).

| Flag | Command | Default | Description |
|---|---|---|---|
| --host | install | chat.<cluster-domain> | External chat UI hostname |
| --model | install | tier-appropriate Qwen 2.5 | Ollama model tag to preload |
| --pvc-size | install | 10/30/60 GiB by tier | Model cache PVC size |
| -y, --yes | install | false | Skip the auto-configure prompt |

kip ai backup / kip ai restore

Velero-backed snapshot of the AI bundle. Velero is a Kipper system component, so no extra setup is needed.

bash
kip ai backup                            # auto-name, exits after 60s warmup
kip ai backup --name pre-upgrade         # named, exits after 60s warmup
kip ai backup --name pre-upgrade --wait  # block until completion (CI scripts)
kip ai backup show --name pre-upgrade    # detailed status (phase, items, errors)
kip ai backup list
kip ai backup delete --name pre-upgrade           # exits after 60s warmup
kip ai backup delete --name pre-upgrade --wait    # block until Backup CRs are gone
kip ai restore --name pre-upgrade        # requires kipper-ai uninstalled first

| Flag | Command | Default | Description |
|---|---|---|---|
| --name | backup | timestamped | Snapshot name |
| --wait | backup | false | Block until completion instead of exiting after the 60s warmup |
| --wait | backup delete | false | Block until both Backup CRs are gone instead of exiting after the 60s warmup |
| --name | backup show | | Snapshot to inspect (required) |
| --name | backup delete | | Snapshot to delete (required) |
| --name | restore | | Snapshot to restore (required) |

kip app update

Updates the container image for a deployed application and triggers a rolling update.

bash
kip app update api --image ghcr.io/acme/api:v2.1.0

| Flag | Required | Description |
|---|---|---|
| --image | Yes | New container image |
| --project | No | Project name |
| --environment | No | Target environment |

kip app scale

Sets the replica count for a deployed application.

bash
kip app scale api --replicas 3

| Flag | Required | Description |
|---|---|---|
| --replicas | Yes | Number of replicas |

Setting replicas to 0 stops the application without deleting it.

kip app env / kip app secret

Manage environment variables and secrets for an application.

bash
kip app env set api LOG_LEVEL=debug       # set env var
kip app env list api                      # list with values visible
kip app env delete api LOG_LEVEL          # remove

kip app secret set api DATABASE_URL       # interactive hidden prompt
kip app secret list api                   # keys only, values masked
kip app secret reveal api DATABASE_URL    # show a single value
kip app secret rollback api DATABASE_URL  # restore previous value
kip app secret delete api DATABASE_URL    # remove

See Secrets & Environment for full documentation.

kip app link

Connect apps so one can reach another via a URL.

bash
kip app link domain-service api-gateway             # internal URL (backend-to-backend)
kip app link domain-service webapp --public          # public URL (for frontend apps)
kip app unlink domain-service api-gateway            # removes DOMAIN_SERVICE_URL

Use --public when linking to a frontend app that runs in the browser. It injects the target's public HTTPS URL instead of the internal Kubernetes DNS.

See Deploying Apps: Linking apps for details.

kip service

Manage stateful services (databases, caches) with persistent storage.

bash
kip service add postgres --name mydb         # deploy PostgreSQL
kip service add redis --name cache           # deploy Redis
kip service list                             # list all services
kip service info mydb                        # show connection details
kip service delete mydb --delete-data        # delete (requires flag)

See Stateful Services for full documentation.

kip project

Manage projects and environments.

bash
kip project create yourr-name --environments test,acc,prod
kip project create yourr-name --display-name "yourr.name Domain Platform" --environments test,acc,prod
kip project list
kip project add-env yourr-name prod
kip project remove-env yourr-name staging
kip project delete yourr-name

kip project remove-env deletes the matching namespace and everything in it. You'll be asked to type the environment name to confirm. See Projects & Environments for details.

kip project use

Set a persistent project context for the current cluster, so other kip commands do not need --project and --environment flags every time.

bash
kip project use yourr-name           # active project: yourr-name, default environment
kip project use yourr-name/test      # active project: yourr-name, environment: test
kip project use yourr-name test      # same, with a space instead of a slash
kip project use --clear              # forget the active project on this cluster

The active project is stored per cluster in ~/.kip/config.yaml. After setting it, kip service list, kip app list, kip volume list, kip function list, and similar commands resolve to the active project's namespace automatically. kip cluster list shows the active project on each cluster line.

Explicit --project flags still win. Passing --project other-name switches that single command to the other project; the persisted environment is not carried over so a different project never inherits a stale environment.

See Projects & Environments for full documentation.

kip app promote

Promote an app from one environment to the next (copies the image tag only).

bash
kip app promote api --from test --to acc --project yourr-name
kip app promote --all --from acc --to prod --project yourr-name

See Projects & Environments for full documentation.

kip function

Manage serverless functions. Functions scale to zero when idle and spin up on demand. Alias: kip fn.

bash
kip function create process-image --image myregistry/processor:v1 --port 8080 --project yourr-name
kip function create db-sync --trigger postgres --source mydb --query "SELECT * FROM events WHERE processed = false" --project yourr-name
kip function create cache-worker --trigger redis --source cache --list jobs --project yourr-name
kip function list --project yourr-name
kip function logs process-image --project yourr-name
kip function delete process-image --project yourr-name

kip function create flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --image | Yes | | Container image for the function |
| --trigger | No | http | Trigger type: http, postgres, mysql, redis, minio |
| --port | No | 8080 | Port the function listens on |
| --source | No | | Service name for event triggers |
| --query | No | | SQL query for postgres/mysql triggers |
| --mark-done | No | | SQL to mark rows as processed |
| --list | No | | Redis list name for redis triggers |
| --bucket | No | | MinIO bucket name for minio triggers |
| --idle-timeout | No | 300 | Seconds of inactivity before scaling to zero |
| --project | No | default | Project name |
| --environment | No | | Target environment |

See Serverless Functions for full documentation.

kip job

Run one-off tasks and scheduled jobs.

bash
kip job run --name migrate --image myapp:latest --command "npm run migrate" --project yourr-name --environment test
kip job schedule --name cleanup --image myapp:latest --command "python cleanup.py" --cron "0 3 * * *"
kip job list --project yourr-name
kip job history cleanup
kip job delete cleanup
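
The --cron value in the schedule example is standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week), so "0 3 * * *" runs daily at 03:00. A quick way to read an expression apart:

```shell
# Pull apart a five-field cron expression. For "0 3 * * *":
# minute 0, hour 3, any day-of-month, any month, any day-of-week.
cron='0 3 * * *'
set -f            # stop the shell from glob-expanding the asterisks
set -- $cron      # split the expression into $1..$5
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
# → minute=0 hour=3 day=* month=* weekday=*
```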

See Jobs & Scheduled Tasks for full documentation.

kip volume

Create shared persistent volumes that can be mounted by multiple apps. Backed by Longhorn with ReadWriteMany access.

bash
kip volume create uploads --size 5Gi --project yourr-name --environment test
kip volume mount uploads webapp --path /data --project yourr-name --environment test
kip volume list --project yourr-name
kip volume delete uploads --project yourr-name --environment test

kip volume create flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --size | No | 5Gi | Volume size |
| --project | No | default | Project name |
| --environment | No | | Target environment |

kip volume mount flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --path | No | /data | Mount path inside the container |
| --project | No | default | Project name |
| --environment | No | | Target environment |

See Storage for full documentation.

kip user

Manage cluster users and roles. Kipper supports three roles: admin (full access), deployer (deploy, scale, manage apps and services), and viewer (read-only).

bash
kip user list
kip user add dev@example.com --role deployer
kip user add pm@example.com --role viewer --password secret123
kip user invite --role deployer                    # generate invite link
kip user invite --role admin --expires 24h         # invite with custom expiry
kip user role dev@example.com --role admin         # change role
kip user remove dev@example.com

kip user add flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --role | No | deployer | Role: admin, deployer, or viewer |
| --password | No | prompted | Password (prompted interactively if not provided) |

kip user invite flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --role | No | deployer | Role: admin, deployer, or viewer |
| --expires | No | 48h | Expiry: 24h, 48h, 7d |

See Team Access for full documentation.

kip registry

Manage container registry credentials. Registries are configured cluster-wide and synced to all project namespaces.

bash
kip registry add --server ghcr.io --username myuser --password ghp_token123
kip registry add --server registry.example.com --username deploy --password secret
kip registry list
kip registry remove ghcr-io

| Flag | Required | Default | Description |
|---|---|---|---|
| --server | Yes | | Registry server (e.g. ghcr.io, registry.example.com) |
| --username | Yes | | Registry username or token name |
| --password | Yes | | Registry password or access token |
| --name | No | auto-generated | Secret name (derived from server if omitted) |

kip credentials

Read back the git and container-registry credentials stored in the cluster. Useful if you no longer have a copy of a token and need to reuse it somewhere else, such as another cluster or a CI pipeline.

bash
kip credentials list                          # masked overview, both types
kip credentials list --type git               # masked overview, git only
kip credentials get git-labb-tools            # plaintext token to stdout
kip credentials get ghcr-io --type registry   # disambiguate if names collide

The console masks credential values by default. To reveal one in the browser, click the eye icon next to a credential in Settings and re-enter your password. On the CLI, kip credentials get prints plaintext to stdout without prompting, so make sure nothing is watching your terminal.

Reading credentials from the CLI requires kubeconfig access to secrets in the kipper-system namespace, which in practice means this is a cluster-admin command.

| Flag | Required | Default | Description |
|---|---|---|---|
| --type | No | | Restrict or disambiguate: git or registry |

kip backup

Create, list, and restore cluster backups using Velero.

bash
kip backup create                                                   # backup everything
kip backup create weekly --project yourr-name                       # backup one project
kip backup create --project yourr-name --environment prod --ttl 720h  # 30-day retention
kip backup list
kip backup restore weekly-20260413
kip backup restore weekly-20260413 --namespace yourr-name-prod      # restore one namespace
kip backup schedules                                                # list scheduled backups

kip backup create flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --project | No | | Backup a specific project only |
| --environment | No | | Backup a specific environment only |
| --ttl | No | 168h (7 days) | Backup retention period |
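
--ttl takes a duration in hours, so the 720h in the example above is 30 days of retention. A trivial conversion helper:

```shell
# Convert a desired retention in days to kip's hour-based --ttl value:
days=30
echo "--ttl $(( days * 24 ))h"   # --ttl 720h
```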

kip backup restore flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --namespace | No | | Restore only a specific namespace |
| --namespace-mapping | No | | Restore to a different namespace (format: source:target) |
| --resources | No | | Restore only specific resource types (comma-separated) |

kip platform

Manage system component sizing, including the observability stack (Prometheus, Grafana, Loki). See Platform Resources for the full reference.

bash
kip platform status                          # active profile + per-component state
kip platform resize prometheus --memory 2Gi  # set a manual memory override
kip platform disable loki                    # turn a component off
kip platform enable loki                     # turn it back on
kip platform restart prometheus              # rolling restart
kip platform profile show                    # current profile
kip platform profile set large               # change profile

kip monitoring enable/disable/status still works as a thin compatibility wrapper that writes to the same PlatformConfig CR, but it will be removed in a future release.

kip blueprint

Browse and install application blueprints. Blueprints are pre-built templates for common application stacks.

bash
kip blueprint list
kip blueprint info wordpress
kip blueprint install wordpress --project myblog --set domain=blog.example.com

kip blueprint install flags

| Flag | Required | Default | Description |
|---|---|---|---|
| --project | No | | Project name (overrides template) |
| --environment | No | | Target environment |
| --set | No | | Parameter values (key=value, repeatable) |

kip apply

Apply a kipper.yaml manifest to the cluster. This is the declarative way to manage Kipper resources, similar to kubectl apply.

bash
kip apply                                         # apply ./kipper.yaml
kip apply -f myapp.yaml                           # apply a specific file
kip apply -f deploy/                              # apply all manifests in a directory
kip apply --dry-run                               # preview changes without applying
kip apply --project yourr-name --environment prod  # override project/environment

| Flag | Required | Default | Description |
|---|---|---|---|
| -f, --file | No | kipper.yaml | Path to manifest file or directory |
| --dry-run | No | false | Print what would be applied without making changes |
| --project | No | from manifest | Override the project name |
| --environment | No | from manifest | Override the environment |

kip diff

Show differences between a kipper.yaml manifest and the live cluster state. Useful for reviewing changes before applying.

bash
kip diff                               # diff ./kipper.yaml against live state
kip diff -f myapp.yaml                 # diff a specific file
kip diff --project yourr-name          # override project

| Flag | Required | Default | Description |
|---|---|---|---|
| -f, --file | No | kipper.yaml | Path to manifest file or directory |
| --project | No | from manifest | Override the project name |
| --environment | No | from manifest | Override the environment |

kip export

Export the current cluster state as a kipper.yaml manifest. Useful for backing up configuration or migrating between clusters.

bash
kip export --project yourr-name                   # export to stdout
kip export --project yourr-name -o kipper.yaml   # export to file
kip export --project yourr-name --split           # export as directory with one file per environment

| Flag | Required | Default | Description |
|---|---|---|---|
| --project | Yes | | Project name |
| --environment | No | | Export a specific environment only |
| -o, --output | No | stdout | Output file |
| --split | No | false | Export as a directory with one file per environment |

kip init

Generate a kipper.yaml manifest from a blueprint template.

bash
kip init --blueprint wordpress --set domain=blog.example.com
kip init --blueprint nodejs -o kipper.yaml

| Flag | Required | Default | Description |
|---|---|---|---|
| --blueprint | Yes | | Blueprint name to use as template |
| --set | No | | Parameter values (key=value, repeatable) |
| -o, --output | No | kipper.yaml | Output file |

kip upgrade

Upgrades the cluster to the versions pinned in this kip build. Three things happen, in order:

  1. Kipper CRDs are updated (so newer console features that need new CRD fields work).
  2. Console and console-api are restarted to pull the latest images.
  3. Cluster system components (Traefik, Longhorn, KEDA, Loki, Prometheus, Grafana, Velero, Zot, security middleware) are reconciled. Each chart is re-applied at the version pinned in kip, and helm-controller upgrades it in place.

Your apps and services are not directly touched, but step 3 can briefly disrupt running workloads if a chart upgrade rolls pods. For that reason kip prompts before running step 3 and refuses to proceed in non-interactive contexts without --yes.

bash
kip upgrade                    # default, prompts before system components
kip upgrade --skip-system      # only steps 1 and 2 (Kipper console layer)
kip upgrade --yes              # all three steps, no prompt (for automation)

Flags

| Flag | Default | Description |
|---|---|---|
| --skip-system | false | Upgrade only Kipper CRDs + console. Use this in production to avoid touching cluster components |
| --yes | false | Skip the confirmation prompt before upgrading cluster components. Required in non-interactive contexts (CI, scripts) |
| --ssh-key | inherited from ~/.kip/config.yaml | SSH private key for connecting to the cluster host. If unset, falls back to KIP_SSH_KEY env, then the saved cluster.ssh_key, then your ssh-agent. Only needed for system component upgrades |

What is NOT upgraded

  • k3s itself. Control-plane upgrade is risky and not automated by Kipper. Use the upstream k3s install script with the desired version when you need to bump.
  • cert-manager, Dex, console deployment manifests. These install paths take state-bearing arguments (admin email, OAuth client secret) that kip upgrade does not re-supply. Their image versions can still be picked up by re-running kip install on the same host, which is idempotent.
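
For the k3s bump in the first bullet, the upstream installer accepts a pinned version through its INSTALL_K3S_VERSION variable. The sketch below only composes and prints the command; the version shown is an example, not a Kipper-tested pin:

```shell
# Compose the upstream k3s installer invocation with a pinned version.
# INSTALL_K3S_VERSION is the installer's own variable; the version here
# is an example only — pick one from the k3s releases page and run the
# command on the cluster host.
K3S_VERSION="v1.30.4+k3s1"
cmd="curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${K3S_VERSION} sh -"
echo "$cmd"
```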

Recovering from a failed chart upgrade

If a system component upgrade fails (the helm release ends in a failed state), subsequent kip upgrade runs may hang on helm uninstall --wait while helm-controller tries to reset the release. The fix is to drop the stale helm release secrets and let helm-controller do a fresh install:

bash
ssh root@<cluster-host>
kubectl -n kube-system delete job helm-install-<chart>
kubectl -n kube-system delete pods -l helmcharts.helm.cattle.io/chart=<chart> --force --grace-period=0
kubectl -n <chart-target-ns> delete secret -l owner=helm,name=<chart>
kubectl -n kube-system annotate helmchart <chart> kip.kipper/reapply="$(date +%s)" --overwrite

Released under the Apache 2.0 License.