Stateful Services
Kipper manages stateful services (databases, caches) separately from apps. Services use StatefulSets with persistent storage. They survive restarts and keep their data.
Adding a service
```bash
kip service add postgres --name mydb
kip service add mysql --name mydb
kip service add mongodb --name mydb
kip service add redis --name cache
kip service add rabbitmq --name queue
kip service add opensearch --name search
kip service add minio --name storage
```

Supported types: postgres, mysql, mongodb, redis, rabbitmq, opensearch, minio
Per-environment services
Each environment can have its own database with separate credentials and storage:
```bash
kip service add postgres --name db --project yourr-name --environment test --storage 1Gi
kip service add postgres --name db --project yourr-name --environment acc --storage 2Gi
kip service add postgres --name db --project yourr-name --environment prod --storage 10Gi
```

Each environment's database is fully isolated. Test data never touches production. Test and acc can use smaller storage allocations to save resources.
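Provisioning the same service across several environments is repetitive by hand, so it is easy to script. A minimal sketch, assuming the storage sizes from the commands above and a hypothetical `provision_commands` helper (not part of the kip CLI):

```python
import subprocess

# Storage per environment (illustrative sizes, mirroring the commands above).
STORAGE = {"test": "1Gi", "acc": "2Gi", "prod": "10Gi"}

def provision_commands(project: str, name: str = "db") -> list[list[str]]:
    """Build one `kip service add postgres` invocation per environment."""
    return [
        ["kip", "service", "add", "postgres",
         "--name", name, "--project", project,
         "--environment", env, "--storage", size]
        for env, size in STORAGE.items()
    ]

for cmd in provision_commands("yourr-name"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually provision
```

Printing the commands first (instead of running them immediately) gives you a dry run to review before touching the cluster.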
Options
| Flag | Default | Description |
|---|---|---|
| --name | Required | Service name |
| --project | default | Project name |
| --environment | — | Target environment (e.g. test, acc, prod) |
| --storage | 5Gi (postgres/mysql/mongodb/opensearch), 1Gi (redis/rabbitmq), 10Gi (minio) | Storage size |
Connection details
After creating a service, the connection details are displayed:
```
Host:     mydb.default.svc.cluster.local
Port:     5432
Username: kipper
Password: a1b2c3d4e5f6...
Database: app
```

Retrieve them later:

```bash
kip service info mydb
```

The hostname (mydb.default.svc.cluster.local) is a Kubernetes internal DNS name. Apps running on the same cluster can connect to it directly.
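If you want to feed those details into a script, the `Key: value` lines are straightforward to parse and assemble into a connection URL. A sketch, assuming the output format shown above (the `parse_service_info` and `postgres_url` helpers are illustrative, not part of kip):

```python
def parse_service_info(text: str) -> dict:
    """Parse the 'Key: value' lines printed by `kip service info`."""
    info = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        if value:
            info[key.strip().lower()] = value.strip()
    return info

def postgres_url(info: dict) -> str:
    """Assemble a standard postgres:// URL from the parsed components."""
    return (f"postgres://{info['username']}:{info['password']}"
            f"@{info['host']}:{info['port']}/{info['database']}")

sample = """\
Host: mydb.default.svc.cluster.local
Port: 5432
Username: kipper
Password: a1b2c3d4e5f6
Database: app
"""
print(postgres_url(parse_service_info(sample)))
# → postgres://kipper:a1b2c3d4e5f6@mydb.default.svc.cluster.local:5432/app
```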
MinIO (S3-compatible object storage)
MinIO provides S3-compatible object storage for file uploads, media, documents, and other binary data.
```bash
kip service add minio --name storage --project yourr-name --environment test
```

```
Host:     storage.yourr-name-test.svc.cluster.local
Port:     9000
Username: kipper
Password: a1b2c3d4e5f6...
Endpoint: http://storage.yourr-name-test.svc.cluster.local:9000
Console:  http://storage.yourr-name-test.svc.cluster.local:9001
```

Bind MinIO to your app to inject credentials automatically:

```bash
kip service bind storage api --project yourr-name --environment test
```

This injects S3_HOST, S3_PORT, S3_USERNAME, and S3_PASSWORD into the app. Use them with any S3-compatible SDK (AWS SDK, MinIO SDK, boto3). See the Storage page for mc CLI examples and SDK code samples.
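As one possible shape for app-side code, the injected variables can be collected into the keyword arguments an S3-compatible client expects. A sketch assuming boto3 as the SDK (the `s3_config` helper is illustrative):

```python
import os

def s3_config() -> dict:
    """Collect the injected S3_* variables into keyword arguments in the
    shape boto3's client() accepts for an S3-compatible endpoint."""
    host = os.environ["S3_HOST"]
    port = os.environ["S3_PORT"]
    return {
        "endpoint_url": f"http://{host}:{port}",
        "aws_access_key_id": os.environ["S3_USERNAME"],
        "aws_secret_access_key": os.environ["S3_PASSWORD"],
    }

# With boto3 installed, inside the bound app this becomes:
#   import boto3
#   s3 = boto3.client("s3", **s3_config())
#   s3.upload_file("report.pdf", "uploads", "report.pdf")
```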
File explorer
MinIO services include a built-in file explorer in the web console. Navigate to Storage in the sidebar to browse buckets, upload and download files, delete objects, and generate share links (presigned URLs). See the Storage page for full details.
Browser-based database console
Postgres and MySQL services have a built-in client in the web console: SQL editor with schema-aware autocomplete, table browser with inline row editing, visual table and index designer, AI assistant that knows your schema, and per-user query history. No desktop tool needed.
Click the code icon on a Postgres or MySQL service row in the Services list (or open the side panel and click the same icon next to AI Diagnose) to open it. See the Database Console page for the full tour.
For other database types (MongoDB, Redis, OpenSearch, RabbitMQ), use a desktop client through kip tunnel as described below.
Connecting from your machine
Services run inside the cluster and are not exposed to the internet. To connect with a desktop database client (DBeaver, TablePlus, pgAdmin, RedisInsight, or any other tool), use kip tunnel to open a secure connection from your machine to the service:
```bash
kip tunnel mydb
```

```
✔ Tunnel open: localhost:5432 → mydb (postgres)
Press Ctrl+C to close
```

Open your database client and connect to localhost:5432 with the credentials from kip service info mydb.
If the default port is already in use on your machine, pick a different one:

```bash
kip tunnel mydb --local-port 15432
```

For services in a specific environment:

```bash
kip tunnel db --project yourr-name --environment staging
```

See Team Access for the full tunnel documentation, including Redis examples and troubleshooting.
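If you script tunnels and want to avoid port collisions entirely, you can ask the OS for a free port and pass it to --local-port. A small sketch (the `free_local_port` helper is illustrative, not part of kip):

```python
import socket

def free_local_port() -> int:
    """Ask the OS for an unused TCP port to pass to `kip tunnel --local-port`."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))  # port 0 = let the OS pick one
        return s.getsockname()[1]

port = free_local_port()
print(f"kip tunnel mydb --local-port {port}")
```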
Binding to apps and functions
Bind a service to inject its connection details as environment variables. Both apps and functions accept bindings; the same prefix scheme applies.
```bash
# To an app
kip service bind db domain-service --project yourr-name --environment test

# To a function
kip function bind opensrs-sync db --project yourr-name --environment test
```

Per-app databases
For database services (PostgreSQL, MySQL, MongoDB), Kipper automatically creates a dedicated database for each app. The database name is derived from the app name and environment:
| App | Environment | Database name |
|---|---|---|
| domain-service | test | domain_service_test |
| identity-service | prod | identity_service_prod |
| exchange-service | acc | exchange_service_acc |
| api | (none) | api |
This means multiple microservices can share a single PostgreSQL instance while keeping their data completely isolated.
Injected environment variables
The binding injects individual connection components with a type-based prefix. The prefix depends on the service type:
Database services (PostgreSQL, MySQL, MongoDB), prefix DB_:
| Variable | Example value |
|---|---|
| DB_HOST | db.yourr-name-test.svc.cluster.local |
| DB_PORT | 5432 |
| DB_USERNAME | kipper |
| DB_PASSWORD | a1b2c3d4e5f6... |
| DB_NAME | domain_service_test |
MinIO, prefix S3_:
| Variable | Example value |
|---|---|
| S3_HOST | storage.yourr-name-test.svc.cluster.local |
| S3_PORT | 9000 |
| S3_USERNAME | kipper |
| S3_PASSWORD | a1b2c3d4e5f6... |
Redis, prefix REDIS_. RabbitMQ, prefix AMQP_. OpenSearch, prefix OPENSEARCH_.
Your app constructs a connection URL from these components in whatever format your framework needs:
- Node.js / Python / Ruby / Go:
  postgres://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
- Java / Spring Boot (JDBC):
  jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}
- S3 endpoint (MinIO):
  http://${S3_HOST}:${S3_PORT}
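In app code, assembling these formats amounts to string interpolation over the injected variables. A sketch of both shapes (the helper names are illustrative; note that the JDBC form carries no credentials, since Spring reads username and password from separate properties):

```python
import os

def postgres_dsn() -> str:
    """postgres:// form with inline credentials, built from DB_* variables."""
    return (f"postgres://{os.environ['DB_USERNAME']}:{os.environ['DB_PASSWORD']}"
            f"@{os.environ['DB_HOST']}:{os.environ['DB_PORT']}/{os.environ['DB_NAME']}")

def jdbc_url() -> str:
    """JDBC form; credentials go into separate Spring properties."""
    return (f"jdbc:postgresql://{os.environ['DB_HOST']}"
            f":{os.environ['DB_PORT']}/{os.environ['DB_NAME']}")
```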
Unbinding
Remove a binding from the console (env tab → click X on the service) or the CLI:
```bash
kip service unbind db domain-service --project yourr-name --environment test
```

Deleting a service automatically unbinds it from all apps. The per-app databases are not dropped; they remain in the PostgreSQL instance for manual cleanup if needed.
The app restarts automatically when a binding is added or removed.
Listing services
```bash
kip service list
```

```
NAME   TYPE      STATUS   READY  STORAGE
mydb   postgres  running  1/1    5Gi
cache  redis     running  1/1    1Gi
```

Services also appear in the web console under the Services sidebar item, where you can view connection details with a masked URL and copy-to-clipboard.
Deleting a service
Deleting a service permanently destroys all data. Kipper requires an explicit flag to prevent accidents:
```bash
# This will be rejected
kip service delete mydb

# This works; data is permanently destroyed
kip service delete mydb --delete-data
```

DANGER
--delete-data is irreversible. The persistent volume and all data are permanently deleted. There is no undo.
Copying data between environments
When you copy an environment via the wizard, the new env's databases come up empty. To bring data over, open a service in the new env and switch to the Migrate data tab in the side panel. The tab lists every same-named, same-type service in other namespaces (e.g. the same backend postgres in your test env). Click Copy data here next to the source you want, type the service name to confirm, and the migration starts as a Kubernetes Job in the target namespace.
What it does, in postgres terms:
```bash
PGPASSWORD=$SOURCE pg_dump --clean --if-exists ... | PGPASSWORD=$TARGET psql --set ON_ERROR_STOP=1 ...
```

The --clean --if-exists flags make re-runs safe: every object is dropped before being recreated. The ON_ERROR_STOP setting fails the job loudly on any restore error rather than leaving you with a half-restored database.
A few things worth knowing:
- The target database is wiped. Tables, sequences, views, everything. The wizard's confirm modal asks you to type the service name on purpose.
- Source credentials are mirrored, not exposed. The job mounts a temporary copy of the source service's credentials secret in the target namespace. The mirror is owned by the job and gets garbage-collected when the job is cleaned up (one hour after completion).
- No retries. If the dump or restore fails, the job stops there. Re-run it after fixing the underlying issue. Re-runs overwrite cleanly.
- Postgres only for now. MySQL, MongoDB, Redis, MinIO, RabbitMQ and OpenSearch follow in upcoming releases.
The status panel polls every couple of seconds while a migration is running and tails the last 50 lines of pod logs so you can see pg_dump progress as it happens.
Resource limits
Configure CPU and memory limits for your services from the Resources tab in the service detail panel. Click a service in the web console, switch to the Resources tab, and adjust the CPU and memory requests and limits.
Resource limits control how much CPU and memory the service pod is allowed to consume. Databases under heavy query load or caches handling high throughput may need higher limits than the defaults.
WARNING
Changing resource limits on a service triggers a pod restart. For databases (PostgreSQL, MySQL, MongoDB), this means a brief period of downtime while the pod restarts with the new limits. Plan resource changes during a maintenance window or low-traffic period.
How services differ from apps
| | Apps | Services |
|---|---|---|
| Kubernetes resource | Deployment | StatefulSet |
| Storage | None (stateless) | PersistentVolumeClaim |
| Restart | Rolling restart, safe | Warns, requires --force |
| Delete | Immediate | Requires --delete-data |
| Scaling | kip app scale | Single replica |
| External access | Via Ingress (public URL) | Internal only (cluster DNS) |