# Application Deployment Guide
This guide explains how to deploy applications to your Talos Kubernetes cluster following the GitOps structure used in this repository.
## Directory Structure

Applications are organized in the `testing1/first-cluster/apps/` directory:

```
testing1/first-cluster/
├── cluster/
│   └── base/                 # Cluster-level resources (namespaces, RBAC, etc.)
└── apps/
    ├── demo/                 # Example nginx app
    │   ├── nginx-deployment.yaml
    │   └── nginx-service.yaml
    └── gitlab/               # GitLab with Container Registry
        ├── namespace.yaml
        ├── pvc.yaml
        ├── configmap.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── runner-secret.yaml
        ├── runner-configmap.yaml
        ├── runner-deployment.yaml
        └── kustomization.yaml
```
## Deploying Applications

### Method 1: Direct kubectl apply

Apply individual app manifests:

```bash
# Deploy a specific app
kubectl apply -f testing1/first-cluster/apps/gitlab/

# Or use kustomize
kubectl apply -k testing1/first-cluster/apps/gitlab/
```

### Method 2: Using Kustomize (recommended)

Each app directory can contain a `kustomization.yaml` file that lists all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
```

Deploy with:

```bash
kubectl apply -k testing1/first-cluster/apps/<app-name>/
```
## Adding a New Application

Follow these steps to add a new application to your cluster:

### 1. Create App Directory

```bash
mkdir -p testing1/first-cluster/apps/<app-name>
cd testing1/first-cluster/apps/<app-name>
```

### 2. Create Namespace (Optional but Recommended)

Create `namespace.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <app-name>
```
### 3. Create Application Resources

Create the necessary Kubernetes resources. Common resources include:

#### Deployment

Create `deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-name>
  namespace: <app-name>
  labels:
    app: <app-name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-name>
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      containers:
        - name: <container-name>
          image: <image:tag>
          ports:
            - containerPort: <port>
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```
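Alongside resource limits, most Deployments should also declare health checks so Kubernetes can restart unhealthy containers and gate traffic on readiness. A minimal sketch to add under the container spec, assuming your app exposes an HTTP health endpoint (the `/healthz` path and port are placeholders to adjust):

```yaml
# Illustrative probes; add under spec.template.spec.containers[].
# The /healthz path is a placeholder for your app's actual health endpoint.
livenessProbe:
  httpGet:
    path: /healthz
    port: <port>
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: <port>
  initialDelaySeconds: 5
  periodSeconds: 10
```

A failed liveness probe restarts the container; a failed readiness probe only removes the pod from Service endpoints, which is the safer behavior for temporary slowness.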
#### Service

Create `service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: NodePort  # or ClusterIP, LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
      nodePort: 30XXX  # if using NodePort (30000-32767)
```
#### PersistentVolumeClaim (if needed)

Create `pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <app-name>-data
  namespace: <app-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
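If the cluster has more than one StorageClass, or no default is set, pin the claim to a class explicitly. A sketch assuming the local-path-provisioner installed by `./install-local-path-storage.sh` (whose class is conventionally named `local-path`):

```yaml
# Same claim with an explicit class; omit storageClassName to use the cluster default.
spec:
  storageClassName: local-path  # assumes local-path-provisioner is installed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```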
#### ConfigMap (if needed)

Create `configmap.yaml`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-name>-config
  namespace: <app-name>
data:
  config.yml: |
    # Your configuration here
```
#### Secret (if needed)

Create `secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <app-name>-secret
  namespace: <app-name>
type: Opaque
stringData:
  password: "change-me"
  api-key: "your-api-key"
```
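Note that `stringData` accepts plaintext and is base64-encoded by the API server. If you instead use the `data` field, values must be base64-encoded by hand, for example:

```shell
# base64-encode a secret value for the .data field of a Secret
# (printf avoids the trailing newline that echo would include)
printf '%s' 'change-me' | base64
```

Remember that base64 is encoding, not encryption; keep real secret manifests out of Git or use a tool such as sealed-secrets or SOPS.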
### 4. Create Kustomization File

Create `kustomization.yaml` to organize all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
  - service.yaml
```
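Kustomize can also inject the namespace and shared labels into every resource, so individual manifests don't have to repeat them. A sketch of the same file with those fields added (the label key/value is illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <app-name>             # applied to every namespaced resource
commonLabels:
  app.kubernetes.io/part-of: <app-name>   # illustrative shared label
resources:
  - namespace.yaml
  - pvc.yaml
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
  - service.yaml
```

Note that `commonLabels` is also applied to selectors, so set it before the first deploy; changing it later makes existing Deployments' immutable selectors conflict.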
### 5. Deploy the Application

```bash
# From the repository root
kubectl apply -k testing1/first-cluster/apps/<app-name>/

# Verify deployment
kubectl get all -n <app-name>
```
## GitLab Deployment Example

### Prerequisites

1. Ensure your cluster is running and healthy:

   ```bash
   kubectl get nodes
   talosctl health
   ```

2. **IMPORTANT**: Install a storage provisioner first:

   ```bash
   # Check if a storage class exists
   kubectl get storageclass

   # If no storage class is found, install local-path-provisioner
   ./install-local-path-storage.sh
   ```

   Without a storage provisioner, GitLab's PersistentVolumeClaims will remain in `Pending` state and pods won't start.
### Deploy GitLab

1. Update the runner registration token in `testing1/first-cluster/apps/gitlab/runner-secret.yaml`. After GitLab is running, get the registration token from:

   - GitLab UI: Admin Area > CI/CD > Runners > Register an instance runner
   - Or for project runners: Settings > CI/CD > Runners > New project runner

2. Deploy GitLab and Runner:

   ```bash
   kubectl apply -k testing1/first-cluster/apps/gitlab/
   ```

3. Wait for GitLab to be ready (this can take 5-10 minutes):

   ```bash
   kubectl get pods -n gitlab -w
   ```

4. Access GitLab:

   - GitLab UI: `http://<any-node-ip>:30080`
   - SSH: `<any-node-ip>:30022`
   - Container Registry: `http://<any-node-ip>:30500`

5. Get the initial root password:

   ```bash
   kubectl exec -n gitlab deployment/gitlab -- grep 'Password:' /etc/gitlab/initial_root_password
   ```

6. Configure the GitLab Runner:

   - Log in to GitLab
   - Get the runner registration token
   - Update `runner-secret.yaml` with the token
   - Re-apply the secret: `kubectl apply -f testing1/first-cluster/apps/gitlab/runner-secret.yaml`
   - Restart the runner: `kubectl rollout restart deployment/gitlab-runner -n gitlab`
### Using the Container Registry

1. Log in to the registry:

   ```bash
   docker login <node-ip>:30500
   ```

2. Tag and push images:

   ```bash
   docker tag myapp:latest <node-ip>:30500/mygroup/myapp:latest
   docker push <node-ip>:30500/mygroup/myapp:latest
   ```

3. Example `.gitlab-ci.yml` for building Docker images:

   ```yaml
   stages:
     - build
     - push

   variables:
     DOCKER_DRIVER: overlay2
     DOCKER_TLS_CERTDIR: ""
     IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

   build:
     stage: build
     image: docker:24-dind
     services:
       - docker:24-dind
     tags:
       - docker
     script:
       - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
       - docker build -t $IMAGE_TAG .
       - docker push $IMAGE_TAG
   ```
## Resource Sizing Guidelines

When adding applications, consider these resource guidelines:

### Small Applications (web frontends, APIs)

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

### Medium Applications (databases, caching)

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```

### Large Applications (GitLab, monitoring stacks)

```yaml
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "4000m"
    memory: "8Gi"
```
## Service Types

### ClusterIP (default)

- Only accessible within the cluster
- Use for internal services

### NodePort

- Accessible on every node's IP at a static port (30000-32767)
- Use for services you need to reach from outside the cluster
- Example: GitLab on port 30080

### LoadBalancer

- Creates an external load balancer (if the cloud provider supports it)
- On bare metal, requires MetalLB or similar
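For HTTP/HTTPS routing by hostname rather than raw node ports, a `ClusterIP` Service is typically fronted by an Ingress. A sketch, assuming an ingress controller such as ingress-nginx is already installed in the cluster (the hostname and class name are placeholders to adapt):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  ingressClassName: nginx   # assumes ingress-nginx; match your controller's class
  rules:
    - host: <app-name>.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <app-name>
                port:
                  number: 80
```

This lets many apps share ports 80/443 on the controller instead of each consuming a NodePort.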
## Storage Considerations

### Access Modes

- `ReadWriteOnce` (RWO): Single node read/write (most common)
- `ReadOnlyMany` (ROX): Multiple nodes read-only
- `ReadWriteMany` (RWX): Multiple nodes read/write (requires special storage)

### Storage Sizing

- Logs: 1-5 GB
- Application data: 10-50 GB
- Databases: 50-100+ GB
- Container registries: 100+ GB
## Troubleshooting

### Check Pod Status

```bash
kubectl get pods -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
```

### Check Events

```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```

### Check Resource Usage

Note that `kubectl top` requires metrics-server to be installed in the cluster:

```bash
kubectl top nodes
kubectl top pods -n <namespace>
```
### Common Issues

- **ImagePullBackOff**: Container image cannot be pulled
  - Check the image name and tag
  - Verify registry credentials if using a private registry

- **CrashLoopBackOff**: Container keeps crashing
  - Check logs: `kubectl logs <pod> -n <namespace>`
  - Check resource limits
  - Verify configuration

- **Pending Pods**: Pod cannot be scheduled
  - Check node resources: `kubectl describe node`
  - Check PVC status if using storage
  - Verify node selectors/taints

- **PVC Stuck in Pending**: Storage cannot be provisioned
  - Most common issue on Talos: no storage provisioner installed
  - Check if a storage class exists: `kubectl get sc`
  - If there is no storage class, install one: `./install-local-path-storage.sh`
  - Check PVC events: `kubectl describe pvc <pvc-name> -n <namespace>`
  - For GitLab specifically, use the redeploy script: `./redeploy-gitlab.sh`
  - Verify storage is available on the nodes

- **Storage Provisioner Issues**
  - Run diagnostics: `./diagnose-storage.sh`
  - Check provisioner pods: `kubectl get pods -n local-path-storage`
  - View provisioner logs: `kubectl logs -n local-path-storage deployment/local-path-provisioner`
## Next Steps

- Set up FluxCD for GitOps automation
- Configure an ingress controller for HTTP/HTTPS routing
- Set up monitoring with Prometheus and Grafana
- Implement backup solutions for persistent data
- Configure network policies for security