docs: add comprehensive application deployment guide

Add APP_DEPLOYMENT.md with step-by-step guide for deploying applications
to the Talos Kubernetes cluster.

Covers:
- Directory structure and GitOps organization
- Creating namespaces and deployments
- Configuring services and ingress
- Storage with PersistentVolumeClaims
- Using Kustomize for manifest management
- Examples for common application types

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
0xWheatyz 2026-03-04 01:52:56 +00:00
parent ea415ba584
commit 2ed1e82953

# Application Deployment Guide
This guide explains how to deploy applications to your Talos Kubernetes cluster following the GitOps structure used in this repository.
## Directory Structure
Applications are organized in the `testing1/first-cluster/apps/` directory:
```
testing1/first-cluster/
├── cluster/
│   └── base/                  # Cluster-level resources (namespaces, RBAC, etc.)
└── apps/
    ├── demo/                  # Example nginx app
    │   ├── nginx-deployment.yaml
    │   └── nginx-service.yaml
    └── gitlab/                # GitLab with Container Registry
        ├── namespace.yaml
        ├── pvc.yaml
        ├── configmap.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── runner-secret.yaml
        ├── runner-configmap.yaml
        ├── runner-deployment.yaml
        └── kustomization.yaml
```
## Deploying Applications
### Method 1: Direct kubectl apply
Apply individual app manifests:
```bash
# Deploy a specific app
kubectl apply -f testing1/first-cluster/apps/gitlab/
# Or use kustomize
kubectl apply -k testing1/first-cluster/apps/gitlab/
```
### Method 2: Using kustomize (Recommended)
Each app directory can contain a `kustomization.yaml` file that lists all resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- deployment.yaml
- service.yaml
```
Deploy with:
```bash
kubectl apply -k testing1/first-cluster/apps/<app-name>/
```
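Beyond listing resources, a kustomization can set fields across everything it includes. A minimal sketch (the namespace and label values here are hypothetical) that forces a namespace and a shared label onto all listed manifests; note that `commonLabels` also patches selectors:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app              # hypothetical: applied to every resource below
commonLabels:
  app.kubernetes.io/part-of: my-app
resources:
- deployment.yaml
- service.yaml
```

You can preview the rendered manifests without applying them via `kubectl kustomize testing1/first-cluster/apps/<app-name>/`.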
## Adding a New Application
Follow these steps to add a new application to your cluster:
### 1. Create App Directory
```bash
mkdir -p testing1/first-cluster/apps/<app-name>
cd testing1/first-cluster/apps/<app-name>
```
### 2. Create Namespace (Optional but Recommended)
Create `namespace.yaml`:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <app-name>
```
### 3. Create Application Resources
Create the necessary Kubernetes resources. Common resources include:
#### Deployment
Create `deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-name>
  namespace: <app-name>
  labels:
    app: <app-name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-name>
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      containers:
        - name: <container-name>
          image: <image:tag>
          ports:
            - containerPort: <port>
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```
#### Service
Create `service.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: NodePort # or ClusterIP, LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
      nodePort: 30XXX # if using NodePort (30000-32767)
```
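If an ingress controller is installed (see Next Steps), HTTP routing can replace NodePort access and the Service can stay `ClusterIP`. A minimal sketch, assuming an nginx-class controller and a hypothetical host name:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  ingressClassName: nginx        # assumes ingress-nginx is installed
  rules:
    - host: app.example.com      # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <app-name>
                port:
                  number: 80
```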
#### PersistentVolumeClaim (if needed)
Create `pvc.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <app-name>-data
  namespace: <app-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
#### ConfigMap (if needed)
Create `configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-name>-config
  namespace: <app-name>
data:
  config.yml: |
    # Your configuration here
```
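A ConfigMap does nothing until the Deployment consumes it, either as environment variables or as mounted files. A sketch of the volume-mount form, added under the Deployment's pod `spec` (the mount path is a hypothetical example):

```yaml
    spec:
      containers:
        - name: <container-name>
          image: <image:tag>
          volumeMounts:
            - name: config
              mountPath: /etc/myapp    # hypothetical path inside the container
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: <app-name>-config
```

The `config.yml` key from the ConfigMap then appears in the container as `/etc/myapp/config.yml`.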
#### Secret (if needed)
Create `secret.yaml`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <app-name>-secret
  namespace: <app-name>
type: Opaque
stringData:
  password: "change-me"
  api-key: "your-api-key"
```
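`stringData` accepts plain text, which the API server encodes on write; the lower-level `data` field expects values that are already base64-encoded. If you ever need to write `data` directly, encode values yourself (shown here for the placeholder password above):

```shell
# Encode a value for the `data:` field of a Secret
echo -n 'change-me' | base64        # -n: keep the trailing newline out of the encoding
# → Y2hhbmdlLW1l

# Decode what the API server stores
echo 'Y2hhbmdlLW1l' | base64 -d
# → change-me
```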
### 4. Create Kustomization File
Create `kustomization.yaml` to organize all resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- pvc.yaml
- configmap.yaml
- secret.yaml
- deployment.yaml
- service.yaml
```
### 5. Deploy the Application
```bash
# From the repository root
kubectl apply -k testing1/first-cluster/apps/<app-name>/
# Verify deployment
kubectl get all -n <app-name>
```
## GitLab Deployment Example
### Prerequisites
1. Ensure your cluster is running and healthy:
```bash
kubectl get nodes
talosctl health
```
2. **IMPORTANT**: Install a storage provisioner first:
```bash
# Check if storage class exists
kubectl get storageclass
# If no storage class found, install local-path-provisioner
./install-local-path-storage.sh
```
Without a storage provisioner, GitLab's PersistentVolumeClaims will remain in Pending state and pods won't start.
### Deploy GitLab
1. **Update the runner registration token** in `testing1/first-cluster/apps/gitlab/runner-secret.yaml`:
After GitLab is running, get the registration token from:
- GitLab UI: `Admin Area > CI/CD > Runners > Register an instance runner`
- Or for project runners: `Settings > CI/CD > Runners > New project runner`
2. **Deploy GitLab and Runner**:
```bash
kubectl apply -k testing1/first-cluster/apps/gitlab/
```
3. **Wait for GitLab to be ready** (this can take 5-10 minutes):
```bash
kubectl get pods -n gitlab -w
```
4. **Access GitLab**:
- GitLab UI: `http://<any-node-ip>:30080`
- SSH: `<any-node-ip>:30022`
- Container Registry: `http://<any-node-ip>:30500`
5. **Get initial root password**:
```bash
kubectl exec -n gitlab deployment/gitlab -- grep 'Password:' /etc/gitlab/initial_root_password
```
6. **Configure GitLab Runner**:
- Log in to GitLab
- Get the runner registration token
- Update `runner-secret.yaml` with the token
- Re-apply the secret:
```bash
kubectl apply -f testing1/first-cluster/apps/gitlab/runner-secret.yaml
```
- Restart the runner:
```bash
kubectl rollout restart deployment/gitlab-runner -n gitlab
```
### Using the Container Registry
1. **Log in to the registry**:
```bash
docker login <node-ip>:30500
```
2. **Tag and push images**:
```bash
docker tag myapp:latest <node-ip>:30500/mygroup/myapp:latest
docker push <node-ip>:30500/mygroup/myapp:latest
```
3. **Example `.gitlab-ci.yml` for building Docker images**:
```yaml
stages:
  - build

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

build:
  stage: build
  image: docker:24          # Docker CLI; talks to the dind service below
  services:
    - docker:24-dind
  tags:
    - docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
```
## Resource Sizing Guidelines
When adding applications, consider these resource guidelines:
### Small Applications (web frontends, APIs)
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```
### Medium Applications (databases, caching)
```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```
### Large Applications (GitLab, monitoring stacks)
```yaml
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "4000m"
    memory: "8Gi"
```
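Rather than repeating these numbers in every manifest, a `LimitRange` can default them per namespace: containers that omit `resources` inherit the defaults at admission time. A sketch using the small-application numbers:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: <app-name>
spec:
  limits:
    - type: Container
      defaultRequest:          # applied when a container sets no requests
        cpu: "100m"
        memory: "128Mi"
      default:                 # applied when a container sets no limits
        cpu: "500m"
        memory: "512Mi"
```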
## Service Types
### ClusterIP (default)
- Only accessible within the cluster
- Use for internal services
### NodePort
- Accessible on every node's IP at a static port (30000-32767)
- Use for services you need to access from outside the cluster
- Example: GitLab on port 30080
### LoadBalancer
- Creates an external load balancer (if cloud provider supports it)
- On bare metal, requires MetalLB or similar
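If you do install MetalLB, it needs a pool of addresses to hand out. A minimal layer-2 sketch, with a hypothetical address range that must be unused on the cluster's LAN:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # hypothetical range; must be free on your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```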
## Storage Considerations
### Access Modes
- `ReadWriteOnce` (RWO): Single node read/write (most common)
- `ReadOnlyMany` (ROX): Multiple nodes read-only
- `ReadWriteMany` (RWX): Multiple nodes read/write (requires special storage)
### Storage Sizing
- Logs: 1-5 GB
- Application data: 10-50 GB
- Databases: 50-100+ GB
- Container registries: 100+ GB
## Troubleshooting
### Check Pod Status
```bash
kubectl get pods -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous  # logs from the last crashed container
```
### Check Events
```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
### Check Resource Usage
These commands require the metrics-server addon; without it, `kubectl top` reports that the Metrics API is unavailable.
```bash
kubectl top nodes
kubectl top pods -n <namespace>
```
### Common Issues
1. **ImagePullBackOff**: Container image cannot be pulled
- Check image name and tag
- Verify registry credentials if using private registry
2. **CrashLoopBackOff**: Container keeps crashing
- Check logs: `kubectl logs <pod> -n <namespace>`
- Check resource limits
- Verify configuration
3. **Pending Pods**: Pod cannot be scheduled
- Check node resources: `kubectl describe node`
- Check PVC status if using storage
- Verify node selectors/taints
4. **PVC Stuck in Pending**: Storage cannot be provisioned
- **Most common issue on Talos**: No storage provisioner installed
- Check if storage class exists: `kubectl get sc`
- If no storage class, install one:
```bash
./install-local-path-storage.sh
```
- Check PVC events: `kubectl describe pvc <pvc-name> -n <namespace>`
- For GitLab specifically, use the redeploy script:
```bash
./redeploy-gitlab.sh
```
- Verify storage is available on nodes
5. **Storage Provisioner Issues**
- Run diagnostics: `./diagnose-storage.sh`
- Check provisioner pods: `kubectl get pods -n local-path-storage`
- View provisioner logs: `kubectl logs -n local-path-storage deployment/local-path-provisioner`
## Next Steps
- Set up FluxCD for GitOps automation
- Configure ingress controller for HTTP/HTTPS routing
- Set up monitoring with Prometheus and Grafana
- Implement backup solutions for persistent data
- Configure network policies for security