# Application Deployment Guide

This guide explains how to deploy applications to your Talos Kubernetes cluster following the GitOps structure used in this repository.

## Directory Structure

Applications are organized in the `testing1/first-cluster/apps/` directory:

```
testing1/first-cluster/
├── cluster/
│   └── base/                  # Cluster-level resources (namespaces, RBAC, etc.)
└── apps/
    ├── demo/                  # Example nginx app
    │   ├── nginx-deployment.yaml
    │   └── nginx-service.yaml
    └── gitlab/                # GitLab with Container Registry
        ├── namespace.yaml
        ├── pvc.yaml
        ├── configmap.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── runner-secret.yaml
        ├── runner-configmap.yaml
        ├── runner-deployment.yaml
        └── kustomization.yaml
```

## Deploying Applications

### Method 1: Direct kubectl apply

Apply individual app manifests:

```bash
# Deploy a specific app
kubectl apply -f testing1/first-cluster/apps/gitlab/

# Or use kustomize
kubectl apply -k testing1/first-cluster/apps/gitlab/
```

### Method 2: Using kustomize (Recommended)

Each app directory can contain a `kustomization.yaml` file that lists all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
```

Deploy with:

```bash
kubectl apply -k testing1/first-cluster/apps/<app-name>/
```

## Adding a New Application

Follow these steps to add a new application to your cluster:

### 1. Create App Directory

```bash
mkdir -p testing1/first-cluster/apps/<app-name>
cd testing1/first-cluster/apps/<app-name>
```

### 2. Create Namespace (Optional but Recommended)

Create `namespace.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <app-name>
```

### 3. Create Application Resources

Create the necessary Kubernetes resources.
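The directory and namespace setup from steps 1–2 can also be scripted before filling in the remaining resources. A minimal sketch, assuming you run it from the repository root; the app name `myapp` and the relative `apps/` path are placeholders to adjust for your layout:

```shell
#!/bin/sh
set -e

# Hypothetical app name -- replace with your own
APP=myapp

# Create the app directory (steps 1-2 above)
mkdir -p "apps/$APP"

# Skeleton namespace manifest
cat > "apps/$APP/namespace.yaml" <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: $APP
EOF

# Kustomization listing the resources created so far
cat > "apps/$APP/kustomization.yaml" <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
EOF

ls "apps/$APP"
```

As you add manifests in the steps below, append each filename to the `resources:` list in the generated `kustomization.yaml`.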
Common resources include:

#### Deployment

Create `deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-name>
  namespace: <app-name>
  labels:
    app: <app-name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-name>
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      containers:
        - name: <app-name>
          image: <image>:<tag>
          ports:
            - containerPort: <port>
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```

#### Service

Create `service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: NodePort  # or ClusterIP, LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
      nodePort: 30XXX  # if using NodePort (30000-32767)
```

#### PersistentVolumeClaim (if needed)

Create `pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <app-name>-data
  namespace: <app-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

#### ConfigMap (if needed)

Create `configmap.yaml`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-name>-config
  namespace: <app-name>
data:
  config.yml: |
    # Your configuration here
```

#### Secret (if needed)

Create `secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <app-name>-secret
  namespace: <app-name>
type: Opaque
stringData:
  password: "change-me"
  api-key: "your-api-key"
```

### 4. Create Kustomization File

Create `kustomization.yaml` to organize all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
  - service.yaml
```

### 5. Deploy the Application

```bash
# From the repository root
kubectl apply -k testing1/first-cluster/apps/<app-name>/

# Verify deployment
kubectl get all -n <namespace>
```

## GitLab Deployment Example

### Prerequisites

1. Ensure your cluster is running and healthy:

   ```bash
   kubectl get nodes
   talosctl health
   ```

2.
   **IMPORTANT**: Install a storage provisioner first:

   ```bash
   # Check if a storage class exists
   kubectl get storageclass

   # If no storage class is found, install local-path-provisioner
   ./install-local-path-storage.sh
   ```

   Without a storage provisioner, GitLab's PersistentVolumeClaims will remain in the Pending state and pods won't start.

### Deploy GitLab

1. **Update the runner registration token** in `testing1/first-cluster/apps/gitlab/runner-secret.yaml`.

   After GitLab is running, get the registration token from:
   - GitLab UI: `Admin Area > CI/CD > Runners > Register an instance runner`
   - Or for project runners: `Settings > CI/CD > Runners > New project runner`

2. **Deploy GitLab and Runner**:

   ```bash
   kubectl apply -k testing1/first-cluster/apps/gitlab/
   ```

3. **Wait for GitLab to be ready** (this can take 5-10 minutes):

   ```bash
   kubectl get pods -n gitlab -w
   ```

4. **Access GitLab**:
   - GitLab UI: `http://<node-ip>:30080`
   - SSH: `<node-ip>:30022`
   - Container Registry: `http://<node-ip>:30500`

5. **Get the initial root password**:

   ```bash
   kubectl exec -n gitlab deployment/gitlab -- grep 'Password:' /etc/gitlab/initial_root_password
   ```

6. **Configure the GitLab Runner**:
   - Log in to GitLab
   - Get the runner registration token
   - Update `runner-secret.yaml` with the token
   - Re-apply the secret:

     ```bash
     kubectl apply -f testing1/first-cluster/apps/gitlab/runner-secret.yaml
     ```

   - Restart the runner:

     ```bash
     kubectl rollout restart deployment/gitlab-runner -n gitlab
     ```

### Using the Container Registry

1. **Log in to the registry**:

   ```bash
   docker login <node-ip>:30500
   ```

2. **Tag and push images**:

   ```bash
   docker tag myapp:latest <node-ip>:30500/mygroup/myapp:latest
   docker push <node-ip>:30500/mygroup/myapp:latest
   ```

3.
   **Example `.gitlab-ci.yml` for building Docker images**:

   ```yaml
   stages:
     - build
     - push

   variables:
     DOCKER_DRIVER: overlay2
     DOCKER_TLS_CERTDIR: ""
     IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

   build:
     stage: build
     image: docker:24-dind
     services:
       - docker:24-dind
     tags:
       - docker
     script:
       - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
       - docker build -t $IMAGE_TAG .
       - docker push $IMAGE_TAG
   ```

## Resource Sizing Guidelines

When adding applications, consider these resource guidelines:

### Small Applications (web frontends, APIs)

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

### Medium Applications (databases, caching)

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```

### Large Applications (GitLab, monitoring stacks)

```yaml
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "4000m"
    memory: "8Gi"
```

## Service Types

### ClusterIP (default)
- Only accessible within the cluster
- Use for internal services

### NodePort
- Accessible on every node's IP at a static port (30000-32767)
- Use for services you need to access from outside the cluster
- Example: GitLab on port 30080

### LoadBalancer
- Creates an external load balancer (if the cloud provider supports it)
- On bare metal, requires MetalLB or similar

## Storage Considerations

### Access Modes
- `ReadWriteOnce` (RWO): Single node read/write (most common)
- `ReadOnlyMany` (ROX): Multiple nodes read-only
- `ReadWriteMany` (RWX): Multiple nodes read/write (requires special storage)

### Storage Sizing
- Logs: 1-5 GB
- Application data: 10-50 GB
- Databases: 50-100+ GB
- Container registries: 100+ GB

## Troubleshooting

### Check Pod Status

```bash
kubectl get pods -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
```

### Check Events

```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```

### Check Resource Usage

```bash
kubectl top nodes
kubectl top pods -n <namespace>
```

### Common Issues

1.
   **ImagePullBackOff**: Container image cannot be pulled
   - Check the image name and tag
   - Verify registry credentials if using a private registry

2. **CrashLoopBackOff**: Container keeps crashing
   - Check logs: `kubectl logs <pod-name> -n <namespace>`
   - Check resource limits
   - Verify configuration

3. **Pending Pods**: Pod cannot be scheduled
   - Check node resources: `kubectl describe node`
   - Check PVC status if using storage
   - Verify node selectors/taints

4. **PVC Stuck in Pending**: Storage cannot be provisioned
   - **Most common issue on Talos**: No storage provisioner installed
   - Check if a storage class exists: `kubectl get sc`
   - If there is no storage class, install one:

     ```bash
     ./install-local-path-storage.sh
     ```

   - Check PVC events: `kubectl describe pvc <pvc-name> -n <namespace>`
   - For GitLab specifically, use the redeploy script:

     ```bash
     ./redeploy-gitlab.sh
     ```

   - Verify storage is available on the nodes

5. **Storage Provisioner Issues**
   - Run diagnostics: `./diagnose-storage.sh`
   - Check provisioner pods: `kubectl get pods -n local-path-storage`
   - View provisioner logs: `kubectl logs -n local-path-storage deployment/local-path-provisioner`

## Next Steps

- Set up FluxCD for GitOps automation
- Configure an ingress controller for HTTP/HTTPS routing
- Set up monitoring with Prometheus and Grafana
- Implement backup solutions for persistent data
- Configure network policies for security
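For the network-policy item above, a common starting point is a default-deny ingress policy per application namespace. A minimal sketch, assuming the `<app-name>` namespace from earlier sections; note that enforcement requires a CNI plugin that supports NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: <app-name>   # placeholder: the namespace to lock down
spec:
  podSelector: {}         # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress             # deny all inbound traffic unless another policy allows it
```

Once this is in place, add narrower `NetworkPolicy` resources that allow only the traffic each application actually needs.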