# Application Deployment Guide

This guide explains how to deploy applications to your Talos Kubernetes cluster using the GitOps workflow with FluxCD.
## Directory Structure

Applications are organized in the `testing1/first-cluster/` directory:

```
testing1/first-cluster/
├── cluster/
│   ├── base/              # Cluster-level resources (namespaces, RBAC, etc.)
│   ├── flux/              # FluxCD GitOps configuration
│   ├── metallb/           # MetalLB load balancer
│   └── nfs-provisioner/   # NFS storage provisioner
└── apps/
    ├── demo/              # Example nginx app
    │   ├── nginx-deployment.yaml
    │   └── nginx-service.yaml
    └── gitea/             # Gitea with CI/CD Runner
        ├── namespace.yaml
        ├── pvc.yaml
        ├── configmap.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── runner-secret.yaml
        ├── runner-deployment.yaml
        └── kustomization.yaml
```
## Deploying Applications

**IMPORTANT:** This cluster uses FluxCD for GitOps automation. All changes committed to the `main` branch in Gitea are automatically deployed to the cluster.

### Method 1: GitOps (Recommended)

This is the preferred method for all deployments:

1. Add or modify manifests in `testing1/first-cluster/apps/<app-name>/`
2. Commit and push to Gitea:

   ```bash
   git add testing1/first-cluster/apps/<app-name>/
   git commit -m "feat: deploy <app-name>"
   git push origin main
   ```

3. Flux automatically applies the changes within 1-5 minutes
4. Monitor the deployment:

   ```bash
   flux get kustomizations
   kubectl get all -n <namespace> -w
   ```
### Method 2: Manual kubectl apply (For Testing Only)

For testing changes before committing to Git:

```bash
# Deploy a specific app
kubectl apply -f testing1/first-cluster/apps/<app-name>/

# Or use kustomize
kubectl apply -k testing1/first-cluster/apps/<app-name>/
```

**Remember:** Manual changes will be overwritten the next time Flux reconciles. Always commit working configurations to Git.
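During longer testing sessions, Flux's reconciliation can be paused so it does not revert manual changes mid-test. A sketch using the standard `flux suspend`/`flux resume` commands; the Kustomization name `cluster-sync` matches the one used elsewhere in this guide:

```bash
# Pause Flux reconciliation for the cluster Kustomization while testing
flux suspend kustomization cluster-sync

# ...iterate on manifests manually with kubectl apply...

# Resume reconciliation once the working configuration is committed
flux resume kustomization cluster-sync
```

Remember to resume afterwards, or the cluster will silently stop tracking Git.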
## Using Kustomize

Each app directory should contain a `kustomization.yaml` file that lists all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
```
## Adding a New Application

Follow these steps to add a new application to your cluster:

### 1. Create App Directory

```bash
mkdir -p testing1/first-cluster/apps/<app-name>
cd testing1/first-cluster/apps/<app-name>
```

### 2. Create Namespace (Optional but Recommended)

Create `namespace.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <app-name>
```
### 3. Create Application Resources

Create the necessary Kubernetes resources. Common resources include:

#### Deployment

Create `deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-name>
  namespace: <app-name>
  labels:
    app: <app-name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-name>
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      containers:
        - name: <container-name>
          image: <image:tag>
          ports:
            - containerPort: <port>
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```
#### Service

Create `service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: NodePort        # or ClusterIP, LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
      nodePort: 30XXX   # if using NodePort (30000-32767)
```
#### PersistentVolumeClaim (if needed)

Create `pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <app-name>-data
  namespace: <app-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
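When the cluster has more than one StorageClass, the claim can pin one explicitly via `storageClassName`; otherwise the default class is used. A sketch of the `spec` with that field, assuming the local-path-provisioner's class is named `local-path` (verify with `kubectl get sc`):

```yaml
spec:
  storageClassName: local-path   # assumed class name; check `kubectl get sc`
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```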
#### ConfigMap (if needed)

Create `configmap.yaml`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-name>-config
  namespace: <app-name>
data:
  config.yml: |
    # Your configuration here
```
#### Secret (if needed)

Create `secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <app-name>-secret
  namespace: <app-name>
type: Opaque
stringData:
  password: "change-me"
  api-key: "your-api-key"
```

**Note:** In a GitOps repository, plain-text Secrets end up in Git history. Consider encrypting them (e.g. with SOPS or Sealed Secrets) before committing real credentials.
### 4. Create Kustomization File

Create `kustomization.yaml` to organize all resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
  - service.yaml
```
### 5. Deploy the Application

```bash
# From the repository root
kubectl apply -k testing1/first-cluster/apps/<app-name>/

# Verify deployment
kubectl get all -n <app-name>
```
## Gitea Deployment Example

### Prerequisites

1. Ensure your cluster is running and healthy:

   ```bash
   kubectl get nodes
   talosctl health
   ```

2. **IMPORTANT:** Install a storage provisioner first:

   ```bash
   # Check if a storage class exists
   kubectl get storageclass

   # If no storage class is found, install local-path-provisioner
   ./install-local-path-storage.sh
   ```

   Without a storage provisioner, Gitea's PersistentVolumeClaims will remain in `Pending` state and the pods won't start.
### Deploy Gitea (GitOps Method)

1. Verify the Gitea manifests in `testing1/first-cluster/apps/gitea/`:

   - `namespace.yaml` - Gitea namespace
   - `pvc.yaml` - Persistent storage for Git data
   - `deployment.yaml` - Gitea application
   - `service.yaml` - LoadBalancer service
   - `runner-secret.yaml` - Runner registration token (update after Gitea is running)
   - `runner-deployment.yaml` - Gitea Actions runner
   - `kustomization.yaml` - Kustomize configuration

2. Commit and push (if not already in Git):

   ```bash
   git add testing1/first-cluster/apps/gitea/
   git commit -m "feat: deploy Gitea with Actions runner"
   git push origin main
   ```

3. Monitor the deployment (Flux will auto-deploy):

   ```bash
   # Watch Flux sync
   flux get kustomizations -w

   # Watch pods come up
   kubectl get pods -n gitea -w
   ```

4. Access Gitea (after the pods are ready):

   - Gitea UI: `http://10.0.1.10` (via MetalLB) or `http://<node-ip>:30300`
   - SSH: `10.0.1.10:22` or `<node-ip>:30222`

5. Complete the Gitea installation:

   - Access the UI
   - Complete the installation wizard (use SQLite for simplicity)
   - Create an admin account
   - Create your first repository

6. Enable Gitea Actions:

   - Go to: Site Administration > Configuration
   - Find the "Actions" section
   - Enable Actions
   - Save the configuration
### Configure Gitea Actions Runner

1. Get a runner registration token:

   - Go to the Gitea UI: Site Administration > Actions > Runners
   - Click "Create new Runner"
   - Copy the registration token

2. Update the runner secret (GitOps method):

   ```bash
   # Edit the secret file
   nano testing1/first-cluster/apps/gitea/runner-secret.yaml
   # Replace REPLACE_WITH_GITEA_RUNNER_TOKEN with your actual token

   # Then commit and push
   git add testing1/first-cluster/apps/gitea/runner-secret.yaml
   git commit -m "chore: update Gitea runner token"
   git push origin main

   # Or update directly (non-GitOps):
   kubectl create secret generic runner-secret \
     --from-literal=token='YOUR_TOKEN' \
     -n gitea --dry-run=client -o yaml | kubectl apply -f -
   ```

3. Restart the runner (so it registers with the new token):

   ```bash
   kubectl rollout restart deployment/gitea-runner -n gitea
   ```

4. Verify runner registration:

   ```bash
   # Check runner logs
   kubectl logs -n gitea deployment/gitea-runner -c runner -f

   # Check the Gitea UI
   # Go to: Site Administration > Actions > Runners
   # You should see "kubernetes-runner" with status "Idle"
   ```
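For reference, the runner Deployment referenced above typically wraps Gitea's `act_runner`. A minimal sketch, assuming the `gitea/act_runner` image and its documented `GITEA_INSTANCE_URL` / `GITEA_RUNNER_REGISTRATION_TOKEN` environment variables; the in-cluster service URL is an assumption, and the repository's actual `runner-deployment.yaml` is authoritative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-runner
  namespace: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea-runner
  template:
    metadata:
      labels:
        app: gitea-runner
    spec:
      containers:
        - name: runner
          image: gitea/act_runner:latest
          env:
            - name: GITEA_INSTANCE_URL
              # Assumed in-cluster service URL; adjust to your Service name/port
              value: "http://gitea.gitea.svc.cluster.local:3000"
            - name: GITEA_RUNNER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: runner-secret
                  key: token
```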
## Using Gitea Actions for CI/CD

Gitea Actions uses GitHub Actions-compatible workflow syntax. Create workflows under `.gitea/workflows/` in your repository.

Example workflow `.gitea/workflows/build.yaml`:

```yaml
name: Build and Test

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test

      - name: Build application
        run: npm run build
```
Example Docker build workflow `.gitea/workflows/docker.yaml`:

```yaml
name: Build Docker Image

on:
  push:
    branches:
      - main

jobs:
  docker-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build Docker image
        run: |
          docker build -t myapp:${{ gitea.sha }} .
          docker tag myapp:${{ gitea.sha }} myapp:latest

      - name: Save image
        run: docker save myapp:latest -o myapp.tar
```
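Instead of saving the image to a tarball, it could be pushed to Gitea's built-in container registry (available since Gitea 1.17). A sketch of an additional workflow step, assuming the Gitea host `10.0.1.10` used elsewhere in this guide and a repository secret `REGISTRY_TOKEN` holding a Gitea access token; both names are assumptions to adapt to your setup:

```yaml
      - name: Push to Gitea registry
        run: |
          # Registry host and secret name are assumptions; adjust to your setup
          echo "${{ secrets.REGISTRY_TOKEN }}" | docker login 10.0.1.10 -u <gitea-user> --password-stdin
          docker tag myapp:latest 10.0.1.10/<gitea-user>/myapp:latest
          docker push 10.0.1.10/<gitea-user>/myapp:latest
```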
## Resource Sizing Guidelines

When adding applications, consider these resource guidelines:

### Small Applications (web frontends, APIs)

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

### Medium Applications (databases, caching)

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```

### Large Applications (CI servers, monitoring stacks)

```yaml
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "4000m"
    memory: "8Gi"
```
## Service Types

### ClusterIP (default)

- Only accessible within the cluster
- Use for internal services

### NodePort

- Accessible on every node's IP at a static port (30000-32767)
- Use for services you need to access from outside the cluster
- Example: the Gitea UI on port 30300

### LoadBalancer

- Creates an external load balancer (if the cloud provider supports it)
- On bare metal, requires MetalLB or similar
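On this cluster, a LoadBalancer Service gets its external IP from MetalLB. A minimal sketch, assuming MetalLB's address pool includes `10.0.1.10`; the annotation pinning a specific pool IP is MetalLB-specific and optional:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
  annotations:
    metallb.universe.tf/loadBalancerIPs: 10.0.1.10   # optional: pin an IP from the pool
spec:
  type: LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
```

Without the annotation, MetalLB assigns the next free address from its configured pool.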
## Storage Considerations

### Access Modes

- `ReadWriteOnce` (RWO): Single node read/write (most common)
- `ReadOnlyMany` (ROX): Multiple nodes read-only
- `ReadWriteMany` (RWX): Multiple nodes read/write (requires special storage)

### Storage Sizing

- Logs: 1-5 GB
- Application data: 10-50 GB
- Databases: 50-100+ GB
- Container registries: 100+ GB
## Troubleshooting

### Check Pod Status

```bash
kubectl get pods -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
```

### Check Events

```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```

### Check Resource Usage

```bash
kubectl top nodes
kubectl top pods -n <namespace>
```
### Common Issues

1. **ImagePullBackOff:** Container image cannot be pulled

   - Check the image name and tag
   - Verify registry credentials if using a private registry

2. **CrashLoopBackOff:** Container keeps crashing

   - Check logs: `kubectl logs <pod> -n <namespace>`
   - Check resource limits
   - Verify the configuration

3. **Pending pods:** Pod cannot be scheduled

   - Check node resources: `kubectl describe node`
   - Check PVC status if using storage
   - Verify node selectors/taints

4. **PVC stuck in Pending:** Storage cannot be provisioned

   - Most common issue on Talos: no storage provisioner installed
   - Check if a storage class exists: `kubectl get sc`
   - If there is no storage class, install one: `./install-local-path-storage.sh`
   - Check PVC events: `kubectl describe pvc <pvc-name> -n <namespace>`
   - Verify storage is available on the nodes

5. **Storage provisioner issues**

   - Run diagnostics: `./diagnose-storage.sh`
   - Check provisioner pods: `kubectl get pods -n local-path-storage`
   - View provisioner logs: `kubectl logs -n local-path-storage deployment/local-path-provisioner`

6. **FluxCD not syncing changes**

   - Check Flux status: `flux get all`
   - Check GitRepository status: `flux get sources git`
   - Force reconciliation: `flux reconcile kustomization cluster-sync --with-source`
   - View Flux logs: `flux logs --level=error`
   - Verify Git connectivity: `kubectl describe gitrepository talos-gitops -n flux-system`

7. **Gitea Actions runner not registering**

   - Check runner logs: `kubectl logs -n gitea deployment/gitea-runner -c runner -f`
   - Verify the runner token in the secret: `kubectl get secret runner-secret -n gitea -o yaml`
   - Ensure Gitea Actions is enabled in the Gitea UI
   - Check the Docker daemon in the runner pod: `kubectl logs -n gitea deployment/gitea-runner -c daemon`
## FluxCD GitOps Workflow

This cluster uses FluxCD for automated deployments:

1. **Making changes:**

   - Edit manifests in `testing1/first-cluster/`
   - Commit and push to the `main` branch in Gitea
   - Flux detects changes within 1 minute
   - Changes are applied within 5 minutes

2. **Monitoring deployments:**

   ```bash
   # Overall Flux status
   flux get all

   # Watch for reconciliation
   flux get kustomizations -w

   # Check specific resources
   kubectl get all -n <namespace>
   ```

3. **Forcing an immediate sync:**

   ```bash
   flux reconcile source git talos-gitops
   flux reconcile kustomization cluster-sync
   ```

4. **Troubleshooting Flux:**

   ```bash
   # Check Flux logs
   flux logs

   # Check specific controller logs
   kubectl logs -n flux-system deployment/source-controller
   kubectl logs -n flux-system deployment/kustomize-controller
   ```
## Next Steps

- Configure an ingress controller for HTTP/HTTPS routing
- Set up monitoring with Prometheus and Grafana
- Implement backup solutions for persistent data
- Configure network policies for security
- Set up SSL/TLS certificates with cert-manager