# Application Deployment Guide
This guide explains how to deploy applications to your Talos Kubernetes cluster using the GitOps workflow with FluxCD.
## Directory Structure
Applications are organized in the `testing1/first-cluster/` directory:
```
testing1/first-cluster/
├── cluster/
│   ├── base/              # Cluster-level resources (namespaces, RBAC, etc.)
│   ├── flux/              # FluxCD GitOps configuration
│   ├── metallb/           # MetalLB load balancer
│   └── nfs-provisioner/   # NFS storage provisioner
└── apps/
    ├── demo/              # Example nginx app
    │   ├── nginx-deployment.yaml
    │   └── nginx-service.yaml
    └── gitea/             # Gitea with CI/CD Runner
        ├── namespace.yaml
        ├── pvc.yaml
        ├── configmap.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── runner-secret.yaml
        ├── runner-deployment.yaml
        └── kustomization.yaml
```
## Deploying Applications
**IMPORTANT**: This cluster uses FluxCD for GitOps automation. All changes committed to the `main` branch in Gitea are automatically deployed to the cluster.
### Method 1: GitOps (Recommended)
This is the preferred method for all deployments:
1. Add or modify manifests in `testing1/first-cluster/apps/<app-name>/`
2. Commit and push to Gitea:
```bash
git add testing1/first-cluster/apps/<app-name>/
git commit -m "feat: deploy <app-name>"
git push origin main
```
3. Flux automatically applies changes within 1-5 minutes
4. Monitor deployment:
```bash
flux get kustomizations
kubectl get all -n <namespace> -w
```
### Method 2: Manual kubectl apply (For Testing Only)
For testing changes before committing to Git:
```bash
# Deploy a specific app
kubectl apply -f testing1/first-cluster/apps/<app-name>/
# Or use kustomize
kubectl apply -k testing1/first-cluster/apps/<app-name>/
```
**Remember**: Manual changes will be overwritten when Flux next reconciles. Always commit working configurations to Git.
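If you need a longer testing window, you can pause reconciliation while you iterate. A sketch using the `flux` CLI; it assumes the kustomization is named `cluster-sync`, as in the troubleshooting commands later in this guide — adjust if yours differs:

```shell
# Pause Flux reconciliation so manual kubectl changes are not reverted
flux suspend kustomization cluster-sync

# ... test with kubectl apply ...

# Resume GitOps reconciliation when done; Flux will revert any drift
# from what is committed in Git
flux resume kustomization cluster-sync
```

Remember to resume: a suspended kustomization silently stops deploying new commits.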
### Using Kustomize
Each app directory should contain a `kustomization.yaml` file that lists all resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- deployment.yaml
- service.yaml
```
## Adding a New Application
Follow these steps to add a new application to your cluster:
### 1. Create App Directory
```bash
mkdir -p testing1/first-cluster/apps/<app-name>
cd testing1/first-cluster/apps/<app-name>
```
### 2. Create Namespace (Optional but Recommended)
Create `namespace.yaml`:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <app-name>
```
### 3. Create Application Resources
Create the necessary Kubernetes resources. Common resources include:
#### Deployment
Create `deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-name>
  namespace: <app-name>
  labels:
    app: <app-name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-name>
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      containers:
        - name: <container-name>
          image: <image:tag>
          ports:
            - containerPort: <port>
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```
#### Service
Create `service.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: NodePort # or ClusterIP, LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
      nodePort: 30XXX # if using NodePort (30000-32767)
```
#### PersistentVolumeClaim (if needed)
Create `pvc.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <app-name>-data
  namespace: <app-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
#### ConfigMap (if needed)
Create `configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-name>-config
  namespace: <app-name>
data:
  config.yml: |
    # Your configuration here
```
#### Secret (if needed)
Create `secret.yaml`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <app-name>-secret
  namespace: <app-name>
type: Opaque
stringData:
  password: "change-me"
  api-key: "your-api-key"
```
### 4. Create Kustomization File
Create `kustomization.yaml` to organize all resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- pvc.yaml
- configmap.yaml
- secret.yaml
- deployment.yaml
- service.yaml
```
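Before committing, you can render the kustomization locally to catch missing files or indentation errors — these commands only read the manifests and change nothing in the cluster:

```shell
# Print the fully rendered manifests that Flux/kubectl would apply
kubectl kustomize testing1/first-cluster/apps/<app-name>/

# Or validate against the API server without persisting anything
kubectl apply -k testing1/first-cluster/apps/<app-name>/ --dry-run=server
```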
### 5. Deploy the Application
```bash
# From the repository root
kubectl apply -k testing1/first-cluster/apps/<app-name>/
# Verify deployment
kubectl get all -n <app-name>
```
## Gitea Deployment Example
### Prerequisites
1. Ensure your cluster is running and healthy:
```bash
kubectl get nodes
talosctl health
```
2. **IMPORTANT**: Install a storage provisioner first:
```bash
# Check if storage class exists
kubectl get storageclass
# If no storage class found, install local-path-provisioner
./install-local-path-storage.sh
```
Without a storage provisioner, Gitea's PersistentVolumeClaims will remain in Pending state and pods won't start.
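A related pitfall: a storage class can exist without being the cluster default, in which case PVCs that omit `storageClassName` still stay Pending. A sketch, assuming the local-path provisioner's class is named `local-path`:

```shell
# Mark the local-path storage class as the cluster default
kubectl patch storageclass local-path -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Confirm: the class should now be listed with "(default)"
kubectl get storageclass
```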
### Deploy Gitea (GitOps Method)
1. **Verify Gitea manifests** in `testing1/first-cluster/apps/gitea/`:
- `namespace.yaml` - Gitea namespace
- `pvc.yaml` - Persistent storage for Git data
- `deployment.yaml` - Gitea application
- `service.yaml` - LoadBalancer service
- `runner-secret.yaml` - Runner registration token (update after Gitea is running)
- `runner-deployment.yaml` - Gitea Actions runner
- `kustomization.yaml` - Kustomize configuration
2. **Commit and push** (if not already in Git):
```bash
git add testing1/first-cluster/apps/gitea/
git commit -m "feat: deploy Gitea with Actions runner"
git push origin main
```
3. **Monitor deployment** (Flux will auto-deploy):
```bash
# Watch Flux sync
flux get kustomizations -w
# Watch pods come up
kubectl get pods -n gitea -w
```
4. **Access Gitea** (after pods are ready):
- Gitea UI: `http://10.0.1.10` (via MetalLB) or `http://<node-ip>:30300`
- SSH: `10.0.1.10:22` or `<node-ip>:30222`
5. **Complete Gitea Installation**:
- Access the UI
- Complete the installation wizard (use SQLite for simplicity)
- Create an admin account
- Create your first repository
6. **Enable Gitea Actions**:
- Go to: Site Administration > Configuration
- Find "Actions" section
- Enable Actions
- Save configuration
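As an alternative to the UI, Gitea also reads configuration from `GITEA__section__KEY` environment variables (its environment-to-ini convention). A sketch assuming the deployment and namespace names used in this guide; note this edits the live deployment, so for GitOps add the variable to the deployment manifest in Git instead:

```shell
# Enable Actions declaratively — equivalent to setting
# [actions] ENABLED = true in app.ini
kubectl set env deployment/gitea -n gitea GITEA__actions__ENABLED=true
```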
### Configure Gitea Actions Runner
1. **Get Runner Registration Token**:
- Go to Gitea UI: Site Administration > Actions > Runners
- Click "Create new Runner"
- Copy the registration token
2. **Update Runner Secret** (GitOps method):
```bash
# Edit the secret file
nano testing1/first-cluster/apps/gitea/runner-secret.yaml
# Replace REPLACE_WITH_GITEA_RUNNER_TOKEN with your actual token
# Then commit and push
git add testing1/first-cluster/apps/gitea/runner-secret.yaml
git commit -m "chore: update Gitea runner token"
git push origin main
# Or update directly (non-GitOps):
kubectl create secret generic runner-secret \
  --from-literal=token='YOUR_TOKEN' \
  -n gitea --dry-run=client -o yaml | kubectl apply -f -
```
3. **Restart Runner** (to register with token):
```bash
kubectl rollout restart deployment/gitea-runner -n gitea
```
4. **Verify Runner Registration**:
```bash
# Check runner logs
kubectl logs -n gitea deployment/gitea-runner -c runner -f
# Check Gitea UI
# Go to: Site Administration > Actions > Runners
# Should see "kubernetes-runner" with status "Idle"
```
### Using Gitea Actions for CI/CD
Gitea Actions uses GitHub Actions-compatible workflow syntax. Create `.gitea/workflows/` in your repository.
**Example workflow** `.gitea/workflows/build.yaml`:
```yaml
name: Build and Test
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
      - name: Build application
        run: npm run build
```
**Example Docker build workflow** `.gitea/workflows/docker.yaml`:
```yaml
name: Build Docker Image
on:
  push:
    branches:
      - main
jobs:
  docker-build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ gitea.sha }} .
          docker tag myapp:${{ gitea.sha }} myapp:latest
      - name: Save image
        run: docker save myapp:latest -o myapp.tar
```
## Resource Sizing Guidelines
When adding applications, consider these resource guidelines:
### Small Applications (web frontends, APIs)
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```
### Medium Applications (databases, caching)
```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```
### Large Applications (Gitea, monitoring stacks)
```yaml
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "4000m"
    memory: "8Gi"
```
## Service Types
### ClusterIP (default)
- Only accessible within the cluster
- Use for internal services
### NodePort
- Accessible on every node's IP at a static port (30000-32767)
- Use for services you need to access from outside the cluster
- Example: Gitea on port 30300
### LoadBalancer
- Creates an external load balancer (if cloud provider supports it)
- On bare metal, requires MetalLB or similar
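Since this cluster runs MetalLB, a LoadBalancer Service is a small manifest — MetalLB assigns an external IP from its configured address pool. A sketch with the same placeholders as the examples above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <app-name>
  namespace: <app-name>
spec:
  type: LoadBalancer
  selector:
    app: <app-name>
  ports:
    - port: 80
      targetPort: <container-port>
```

After Flux applies it, `kubectl get svc -n <app-name>` shows the assigned address in the EXTERNAL-IP column.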
## Storage Considerations
### Access Modes
- `ReadWriteOnce` (RWO): Single node read/write (most common)
- `ReadOnlyMany` (ROX): Multiple nodes read-only
- `ReadWriteMany` (RWX): Multiple nodes read/write (requires special storage)
### Storage Sizing
- Logs: 1-5 GB
- Application data: 10-50 GB
- Databases: 50-100+ GB
- Container registries: 100+ GB
## Troubleshooting
### Check Pod Status
```bash
kubectl get pods -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
```
### Check Events
```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
### Check Resource Usage
```bash
kubectl top nodes
kubectl top pods -n <namespace>
```
### Common Issues
1. **ImagePullBackOff**: Container image cannot be pulled
- Check image name and tag
- Verify registry credentials if using private registry
2. **CrashLoopBackOff**: Container keeps crashing
- Check logs: `kubectl logs <pod> -n <namespace>`
- Check resource limits
- Verify configuration
3. **Pending Pods**: Pod cannot be scheduled
- Check node resources: `kubectl describe node`
- Check PVC status if using storage
- Verify node selectors/taints
4. **PVC Stuck in Pending**: Storage cannot be provisioned
- **Most common issue on Talos**: No storage provisioner installed
- Check if storage class exists: `kubectl get sc`
- If no storage class, install one:
```bash
./install-local-path-storage.sh
```
- Check PVC events: `kubectl describe pvc <pvc-name> -n <namespace>`
- Verify storage is available on nodes
5. **Storage Provisioner Issues**
- Run diagnostics: `./diagnose-storage.sh`
- Check provisioner pods: `kubectl get pods -n local-path-storage`
- View provisioner logs: `kubectl logs -n local-path-storage deployment/local-path-provisioner`
6. **FluxCD Not Syncing Changes**
- Check Flux status: `flux get all`
- Check GitRepository status: `flux get sources git`
- Force reconciliation: `flux reconcile kustomization cluster-sync --with-source`
- View Flux logs: `flux logs --level=error`
- Verify Git connectivity: `kubectl describe gitrepository talos-gitops -n flux-system`
7. **Gitea Actions Runner Not Registering**
- Check runner logs: `kubectl logs -n gitea deployment/gitea-runner -c runner -f`
- Verify runner token is correct in secret: `kubectl get secret runner-secret -n gitea -o yaml`
- Ensure Gitea Actions is enabled in Gitea UI
- Check Docker daemon in runner pod: `kubectl logs -n gitea deployment/gitea-runner -c daemon`
## FluxCD GitOps Workflow
This cluster uses FluxCD for automated deployments:
1. **Making Changes**:
- Edit manifests in `testing1/first-cluster/`
- Commit and push to `main` branch in Gitea
- Flux detects changes within 1 minute
- Changes applied within 5 minutes
2. **Monitoring Deployments**:
```bash
# Overall Flux status
flux get all
# Watch for reconciliation
flux get kustomizations -w
# Check specific resources
kubectl get all -n <namespace>
```
3. **Force Immediate Sync**:
```bash
flux reconcile source git talos-gitops
flux reconcile kustomization cluster-sync
```
4. **Troubleshooting Flux**:
```bash
# Check Flux logs
flux logs
# Check specific controller logs
kubectl logs -n flux-system deployment/source-controller
kubectl logs -n flux-system deployment/kustomize-controller
```
## Next Steps
- Configure ingress controller for HTTP/HTTPS routing
- Set up monitoring with Prometheus and Grafana
- Implement backup solutions for persistent data
- Configure network policies for security
- Set up SSL/TLS certificates with cert-manager