Getting Started with Kubernetes —— Go Microservice Deployment and Orchestration
I. Kubernetes Core Concepts
1.1 What is Kubernetes
Kubernetes (K8s for short) is an open-source container orchestration platform, originally developed at Google and now maintained by the CNCF, used for automating the deployment, scaling, and management of containerized applications. If Docker solved the problem of "how to package and run a single container," then K8s solves the problem of "how to manage hundreds or thousands of containers."
K8s' core capabilities:
- Automatic Scheduling: Automatically allocates containers to suitable nodes based on resource requirements
- Self-healing Capabilities: Automatically restarts crashed containers and migrates applications from failed nodes
- Horizontal Scaling: Automatically increases or decreases the number of instances based on load
- Rolling Updates: Updates application versions with zero downtime
- Service Discovery and Load Balancing: Built-in DNS and service discovery mechanisms
- Configuration and Secret Management: Unified management of configurations and sensitive information
1.2 Cluster Architecture
┌─────────────────────────────────────────────────────────┐
│                   Kubernetes Cluster                    │
│                                                         │
│  ┌─────────────────────────────────────┐                │
│  │     Control Plane (Master Node)     │                │
│  │                                     │                │
│  │  ┌────────────┐  ┌──────────────┐   │                │
│  │  │ API Server │  │  Scheduler   │   │                │
│  │  └────────────┘  └──────────────┘   │                │
│  │  ┌────────────┐  ┌──────────────┐   │                │
│  │  │    etcd    │  │  Controller  │   │                │
│  │  │(Data Store)│  │   Manager    │   │                │
│  │  └────────────┘  └──────────────┘   │                │
│  └─────────────────────────────────────┘                │
│                                                         │
│  ┌──────────────────┐  ┌──────────────────┐             │
│  │  Worker Node 1   │  │  Worker Node 2   │  ...        │
│  │                  │  │                  │             │
│  │  ┌────┐ ┌────┐   │  │  ┌────┐ ┌────┐   │             │
│  │  │Pod │ │Pod │   │  │  │Pod │ │Pod │   │             │
│  │  └────┘ └────┘   │  │  └────┘ └────┘   │             │
│  │  ┌──────────┐    │  │  ┌──────────┐    │             │
│  │  │ kubelet  │    │  │  │ kubelet  │    │             │
│  │  └──────────┘    │  │  └──────────┘    │             │
│  │  ┌──────────┐    │  │  ┌──────────┐    │             │
│  │  │kube-proxy│    │  │  │kube-proxy│    │             │
│  │  └──────────┘    │  │  └──────────┘    │             │
│  └──────────────────┘  └──────────────────┘             │
└─────────────────────────────────────────────────────────┘
1.3 Core Resource Objects
Pod —— Smallest Scheduling Unit
A Pod is the smallest deployable unit in K8s, containing one or more containers. Containers within the same Pod:
- Share network namespace (can access each other via localhost)
- Share storage volumes
- Share IPC namespace
- Are scheduled together on the same node
# Simplest Pod definition (rarely created directly in practice)
apiVersion: v1
kind: Pod
metadata:
  name: my-go-app
  labels:
    app: my-go-app
spec:
  containers:
  - name: app
    image: myapp:v1.0
    ports:
    - containerPort: 8080
Important: In production, Pods are not managed directly, but through Deployments.
Node —— Worker Node
A Node is a machine in the cluster (physical or virtual), and each Node runs:
- kubelet: Responsible for managing the Pod lifecycle on that node
- kube-proxy: Responsible for network proxying and load balancing
- Container Runtime: e.g., containerd, CRI-O (built-in Docker Engine support via dockershim was removed in Kubernetes 1.24)
Cluster —— The Whole System
The entire system composed of one or more Control Plane nodes and multiple Worker Nodes.
Deployment —— Deployment Controller
Deployment is the most commonly used workload controller, managing the number of Pod replicas, rolling updates, and rollbacks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-go-app
spec:
  replicas: 3              # Desired number of Pod replicas
  selector:
    matchLabels:
      app: my-go-app       # Selects which Pods to manage
  template:                # Pod template
    metadata:
      labels:
        app: my-go-app
    spec:
      containers:
      - name: app
        image: myapp:v1.0
Service —— Service Discovery and Load Balancing
A Service provides a stable network endpoint and load balancing for a set of Pods:
apiVersion: v1
kind: Service
metadata:
  name: my-go-app-svc
spec:
  selector:
    app: my-go-app         # Selects backend Pods
  ports:
  - port: 80               # Service port
    targetPort: 8080       # Container port
  type: ClusterIP          # Service type
Service Type Descriptions:
| Type | Description | Access Scope |
|---|---|---|
| ClusterIP | Default type, cluster-internal IP | Cluster-internal only |
| NodePort | Exposes a port on each node | Accessible from outside the cluster via node IP |
| LoadBalancer | Cloud provider's load balancer | External access (cloud environment) |
| ExternalName | DNS CNAME mapping | Access external services |
Ingress —— External Access Entrypoint
Ingress provides HTTP/HTTPS routing, forwarding external requests to different Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: user-service
            port:
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 80
  tls:
  - hosts:
    - api.example.com
    secretName: tls-secret
ConfigMap —— Configuration Management
Decouples configuration information from container images:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Simple key-value pairs
  LOG_LEVEL: "info"
  DB_HOST: "mysql-svc"
  DB_PORT: "3306"
  # Configuration file
  config.yaml: |
    server:
      port: 8080
      mode: production
    database:
      max_open_conns: 100
      max_idle_conns: 10
Secret —— Sensitive Information Management
Stores sensitive data such as passwords, tokens, and certificates (Base64 encoded):
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  # Values need to be Base64 encoded: echo -n "mypassword" | base64
  DB_PASSWORD: bXlwYXNzd29yZA==
  REDIS_PASSWORD: cmVkaXNwYXNz
  JWT_SECRET: c3VwZXJzZWNyZXRrZXk=
Note: Base64 encoding for K8s Secrets is not encryption. In production, it is recommended to use solutions like Vault or Sealed Secrets to enhance security.
Namespace —— Logical Resource Isolation
Used to logically isolate cluster resources, suitable for multi-team or multi-environment scenarios:
# Common namespace divisions
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production
# View all namespaces
kubectl get namespaces
II. Common kubectl Commands
2.1 Cluster and Context Management
# View cluster information
kubectl cluster-info
# View current context
kubectl config current-context
# Switch context (multi-cluster management)
kubectl config use-context my-cluster
# Set default namespace
kubectl config set-context --current --namespace=production
2.2 Resource Viewing
# View resources (general format)
kubectl get <resource-type> [-n namespace]
# Common view commands
kubectl get pods # View Pods in the current namespace
kubectl get pods -A # View Pods in all namespaces
kubectl get pods -o wide # Display more information (IP, node, etc.)
kubectl get pods -l app=my-go-app # Filter by label
kubectl get deployments # View Deployments
kubectl get services # View Services
kubectl get ingress # View Ingress
kubectl get configmaps # View ConfigMaps
kubectl get secrets # View Secrets
kubectl get nodes # View nodes
kubectl get all # View all resources
# Detailed information
kubectl describe pod my-go-app-xxx # View Pod details (including event logs)
kubectl describe deployment my-go-app # View Deployment details
# Output format
kubectl get pods -o yaml # YAML format
kubectl get pods -o json # JSON format
kubectl get pods -o jsonpath='{.items[*].metadata.name}' # Custom output
2.3 Resource Operations
# Create/Update resources
kubectl apply -f deployment.yaml # Declarative (recommended)
kubectl create -f deployment.yaml # Imperative
# Delete resources
kubectl delete -f deployment.yaml # Delete resources defined in YAML
kubectl delete pod my-go-app-xxx # Delete specified Pod
kubectl delete deployment my-go-app # Delete Deployment
# Edit resources (online editing)
kubectl edit deployment my-go-app
# Quick creation (without YAML file)
kubectl create deployment my-app --image=myapp:v1.0
kubectl expose deployment my-app --port=80 --target-port=8080
2.4 Debugging and Troubleshooting
# View Pod logs
kubectl logs my-go-app-xxx # Current logs
kubectl logs -f my-go-app-xxx # Real-time tailing
kubectl logs my-go-app-xxx -c app # Specify container (multi-container Pod)
kubectl logs my-go-app-xxx --previous # Logs from the previous crash
# Enter Pod container
kubectl exec -it my-go-app-xxx -- sh
kubectl exec -it my-go-app-xxx -c app -- sh # Specify container
# Port forwarding (local debugging)
kubectl port-forward pod/my-go-app-xxx 8080:8080
kubectl port-forward svc/my-go-app-svc 8080:80
# View events (important for troubleshooting)
kubectl get events --sort-by='.lastTimestamp'
kubectl get events -n production
# Resource usage (requires metrics-server)
kubectl top pods
kubectl top nodes
III. Detailed Explanation of YAML Resource Configuration Files
3.1 Basic YAML Structure
Each K8s resource YAML file contains four core sections:
apiVersion: apps/v1        # API version —— determines available fields
kind: Deployment           # Resource type —— Pod/Service/Deployment, etc.
metadata:                  # Metadata —— name, labels, annotations, etc.
  name: my-app
  namespace: production
  labels:
    app: my-app
    env: production
  annotations:
    description: "My Go Application"
spec:                      # Specification —— desired state of the resource (the most critical part)
  replicas: 3
  # ...
3.2 Common apiVersion Mapping
| Resource Type | apiVersion |
|---|---|
| Pod, Service, ConfigMap, Secret | v1 |
| Deployment, StatefulSet, DaemonSet | apps/v1 |
| Ingress | networking.k8s.io/v1 |
| HPA | autoscaling/v2 |
| CronJob, Job | batch/v1 |
3.3 Labels and Selectors
Labels are the core mechanism for resource association in K8s:
# A Deployment uses its selector to choose which Pods to manage
# A Service uses its selector to choose backend Pods
# Both selectors must match the Pod's labels

# Deployment
spec:
  selector:
    matchLabels:
      app: my-go-app       # Must match template.labels
  template:
    metadata:
      labels:
        app: my-go-app     # The Pod's labels

# Service
spec:
  selector:
    app: my-go-app         # Matches the Pod's labels
IV. Deploying Go Microservices to K8s (Complete YAML Example)
4.1 Complete Deployment Solution
Taking a Go Web service as an example, here's a complete K8s deployment configuration.
File Structure
k8s/
├── namespace.yaml
├── configmap.yaml
├── secret.yaml
├── deployment.yaml
├── service.yaml
├── ingress.yaml
└── hpa.yaml
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: go-app
  labels:
    name: go-app
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: go-app-config
  namespace: go-app
data:
  LOG_LEVEL: "info"
  GIN_MODE: "release"
  DB_HOST: "mysql-svc"
  DB_PORT: "3306"
  DB_NAME: "myapp"
  REDIS_ADDR: "redis-svc:6379"
  CONSUL_ADDR: "consul-svc:8500"
  config.yaml: |
    server:
      port: 8080
      read_timeout: 10s
      write_timeout: 10s
    log:
      level: info
      format: json
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: go-app-secret
  namespace: go-app
type: Opaque
data:
  DB_PASSWORD: cm9vdHBhc3N3b3Jk         # rootpassword
  REDIS_PASSWORD: cmVkaXNwYXNzd29yZA==  # redispassword
  JWT_SECRET: bXktand0LXNlY3JldC1rZXk=  # my-jwt-secret-key
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
  namespace: go-app
  labels:
    app: go-app
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: go-app
  # Rolling update strategy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # Max 1 Pod above desired replicas during rolling update
      maxUnavailable: 0      # No unavailable Pods allowed during update
  template:
    metadata:
      labels:
        app: go-app
        version: v1.0.0
    spec:
      # Graceful termination period
      terminationGracePeriodSeconds: 30
      # Init container (optional - waits for dependent services to be ready)
      initContainers:
      - name: wait-for-mysql
        image: busybox:1.36
        command:
        - sh
        - -c
        - |
          until nc -z mysql-svc 3306; do
            echo "Waiting for MySQL..."
            sleep 2
          done
      containers:
      - name: app
        image: registry.example.com/myns/go-app:v1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        # Inject environment variables from the ConfigMap
        envFrom:
        - configMapRef:
            name: go-app-config
        # Inject sensitive environment variables from the Secret
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: go-app-secret
              key: DB_PASSWORD
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: go-app-secret
              key: REDIS_PASSWORD
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: go-app-secret
              key: JWT_SECRET
        # Mount the configuration file
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
          readOnly: true
        # Resource requests and limits
        resources:
          requests:
            cpu: 100m        # Guaranteed minimum of 0.1 CPU core
            memory: 128Mi    # Guaranteed minimum of 128MB memory
          limits:
            cpu: 500m        # Maximum of 0.5 CPU core
            memory: 256Mi    # Maximum of 256MB memory
        # Liveness probe —— decides whether the container must be restarted
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10   # Wait 10 seconds after the container starts
          periodSeconds: 15         # Check every 15 seconds
          timeoutSeconds: 3         # A 3-second timeout counts as a failure
          failureThreshold: 3       # Restart after 3 consecutive failures
        # Readiness probe —— decides whether the container can receive traffic
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 3
        # Startup probe —— protects slow-starting applications
        startupProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 0
          periodSeconds: 5
          failureThreshold: 30      # Up to 150 seconds to finish starting
      volumes:
      - name: config-volume
        configMap:
          name: go-app-config
          items:
          - key: config.yaml
            path: config.yaml
      # Credentials for pulling private images
      imagePullSecrets:
      - name: registry-credentials
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: go-app-svc
  namespace: go-app
  labels:
    app: go-app
spec:
  selector:
    app: go-app
  ports:
  - name: http
    port: 80             # Port exposed by the Service
    targetPort: 8080     # Container port the traffic is forwarded to
    protocol: TCP
  type: ClusterIP
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-app-ingress
  namespace: go-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: tls-secret
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-app-svc
            port:
              number: 80
4.2 One-Click Deployment
# Deploy all resources in order
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/ingress.yaml
# Or deploy the entire directory at once
kubectl apply -f k8s/
# View deployment status
kubectl get all -n go-app
kubectl rollout status deployment/go-app -n go-app
V. Health Checks (Liveness / Readiness Probe)
5.1 Purpose of the Three Probes
| Probe | Purpose | Consequence of Failure |
|---|---|---|
| livenessProbe | Checks if the container is alive | Restarts the container |
| readinessProbe | Checks if the container is ready | Removes from Service endpoints, stops receiving traffic |
| startupProbe | Checks if the container has finished starting up | Prevents liveness from killing during startup phase |
5.2 Probe Types
# 1. HTTP GET Probe (most common)
livenessProbe:
httpGet:
path: /health
port: 8080
httpHeaders:
- name: X-Custom-Header
value: "probe"
# 2. TCP Socket Probe (suitable for non-HTTP services)
livenessProbe:
tcpSocket:
port: 8080
# 3. Exec Command Probe (custom detection logic)
livenessProbe:
exec:
command:
- /bin/sh
- -c
- "wget -q --spider http://localhost:8080/health"
# 4. gRPC Probe (K8s 1.24+)
livenessProbe:
grpc:
port: 50051
5.3 Implementing Health Check Endpoints in Go
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"net/http"
	"sync/atomic"
)

// Global readiness status flag
var isReady atomic.Bool

// RedisClient is a minimal stand-in for a real Redis client (e.g. go-redis);
// adapt the method signature to the client you actually use.
type RedisClient interface {
	Ping(ctx context.Context) error
}

type HealthResponse struct {
	Status  string            `json:"status"`
	Details map[string]string `json:"details,omitempty"`
}

func healthHandler(db *sql.DB, redisClient RedisClient) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		resp := HealthResponse{
			Status:  "ok",
			Details: make(map[string]string),
		}
		statusCode := http.StatusOK

		// Check the database connection
		if err := db.Ping(); err != nil {
			resp.Status = "degraded"
			resp.Details["mysql"] = "unhealthy: " + err.Error()
			statusCode = http.StatusServiceUnavailable
		} else {
			resp.Details["mysql"] = "healthy"
		}

		// Check the Redis connection
		if err := redisClient.Ping(r.Context()); err != nil {
			resp.Status = "degraded"
			resp.Details["redis"] = "unhealthy: " + err.Error()
			statusCode = http.StatusServiceUnavailable
		} else {
			resp.Details["redis"] = "healthy"
		}

		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(statusCode)
		json.NewEncoder(w).Encode(resp)
	}
}

// readinessHandler —— whether the service can receive traffic
func readinessHandler() http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !isReady.Load() {
			http.Error(w, "not ready", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ready"))
	}
}

func main() {
	// Initialize the database, Redis, etc...
	// db, redisClient := initDeps()

	mux := http.NewServeMux()

	// Liveness probe —— simply return 200 OK
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness probe —— checks dependent services
	mux.HandleFunc("/readyz", readinessHandler())

	// Detailed health information (internal use)
	// mux.HandleFunc("/health", healthHandler(db, redisClient))

	// Mark the service ready once initialization is complete
	isReady.Store(true)

	http.ListenAndServe(":8080", mux)
}
Corresponding K8s configuration:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
VI. Horizontal Pod Autoscaler (HPA)
6.1 HPA Working Principle
HPA (Horizontal Pod Autoscaler) automatically adjusts the number of replicas based on Pod CPU/memory utilization or custom metrics:
                        ┌────────────┐
Monitoring metrics ────▶│    HPA     │
(CPU/memory/custom)     │ Controller │
                        └─────┬──────┘
                              │ Adjusts replica count
                              ▼
                        ┌────────────┐
                        │ Deployment │
                        └─────┬──────┘
                              │
                   ┌──────────┼──────────┐
                   ▼          ▼          ▼
                ┌─────┐    ┌─────┐    ┌─────┐
                │Pod 1│    │Pod 2│    │Pod 3│  ← Auto-scaling
                └─────┘    └─────┘    └─────┘
6.2 Prerequisites
HPA requires metrics-server to obtain resource usage data:
# Install metrics-server (on Minikube, enable the metrics-server addon instead; managed cloud clusters usually ship with it)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Verify
kubectl top pods
kubectl top nodes
6.3 HPA Configuration
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-app-hpa
  namespace: go-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-app
  minReplicas: 2             # Minimum of 2 replicas
  maxReplicas: 10            # Maximum of 10 replicas
  metrics:
  # Based on CPU utilization
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # Scale out when CPU utilization exceeds 70%
  # Based on memory utilization
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80      # Scale out when memory utilization exceeds 80%
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60    # Scale-up stabilization window
      policies:
      - type: Pods
        value: 2                        # Add at most 2 Pods at a time
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300   # Scale-down stabilization window (5 minutes)
      policies:
      - type: Percent
        value: 25                       # Remove at most 25% of Pods at a time
        periodSeconds: 60
# Create simple HPA from command line
kubectl autoscale deployment go-app \
--min=2 --max=10 --cpu-percent=70 \
-n go-app
# View HPA status
kubectl get hpa -n go-app
kubectl describe hpa go-app-hpa -n go-app
# Stress test to observe auto-scaling (using tools like hey or wrk)
hey -n 10000 -c 100 http://api.example.com/
VII. Rolling Updates and Rollbacks
7.1 Rolling Update Strategy
Configure in the strategy field of the Deployment:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # At most 1 extra Pod during the update
      maxUnavailable: 0     # No unavailable Pods allowed during the update (zero downtime)
Update process illustration (3 replicas, maxSurge=1, maxUnavailable=0):
Initial: [v1] [v1] [v1]            Total=3, Available=3
Step 1:  [v1] [v1] [v1] [v2]       New-version Pod created                  Total=4, Available=3
Step 2:  [v1] [v1] [v2]            Old Pod terminated once the new is ready Total=3, Available=3
Step 3:  [v1] [v1] [v2] [v2]       Total=4, Available=3
Step 4:  [v1] [v2] [v2]            Total=3, Available=3
Step 5:  [v1] [v2] [v2] [v2]       Total=4, Available=3
Step 6:  [v2] [v2] [v2]            Complete!                                Total=3, Available=3
7.2 Triggering Updates
# Method 1: Update image version
kubectl set image deployment/go-app app=myapp:v2.0 -n go-app
# Method 2: Modify YAML and then apply
kubectl apply -f k8s/deployment.yaml
# Method 3: Use patch
kubectl patch deployment go-app -n go-app \
-p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"myapp:v2.0"}]}}}}'
# View update status
kubectl rollout status deployment/go-app -n go-app
# View update history
kubectl rollout history deployment/go-app -n go-app
kubectl rollout history deployment/go-app -n go-app --revision=2
7.3 Rollback
# Rollback to the previous version
kubectl rollout undo deployment/go-app -n go-app
# Rollback to a specific version
kubectl rollout undo deployment/go-app -n go-app --to-revision=1
# Pause update (for canary release scenarios)
kubectl rollout pause deployment/go-app -n go-app
# Resume update
kubectl rollout resume deployment/go-app -n go-app
7.4 Graceful Shutdown of Go Applications
For rolling updates to achieve true zero downtime, Go applications need to correctly handle SIGTERM signals:
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello, K8s!"))
	})
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	server := &http.Server{
		Addr:         ":8080",
		Handler:      mux,
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
	}

	// Start the server in a goroutine
	go func() {
		log.Println("Server starting on :8080")
		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("Server error: %v", err)
		}
	}()

	// Wait for a termination signal
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	sig := <-quit
	log.Printf("Received signal: %v, shutting down gracefully...", sig)

	// Give in-flight requests a window to complete
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Fatalf("Server forced to shutdown: %v", err)
	}
	log.Println("Server exited gracefully")
}
Set the graceful termination period in the corresponding Deployment configuration:
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30   # Give the Pod 30 seconds to exit gracefully
VIII. Differences and Relationship Between K8s and Docker Compose
8.1 Different Positioning
| Aspect | Docker Compose | Kubernetes |
|---|---|---|
| Positioning | Single-host multi-container orchestration | Cluster-level container orchestration |
| Scale | Single host | Tens to thousands of hosts |
| High Availability | Not supported | Built-in (Pod auto-restart/migration) |
| Auto Scaling | Not supported | Built-in HPA/VPA |
| Rolling Updates | Limited support | Comprehensive rolling updates and rollbacks |
| Service Discovery | Via container name DNS | Built-in Service and DNS |
| Load Balancing | Basic round-robin | Multiple strategies, supports Ingress |
| Configuration Management | Environment variables, file mounts | ConfigMap, Secret |
| Learning Curve | Low | High |
| Use Cases | Local development, small projects | Production environments, large projects |
8.2 Mapping from Docker Compose to K8s
| Docker Compose | Kubernetes |
|---|---|
| docker-compose.yml | Multiple YAML files (or a Helm Chart) |
| service (definition) | Deployment + Service |
| ports | Service (NodePort/LoadBalancer) + Ingress |
| environment | ConfigMap + Secret |
| volumes | PersistentVolumeClaim (PVC) |
| depends_on | initContainers / readiness probes |
| build | CI/CD pipeline pre-builds images |
| networks | Namespace + NetworkPolicy |
| restart: always | Pod restartPolicy + Deployment controller |
8.3 Migration Path
Recommended learning and migration path:
Phase 1 (Beginner):
Docker Basics → Dockerfile → docker-compose → This chapter's content
Phase 2 (Intermediate):
Helm Chart Package Management → K8s Storage (PV/PVC) → StatefulSet (Stateful Services)
→ RBAC Permission Management → NetworkPolicy
Phase 3 (Production Practice):
CI/CD Pipeline (GitLab CI / GitHub Actions / ArgoCD)
→ Monitoring and Alerting (Prometheus + Grafana)
→ Log Collection (EFK/ELK Stack)
→ Service Mesh (Istio / Linkerd)
Phase 4 (Advanced Operations):
Cluster Management and Operations → Security Hardening → Multi-cluster Management
→ Custom Controller / Operator
8.4 Kompose —— Quick Conversion Tool
You can use the kompose tool to convert docker-compose.yml to K8s YAML:
# Install kompose
brew install kompose # macOS
# Or download the binary
# Convert docker-compose.yml to K8s resource files
kompose convert
# Will generate deployment.yaml, service.yaml, etc.
# Deploy directly
kompose up
# Note: Files generated by kompose usually require manual adjustments and optimization
IX. Local Development Environment
9.1 Minikube
Minikube is the most popular local K8s tool, creating a single-node cluster on your local machine:
# Installation (macOS)
brew install minikube
# Start cluster
minikube start
minikube start --cpus=4 --memory=8192 --driver=docker
# View status
minikube status
# Common add-ons
minikube addons enable ingress # Nginx Ingress
minikube addons enable metrics-server # Resource monitoring
minikube addons enable dashboard # Web management interface
# Open Dashboard
minikube dashboard
# Use Minikube's built-in Docker (to avoid pushing to remote repositories)
eval $(minikube docker-env)
docker build -t myapp:v1.0 . # Build directly into Minikube's Docker
# Set imagePullPolicy: Never in Deployment
# Access Service
minikube service go-app-svc -n go-app # Automatically opens browser
minikube tunnel # Enable LoadBalancer support
# Stop/Delete
minikube stop
minikube delete
9.2 Kind (Kubernetes in Docker)
Kind uses Docker containers to simulate K8s nodes, making it lighter and suitable for CI/CD environments:
# Installation (macOS)
brew install kind
# Create cluster (default single-node)
kind create cluster --name my-cluster
# Create multi-node cluster
cat <<EOF | kind create cluster --name multi-node --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
# Load local image into Kind cluster (no need to push to registry)
kind load docker-image myapp:v1.0 --name my-cluster
# View clusters
kind get clusters
kubectl cluster-info --context kind-my-cluster
# Delete cluster
kind delete cluster --name my-cluster
9.3 Minikube vs Kind Comparison
| Feature | Minikube | Kind |
|---|---|---|
| Implementation | VM or Docker container | Docker container |
| Multi-node support | Limited | Native support |
| Startup speed | Slower (30-60s) | Fast (20-30s) |
| Resource consumption | Higher | Lower |
| Built-in plugins | Rich (dashboard, etc.) | Fewer |
| CI/CD suitability | General | Excellent |
| Learning friendliness | High (with dashboard) | Medium |
| Recommended scenarios | Local development and learning | CI/CD and testing |
9.4 Complete Local Development Workflow
# 1. Start local K8s cluster
minikube start --cpus=4 --memory=8192
minikube addons enable ingress
minikube addons enable metrics-server
# 2. Build image using Minikube's Docker environment
eval $(minikube docker-env)
docker build -t go-app:dev .
# 3. Deploy application (set imagePullPolicy to Never)
kubectl apply -f k8s/
# 4. View status
kubectl get all -n go-app
# 5. Access application
minikube service go-app-svc -n go-app
# Or
kubectl port-forward svc/go-app-svc 8080:80 -n go-app
# 6. View logs
kubectl logs -f -l app=go-app -n go-app
# 7. Rebuild and redeploy after code changes
docker build -t go-app:dev .
kubectl rollout restart deployment/go-app -n go-app
# 8. Development complete, clean up environment
kubectl delete -f k8s/
minikube stop
X. Summary and Learning Path
10.1 Core Knowledge Review
K8s Core Concepts
├── Scheduling Unit: Pod
├── Workloads: Deployment / StatefulSet / DaemonSet
├── Service Discovery: Service / Ingress
├── Configuration Management: ConfigMap / Secret
├── Storage: PersistentVolume / PVC
├── Auto Scaling: HPA / VPA
├── Isolation: Namespace / NetworkPolicy
└── Update Strategy: RollingUpdate / Recreate
10.2 Recommended Learning Path
See the phased path in Section 8.3: Docker basics, then Helm and storage, then CI/CD and observability, then Operators.
10.3 Recommended Tools
| Tool | Purpose | Installation Method |
|---|---|---|
| kubectl | K8s command-line | brew install kubectl |
| minikube | Local cluster | brew install minikube |
| kind | Lightweight local cluster | brew install kind |
| helm | K8s package manager | brew install helm |
| k9s | Terminal UI management tool | brew install k9s |
| lens | Desktop GUI management tool | Download from official website |
| kubectx/kubens | Quickly switch context/namespace | brew install kubectx |
| stern | Multi-Pod log aggregation | brew install stern |
| skaffold | Development workflow automation | brew install skaffold |
Previous Article: 015 - Docker Containerization
Theme test article, for testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/archives/6763