Go Engineer System Course 016

Getting Started with Kubernetes —— Go Microservice Deployment and Orchestration


I. Kubernetes Core Concepts

1.1 What is Kubernetes

Kubernetes (K8s for short) is an open-source container orchestration platform, originally developed by Google, for automating the deployment, scaling, and management of containerized applications. If Docker solved the problem of "how to package and run a single container," then K8s solves the problem of "how to manage hundreds or thousands of containers."

Core capabilities of K8s:

  • Automatic Scheduling: Automatically assigns containers to suitable nodes based on resource requirements
  • Self-healing Capabilities: Automatically restarts crashed containers, automatically migrates Pods from failed nodes
  • Horizontal Scaling: Automatically increases or decreases the number of instances based on load
  • Rolling Updates: Updates application versions with zero downtime
  • Service Discovery and Load Balancing: Built-in DNS and service discovery mechanisms
  • Configuration Management and Secret Management: Unified management of configurations and sensitive information

1.2 Cluster Architecture

┌─────────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster                        │
│                                                             │
│  ┌─────────────────────────────────────┐                   │
│  │         Control Plane (Master)       │                   │
│  │                                     │                   │
│  │  ┌────────────┐  ┌──────────────┐  │                   │
│  │  │ API Server │  │  Scheduler   │  │                   │
│  │  └────────────┘  └──────────────┘  │                   │
│  │  ┌────────────┐  ┌──────────────┐  │                   │
│  │  │   etcd     │  │ Controller   │  │                   │
│  │  │ (data store)│  │  Manager     │  │                   │
│  │  └────────────┘  └──────────────┘  │                   │
│  └─────────────────────────────────────┘                   │
│                                                             │
│  ┌──────────────────┐  ┌──────────────────┐               │
│  │   Worker Node 1  │  │   Worker Node 2  │  ...          │
│  │                  │  │                  │               │
│  │  ┌────┐ ┌────┐  │  │  ┌────┐ ┌────┐  │               │
│  │  │Pod │ │Pod │  │  │  │Pod │ │Pod │  │               │
│  │  └────┘ └────┘  │  │  └────┘ └────┘  │               │
│  │                  │  │                  │               │
│  │  ┌──────────┐   │  │  ┌──────────┐   │               │
│  │  │ kubelet  │   │  │  │ kubelet  │   │               │
│  │  └──────────┘   │  │  └──────────┘   │               │
│  │  ┌──────────┐   │  │  ┌──────────┐   │               │
│  │  │kube-proxy│   │  │  │kube-proxy│   │               │
│  │  └──────────┘   │  │  └──────────┘   │               │
│  └──────────────────┘  └──────────────────┘               │
└─────────────────────────────────────────────────────────────┘

1.3 Core Resource Objects

Pod —— Smallest Scheduling Unit

A Pod is the smallest deployable unit in K8s and contains one or more containers. Containers within the same Pod:

  • Share network namespace (can access each other via localhost)
  • Share storage volumes
  • Share IPC namespace
  • Are scheduled together onto the same node
# Simplest possible Pod definition (Pods are rarely created directly in practice)
apiVersion: v1
kind: Pod
metadata:
  name: my-go-app
  labels:
    app: my-go-app
spec:
  containers:
    - name: app
      image: myapp:v1.0
      ports:
        - containerPort: 8080

Important: In actual production, Pods are not managed directly, but through Deployments.

Node —— Worker Node

A Node is a machine in the cluster (physical or virtual), and each Node runs:

  • kubelet: Responsible for managing the Pod lifecycle on that node
  • kube-proxy: Responsible for network proxying and load balancing
  • Container runtime: e.g., containerd, CRI-O (dockershim/Docker support was removed in K8s 1.24)

Cluster —— Cluster

The entire entity composed of one or more Control Plane nodes and multiple Worker Nodes.

Deployment —— Deployment Controller

Deployment is the most commonly used workload controller, managing the number of Pod replicas, rolling updates, and rollbacks:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-go-app
spec:
  replicas: 3              # Desired number of Pod replicas
  selector:
    matchLabels:
      app: my-go-app       # Selects which Pods to manage
  template:                 # Pod template
    metadata:
      labels:
        app: my-go-app
    spec:
      containers:
        - name: app
          image: myapp:v1.0

Service —— Service Discovery and Load Balancing

Service provides a stable network entry point and load balancing for a group of Pods:

apiVersion: v1
kind: Service
metadata:
  name: my-go-app-svc
spec:
  selector:
    app: my-go-app         # Selects the backend Pods
  ports:
    - port: 80             # Service port
      targetPort: 8080     # Container port
  type: ClusterIP          # Service type

Service Type Descriptions:

Type          Description                           Access Scope
ClusterIP     Default type; cluster-internal IP     Cluster-internal only
NodePort      Opens a port on every node            Outside the cluster via <NodeIP>:<NodePort>
LoadBalancer  Cloud provider load balancer          External access (cloud environments)
ExternalName  DNS CNAME mapping                     Access to external services
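For a Go client running inside the cluster, a ClusterIP Service is normally reached through its cluster DNS name rather than its IP. A minimal sketch of building that name (the service and namespace names here are illustrative):

```go
package main

import "fmt"

// serviceAddr builds the cluster-internal DNS name for a Service.
// Cluster DNS resolves <service>.<namespace>.svc.cluster.local to the
// Service's ClusterIP, which then load-balances across the backend Pods.
func serviceAddr(service, namespace string, port int) string {
	return fmt.Sprintf("%s.%s.svc.cluster.local:%d", service, namespace, port)
}

func main() {
	// A client Pod in any namespace can reach the Service like this:
	fmt.Println(serviceAddr("my-go-app-svc", "default", 80))
}
```

Within the same namespace, the short form `my-go-app-svc:80` also resolves.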

Ingress —— External Access Entry Point

Ingress provides HTTP/HTTPS routing, forwarding external requests to different Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
  tls:
    - hosts:
        - api.example.com
      secretName: tls-secret

ConfigMap —— Configuration Management

Decouples configuration information from container images:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Simple key-value pairs
  LOG_LEVEL: "info"
  DB_HOST: "mysql-svc"
  DB_PORT: "3306"
  # A configuration file
  config.yaml: |
    server:
      port: 8080
      mode: production
    database:
      max_open_conns: 100
      max_idle_conns: 10

Secret —— Sensitive Information Management

Stores sensitive data such as passwords, tokens, and certificates (values are stored Base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  # Values must be Base64-encoded: echo -n "mypassword" | base64
  DB_PASSWORD: bXlwYXNzd29yZA==
  REDIS_PASSWORD: cmVkaXNwYXNz
  JWT_SECRET: c3VwZXJzZWNyZXRrZXk=

Note: K8s Secret's Base64 encoding is not encryption; in production environments, it is recommended to use solutions like Vault or Sealed Secrets to enhance security.

Namespace —— Namespace

Used to logically isolate cluster resources, suitable for multi-team or multi-environment scenarios:

# A common namespace layout
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production

# List all namespaces
kubectl get namespaces

II. Common kubectl Commands

2.1 Cluster and Context Management

# Show cluster information
kubectl cluster-info

# Show the current context
kubectl config current-context

# Switch contexts (multi-cluster management)
kubectl config use-context my-cluster

# Set the default namespace
kubectl config set-context --current --namespace=production

2.2 Resource Viewing

# View resources (general form)
kubectl get <resource-type> [-n namespace]

# Common get commands
kubectl get pods                       # Pods in the current namespace
kubectl get pods -A                    # Pods in all namespaces
kubectl get pods -o wide               # Extra columns (IP, node, etc.)
kubectl get pods -l app=my-go-app      # Filter by label
kubectl get deployments                # Deployments
kubectl get services                   # Services
kubectl get ingress                    # Ingresses
kubectl get configmaps                 # ConfigMaps
kubectl get secrets                    # Secrets
kubectl get nodes                      # Nodes
kubectl get all                        # All common resources

# Detailed information
kubectl describe pod my-go-app-xxx     # Pod details (including the event log)
kubectl describe deployment my-go-app  # Deployment details

# Output formats
kubectl get pods -o yaml               # YAML format
kubectl get pods -o json               # JSON format
kubectl get pods -o jsonpath='{.items[*].metadata.name}'  # Custom output

2.3 Resource Operations

# Create/update resources
kubectl apply -f deployment.yaml       # Declarative (recommended)
kubectl create -f deployment.yaml      # Imperative

# Delete resources
kubectl delete -f deployment.yaml      # Delete the resources defined in the YAML
kubectl delete pod my-go-app-xxx       # Delete a specific Pod
kubectl delete deployment my-go-app    # Delete a Deployment

# Edit a resource in place
kubectl edit deployment my-go-app

# Quick creation (no YAML file needed)
kubectl create deployment my-app --image=myapp:v1.0
kubectl expose deployment my-app --port=80 --target-port=8080

2.4 Debugging and Troubleshooting

# View Pod logs
kubectl logs my-go-app-xxx             # Current logs
kubectl logs -f my-go-app-xxx          # Follow in real time
kubectl logs my-go-app-xxx -c app      # Specific container (multi-container Pod)
kubectl logs my-go-app-xxx --previous  # Logs from the previous crash

# Exec into a Pod container
kubectl exec -it my-go-app-xxx -- sh
kubectl exec -it my-go-app-xxx -c app -- sh  # Specific container

# Port forwarding (local debugging)
kubectl port-forward pod/my-go-app-xxx 8080:8080
kubectl port-forward svc/my-go-app-svc 8080:80

# View events (a key troubleshooting tool)
kubectl get events --sort-by='.lastTimestamp'
kubectl get events -n production

# Resource usage (requires metrics-server)
kubectl top pods
kubectl top nodes

III. Detailed Explanation of YAML Resource Configuration Files

3.1 YAML Basic Structure

Each K8s resource YAML file contains four core parts:

apiVersion: apps/v1       # API version -- determines the available fields
kind: Deployment          # Resource type -- Pod/Service/Deployment, etc.
metadata:                 # Metadata -- name, labels, annotations, etc.
  name: my-app
  namespace: production
  labels:
    app: my-app
    env: production
  annotations:
    description: "My Go Application"
spec:                     # Spec -- the resource's desired state (the core part)
  replicas: 3
  # ...

3.2 Common apiVersion Mapping

Resource Type                        apiVersion
Pod, Service, ConfigMap, Secret      v1
Deployment, StatefulSet, DaemonSet   apps/v1
Ingress                              networking.k8s.io/v1
HPA                                  autoscaling/v2
CronJob, Job                         batch/v1

3.3 Labels and Selectors

Labels are the core mechanism for resource association in K8s:

# A Deployment's selector chooses which Pods it manages
# A Service's selector chooses its backend Pods
# Both selectors must match the Pods' labels

# Deployment
spec:
  selector:
    matchLabels:
      app: my-go-app      # Must match template.labels
  template:
    metadata:
      labels:
        app: my-go-app     # The Pod's labels

# Service
spec:
  selector:
    app: my-go-app         # Matches the Pod's labels
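The matching rule itself is simple set containment: a selector matches a Pod when every selector key/value pair appears in the Pod's labels, and extra labels are ignored. A sketch of that logic in Go (`matchesSelector` is illustrative, not a client-go API):

```go
package main

import "fmt"

// matchesSelector mirrors matchLabels semantics: every key/value pair in
// the selector must be present in the object's labels; extra labels on the
// object are fine.
func matchesSelector(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"app": "my-go-app"}
	podLabels := map[string]string{"app": "my-go-app", "version": "v1.0.0"}
	fmt.Println(matchesSelector(selector, podLabels)) // true
}
```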

IV. Deploying Go Microservices to K8s (Complete YAML Example)

4.1 Complete Deployment Solution

Taking a Go Web service as an example, providing a complete K8s deployment configuration.

File Structure

k8s/
├── namespace.yaml
├── configmap.yaml
├── secret.yaml
├── deployment.yaml
├── service.yaml
├── ingress.yaml
└── hpa.yaml

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: go-app
  labels:
    name: go-app

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: go-app-config
  namespace: go-app
data:
  LOG_LEVEL: "info"
  GIN_MODE: "release"
  DB_HOST: "mysql-svc"
  DB_PORT: "3306"
  DB_NAME: "myapp"
  REDIS_ADDR: "redis-svc:6379"
  CONSUL_ADDR: "consul-svc:8500"
  config.yaml: |
    server:
      port: 8080
      read_timeout: 10s
      write_timeout: 10s
    log:
      level: info
      format: json

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: go-app-secret
  namespace: go-app
type: Opaque
data:
  DB_PASSWORD: cm9vdHBhc3N3b3Jk          # rootpassword
  REDIS_PASSWORD: cmVkaXNwYXNzd29yZA==    # redispassword
  JWT_SECRET: bXktand0LXNlY3JldC1rZXk=   # my-jwt-secret-key

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
  namespace: go-app
  labels:
    app: go-app
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: go-app
  # Rolling update strategy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # At most 1 Pod above the desired replica count during updates
      maxUnavailable: 0     # No unavailable Pods allowed during the update
  template:
    metadata:
      labels:
        app: go-app
        version: v1.0.0
    spec:
      # Graceful termination period
      terminationGracePeriodSeconds: 30

      # Init container (optional -- wait for dependencies to be ready)
      initContainers:
        - name: wait-for-mysql
          image: busybox:1.36
          command:
            - sh
            - -c
            - |
              until nc -z mysql-svc 3306; do
                echo "Waiting for MySQL..."
                sleep 2
              done

      containers:
        - name: app
          image: registry.example.com/myns/go-app:v1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP

          # Inject environment variables from the ConfigMap
          envFrom:
            - configMapRef:
                name: go-app-config

          # Inject sensitive environment variables from the Secret
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: go-app-secret
                  key: DB_PASSWORD
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: go-app-secret
                  key: REDIS_PASSWORD
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: go-app-secret
                  key: JWT_SECRET

          # Mount the configuration file
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
              readOnly: true

          # Resource requests and limits
          resources:
            requests:
              cpu: 100m          # Guaranteed minimum of 0.1 CPU core
              memory: 128Mi      # Guaranteed minimum of 128MB memory
            limits:
              cpu: 500m          # At most 0.5 CPU core
              memory: 256Mi      # At most 256MB memory

          # Liveness probe -- detects whether the container needs a restart
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10    # Wait 10s after the container starts
            periodSeconds: 15          # Check every 15s
            timeoutSeconds: 3          # A 3s timeout counts as a failure
            failureThreshold: 3        # Restart after 3 consecutive failures

          # Readiness probe -- detects whether the container can receive traffic
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3

          # Startup probe -- protects slow-starting applications
          startupProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 0
            periodSeconds: 5
            failureThreshold: 30       # Allow up to 150s for startup

      volumes:
        - name: config-volume
          configMap:
            name: go-app-config
            items:
              - key: config.yaml
                path: config.yaml

      # Credentials for pulling from a private registry
      imagePullSecrets:
        - name: registry-credentials

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: go-app-svc
  namespace: go-app
  labels:
    app: go-app
spec:
  selector:
    app: go-app
  ports:
    - name: http
      port: 80               # Port exposed by the Service
      targetPort: 8080        # Port forwarded to the container
      protocol: TCP
  type: ClusterIP

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-app-ingress
  namespace: go-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-app-svc
                port:
                  number: 80

4.2 One-Click Deployment

# Deploy all resources in order
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/ingress.yaml

# Or deploy the whole directory at once
kubectl apply -f k8s/

# Check deployment status
kubectl get all -n go-app
kubectl rollout status deployment/go-app -n go-app

V. Health Checks (Liveness / Readiness Probe)

5.1 Purpose of the Three Probes

Probe            Purpose                                Consequence of Failure
livenessProbe    Is the container alive?                The container is restarted
readinessProbe   Is the container ready for traffic?    Removed from Service endpoints; receives no traffic
startupProbe     Has the container finished starting?   Keeps liveness from killing the container during startup

5.2 Probe Types

# 1. HTTP GET probe (most common)
livenessProbe:
  httpGet:
    path: /health
    port: 8080
    httpHeaders:
      - name: X-Custom-Header
        value: "probe"

# 2. TCP socket probe (for non-HTTP services)
livenessProbe:
  tcpSocket:
    port: 8080

# 3. Exec command probe (custom check logic)
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - "wget -q --spider http://localhost:8080/health"

# 4. gRPC probe (K8s 1.24+)
livenessProbe:
  grpc:
    port: 50051

5.3 Implementing Health Check Endpoints in Go

package main

import (
    "database/sql"
    "encoding/json"
    "net/http"
    "sync/atomic"
)

// Global readiness flag
var isReady atomic.Bool

type HealthResponse struct {
    Status   string            `json:"status"`
    Details  map[string]string `json:"details,omitempty"`
}

// RedisClient stands in for a go-redis style client (e.g. *redis.Client).
func healthHandler(db *sql.DB, redisClient RedisClient) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        resp := HealthResponse{
            Status:  "ok",
            Details: make(map[string]string),
        }
        statusCode := http.StatusOK

        // Check the database connection
        if err := db.Ping(); err != nil {
            resp.Status = "degraded"
            resp.Details["mysql"] = "unhealthy: " + err.Error()
            statusCode = http.StatusServiceUnavailable
        } else {
            resp.Details["mysql"] = "healthy"
        }

        // Check the Redis connection
        if err := redisClient.Ping(r.Context()).Err(); err != nil {
            resp.Status = "degraded"
            resp.Details["redis"] = "unhealthy: " + err.Error()
            statusCode = http.StatusServiceUnavailable
        } else {
            resp.Details["redis"] = "healthy"
        }

        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(statusCode)
        json.NewEncoder(w).Encode(resp)
    }
}

// readinessHandler is the readiness check -- can the service receive traffic?
func readinessHandler() http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        if !isReady.Load() {
            http.Error(w, "not ready", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
        w.Write([]byte("ready"))
    }
}

func main() {
    // Initialize the database, Redis, etc. ...
    // db, redisClient := initDeps()

    mux := http.NewServeMux()

    // Liveness probe -- simply returning 200 is enough
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })

    // Readiness probe -- checks dependent services
    mux.HandleFunc("/readyz", readinessHandler())

    // Detailed health information (internal use)
    // mux.HandleFunc("/health", healthHandler(db, redisClient))

    // Mark as ready once initialization is complete
    isReady.Store(true)

    http.ListenAndServe(":8080", mux)
}

Corresponding K8s configuration:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15

readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

VI. Horizontal Pod Autoscaler (HPA)

6.1 HPA Working Principle

HPA (Horizontal Pod Autoscaler) automatically adjusts the number of replicas based on Pod CPU/memory utilization or custom metrics:

                     ┌────────────┐
  Metrics ──────────▶│    HPA     │
  (CPU/memory/       │ controller │
   custom)           └─────┬──────┘
                           │ adjusts replicas
                           ▼
                     ┌────────────┐
                     │ Deployment │
                     └─────┬──────┘
                           │
                ┌──────────┼──────────┐
                ▼          ▼          ▼
             ┌─────┐    ┌─────┐    ┌─────┐
             │Pod 1│    │Pod 2│    │Pod 3│  ← auto-scaled
             └─────┘    └─────┘    └─────┘
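Concretely, the HPA controller computes the target replica count as `desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)`, clamped to the configured min/max bounds. A sketch of that arithmetic:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the documented HPA scaling formula:
// ceil(currentReplicas * currentMetric / targetMetric), clamped to
// [minReplicas, maxReplicas].
func desiredReplicas(current int, currentMetric, targetMetric float64, minReplicas, maxReplicas int) int {
	d := int(math.Ceil(float64(current) * currentMetric / targetMetric))
	if d < minReplicas {
		d = minReplicas
	}
	if d > maxReplicas {
		d = maxReplicas
	}
	return d
}

func main() {
	// 3 replicas at 90% average CPU against a 70% target → scale to 4
	fmt.Println(desiredReplicas(3, 90, 70, 2, 10))
}
```

The real controller adds tolerances and the `behavior` stabilization windows on top of this core formula.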

6.2 Prerequisites

HPA requires metrics-server to obtain resource usage data:

# Install metrics-server (Minikube ships it as an addon; cloud clusters usually have it preinstalled)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify
kubectl top pods
kubectl top nodes

6.3 HPA Configuration

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-app-hpa
  namespace: go-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-app
  minReplicas: 2            # At least 2 replicas
  maxReplicas: 10           # At most 10 replicas
  metrics:
    # Based on CPU utilization
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # Scale up when CPU utilization exceeds 70%
    # Based on memory utilization
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80    # Scale up when memory utilization exceeds 80%
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60    # Scale-up stabilization window
      policies:
        - type: Pods
          value: 2                      # Add at most 2 Pods at a time
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300   # Scale-down stabilization window (5 minutes)
      policies:
        - type: Percent
          value: 25                     # Remove at most 25% of Pods at a time
          periodSeconds: 60

# Create a simple HPA from the command line
kubectl autoscale deployment go-app \
  --min=2 --max=10 --cpu-percent=70 \
  -n go-app

# Check HPA status
kubectl get hpa -n go-app
kubectl describe hpa go-app-hpa -n go-app

# Load-test to watch the autoscaling (with tools like hey or wrk)
hey -n 10000 -c 100 http://api.example.com/

VII. Rolling Updates and Rollbacks

7.1 Rolling Update Strategy

Configure in the strategy field of the Deployment:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # At most 1 extra Pod during the update
      maxUnavailable: 0     # No unavailable Pods during the update (zero downtime)

Update process illustration (3 replicas, maxSurge=1, maxUnavailable=0):

Initial state: [v1] [v1] [v1]                total=3, available=3
Step 1:        [v1] [v1] [v1] [v2]           create a new-version Pod             total=4, available=3
Step 2:        [v1] [v1]      [v2]           terminate an old Pod once ready      total=3, available=3
Step 3:        [v1] [v1]      [v2] [v2]      total=4, available=3
Step 4:        [v1]           [v2] [v2]      total=3, available=3
Step 5:        [v1]           [v2] [v2] [v2] total=4, available=3
Step 6:                       [v2] [v2] [v2] done!  total=3, available=3
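The invariant behind each step above is a pair of bounds derived from maxSurge and maxUnavailable (assuming absolute Pod counts rather than percentages):

```go
package main

import "fmt"

// updateBounds computes the Pod-count envelope a RollingUpdate must respect:
// at most replicas+maxSurge Pods exist at once, and at least
// replicas-maxUnavailable Pods stay available throughout.
func updateBounds(replicas, maxSurge, maxUnavailable int) (maxTotal, minAvailable int) {
	return replicas + maxSurge, replicas - maxUnavailable
}

func main() {
	// The 3-replica example above: total never exceeds 4, available never drops below 3.
	maxTotal, minAvailable := updateBounds(3, 1, 0)
	fmt.Printf("total <= %d, available >= %d\n", maxTotal, minAvailable)
}
```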

7.2 Triggering Updates

# Option 1: update the image version
kubectl set image deployment/go-app app=myapp:v2.0 -n go-app

# Option 2: edit the YAML, then apply
kubectl apply -f k8s/deployment.yaml

# Option 3: use patch
kubectl patch deployment go-app -n go-app \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"myapp:v2.0"}]}}}}'

# Watch the update status
kubectl rollout status deployment/go-app -n go-app

# View update history
kubectl rollout history deployment/go-app -n go-app
kubectl rollout history deployment/go-app -n go-app --revision=2

7.3 Rollbacks

# Roll back to the previous revision
kubectl rollout undo deployment/go-app -n go-app

# Roll back to a specific revision
kubectl rollout undo deployment/go-app -n go-app --to-revision=1

# Pause a rollout (for canary-release scenarios)
kubectl rollout pause deployment/go-app -n go-app

# Resume the rollout
kubectl rollout resume deployment/go-app -n go-app

7.4 Graceful Shutdown of Go Applications

To achieve true zero-downtime rolling updates, Go applications need to correctly handle SIGTERM signals:

package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello, K8s!"))
    })
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })

    server := &http.Server{
        Addr:         ":8080",
        Handler:      mux,
        ReadTimeout:  10 * time.Second,
        WriteTimeout: 10 * time.Second,
    }

    // Start the server in a goroutine
    go func() {
        log.Println("Server starting on :8080")
        if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("Server error: %v", err)
        }
    }()

    // Wait for a termination signal
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    sig := <-quit
    log.Printf("Received signal: %v, shutting down gracefully...", sig)

    // Give in-flight requests a window to complete
    ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()

    if err := server.Shutdown(ctx); err != nil {
        log.Fatalf("Server forced to shutdown: %v", err)
    }

    log.Println("Server exited gracefully")
}

Set the graceful termination period in the corresponding Deployment configuration:

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30   # Give the Pod 30 seconds to exit gracefully
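One refinement worth considering, beyond the code above: when SIGTERM arrives, the Pod may still be listed in the Service's endpoints for a moment, so new requests can land on it mid-shutdown. A common pattern is to fail the readiness probe first, wait briefly for kube-proxy to observe the endpoint change, and only then call Shutdown. A sketch with demo-sized delays and a self-sent SIGTERM so it runs to completion (in the cluster, the kubelet sends the signal and the delays would be a few seconds):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

var ready atomic.Bool

func main() {
	mux := http.NewServeMux()
	// Readiness endpoint: starts failing as soon as draining begins.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if !ready.Load() {
			http.Error(w, "draining", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	server := &http.Server{Addr: "127.0.0.1:0", Handler: mux} // ephemeral port for this sketch
	go server.ListenAndServe()
	ready.Store(true)

	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)

	// Demo only: simulate the kubelet sending SIGTERM shortly after startup.
	go func() {
		time.Sleep(100 * time.Millisecond)
		syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
	}()
	<-quit

	// 1. Fail readiness so the endpoint controller removes this Pod.
	ready.Store(false)
	// 2. Give kube-proxy time to see the endpoint change (a few seconds in production).
	time.Sleep(200 * time.Millisecond)
	// 3. Drain in-flight requests, bounded below terminationGracePeriodSeconds.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
	log.Println("drained and exited")
}
```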

VIII. Differences and Relationship Between K8s and Docker Compose

8.1 Different Orientations

Dimension                 Docker Compose                       Kubernetes
Orientation               Single-host multi-container          Cluster-level container orchestration
Scale                     One host                             Tens to thousands of hosts
High availability         Not supported                        Built in (automatic Pod restart/migration)
Auto scaling              Not supported                        Built-in HPA/VPA
Rolling updates           Limited support                      Full rolling updates and rollbacks
Service discovery         Container-name DNS                   Built-in Service and DNS
Load balancing            Basic round-robin                    Multiple strategies, Ingress support
Configuration management  Environment variables, file mounts   ConfigMap, Secret
Learning curve            Low                                  High
Use cases                 Local development, small projects    Production, large projects

8.2 Mapping from Docker Compose to K8s

Docker Compose               Kubernetes
──────────────               ──────────
docker-compose.yml    →    multiple YAML files (or a Helm chart)
service (definition)  →    Deployment + Service
ports                 →    Service (NodePort/LoadBalancer) + Ingress
environment           →    ConfigMap + Secret
volumes               →    PersistentVolumeClaim (PVC)
depends_on            →    initContainers / readiness probes
build                 →    images pre-built in a CI/CD pipeline
networks              →    Namespace + NetworkPolicy
restart: always       →    Pod restartPolicy + Deployment controller

8.3 Migration Path

Recommended learning and migration path:

1. Local development      → Docker Compose
2. Test environment       → Docker Compose or Minikube
3. Staging environment    → K8s (Kind or a cloud provider's managed K8s)
4. Production environment → K8s (managed services recommended: AKS/EKS/GKE/ACK)

8.4 Kompose —— Quick Conversion Tool

You can use the kompose tool to convert docker-compose.yml to K8s YAML:

# Install kompose
brew install kompose         # macOS
# or download the binary

# Convert docker-compose.yml into K8s resource files
kompose convert
# This generates deployment.yaml, service.yaml, and so on

# Deploy directly
kompose up

# Note: files generated by kompose usually need manual tuning

IX. Local Development Environment

9.1 Minikube

Minikube is the most popular local K8s tool, creating a single-node cluster on your machine:

# Install (macOS)
brew install minikube

# Start a cluster
minikube start
minikube start --cpus=4 --memory=8192 --driver=docker

# Check status
minikube status

# Common addons
minikube addons enable ingress           # Nginx Ingress
minikube addons enable metrics-server    # Resource monitoring
minikube addons enable dashboard         # Web management UI

# Open the Dashboard
minikube dashboard

# Use Minikube's built-in Docker (avoids pushing to a remote registry)
eval $(minikube docker-env)
docker build -t myapp:v1.0 .            # Builds straight into Minikube's Docker
# Set imagePullPolicy: Never in the Deployment

# Access a Service
minikube service go-app-svc -n go-app    # Opens a browser automatically
minikube tunnel                          # Enables LoadBalancer support

# Stop/delete
minikube stop
minikube delete

9.2 Kind (Kubernetes in Docker)

Kind uses Docker containers to simulate K8s nodes, making it lighter and suitable for CI/CD environments:

# Install (macOS)
brew install kind

# Create a cluster (single node by default)
kind create cluster --name my-cluster

# Create a multi-node cluster
cat <<EOF | kind create cluster --name multi-node --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

# Load a local image into the Kind cluster (no registry push needed)
kind load docker-image myapp:v1.0 --name my-cluster

# List clusters
kind get clusters
kubectl cluster-info --context kind-my-cluster

# Delete a cluster
kind delete cluster --name my-cluster

9.3 Minikube vs Kind Comparison

Feature                Minikube                         Kind
Implementation         VM or Docker container           Docker containers
Multi-node support     Limited                          Native
Startup speed          Slower (30-60s)                  Faster (20-30s)
Resource usage         Higher                           Lower
Built-in addons        Rich (dashboard, etc.)           Fewer
CI/CD suitability      Average                          Excellent
Learning friendliness  High (has a dashboard)           Medium
Recommended scenarios  Local development and learning   CI/CD and testing

9.4 Complete Local Development Workflow

# 1. Start a local K8s cluster
minikube start --cpus=4 --memory=8192
minikube addons enable ingress
minikube addons enable metrics-server

# 2. Build the image in Minikube's Docker environment
eval $(minikube docker-env)
docker build -t go-app:dev .

# 3. Deploy the application (with imagePullPolicy: Never)
kubectl apply -f k8s/

# 4. Check status
kubectl get all -n go-app

# 5. Access the application
minikube service go-app-svc -n go-app
# or
kubectl port-forward svc/go-app-svc 8080:80 -n go-app

# 6. View logs
kubectl logs -f -l app=go-app -n go-app

# 7. Rebuild and redeploy after code changes
docker build -t go-app:dev .
kubectl rollout restart deployment/go-app -n go-app

# 8. Clean up when development is done
kubectl delete -f k8s/
minikube stop

X. Summary and Learning Path

10.1 Core Knowledge Review

K8s core concepts
├── Scheduling unit: Pod
├── Workloads: Deployment / StatefulSet / DaemonSet
├── Service discovery: Service / Ingress
├── Configuration: ConfigMap / Secret
├── Storage: PersistentVolume / PVC
├── Autoscaling: HPA / VPA
├── Isolation: Namespace / NetworkPolicy
└── Update strategies: RollingUpdate / Recreate

10.2 Recommended Learning Path

Stage 1 (getting started):
  Docker basics → Dockerfile → docker-compose → this chapter

Stage 2 (intermediate):
  Helm chart packaging → K8s storage (PV/PVC) → StatefulSet (stateful services)
  → RBAC access control → NetworkPolicy

Stage 3 (production practice):
  CI/CD pipelines (GitLab CI / GitHub Actions / ArgoCD)
  → Monitoring and alerting (Prometheus + Grafana)
  → Log collection (EFK/ELK stack)
  → Service mesh (Istio / Linkerd)

Stage 4 (operations deep dive):
  Cluster management and operations → security hardening → multi-cluster management
  → custom Controllers / Operators

10.3 Recommended Tools

Tool            Purpose                              Installation
kubectl         K8s command line                     brew install kubectl
minikube        Local cluster                        brew install minikube
kind            Lightweight local cluster            brew install kind
helm            K8s package manager                  brew install helm
k9s             Terminal UI management tool          brew install k9s
lens            Desktop GUI management tool          Download from the official site
kubectx/kubens  Quick context/namespace switching    brew install kubectx
stern           Multi-Pod log aggregation            brew install stern
skaffold        Development workflow automation      brew install skaffold

Previous: 015 - Docker Containerization
