Project Structure Description: user-web Module
user-web is the user service Web layer module in the joyshop_api project, responsible for handling user-related HTTP requests, parameter validation, business routing, and calling backend interfaces. Below is the directory structure description:
user-web/
├── api/          # Controller layer: defines business interface handling logic
├── config/       # Configuration module: config structs and loading logic
├── forms/        # Request parameter structs and validation logic, mainly for form parsing and binding
├── global/       # Global objects: database connections, config, loggers, and other global variables
├── initialize/   # System initialization: database, router, config loading, etc.
├── middlewares/  # Middleware definitions: authentication, request logging, CORS, etc.
├── proto/        # gRPC-generated protobuf files, used to communicate with backend services
├── router/       # Route registration: binds APIs to concrete paths
├── utils/        # Utility functions: pagination, encryption/decryption, conversion, etc.
├── validator/    # Custom parameter validators used with form validation rules
└── main.go       # Entry point: loads config, initializes components, and starts the service
Quick Start
# build and run the user-web service
go run user-web/main.go
Notes
- Check the configuration file path and format in initialize/config.go.
- The routing entry point is in router/router.go, where you can see the API grouping and binding.
- If gRPC is used, make sure the generated proto files are correctly referenced.
Go Logging Library zap Usage Guide
zap is an open-source high-performance structured logging library from Uber, widely used in Go projects.
📦 Installation
go get -u go.uber.org/zap
🚀 Basic Usage
package main

import (
	"time"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewProduction()
	defer logger.Sync() // flush any buffered log entries to the output

	logger.Info("this is an Info log",
		zap.String("url", "http://example.com"),
		zap.Int("attempt", 3),
		zap.Duration("backoff", 200*time.Millisecond), // zap.Duration expects a time.Duration, not a bare int
	)
}
🛠️ Custom Log Configuration
config := zap.NewDevelopmentConfig()
config.OutputPaths = []string{"stdout", "./log/zap.log"}
logger, err := config.Build()
if err != nil {
	panic(err)
}
defer logger.Sync()

logger.Debug("log entry with custom configuration")
🧩 Common Field Types
- zap.String(key string, val string)
- zap.Int(key string, val int)
- zap.Bool(key string, val bool)
- zap.Time(key string, val time.Time)
- zap.Any(key string, val interface{})
📚 More Documentation
Official documentation: https://pkg.go.dev/go.uber.org/zap
GitHub repository: https://github.com/uber-go/zap
package main

import (
	"time"

	"go.uber.org/zap"
)

// NewLogger builds a production Logger with a custom output path.
func NewLogger() (*zap.Logger, error) {
	cfg := zap.NewProductionConfig()
	cfg.OutputPaths = []string{
		"./myproject.log", // write logs to myproject.log in the current directory
	}
	return cfg.Build()
}

func main() {
	// initialize the logger
	logger, err := NewLogger()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// obtain a SugaredLogger, which provides a more concise, printf-style API
	su := logger.Sugar()

	url := "https://imooc.com"
	// Infow takes a message followed by loosely typed key-value pairs;
	// strongly typed zap fields belong to the base Logger, not the sugared one
	su.Infow("failed to fetch URL",
		"url", url,
		"attempt", 3,
		"backoff", time.Second,
	)
}
Go Configuration Management - Viper
1. Introduction
Viper is a complete configuration solution for Go applications. It is designed to work within applications and can handle all types of configuration needs and formats. It supports the following features:
- Setting default values
- Reading configuration from JSON, TOML, YAML, HCL, .env, and Java properties format configuration files
- Live watching and re-reading of config files (optional)
- Reading from environment variables
- Reading from remote config systems (like etcd or Consul) and watching for changes
- Reading from command-line arguments
- Reading from a buffer
- Explicitly setting values
2. YAML Tutorial
Tutorial URL: [Not yet provided]
3. Installation
go get github.com/spf13/viper
GitHub Address: spf13/viper
4. Usage Example
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// set the configuration file name and type
	viper.SetConfigName("config")
	viper.SetConfigType("yaml")
	viper.AddConfigPath(".") // directory to search for the config file

	// read the configuration
	if err := viper.ReadInConfig(); err != nil {
		panic(fmt.Errorf("fatal error config file: %w", err))
	}

	// access configuration values
	port := viper.GetInt("server.port")
	fmt.Printf("Server Port: %d\n", port)
}
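For the snippet above to find a value at `server.port`, a `config.yaml` in the working directory could look like this (layout assumed for illustration):

```yaml
server:
  port: 8080
```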
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

type ServerConfig struct {
	ServiceName string `mapstructure:"name"`
	Port        int    `mapstructure:"port"`
}

func main() {
	v := viper.New()
	v.SetConfigName("config")
	v.SetConfigType("yaml")
	v.AddConfigPath("./viper_test/ch01")
	if err := v.ReadInConfig(); err != nil {
		panic(err)
	}

	// read individual keys
	name := v.GetString("name")
	fmt.Println(name)
	age := v.GetInt("age")
	fmt.Println(age)

	// unmarshal the whole file into a struct
	sCfg := &ServerConfig{}
	if err := v.Unmarshal(sCfg); err != nil {
		panic(err)
	}
	fmt.Println(sCfg.ServiceName)
	fmt.Println(sCfg.Port)
}
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

type MysqlConfig struct {
	Host string `mapstructure:"host"`
	Port int    `mapstructure:"port"`
}

type ServerConfig struct {
	ServiceName string      `mapstructure:"name"`
	MysqlInfo   MysqlConfig `mapstructure:"mysql"`
}

func main() {
	v := viper.New()
	v.SetConfigName("config")
	v.SetConfigType("yaml")
	v.AddConfigPath("./viper_test/ch02")
	if err := v.ReadInConfig(); err != nil {
		panic(err)
	}

	sCfg := &ServerConfig{}
	if err := v.Unmarshal(sCfg); err != nil {
		panic(err)
	}
	fmt.Println(sCfg)
}
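A `config.yaml` under `./viper_test/ch02` matching the nested structs above might look like this (values are illustrative):

```yaml
name: user-web
mysql:
  host: 127.0.0.1
  port: 3306
```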
Without changing any code, you can distinguish offline development from online production configuration files and environments. You can also dynamically watch for configuration changes with `v.WatchConfig()` and react to them via `v.OnConfigChange(func(e fsnotify.Event) { fmt.Println("config file changed:", e.Name) })`.
What is JWT?
JWT (JSON Web Token) is an open standard (RFC 7519) for securely transmitting information between network application environments. A JWT is a string composed of three parts:
- Header
- Payload
- Signature
Structure example:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ
.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
Use Cases
- Frontend-backend separation login authentication
- User identity authentication
- Access control
JWT Login Authentication Process
1. User Login
- The user submits their username and password to the backend.
2. Server Validates User Information
- Upon successful validation, a JWT is generated and returned to the frontend.
- The JWT typically contains information such as user ID and expiration time.
3. Frontend Stores Token
- Usually stored in localStorage or sessionStorage, or it can be stored in a Cookie.
4. Sending Requests with Token
- When the frontend sends subsequent requests, the JWT is placed in the HTTP request header, for example:
Authorization: Bearer <your_token>
5. Backend Validates Token
- Backend middleware extracts and validates the JWT.
- If validation passes, the request is processed; otherwise, a 401 Unauthorized response is returned.
Usage Example (Node.js + Express + jsonwebtoken)
Install Dependencies
npm install express jsonwebtoken body-parser
Login API (Generate Token)
const express = require('express');
const jwt = require('jsonwebtoken');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.json());
const SECRET_KEY = 'your_secret_key';
app.post('/login', (req, res) => {
  const { username, password } = req.body;
  if (username === 'admin' && password === '123456') {
    const token = jwt.sign({ username }, SECRET_KEY, { expiresIn: '1h' });
    res.json({ token });
  } else {
    res.status(401).json({ message: 'Login failed' });
  }
});
Authentication Middleware
function authMiddleware(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  if (!token) return res.sendStatus(401);
  jwt.verify(token, SECRET_KEY, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
}
Protected API
app.get('/protected', authMiddleware, (req, res) => {
  res.json({ message: 'Access granted', user: req.user });
});
Start Server
app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
Notes
- Do not put sensitive information into the JWT Payload.
- Regularly update the key (SECRET_KEY) to enhance security.
- Control the Token's expiration time to avoid risks associated with long-term validity.
Graphical Captcha
mojotv.cn/go/refactor-base64-captcha
Configuration File-Based Microservice Solution (Service Registry)
What is Service Registration and Discovery
Suppose this product is already running in production, and one day operations wants to launch a promotional campaign. Our [User Service] might then need to spin up several new microservice instances to support the load. Meanwhile, some long-suffering programmer would have to manually add the IP and port of each newly added microservice instance to the API gateway. A real production microservice system may have hundreds or thousands of microservices; would you really add them one by one by hand? Is there a way for the system to do this automatically? The answer, of course, is yes.
When we add a microservice instance, the microservice sends its IP and port to the service registry, where they are recorded. When the API gateway needs to access these microservices, it will find the corresponding IP and port in the service registry, thereby achieving automated operations.
Technology Selection
Comparison of Consul with other common service discovery frameworks:
| Name | Advantages | Disadvantages | Interface | Consistency Algorithm |
|---|---|---|---|---|
| zookeeper | 1. Powerful, not just for service discovery 2. Provides a watcher mechanism for real-time status of service providers 3. Supported by frameworks like Dubbo | 1. No health checks 2. Requires SDK integration in services, high complexity 3. Does not support multi-datacenter | sdk | Paxos |
| consul | 1. Simple and easy to use, no SDK integration required 2. Built-in health checks 3. Supports multi-datacenter 4. Provides a web management interface | 1. Cannot get real-time notifications of service information changes | http/dns | Raft |
| etcd | 1. Simple and easy to use, no SDK integration required 2. Highly configurable | 1. No health checks 2. Requires third-party tools for service discovery 3. Does not support multi-datacenter | http | Raft |
Deploying Consul with Docker Compose (Latest Stable Version)
I. Prerequisites
It is recommended to prepare the following structure in your project directory for persisting Consul data and supporting configuration mounting:
.
├── docker-compose.yaml
└── consul
    ├── config   # JSON or HCL configuration files go here
    └── data     # Consul data is persisted here
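As an example of what could live in `consul/config`, here is a hypothetical service-definition file in JSON (the service name, address, port, and health endpoint are illustrative, mirroring the registration code later in this post):

```json
{
  "service": {
    "name": "user-web",
    "port": 8022,
    "tags": ["joyshop"],
    "check": {
      "http": "http://192.168.1.7:8022/health",
      "interval": "5s",
      "timeout": "5s"
    }
  }
}
```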
II. docker-compose.yaml Configuration Content
version: '3.8' # Note: this field is not required in Compose V2, but keeping it is harmless
services:
  consul:
    image: hashicorp/consul:latest
    container_name: consul
    restart: unless-stopped
    ports:
      - '8500:8500' # Web UI and HTTP API
      - '8600:8600/udp' # DNS (UDP)
      - '8600:8600' # DNS (TCP)
    volumes:
      - ./consul/data:/consul/data
      - ./consul/config:/consul/config
    command: agent -server -bootstrap -ui -client=0.0.0.0 -data-dir=/consul/data -config-dir=/consul/config
III. Start Consul
Run the following command in the current directory to start the container:
docker-compose up -d
After startup, the Consul Web UI can be accessed at the following address:
http://localhost:8500
IV. Explanation
- `version: '3.8'`: This field is no longer required in Compose V2 and can be omitted.
- `-client=0.0.0.0`: Allows external hosts to access the Consul service.
- `-bootstrap`: Enables single-node bootstrap mode, suitable for development or testing environments. For production deployments, use `-bootstrap-expect=N` to configure the expected number of cluster nodes instead of bootstrap mode.
The DNS service must be working. We can test it with dig:
dig @127.0.0.1 -p 8600 consul.service.consul SRV
; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 consul.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19421
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul. IN SRV
;; ANSWER SECTION:
consul.service.consul. 0 IN SRV 1 1 8300 d3fd490264e2.node.dc1.consul.
;; ADDITIONAL SECTION:
d3fd490264e2.node.dc1.consul. 0 IN A 172.21.0.2
d3fd490264e2.node.dc1.consul. 0 IN TXT "consul-network-segment="
d3fd490264e2.node.dc1.consul. 0 IN TXT "consul-version=1.21.0"
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Thu May 08 11:52:56 +07 2025
;; MSG SIZE rcvd: 184
- Add a service: https://www.consul.io/api-docs/agent/service#register-service
- Delete a service: https://www.consul.io/api-docs/agent/service#deregister-service
- Set up health checks: https://www.consul.io/api-docs/agent/check
- Register multiple instances of the same service (use a different ID for each registration)
- Get services: https://www.consul.io/api-docs/agent/check#list-checks
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	// create a Consul client and exercise the helpers below
	//_ = Register("192.168.1.7", 8022, "user-web", []string{"joyshop", "test", "walker"}, "user-web")
	//AllService()
	FilterService()
}
// Register registers a service instance, with an HTTP health check, at the Consul agent.
func Register(address string, port int, name string, tags []string, id string) error {
	cfg := api.DefaultConfig()
	cfg.Address = "192.168.1.7:8500"
	client, err := api.NewClient(cfg)
	if err != nil {
		return err
	}

	registration := new(api.AgentServiceRegistration)
	registration.ID = id
	registration.Name = name
	registration.Address = address
	registration.Port = port
	registration.Tags = tags

	// build the corresponding health-check object
	check := new(api.AgentServiceCheck)
	check.HTTP = fmt.Sprintf("http://%s:%d/health", address, port)
	check.Interval = "5s"
	check.Timeout = "5s"
	check.DeregisterCriticalServiceAfter = "10s"
	registration.Check = check

	return client.Agent().ServiceRegister(registration)
}
// AllService lists every service registered with the local agent.
func AllService() {
	cfg := api.DefaultConfig()
	cfg.Address = "192.168.1.7:8500"
	client, err := api.NewClient(cfg)
	if err != nil {
		panic(err)
	}
	services, err := client.Agent().Services()
	if err != nil {
		panic(err)
	}
	for _, service := range services {
		fmt.Println(service.Service)
	}
}

// FilterService lists only the services matching the given filter expression.
func FilterService() {
	cfg := api.DefaultConfig()
	cfg.Address = "192.168.1.7:8500"
	client, err := api.NewClient(cfg)
	if err != nil {
		panic(err)
	}
	services, err := client.Agent().ServicesWithFilter(`Service == "user-web"`)
	if err != nil {
		panic(err)
	}
	for _, service := range services {
		fmt.Println(service.Service)
	}
}
Dynamically Get Available Ports
grpc-consul-resolver
package main

import (
	"context"
	"log"

	"GormStart/grpclb/proto"

	_ "github.com/mbobakov/grpc-consul-resolver" // blank import registers the "consul" resolver scheme
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.NewClient(
		"consul://192.168.1.7:8500/user-srv?wait=14s&tag=joyshop",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy": "round_robin"}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	userSrvClient := proto.NewUserClient(conn)
	rsp, err := userSrvClient.GetUserList(context.Background(), &proto.PageInfo{
		Page:     1,
		PageSize: 2,
	})
	if err != nil {
		log.Fatal(err)
	}
	for index, data := range rsp.Data {
		log.Printf("record %d: %v", index, data)
	}
}
Distributed Configuration Center
1. Why a Distributed Configuration Center is Needed
We currently have a project developed using `gin`, and we know the configuration file is named config.yaml.
We also know that this configuration file will be loaded into memory and used when the project starts.
Consider Two Scenarios
a. Adding Configuration Items
i. If your user service currently has 10 deployed instances, adding a configuration item means changing the configuration file in ten places and then restarting every instance.
ii. Even though Go's `viper` can pick up configuration-file changes automatically, consider whether services written in other languages can do the same, and whether every service will even use `viper`.
b. Modifying Configuration Items
Many services may share the same configuration. For example, if I want to rotate the `jwt` secret, what do I do with so many instances?
c. How to Isolate Development, Testing, and Production Environments
Although `viper` was introduced earlier, the same problem remains: how do you unify so many services, and what needs to be considered?
Nacos
version: '3.8'
services:
  nacos:
    image: nacos/nacos-server:v2.3.2
    container_name: nacos-standalone
    ports:
      - '8848:8848' # Web UI & API
      - '9848:9848' # gRPC port (used by Nacos 2.x)
      - '9849:9849' # gRPC port
    environment:
      MODE: standalone
      NACOS_AUTH_ENABLE: 'false'
      JVM_XMS: 256m
      JVM_XMX: 512m
      JVM_XMN: 128m
    volumes:
      - ./nacos-data:/home/nacos/data
    restart: unless-stopped
Namespace: isolates configuration sets; specific configurations can be placed under a specific namespace. Namespaces are used to distinguish microservices.
Group: distinguishes environments (dev, test, prod).
dataId: can be understood as a single configuration file.
In Go, configuration information can be retrieved from Nacos, and configuration changes can be listened for.
Theme test article, for testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/archives/6753