
Go Language Example for File Upload to AWS S3

This example demonstrates how to upload local files to Amazon S3 using Go and the AWS SDK for Go v2.

🧾 Prerequisites

  • An AWS account;
  • An S3 bucket created in that account;
  • AWS credentials configured (via aws configure or environment variables);
  • A local file to upload (e.g., test.jpg).

📦 Install Dependencies

go mod init s3uploadtest
go get github.com/aws/aws-sdk-go-v2
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/s3

🧑‍💻 Example Code

package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path/filepath"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    bucket := "your-bucket-name" // replace with your S3 bucket name
    region := "ap-southeast-1"   // replace with your region
    key := "uploads/test.jpg"    // object key (path) in S3 after upload
    filePath := "./test.jpg"     // local file path

    // Load the AWS configuration
    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion(region))
    if err != nil {
        log.Fatalf("failed to load AWS config: %v", err)
    }

    // Create an S3 client
    client := s3.NewFromConfig(cfg)

    // Open the local file
    file, err := os.Open(filePath)
    if err != nil {
        log.Fatalf("failed to open file: %v", err)
    }
    defer file.Close()

    // Get the file size and content type
    fileInfo, err := file.Stat()
    if err != nil {
        log.Fatalf("failed to stat file: %v", err)
    }
    contentType := detectContentType(filePath)

    // Perform the upload
    _, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket:        aws.String(bucket),
        Key:           aws.String(key),
        Body:          file,
        ContentLength: aws.Int64(fileInfo.Size()),
        ContentType:   aws.String(contentType),
    })
    if err != nil {
        log.Fatalf("upload failed: %v", err)
    }

    fmt.Println("Upload succeeded ✅")
    fmt.Printf("Object URL: https://%s.s3.amazonaws.com/%s\n", bucket, key)
}

func detectContentType(path string) string {
    ext := filepath.Ext(path)
    switch ext {
    case ".jpg", ".jpeg":
        return "image/jpeg"
    case ".png":
        return "image/png"
    case ".gif":
        return "image/gif"
    case ".txt":
        return "text/plain"
    default:
        return "application/octet-stream"
    }
}
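The switch above covers only a handful of extensions. As an alternative sketch, Go's standard library mime package can map a file extension to a content type, with a fallback to a generic binary type for unknown extensions:

```go
package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// contentTypeFor looks up the MIME type for a file path via the
// standard library's extension table, falling back to a generic
// binary type when the extension is unknown.
func contentTypeFor(path string) string {
	if t := mime.TypeByExtension(filepath.Ext(path)); t != "" {
		return t
	}
	return "application/octet-stream"
}

func main() {
	fmt.Println(contentTypeFor("photo.png")) // image/png
	fmt.Println(contentTypeFor("report.zzz_unknown"))
}
```

Note that on Unix systems mime.TypeByExtension also consults system MIME tables, so it recognizes far more extensions than a hand-written switch.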

✅ Upload Verification

After the upload completes, the object is available at the following URL (assuming the bucket or object permits public reads):

https://your-bucket-name.s3.amazonaws.com/uploads/test.jpg

⚡ Frontend Direct Upload + S3 Lambda Callback Workflow

Frontend direct upload using a pre-signed URL, combined with an S3-triggered Lambda callback, is a common pattern for uploading files securely without routing them through the backend, while automatically kicking off follow-up processing after upload. The typical workflow is as follows:

  1. Frontend requests a pre-signed upload URL
     The frontend asks the backend for an S3 pre-signed PutObject URL with an expiration time; the backend generates the URL according to business logic and returns it to the frontend.

  2. Frontend uploads the file directly to S3
     Using that URL, the frontend uploads the file straight to S3, without passing through the backend server.

  3. S3 triggers the Lambda callback
     S3 event notifications (e.g., s3:ObjectCreated:*) automatically invoke the configured Lambda function when a new object is uploaded.

  4. Lambda processes the uploaded file
     Lambda reads the event details (bucket, key, object metadata) and runs follow-up processing such as thumbnail generation, format conversion, virus scanning, or database writes.

  5. (Optional) Lambda notifies the business system
     Once processing finishes, Lambda can notify the business system or users via an API call, a message queue, or other channels.

Example Flowchart

Frontend
  │
  ├─ request pre-signed URL ──► Backend
  │                               │
  │◄──── return URL ──────────────┘
  │
  ├─ upload file directly ──────► S3
  │
  └──────────────────────────────► S3 triggers Lambda
                                     │
                                     └─► Lambda processes / notifies

Key Points Explanation

  • Pre-signed URLs have a short validity period, controllable permissions, and avoid exposing keys.
  • S3 event notifications can be configured with prefix/suffix filtering for precise triggering.
  • Lambda permissions must allow reading S3 objects and accessing subsequent resources.
  • Can be combined with SNS/SQS for asynchronous notifications or batch processing.

🔐 Security Best Practices

  • Block public access by default: S3's Block Public Access feature should be left enabled.
  • Use pre-signed URLs for uploads/downloads: avoids making objects permanently public.
  • Never hardcode keys in code or Git: use a .env file or the AWS credentials file.
  • Grant IAM users least privilege: e.g., allow only GetObject on a specific bucket.
  • Enable S3 access logging: records who accessed your files.
  • Configure lifecycle rules: automatically clean up temporarily uploaded files.
  • Restrict S3 cross-origin (CORS) access: limit which origin sites may access the bucket.

Test article for theme testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/archives/6787
