Node: In-depth Yet Easy to Understand (Shengsi Garden Education) 002 [Study Notes]


Node's Package Management and Loading Mechanisms

npm search xxx   # search the registry for packages matching a keyword
npm view xxx     # show a package's published metadata (versions, deps, etc.)
npm install xxx  # install a package into node_modules and record it in package.json

Node.js File System Operation APIs

Node.js's fs module provides synchronous (Sync) and callback/Promise-based asynchronous APIs for operating on local files and directories. Capabilities commonly used in daily development include reading, writing, appending, deleting, traversing directories, and watching for changes. The following examples use CommonJS syntax; in an ES module, change the require calls to import.

Quick Overview of Common APIs

  • fs.readFile / fs.promises.readFile: Reads an entire file's contents in one go.
  • fs.writeFile / fs.promises.writeFile: Overwrites a file with new content, creating it if it doesn't exist.
  • fs.appendFile / fs.promises.appendFile: Appends content to the end of a file.
  • fs.mkdir / fs.promises.mkdir: Creates directories; with recursive: true it creates nested paths.
  • fs.readdir / fs.promises.readdir: Lists the entries in a directory.
  • fs.stat / fs.promises.stat: Returns details about a file or directory (size, type, permissions, etc.).
  • fs.access / fs.promises.access: Checks whether a path exists and whether the process has the given permissions.
  • fs.realpath / fs.promises.realpath: Resolves symbolic links and returns the absolute path.
  • fs.unlink / fs.promises.unlink: Deletes a file.
  • fs.rm / fs.promises.rm: Deletes files or directories; combine with recursive / force as needed.
  • fs.watch: Watches a file or directory for changes.
  • fs.createReadStream / fs.createWriteStream: Stream-based reading and writing, suited to large files and pipes.

Reading and Writing

const fs = require('node:fs/promises');

async function readAndWrite() {
  const content = await fs.readFile('./data.txt', 'utf8');
  console.log('original content:', content);

  await fs.writeFile('./output.txt', content.toUpperCase(), 'utf8');
  await fs.appendFile(
    './output.txt',
    '\n-- appended at ' + new Date().toISOString()
  );
}

readAndWrite().catch(console.error);

Directory Traversal and Details

const fs = require('node:fs/promises');
const path = require('node:path');

async function listDir(dir) {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  for (const entry of entries) {
    const fullPath = path.join(dir, entry.name);
    const stats = await fs.stat(fullPath);
    console.log({
      name: entry.name,
      isDirectory: entry.isDirectory(),
      size: stats.size,
      modified: stats.mtime,
    });
  }
}

listDir('./logs').catch(console.error);

Ensuring Directory Existence

const fs = require('node:fs/promises');

async function ensureDir(dir) {
  await fs.mkdir(dir, { recursive: true }); // create nested directories in one call
}

ensureDir('./uploads/images').catch(console.error);

Permission Check fs.access

fs.access(path[, mode]) can be used to check if a target path exists and what permissions the calling process has for it before actual reading and writing. mode defaults to fs.constants.F_OK (existence check only), and can also be bitwise combined with R_OK (readable), W_OK (writable), X_OK (executable). The asynchronous callback convention is "no error means success," and the Promise version will throw errors like ENOENT (does not exist) or EACCES (no permission) if the check fails.

const fs = require('node:fs/promises');

async function ensureWritableConfig() {
  try {
    await fs.access('./config/app.json', fs.constants.R_OK | fs.constants.W_OK);
    console.log('config file exists and is readable and writable');
  } catch (err) {
    if (err.code === 'ENOENT') {
      console.log('file does not exist, creating it...');
      await fs.writeFile('./config/app.json', '{}');
      return;
    }
    throw err; // let the caller decide how to report missing permissions etc.
  }
}

ensureWritableConfig().catch((err) => {
  console.error('permission check failed:', err);
});

Note: fs.access only reflects the state at the instant of the check; a later read or write can still fail if conditions change, so critical operations still need their own error handling.

Resolving Real Path fs.realpath

fs.realpath(path[, options]) resolves relative paths, symbolic links, . / .. segments, etc., and returns the normalized absolute path. By default, it returns a UTF-8 string; you can set options.encoding to 'buffer' to get a Buffer. The Promise version will throw an error if the path does not exist (ENOENT) or if there is a link loop (ELOOP).

const fs = require('node:fs/promises');

async function resolveUpload(pathLike) {
  const resolved = await fs.realpath(pathLike);
  if (!resolved.startsWith('/var/www/uploads')) {
    throw new Error('access out of bounds');
  }
  return resolved;
}

resolveUpload('./uploads/../uploads/avatar.jpg')
  .then((absPath) => console.log('real path:', absPath))
  .catch(console.error);

fs.realpath.native uses the native implementation provided by the operating system, which might be faster on some platforms but behave slightly differently (especially on Windows UNC paths). Unless there's a performance bottleneck, the regular version is generally preferred.

Deleting Files and Directories fs.rm

fs.rm(path[, options]) is the recommended deletion API since Node 14.14. It deletes files and symbolic links directly, and can remove non-empty directories when recursive: true is set. Common options:

  • recursive: Defaults to false; set to true to delete a directory tree recursively.
  • force: Defaults to false; when true, missing paths are ignored (no ENOENT) and deletion continues past inaccessible files.
  • maxRetries / retryDelay: Enable automatic retries, useful for handle contention on Windows.

const fs = require('node:fs/promises');

async function cleanUploadTmp() {
  await fs.rm('./uploads/tmp', {
    recursive: true,
    force: true, // no error if the path doesn't exist
  });
  console.log('temp directory cleaned up');
}

cleanUploadTmp().catch((err) => {
  console.error('deletion failed:', err);
});

The historical fs.rmdir(path, { recursive: true }) has been deprecated; use fs.rm consistently instead. When a directory is deleted and then rebuilt while concurrent writers exist, pair the deletion with fs.mkdir(..., { recursive: true }) and error handling to avoid race conditions.

Renaming and Moving Files

fs.rename / fs.promises.rename can rename files or directories within the same file system. The target path can include a new directory structure (if the directory does not exist, it needs to be created beforehand).

const fs = require('node:fs/promises');
const path = require('node:path');

async function renameLog() {
  const src = path.resolve('./logs/app.log');
  const destDir = path.resolve('./logs/archive');
  await fs.mkdir(destDir, { recursive: true });

  const dest = path.join(destDir, `app-${Date.now()}.log`);
  await fs.rename(src, dest);
  console.log(`moved to: ${dest}`);
}

renameLog().catch((err) => {
  if (err.code === 'ENOENT') {
    console.error('source file does not exist');
    return;
  }
  console.error('rename failed:', err);
});

fs.rename fails with EXDEV when moving a file across different disks or partitions. In that case, fall back to copy-then-delete using streams or fs.copyFile + fs.unlink.

Streaming Large Files

const fs = require('node:fs');
const path = require('node:path');

function copyLargeFile(src, dest) {
  return new Promise((resolve, reject) => {
    const readable = fs.createReadStream(src);
    const writable = fs.createWriteStream(dest);

    readable.on('error', reject);
    writable.on('error', reject);
    writable.on('finish', resolve);

    readable.pipe(writable);
  });
}

copyLargeFile(path.resolve('videos/big.mp4'), path.resolve('backup/big.mp4'))
  .then(() => console.log('copy finished'))
  .catch(console.error);

File Streams Explained

Node.js file streams are built on the core stream module. fs.createReadStream and fs.createWriteStream return readable and writable stream objects, respectively. Rather than loading everything into memory at once, they maintain an internal buffer (64 KB by default for file read streams) and read or write on demand, which makes them well suited to large files and continuous data.

  • Common events: open (file descriptor ready), data (a chunk was read), end (readable stream finished), finish (writable stream flushed), error (an error occurred), close (resources released).
  • Important options:
      • highWaterMark: buffer size, used to control backpressure.
      • encoding: readable streams emit Buffers by default; set an encoding to receive strings.
      • flags, mode: control how the file is opened and with what permissions.
  • Backpressure: when the destination cannot keep up, write() returns false and the readable side should pause until drain fires. The built-in pipe and stream/promises pipeline handle this for you.

Reading File in Chunks and Counting Bytes

const fs = require('node:fs');

function inspectFile(path) {
  return new Promise((resolve, reject) => {
    let total = 0;
    const reader = fs.createReadStream(path, { highWaterMark: 16 * 1024 });

    reader.on('open', (fd) => {
      console.log('file descriptor:', fd);
    });

    reader.on('data', (chunk) => {
      total += chunk.length;
      console.log('chunk size:', chunk.length);
    });

    reader.on('end', () => {
      console.log('done reading, total bytes:', total);
      resolve(total);
    });

    reader.on('error', (err) => {
      console.error('read failed', err);
      reject(err);
    });
  });
}

inspectFile('./logs/app.log').catch(console.error);

Using pipeline to Chain Transformations and Writes

const fs = require('node:fs');
const zlib = require('node:zlib');
const { pipeline } = require('node:stream/promises');

async function compressLog() {
  await pipeline(
    fs.createReadStream('./logs/app.log', { encoding: 'utf8' }),
    zlib.createGzip({ level: 9 }),
    fs.createWriteStream('./logs/app.log.gz')
  );

  console.log('compression finished');
}

compressLog().catch(console.error);

pipeline has built-in backpressure handling and error propagation, making it recommended for complex stream combinations. When processing binary files or audio/video, you can switch to processing Buffers by not setting an encoding.

Monitoring File Changes

const fs = require('node:fs');

const watcher = fs.watch('./config.json', (eventType, filename) => {
  console.log('file changed:', eventType, filename);
});

process.on('SIGINT', () => {
  watcher.close();
  console.log('watcher stopped');
});

Promise Style (.then/.catch) Writing Example

If you don't want to use async/await, you can directly chain calls to the Promise returned by fs.promises:

const fs = require('node:fs/promises');

fs.readFile('./input.txt', 'utf8')
  .then((text) => {
    console.log('read succeeded:', text);
    return fs.writeFile('./result.txt', text.trim() + '\nProcessed');
  })
  .then(() => fs.stat('./result.txt'))
  .then((stats) => {
    console.log('write finished, file size:', stats.size);
  })
  .catch((err) => {
    console.error('operation failed:', err);
  });

When multiple operations need to run in parallel, use Promise.all:

const fs = require('node:fs/promises');

Promise.all([
  fs.readFile('./a.txt', 'utf8'),
  fs.readFile('./b.txt', 'utf8'),
  fs.readFile('./c.txt', 'utf8'),
])
  .then(([a, b, c]) => fs.writeFile('./merged.txt', [a, b, c].join('\n')))
  .then(() => console.log('parallel read and merge finished'))
  .catch((err) => console.error('parallel operation failed:', err));

Tip: when performing large numbers of asynchronous file operations, limit concurrency with a task queue (plain Promise.all starts everything at once) to avoid EMFILE errors from opening too many file descriptors simultaneously.
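A hand-rolled concurrency limiter along those lines (mapLimit is an illustrative name, not a library API): a fixed pool of workers pulls indices off a shared counter, so at most `limit` tasks run at once while result order is preserved.

```javascript
// Run `task` over `items` with at most `limit` promises in flight.
async function mapLimit(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (single-threaded JS, so this is safe)
      results[i] = await task(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Example usage: read many files with at most 64 open at a time.
// const fs = require('node:fs/promises');
// const contents = await mapLimit(paths, 64, (p) => fs.readFile(p, 'utf8'));
```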

Comparison of Character Streams and Binary Streams in File Streams

In Java, character streams (Reader/Writer) and byte streams (InputStream/OutputStream) are distinct class hierarchies. Node.js has no separate character-stream classes: all file streams are fundamentally byte streams (based on Buffer), and whether they behave like character streams depends on whether an encoding is set. The following examples demonstrate the two common patterns:

Text Stream (Specified Encoding)

const fs = require('node:fs');

const textReader = fs.createReadStream('./poem.txt', {
  encoding: 'utf8', // with an encoding set, data events yield strings directly
});

textReader.on('data', (chunk) => {
  console.log('text chunk:', chunk);
});

textReader.on('end', () => {
  console.log('text read finished');
});

encoding only changes the form in which data is delivered; the underlying reads are still performed on bytes. When no encoding is set, each chunk is a Buffer object.

Binary Stream (Default Buffer)

const fs = require('node:fs');

const binaryReader = fs.createReadStream('./images/logo.png'); // no encoding set
const chunks = [];

binaryReader.on('data', (chunk) => {
  chunks.push(chunk);
});

binaryReader.on('end', () => {
  const buffer = Buffer.concat(chunks);
  console.log('PNG header signature:', buffer.slice(0, 8));
});

For binary data, it is usually processed in Buffer form or written to other writable streams (e.g., network, compression streams).

Writing Characters and Binary Data

const fs = require('node:fs');

// write text with an explicit UTF-8 encoding
const textWriter = fs.createWriteStream('./output/hello.txt', {
  encoding: 'utf8',
});
textWriter.write('你好,世界\n');
textWriter.end();

// write raw bytes
const binaryWriter = fs.createWriteStream('./output/raw.bin');
binaryWriter.write(Buffer.from([0x00, 0xff, 0x10, 0x7a]));
binaryWriter.end();

Summary: Node.js file streams process bytes by default; setting an encoding makes them behave like character streams. When handling binary data or when precise byte control is needed, staying with Buffers is safer.

Buffer Module Explained

Buffer is a block of memory allocated outside the V8 heap that Node.js uses for handling binary data. Common scenarios include file I/O, network communication, encryption, and compression. Buffer is a subclass of Uint8Array, so the two interoperate directly.

  • Creation methods:
      • Buffer.from(string[, encoding])
      • Buffer.from(array | ArrayBuffer)
      • Buffer.alloc(size[, fill[, encoding]])
      • Buffer.allocUnsafe(size) (skips zero-initialization: fast, but must be filled immediately)
  • Common encodings: utf8 (default), base64, hex, latin1, ascii.
  • For finer-grained character handling, combine with TextEncoder / TextDecoder.
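As an example of the TextDecoder pairing mentioned above: decoding with { stream: true } correctly handles a multi-byte character split across chunk boundaries, which a naive per-chunk toString would garble.

```javascript
// '你好' is 6 UTF-8 bytes (3 per character). Split the bytes mid-character
// to show that the streaming decoder buffers the incomplete sequence.
const decoder = new TextDecoder('utf-8');
const bytes = Buffer.from('你好', 'utf8');

const partA = decoder.decode(bytes.subarray(0, 4), { stream: true }); // '你', holds 1 pending byte
const partB = decoder.decode(bytes.subarray(4));                      // flushes to '好'
console.log(partA + partB); // 你好
```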

Creation and Encoding Conversion

const bufUtf8 = Buffer.from('Node.js', 'utf8');
const bufHex = Buffer.from('e4bda0e5a5bd', 'hex'); // the UTF-8 bytes of '你好'

console.log(bufUtf8); // <Buffer 4e 6f 64 65 2e 6a 73>
console.log(bufHex.toString('utf8')); // 你好

const base64 = bufUtf8.toString('base64');
console.log('Base64:', base64);
console.log('round-tripped:', Buffer.from(base64, 'base64').toString('utf8'));

Byte-by-Byte Writing and Reading

const buf = Buffer.alloc(8);
buf.writeUInt16BE(0x1234, 0); // big-endian
buf.writeUInt16LE(0x5678, 2); // little-endian
buf.writeInt32BE(-1, 4);

console.log(buf); // <Buffer 12 34 78 56 ff ff ff ff>
console.log(buf.readUInt16BE(0)); // 4660
console.log(buf.readInt32BE(4)); // -1

Slicing, Copying, and Concatenating

const part1 = Buffer.from('Hello ');
const part2 = Buffer.from('World');
const full = Buffer.concat([part1, part2]);

console.log(full.toString()); // Hello World

const slice = full.slice(6); // shares memory with full
console.log(slice.toString()); // World

const copyTarget = Buffer.alloc(5);
full.copy(copyTarget, 0, 6);
console.log(copyTarget.toString()); // World

Buffer and TypedArray Interoperability

const arr = new Uint8Array([1, 2, 3, 4]);
const buf = Buffer.from(arr.buffer); // shares the underlying ArrayBuffer

buf[0] = 99;
console.log(arr[0]); // 99

const view = new Uint32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4);
console.log(view); // Uint32Array(1) [...]

JSON Serialization and Base64 Transmission

Buffer implements toJSON by default, so JSON.stringify(buffer) will result in a { type: 'Buffer', data: [...] } structure. After deserialization, it can be directly passed to Buffer.from for restoration:

const buffer = Buffer.from('你好世界');
const jsonString = JSON.stringify(buffer);
console.log(jsonString); // {"type":"Buffer","data":[228,189,160,229,165,189,228,184,150,231,149,140]}

const jsonObject = JSON.parse(jsonString);
console.log(jsonObject); // { type: 'Buffer', data: [ 228, 189, 160, 229, 165, 189, 228, 184, 150, 231, 149, 140 ] }

const buffer2 = Buffer.from(jsonObject);
console.log(buffer2.toString('utf8')); // 你好世界

When a Buffer must travel over a JSON channel, encode it as base64 instead; serializing the bytes as a JSON number array inflates the payload considerably:

const payload = Buffer.from(JSON.stringify({ id: 1, msg: 'hi' }), 'utf8');
const transport = payload.toString('base64');

// receiving side
const decoded = Buffer.from(transport, 'base64');
console.log(JSON.parse(decoded.toString('utf8'))); // { id: 1, msg: 'hi' }

Note: Buffers created with Buffer.allocUnsafe may contain stale memory contents and must be fully written before use. Allocating large numbers of Buffers repeatedly adds GC pressure; consider reusing buffers or a pooling strategy.
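A minimal illustration of the allocUnsafe rule: overwrite every byte before reading anything back.

```javascript
// allocUnsafe skips zero-filling, so the buffer may hold stale bytes;
// here writeUInt32BE overwrites all 4 bytes before any read.
const fast = Buffer.allocUnsafe(4);
fast.writeUInt32BE(0xdeadbeef, 0);
console.log(fast.toString('hex')); // deadbeef
```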

Node's Network Modules

net Module Overview

  • net.createServer(): Creates a TCP server instance, returns net.Server, and obtains the client socket via the connection event.
  • net.createConnection(options) / net.connect(): Client entry point, establishes a net.Socket to actively connect to the server, can set host, port, timeout, etc.
  • net.Socket is both a readable and writable stream, with common events data, end, error, close, and common methods write(), end(), setEncoding(), setKeepAlive(), etc.
  • server.address(), server.getConnections(cb) are used for debugging listening addresses and connection counts.

Viewing Local/Remote Connection Information

const net = require('net');

const server = net.createServer((socket) => {
  console.log('local port:', socket.localPort);
  console.log('local address:', socket.localAddress);
  console.log('remote port:', socket.remotePort);
  console.log('remote family:', socket.remoteFamily);
  console.log('remote address:', socket.remoteAddress);
});

server.listen(8888, () => console.log('server is listening'));

socket.local* properties indicate the port/address the current server is listening on, while socket.remote* points to client information, which is very convenient for debugging multi-client access or troubleshooting NAT issues.

net Getting Started Example

// server.js
const net = require('net');

const server = net.createServer((socket) => {
  console.log('client connected:', socket.remoteAddress, socket.remotePort);
  socket.setEncoding('utf8');

  socket.write('Hello from TCP server, type "bye" to quit.\n');

  socket.on('data', (chunk) => {
    const message = chunk.trim();
    console.log('receive:', message);
    if (message.toLowerCase() === 'bye') {
      socket.end('Server closing connection.\n');
    } else {
      socket.write(`Server echo: ${message}\n`);
    }
  });

  socket.on('end', () => console.log('client disconnected'));
  socket.on('error', (err) => console.error('socket error:', err.message));
});

server.on('error', (err) => console.error('server error:', err.message));

server.listen(4000, () => {
  const addr = server.address();
  console.log(`TCP server listening on ${addr.address}:${addr.port}`);
});

// client.js
const net = require('net');

const client = net.createConnection({ host: '127.0.0.1', port: 4000 }, () => {
  console.log('connected to server');
  client.write('ping');
});

client.setEncoding('utf8');

client.on('data', (data) => {
  console.log('server says:', data.trim());
  if (data.includes('echo')) {
    client.write('bye');
  }
});

client.on('end', () => console.log('disconnected from server'));
client.on('error', (err) => console.error('client error:', err.message));

After running node server.js, execute node client.js to see the question-and-answer interaction process.

nc (netcat) Tool

nc is a common network debugging tool in Unix-like systems, capable of quickly establishing TCP/UDP connections, often used to test port listening, transfer text, and forward traffic. Combined with the server above, it can be quickly verified without a Node client:

# after starting server.js, use nc as the client
nc 127.0.0.1 4000
# when the greeting appears, type text, for example:
ping
hello
bye

nc sends keyboard input to the server as a TCP stream, which is very suitable for troubleshooting net service logic or protocol formats, equivalent to a lightweight TCP terminal.

socket.write Usage Instructions

socket.write(chunk[, encoding][, callback]) is used to send data to the peer and is the most commonly used output method of net.Socket:

  • chunk can be a Buffer, Uint8Array, or string; if it's a string, the encoding can be specified via encoding (default utf8).
  • The return value is a boolean: false means the internal buffer is full, which is the backpressure signal; wait for the drain event before writing more.
  • The optional callback is invoked after the data has been flushed to the underlying system, suitable for tracking send completion or error handling.

Common usage is as follows:

socket.write('hello', 'utf8', (err) => {
  if (err) {
    console.error('send failed:', err);
    return;
  }
  console.log('send success');
});

if (!socket.write(Buffer.from([0x01, 0x02]))) {
  socket.once('drain', () => {
    console.log('buffer drained, continue writing');
  });
}

When needing to end a connection, socket.end() can be used to send the last chunk of data and trigger FIN, which is more elegant than calling write() followed by a manual destroy():

const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (msg) => {
    if (msg.toString().trim() === 'bye') {
      socket.end('Goodbye!\n'); // send a final message and close gracefully
      return;
    }
    socket.write('Say "bye" to end connection.\n');
  });
});

server.listen(4000, () => console.log('listening on 4000'));

After the client sends bye, the server replies Goodbye! and calls socket.end(). TCP then completes the FIN/ACK exchange, the peer's end event fires, and the connection closes normally.

Complete TCP Server/Client Example

// tcp-server.js
const net = require('net');

const server = net.createServer((socket) => {
  console.log(`new connection: ${socket.remoteAddress}:${socket.remotePort}`);
  socket.setEncoding('utf8');
  socket.write('Welcome! Type "quit" to close.\n');

  socket.on('data', (chunk) => {
    const msg = chunk.trim();
    if (!msg) return;
    if (msg.toLowerCase() === 'quit') {
      socket.end('Bye!\n');
      return;
    }
    socket.write(`Echo(${new Date().toLocaleTimeString()}): ${msg}\n`);
  });

  socket.on('end', () => console.log('client closed:', socket.remoteAddress));
  socket.on('error', (err) => console.error('socket error:', err.message));
});

server.listen(5000, () => console.log('TCP server listening on port 5000'));

// tcp-client.js
const net = require('net');
const readline = require('readline');

const client = net.createConnection({ host: '127.0.0.1', port: 5000 }, () => {
  console.log('connected to TCP server, type message then Enter');
});

client.setEncoding('utf8');

client.on('data', (data) => {
  console.log(data.trim());
});

client.on('end', () => {
  console.log('server closed connection');
  rl.close();
});

client.on('error', (err) => console.error('client error:', err.message));

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

rl.on('line', (line) => {
  client.write(line);
  if (line.toLowerCase() === 'quit') {
    rl.pause();
  }
});

  1. First run node tcp-server.js; the server listens on port 5000.
  2. In a second terminal, run node tcp-client.js and send any message via the keyboard.
  3. When quit is entered, the client sends the termination command, and the server calls socket.end() to gracefully close the connection.

Complete UDP Server/Client Example

// udp-server.js
const dgram = require('dgram');
const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  console.log(`recv ${msg} from ${rinfo.address}:${rinfo.port}`);
  const reply = Buffer.from(`ack:${msg.toString().toUpperCase()}`);
  server.send(reply, rinfo.port, rinfo.address, (err) => {
    if (err) console.error('send error:', err);
  });
});

server.on('listening', () => {
  const address = server.address();
  console.log(`UDP server listening on ${address.address}:${address.port}`);
});

server.bind(41234);

// udp-client.js
const dgram = require('dgram');
const client = dgram.createSocket('udp4');

client.on('message', (msg) => {
  console.log('server reply:', msg.toString());
  client.close();
});

const payload = Buffer.from('hello udp');
client.send(payload, 41234, '127.0.0.1', (err) => {
  if (err) {
    console.error('send error:', err);
    client.close();
    return;
  }
  console.log('datagram sent');
});

  • dgram.createSocket creates connectionless sockets; messages are sent as datagrams, which may be lost or arrive out of order, and delivery is not guaranteed.
  • After running node udp-server.js, execute node udp-client.js: the client sends one datagram, the server replies with ack:* on receipt, and the client prints the reply and closes.

Theme test article, for testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/archives/4766
