Node's Package Management and Loading Mechanisms
- npm search xxx: search the registry for packages matching a keyword.
- npm view xxx: view a package's registry metadata (versions, dependencies, dist-tags).
- npm install xxx: install a package into the current project.
Node.js File System Operation APIs
Node.js's fs module provides synchronous (Sync) and callback/Promise-based asynchronous APIs for operating on local files and directories. Commonly used capabilities in daily development include reading, writing, appending, deleting, traversing directories, and monitoring changes. The following examples are based on CommonJS syntax; if used in an ES Module, they need to be changed to import.
Quick Overview of Common APIs
- fs.readFile / fs.promises.readFile: reads file content in one go.
- fs.writeFile / fs.promises.writeFile: overwrites a file, creating it if it doesn't exist.
- fs.appendFile / fs.promises.appendFile: appends content to the end of a file.
- fs.mkdir / fs.promises.mkdir: creates directories, optionally recursively.
- fs.readdir / fs.promises.readdir: reads the list of entries in a directory.
- fs.stat / fs.promises.stat: returns details about a file/directory (size, type, permissions, etc.).
- fs.access / fs.promises.access: checks whether a path exists and has the specified permissions.
- fs.realpath / fs.promises.realpath: resolves symbolic links and returns the absolute path.
- fs.unlink / fs.promises.unlink: deletes a file.
- fs.rm / fs.promises.rm: deletes files or directories, optionally with recursive/force.
- fs.watch: monitors a file or directory for changes.
- fs.createReadStream / fs.createWriteStream: stream-based reading and writing, suited to large files or pipes.
Reading and Writing
const fs = require('node:fs/promises');
async function readAndWrite() {
const content = await fs.readFile('./data.txt', 'utf8');
console.log('Original content:', content);
await fs.writeFile('./output.txt', content.toUpperCase(), 'utf8');
await fs.appendFile(
'./output.txt',
'\n-- appended at ' + new Date().toISOString()
);
}
readAndWrite().catch(console.error);
Directory Traversal and Details
const fs = require('node:fs/promises');
const path = require('node:path');
async function listDir(dir) {
const entries = await fs.readdir(dir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
const stats = await fs.stat(fullPath);
console.log({
name: entry.name,
isDirectory: entry.isDirectory(),
size: stats.size,
modified: stats.mtime,
});
}
}
listDir('./logs').catch(console.error);
Ensuring Directory Existence
const fs = require('node:fs/promises');
async function ensureDir(dir) {
await fs.mkdir(dir, { recursive: true }); // creates nested directories recursively
}
ensureDir('./uploads/images').catch(console.error);
Permission Check fs.access
fs.access(path[, mode]) can be used to check if a target path exists and what permissions the calling process has for it before actual reading and writing. mode defaults to fs.constants.F_OK (existence check only), and can also be bitwise combined with R_OK (readable), W_OK (writable), X_OK (executable). The asynchronous callback convention is "no error means success," and the Promise version will throw errors like ENOENT (does not exist) or EACCES (no permission) if the check fails.
const fs = require('node:fs/promises');
async function ensureWritableConfig() {
try {
await fs.access('./config/app.json', fs.constants.R_OK | fs.constants.W_OK);
console.log('Config file exists and is readable/writable');
} catch (err) {
if (err.code === 'ENOENT') {
console.log('File does not exist, creating...');
await fs.writeFile('./config/app.json', '{}');
return;
}
throw err; // let the caller decide how to report permission issues
}
}
ensureWritableConfig().catch((err) => {
console.error('Permission check failed:', err);
});
Note: fs.access only reflects the state at the moment of the check. Subsequent reads or writes may still fail because conditions change in between (a check-then-use race), so critical write operations still need their own error handling.
Resolving Real Path fs.realpath
fs.realpath(path[, options]) resolves relative paths, symbolic links, . / .. segments, etc., and returns the normalized absolute path. By default, it returns a UTF-8 string; you can set options.encoding to 'buffer' to get a Buffer. The Promise version will throw an error if the path does not exist (ENOENT) or if there is a link loop (ELOOP).
const fs = require('node:fs/promises');
async function resolveUpload(pathLike) {
const resolved = await fs.realpath(pathLike);
if (!resolved.startsWith('/var/www/uploads')) {
throw new Error('Access outside the allowed directory');
}
return resolved;
}
resolveUpload('./uploads/../uploads/avatar.jpg')
.then((absPath) => console.log('Real path:', absPath))
.catch(console.error);
fs.realpath.native uses the native implementation provided by the operating system, which may be faster on some platforms but can behave slightly differently (especially with Windows UNC paths). Unless there is a measured performance bottleneck, the regular version is generally preferred.
Deleting Files and Directories fs.rm
fs.rm(target[, options]) is the recommended deletion API for Node 14.14+, capable of deleting single files, symbolic links, and non-empty directories when options.recursive === true is configured. Common options:
- recursive: defaults to false; set it to true to delete the directory tree recursively.
- force: ignores non-existent paths (no ENOENT thrown) and keeps trying past files it cannot access; defaults to false.
- maxRetries / retryDelay: allow automatic retries, useful for handle contention on Windows.
const fs = require('node:fs/promises');
async function cleanUploadTmp() {
await fs.rm('./uploads/tmp', {
recursive: true,
force: true, // no error if the path does not exist
});
console.log('Temp directory cleaned');
}
cleanUploadTmp().catch((err) => {
console.error('Delete failed:', err);
});
The historical fs.rmdir(path, { recursive: true }) has been deprecated; it is recommended to use fs.rm uniformly. When deleting a directory that will later be rebuilt, and concurrent writes may exist, combine it with fs.mkdir and proper error handling to avoid race conditions.
Renaming and Moving Files
fs.rename / fs.promises.rename can rename files or directories within the same file system. The target path can include a new directory structure (if the directory does not exist, it needs to be created beforehand).
const fs = require('node:fs/promises');
const path = require('node:path');
async function renameLog() {
const src = path.resolve('./logs/app.log');
const destDir = path.resolve('./logs/archive');
await fs.mkdir(destDir, { recursive: true });
const dest = path.join(destDir, `app-${Date.now()}.log`);
await fs.rename(src, dest);
console.log(`Moved to: ${dest}`);
}
renameLog().catch((err) => {
if (err.code === 'ENOENT') {
console.error('Source file does not exist');
return;
}
console.error('Rename failed:', err);
});
fs.rename may fail when moving files between different disks or partitions (EXDEV). In such cases, a combination of streams or fs.copyFile + fs.unlink should be used to achieve copy-then-delete.
Streaming Large Files
const fs = require('node:fs');
const path = require('node:path');
function copyLargeFile(src, dest) {
return new Promise((resolve, reject) => {
const readable = fs.createReadStream(src);
const writable = fs.createWriteStream(dest);
readable.on('error', reject);
writable.on('error', reject);
writable.on('finish', resolve);
readable.pipe(writable);
});
}
copyLargeFile(path.resolve('videos/big.mp4'), path.resolve('backup/big.mp4'))
.then(() => console.log('Copy complete'))
.catch(console.error);
File Streams Explained
Node.js file streams are based on the core stream module. fs.createReadStream and fs.createWriteStream return readable and writable stream objects, respectively. They do not load content into memory all at once but maintain an internal buffer (default 64 KB) to read or write as needed, making them suitable for processing large files or continuous data streams.
- Common events: open (file descriptor ready), data (chunk read), end (readable side finished), finish (writable side flushed), error (an error occurred), close (resources released).
- Important options: highWaterMark (buffer size, controls backpressure), encoding (readable streams emit Buffer by default; set this to get strings), flags / mode (how the file is opened and with what permissions).
- Backpressure: when the write target cannot keep up, writable.write() returns false and the readable side should pause until the drain event fires. The built-in pipe and stream/promises.pipeline handle this for you.
Reading File in Chunks and Counting Bytes
const fs = require('node:fs');
function inspectFile(path) {
return new Promise((resolve, reject) => {
let total = 0;
const reader = fs.createReadStream(path, { highWaterMark: 16 * 1024 });
reader.on('open', (fd) => {
console.log('File descriptor:', fd);
});
reader.on('data', (chunk) => {
total += chunk.length;
console.log('Chunk size:', chunk.length);
});
reader.on('end', () => {
console.log('Done reading, total bytes:', total);
resolve(total);
});
reader.on('error', (err) => {
console.error('Read failed', err);
reject(err);
});
});
}
inspectFile('./logs/app.log').catch(console.error);
Using pipeline to Chain Transformations and Writes
const fs = require('node:fs');
const zlib = require('node:zlib');
const { pipeline } = require('node:stream/promises');
async function compressLog() {
await pipeline(
fs.createReadStream('./logs/app.log', { encoding: 'utf8' }),
zlib.createGzip({ level: 9 }),
fs.createWriteStream('./logs/app.log.gz')
);
console.log('Compression complete');
}
compressLog().catch(console.error);
pipeline has built-in backpressure handling and error propagation, making it recommended for complex stream combinations. When processing binary files or audio/video, you can switch to processing Buffers by not setting an encoding.
Monitoring File Changes
const fs = require('node:fs');
const watcher = fs.watch('./config.json', (eventType, filename) => {
console.log('File changed:', eventType, filename);
});
process.on('SIGINT', () => {
watcher.close();
console.log('Watcher stopped');
});
Promise Style (.then/.catch) Writing Example
If you don't want to use async/await, you can directly chain calls to the Promise returned by fs.promises:
const fs = require('node:fs/promises');
fs.readFile('./input.txt', 'utf8')
.then((text) => {
console.log('Read OK:', text);
return fs.writeFile('./result.txt', text.trim() + '\nProcessed');
})
.then(() => fs.stat('./result.txt'))
.then((stats) => {
console.log('Write complete, file size:', stats.size);
})
.catch((err) => {
console.error('Operation failed:', err);
});
When multiple operations need to run in parallel, use Promise.all:
const fs = require('node:fs/promises');
Promise.all([
fs.readFile('./a.txt', 'utf8'),
fs.readFile('./b.txt', 'utf8'),
fs.readFile('./c.txt', 'utf8'),
])
.then(([a, b, c]) => fs.writeFile('./merged.txt', [a, b, c].join('\n')))
.then(() => console.log('Parallel read and merge complete'))
.catch((err) => console.error('Parallel operation failed:', err));
Tip: when handling a large number of asynchronous file operations, do not fire them all at once; use a task queue or a concurrency limiter to bound how many run in parallel, since opening too many file descriptors simultaneously can trigger EMFILE errors.
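Such a limiter fits in a few lines (mapLimit is a hypothetical helper, not a Node API): it starts a fixed number of workers that pull tasks from a shared index.

```javascript
// Run an async task over items with at most `limit` tasks in flight,
// bounding how many file descriptors are open at the same time.
async function mapLimit(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index before awaiting
      results[i] = await task(items[i]);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Used with the fs promises API, e.g. mapLimit(filenames, 10, (f) => fs.readFile(f, 'utf8')), at most 10 files are open at once regardless of how long the list is.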
Comparison of Character Streams and Binary Streams in File Streams
In Java, "character streams (Reader/Writer)" and "byte streams (InputStream/OutputStream)" are clearly distinguished. Node.js does not have separate character stream classes; all file streams are essentially byte streams (based on Buffer). Whether they behave as "character" streams depends on whether an encoding is set. The following examples demonstrate two common patterns:
Text Stream (Specified Encoding)
const fs = require('node:fs');
const textReader = fs.createReadStream('./poem.txt', {
encoding: 'utf8', // with an encoding set, data events yield strings directly
});
textReader.on('data', (chunk) => {
console.log('Text chunk:', chunk);
});
textReader.on('end', () => {
console.log('Text read complete');
});
encoding only affects the form of the data read out; it does not change how the underlying Buffer is read. When no encoding is set, the chunk will be a Buffer object.
Binary Stream (Default Buffer)
const fs = require('node:fs');
const binaryReader = fs.createReadStream('./images/logo.png'); // no encoding set: chunks are Buffers
const chunks = [];
binaryReader.on('data', (chunk) => {
chunks.push(chunk);
});
binaryReader.on('end', () => {
const buffer = Buffer.concat(chunks);
console.log('PNG header signature:', buffer.subarray(0, 8));
});
For binary data, it is usually processed in Buffer form or written to other writable streams (e.g., network, compression streams).
Writing Characters and Binary Data
const fs = require('node:fs');
// write text with UTF-8 encoding
const textWriter = fs.createWriteStream('./output/hello.txt', {
encoding: 'utf8',
});
textWriter.write('你好,世界\n');
textWriter.end();
// write raw bytes
const binaryWriter = fs.createWriteStream('./output/raw.bin');
binaryWriter.write(Buffer.from([0x00, 0xff, 0x10, 0x7a]));
binaryWriter.end();
Summary: Node.js file streams process bytes by default; with an encoding set they behave like character streams. When handling binary data or when precise byte control matters, staying with Buffers is safer.
Buffer Module Explained
Buffer is a block of native memory allocated outside the V8 heap, used in Node.js for handling binary data. Common scenarios include file I/O, network communication, encryption, and compression. Buffer is fully interoperable with Uint8Array; in fact, Buffer is a subclass of Uint8Array.
- Creation methods: Buffer.from(string[, encoding]), Buffer.from(array | ArrayBuffer), Buffer.alloc(size[, fill[, encoding]]), Buffer.allocUnsafe(size) (skips zero-initialization: fast, but must be filled before use).
- Common encodings: utf8 (default), base64, hex, latin1, ascii.
- For finer-grained character handling, combine with TextEncoder / TextDecoder.
Creation and Encoding Conversion
const bufUtf8 = Buffer.from('Node.js', 'utf8');
const bufHex = Buffer.from('e4bda0e5a5bd', 'hex'); // "你好" as UTF-8 bytes
console.log(bufUtf8); // <Buffer 4e 6f 64 65 2e 6a 73>
console.log(bufHex.toString('utf8')); // 你好
const base64 = bufUtf8.toString('base64');
console.log('Base64:', base64);
console.log('Restored:', Buffer.from(base64, 'base64').toString('utf8'));
Byte-by-Byte Writing and Reading
const buf = Buffer.alloc(8);
buf.writeUInt16BE(0x1234, 0); // big-endian
buf.writeUInt16LE(0x5678, 2); // little-endian
buf.writeInt32BE(-1, 4);
console.log(buf); // <Buffer 12 34 78 56 ff ff ff ff>
console.log(buf.readUInt16BE(0)); // 4660
console.log(buf.readInt32BE(4)); // -1
Slicing, Copying, and Concatenating
const part1 = Buffer.from('Hello ');
const part2 = Buffer.from('World');
const full = Buffer.concat([part1, part2]);
console.log(full.toString()); // Hello World
const slice = full.subarray(6); // shares memory with full (slice() is a deprecated alias)
console.log(slice.toString()); // World
const copyTarget = Buffer.alloc(5);
full.copy(copyTarget, 0, 6);
console.log(copyTarget.toString()); // World
Buffer and TypedArray Interoperability
const arr = new Uint8Array([1, 2, 3, 4]);
const buf = Buffer.from(arr.buffer); // shares the underlying ArrayBuffer
buf[0] = 99;
console.log(arr[0]); // 99
const view = new Uint32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4);
console.log(view); // Uint32Array(1) [...]
JSON Serialization and Base64 Transmission
Buffer implements toJSON by default, so JSON.stringify(buffer) will result in a { type: 'Buffer', data: [...] } structure. After deserialization, it can be directly passed to Buffer.from for restoration:
const buffer = Buffer.from('你好世界');
const jsonString = JSON.stringify(buffer);
console.log(jsonString); // {"type":"Buffer","data":[228,189,160,229,165,189,228,184,150,231,149,140]}
const jsonObject = JSON.parse(jsonString);
console.log(jsonObject); // { type: 'Buffer', data: [ 228, 189, 160, 229, 165, 189, 228, 184, 150, 231, 149, 140 ] }
const buffer2 = Buffer.from(jsonObject);
console.log(buffer2.toString('utf8')); // 你好世界
When a Buffer has to travel over a JSON channel, base64 keeps the payload compact (the JSON number-array form is several times larger):
const payload = Buffer.from(JSON.stringify({ id: 1, msg: 'hi' }), 'utf8');
const transport = payload.toString('base64');
// receiver side
const decoded = Buffer.from(transport, 'base64');
console.log(JSON.parse(decoded.toString('utf8'))); // { id: 1, msg: 'hi' }
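The size difference is easy to quantify: base64 encodes every 3 bytes as 4 characters (roughly 33% overhead), while the JSON number-array form costs several characters per byte.

```javascript
// Compare the serialized sizes of the same 11 raw bytes.
const buf = Buffer.from('hello world');
const asBase64 = buf.toString('base64').length;          // ceil(11/3) * 4 = 16 chars
const asJsonArray = JSON.stringify(buf.toJSON()).length; // {"type":"Buffer","data":[...]}
console.log(asBase64, asJsonArray); // the number-array form is several times larger
```

For large binary payloads the gap grows with size, which is why base64 (or a binary protocol) is the usual choice for transport.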
Note: Buffers created with Buffer.allocUnsafe contain stale memory and must be written before use. Repeatedly creating large numbers of Buffers can add GC pressure; consider reusing them or a pooling strategy.
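The difference between the two allocators is directly observable (a small sketch): alloc zero-fills, while allocUnsafe hands back whatever bytes happened to be in the pool.

```javascript
// Buffer.alloc zero-fills the memory; Buffer.allocUnsafe skips that step
// for speed, so its contents are unspecified until you overwrite them.
const zeroed = Buffer.alloc(8);     // guaranteed all 0x00
const raw = Buffer.allocUnsafe(8);  // may contain stale bytes
raw.fill(0);                        // make it deterministic before use
```

The fill (or a full write of the buffer) is what makes allocUnsafe safe; never expose an unfilled allocUnsafe buffer, since it can leak data from previously freed memory.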
Node's Network Modules
net Module Overview
- net.createServer(): creates a TCP server instance (a net.Server); the client socket arrives via the connection event.
- net.createConnection(options) / net.connect(): client entry points; establish a net.Socket that actively connects to a server, with options such as host, port, and timeout.
- net.Socket is both a readable and a writable stream. Common events: data, end, error, close; common methods: write(), end(), setEncoding(), setKeepAlive().
- server.address() and server.getConnections(cb) help debug the listening address and connection count.
Viewing Local/Remote Connection Information
const net = require('net');
const server = net.createServer((socket) => {
console.log('local port:', socket.localPort);
console.log('local address:', socket.localAddress);
console.log('remote port:', socket.remotePort);
console.log('remote family:', socket.remoteFamily);
console.log('remote address:', socket.remoteAddress);
});
server.listen(8888, () => console.log('server is listening'));
socket.local* properties indicate the port/address the current server is listening on, while socket.remote* points to client information, which is very convenient for debugging multi-client access or troubleshooting NAT issues.
net Getting Started Example
// server.js
const net = require('net');
const server = net.createServer((socket) => {
console.log('client connected:', socket.remoteAddress, socket.remotePort);
socket.setEncoding('utf8');
socket.write('Hello from TCP server, type "bye" to quit.\n');
socket.on('data', (chunk) => {
const message = chunk.trim();
console.log('receive:', message);
if (message.toLowerCase() === 'bye') {
socket.end('Server closing connection.\n');
} else {
socket.write(`Server echo: ${message}\n`);
}
});
socket.on('end', () => console.log('client disconnected'));
socket.on('error', (err) => console.error('socket error:', err.message));
});
server.on('error', (err) => console.error('server error:', err.message));
server.listen(4000, () => {
const addr = server.address();
console.log(`TCP server listening on ${addr.address}:${addr.port}`);
});
// client.js
const net = require('net');
const client = net.createConnection({ host: '127.0.0.1', port: 4000 }, () => {
console.log('connected to server');
client.write('ping');
});
client.setEncoding('utf8');
client.on('data', (data) => {
console.log('server says:', data.trim());
if (data.includes('echo')) {
client.write('bye');
}
});
client.on('end', () => console.log('disconnected from server'));
client.on('error', (err) => console.error('client error:', err.message));
After running node server.js, execute node client.js to see the question-and-answer interaction process.
nc (netcat) Tool
nc is a common network debugging tool in Unix-like systems, capable of quickly establishing TCP/UDP connections, often used to test port listening, transfer text, and forward traffic. Combined with the server above, it can be quickly verified without a Node client:
# after starting server.js, use nc as the client
nc 127.0.0.1 4000
# once the greeting appears, type text, e.g.:
ping
hello
bye
nc sends keyboard input to the server as a TCP stream, which is very suitable for troubleshooting net service logic or protocol formats, equivalent to a lightweight TCP terminal.
socket.write Usage Instructions
socket.write(chunk[, encoding][, callback]) is used to send data to the peer and is the most commonly used output method of net.Socket:
- chunk can be a Buffer, Uint8Array, or string; for strings, the encoding can be given via encoding (default utf8).
- The return value is a boolean; false means the internal buffer is full and further writes should wait for the drain event, otherwise backpressure builds up.
- The optional callback is invoked once the data has been flushed to the underlying system, useful for tracking send completion or errors.
Common usage is as follows:
socket.write('hello', 'utf8', (err) => {
if (err) {
console.error('send failed:', err);
return;
}
console.log('send success');
});
if (!socket.write(Buffer.from([0x01, 0x02]))) {
socket.once('drain', () => {
console.log('buffer drained, continue writing');
});
}
When needing to end a connection, socket.end() can be used to send the last chunk of data and trigger FIN, which is more elegant than calling write() followed by a manual destroy():
const net = require('net');
const server = net.createServer((socket) => {
socket.on('data', (msg) => {
if (msg.toString().trim() === 'bye') {
socket.end('Goodbye!\n'); // send the final message and close gracefully
return;
}
socket.write('Say "bye" to end connection.\n');
});
});
server.listen(4000, () => console.log('listening on 4000'));
After the client sends bye, the server replies with Goodbye! and calls socket.end(). The underlying TCP stack completes the FIN/ACK exchange, the end event fires on the remote side, and the connection closes normally.
Complete TCP Server/Client Example
// tcp-server.js
const net = require('net');
const server = net.createServer((socket) => {
console.log(`new connection: ${socket.remoteAddress}:${socket.remotePort}`);
socket.setEncoding('utf8');
socket.write('Welcome! Type "quit" to close.\n');
socket.on('data', (chunk) => {
const msg = chunk.trim();
if (!msg) return;
if (msg.toLowerCase() === 'quit') {
socket.end('Bye!\n');
return;
}
socket.write(`Echo(${new Date().toLocaleTimeString()}): ${msg}\n`);
});
socket.on('end', () => console.log('client closed:', socket.remoteAddress));
socket.on('error', (err) => console.error('socket error:', err.message));
});
server.listen(5000, () => console.log('TCP server listening on port 5000'));
// tcp-client.js
const net = require('net');
const readline = require('readline');
const client = net.createConnection({ host: '127.0.0.1', port: 5000 }, () => {
console.log('connected to TCP server, type message then Enter');
});
client.setEncoding('utf8');
client.on('data', (data) => {
console.log(data.trim());
});
client.on('end', () => {
console.log('server closed connection');
rl.close();
});
client.on('error', (err) => console.error('client error:', err.message));
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
rl.on('line', (line) => {
client.write(line);
if (line.toLowerCase() === 'quit') {
rl.pause();
}
});
- First run node tcp-server.js; the server listens on port 5000.
- In a second terminal, run node tcp-client.js and send any message via the keyboard.
- When quit is entered, the client sends the termination command, and the server calls socket.end() to gracefully close the connection.
Complete UDP Server/Client Example
// udp-server.js
const dgram = require('dgram');
const server = dgram.createSocket('udp4');
server.on('message', (msg, rinfo) => {
console.log(`recv ${msg} from ${rinfo.address}:${rinfo.port}`);
const reply = Buffer.from(`ack:${msg.toString().toUpperCase()}`);
server.send(reply, rinfo.port, rinfo.address, (err) => {
if (err) console.error('send error:', err);
});
});
server.on('listening', () => {
const address = server.address();
console.log(`UDP server listening on ${address.address}:${address.port}`);
});
server.bind(41234);
// udp-client.js
const dgram = require('dgram');
const client = dgram.createSocket('udp4');
client.on('message', (msg) => {
console.log('server reply:', msg.toString());
client.close();
});
const payload = Buffer.from('hello udp');
client.send(payload, 41234, '127.0.0.1', (err) => {
if (err) {
console.error('send error:', err);
client.close();
return;
}
console.log('datagram sent');
});
- UDP creates connectionless sockets via dgram.createSocket; messages are sent as datagrams, which may be lost or arrive out of order, and reliability is not guaranteed.
- After running node udp-server.js, execute node udp-client.js. The client sends a single datagram, the server immediately replies with ack:* upon receipt, and the client prints the reply and then closes.
Theme test article, for testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/archives/4766