Node's Package Management and Loading Mechanisms
npm search xxx: Searches the registry for matching packages.
npm view xxx: Shows a package's metadata (versions, dependencies, etc.).
npm install xxx: Installs a package into the current project.
Node.js File System Operation APIs
Node.js's fs module provides both synchronous (Sync-suffixed) and asynchronous (callback- and Promise-based) APIs for working with local files and directories. Capabilities commonly used in daily development include reading, writing, appending, deleting, traversing directories, and watching for changes. The following examples use CommonJS syntax; in an ES Module, replace require with import.
Common API Quick Reference
- fs.readFile / fs.promises.readFile: Reads a file's entire content at once.
- fs.writeFile / fs.promises.writeFile: Overwrites a file; creates it if it doesn't exist.
- fs.appendFile / fs.promises.appendFile: Appends content to the end of a file.
- fs.mkdir / fs.promises.mkdir: Creates a directory, optionally recursively.
- fs.readdir / fs.promises.readdir: Reads the list of entries in a directory.
- fs.stat / fs.promises.stat: Returns details about a file/directory (size, type, permissions, etc.).
- fs.access / fs.promises.access: Checks whether a path exists and whether the calling process has the specified permissions.
- fs.realpath / fs.promises.realpath: Resolves symbolic links and returns the canonical absolute path.
- fs.unlink / fs.promises.unlink: Deletes a file.
- fs.rm / fs.promises.rm: Deletes a file or directory; can be combined with recursive / force.
- fs.watch: Watches a file or directory for changes.
- fs.createReadStream / fs.createWriteStream: Stream-based reading and writing, suitable for large files or pipelines.
Reading and Writing
const fs = require('node:fs/promises');
async function readAndWrite() {
const content = await fs.readFile('./data.txt', 'utf8');
console.log('Original content:', content);
await fs.writeFile('./output.txt', content.toUpperCase(), 'utf8');
await fs.appendFile(
'./output.txt',
'\n-- appended at ' + new Date().toISOString()
);
}
readAndWrite().catch(console.error);
Directory Traversal and Details
const fs = require('node:fs/promises');
const path = require('node:path');
async function listDir(dir) {
const entries = await fs.readdir(dir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
const stats = await fs.stat(fullPath);
console.log({
name: entry.name,
isDirectory: entry.isDirectory(),
size: stats.size,
modified: stats.mtime,
});
}
}
listDir('./logs').catch(console.error);
Ensuring Directory Existence
const fs = require('node:fs/promises');
async function ensureDir(dir) {
await fs.mkdir(dir, { recursive: true }); // recursive: true also creates missing intermediate directories
}
ensureDir('./uploads/images').catch(console.error);
Permission Check fs.access
fs.access(path[, mode]) checks, before the actual read/write, whether a target path exists and whether the calling process has the given permissions on it. mode defaults to fs.constants.F_OK (existence check only) and can be a bitwise OR of R_OK (readable), W_OK (writable), and X_OK (executable). The callback convention is "no error means success"; the Promise version rejects with errors such as ENOENT (does not exist) or EACCES (no permission) when the check fails.
const fs = require('node:fs/promises');
async function ensureWritableConfig() {
try {
await fs.access('./config/app.json', fs.constants.R_OK | fs.constants.W_OK);
console.log('Configuration file exists and is readable/writable');
} catch (err) {
if (err.code === 'ENOENT') {
console.log('File does not exist, preparing to create...');
await fs.writeFile('./config/app.json', '{}');
return;
}
throw err; // Let the caller decide whether to prompt for insufficient permissions, etc.
}
}
ensureWritableConfig().catch((err) => {
console.error('Permission check failed:', err);
});
Note: fs.access only reflects the state at the moment of the check; a subsequent read/write can still fail because conditions changed in between, so critical operations still need their own error handling.
Resolve Actual Path fs.realpath
fs.realpath(path[, options]) resolves relative paths, symbolic links, and . / .. segments, returning the canonicalized absolute path. By default it returns a UTF-8 string; set options.encoding to 'buffer' to get a Buffer instead. The Promise version rejects with ENOENT if the path does not exist or ELOOP if there is a symlink loop.
const fs = require('node:fs/promises');
async function resolveUpload(pathLike) {
const resolved = await fs.realpath(pathLike);
if (!resolved.startsWith('/var/www/uploads')) {
throw new Error('Access out of bounds');
}
return resolved;
}
resolveUpload('./uploads/../uploads/avatar.jpg')
.then((absPath) => console.log('Real path:', absPath))
.catch(console.error);
fs.realpath.native uses the operating system's native implementation, which may be faster on some platforms but can behave slightly differently (especially with Windows UNC paths). Unless there is a performance bottleneck, the regular version is generally preferred.
Delete Files and Directories fs.rm
fs.rm(path[, options]) has been the recommended deletion API since Node.js 14.14. It deletes single files and symbolic links, and also non-empty directories when recursive: true is set. Common options:
- recursive: Defaults to false; set to true to delete a directory tree recursively.
- force: Ignores non-existent paths (no ENOENT) and keeps going past inaccessible files; defaults to false.
- maxRetries / retryDelay: Automatic retries, useful for locked handles on Windows.
const fs = require('node:fs/promises');
async function cleanUploadTmp() {
await fs.rm('./uploads/tmp', {
recursive: true,
force: true, // Do not throw error if it doesn't exist
});
console.log('Temporary directory cleaned');
}
cleanUploadTmp().catch((err) => {
console.error('Deletion failed:', err);
});
The historical fs.rmdir(path, { recursive: true }) has been deprecated; use fs.rm consistently instead. When a deleted directory will be rebuilt and there may be concurrent writers, combine it with error handling around fs.mkdir to avoid race conditions.
Rename and Move Files
fs.rename / fs.promises.rename renames files or directories within the same file system. The target path can include a new directory structure (create the directory beforehand if it does not exist).
const fs = require('node:fs/promises');
const path = require('node:path');
async function renameLog() {
const src = path.resolve('./logs/app.log');
const destDir = path.resolve('./logs/archive');
await fs.mkdir(destDir, { recursive: true });
const dest = path.join(destDir, `app-${Date.now()}.log`);
await fs.rename(src, dest);
console.log(`Moved to: ${dest}`);
}
renameLog().catch((err) => {
if (err.code === 'ENOENT') {
console.error('Source file does not exist');
return;
}
console.error('Rename failed:', err);
});
fs.rename may fail with EXDEV when moving files across disks or partitions. In that case, fall back to copy-then-delete, e.g. fs.copyFile followed by fs.unlink, or a stream-based copy.
Stream Processing Large Files
const fs = require('node:fs');
const path = require('node:path');
function copyLargeFile(src, dest) {
return new Promise((resolve, reject) => {
const readable = fs.createReadStream(src);
const writable = fs.createWriteStream(dest);
readable.on('error', reject);
writable.on('error', reject);
writable.on('finish', resolve);
readable.pipe(writable);
});
}
copyLargeFile(path.resolve('videos/big.mp4'), path.resolve('backup/big.mp4'))
.then(() => console.log('Copy completed'))
.catch(console.error);
File Stream Explained
Node.js file streams are based on the core stream module. fs.createReadStream and fs.createWriteStream return readable and writable stream objects, respectively. They do not load the entire content into memory at once but rather maintain an internal buffer (default 64 KB) to read or write on demand, making them suitable for processing large files or continuous data streams.
- Common events: open (file descriptor ready), data (a chunk was read), end (readable side finished), finish (writable side flushed), error (an error occurred), close (resources released).
- Important options: highWaterMark (buffer size, used to control backpressure), encoding (readable streams emit Buffers by default; this sets a character encoding), flags / mode (how the file is opened and with what permissions).
- Backpressure: when the write target cannot keep up, writable.write() returns false and the readable side should pause until the drain event fires. The built-in pipe and stream/promises.pipeline handle this for you.
Read File in Chunks and Count Bytes
const fs = require('node:fs');
function inspectFile(path) {
return new Promise((resolve, reject) => {
let total = 0;
const reader = fs.createReadStream(path, { highWaterMark: 16 * 1024 });
reader.on('open', (fd) => {
console.log('File descriptor:', fd);
});
reader.on('data', (chunk) => {
total += chunk.length;
console.log('Read chunk size:', chunk.length);
});
reader.on('end', () => {
console.log('Read finished, total bytes:', total);
resolve(total);
});
reader.on('error', (err) => {
console.error('Read failed', err);
reject(err);
});
});
}
inspectFile('./logs/app.log').catch(console.error);
Use pipeline to Chain Transformations and Writes
const fs = require('node:fs');
const zlib = require('node:zlib');
const { pipeline } = require('node:stream/promises');
async function compressLog() {
await pipeline(
fs.createReadStream('./logs/app.log', { encoding: 'utf8' }),
zlib.createGzip({ level: 9 }),
fs.createWriteStream('./logs/app.log.gz')
);
console.log('Compression completed');
}
compressLog().catch(console.error);
pipeline has built-in backpressure handling and error propagation and is recommended for complex stream combinations. When processing binary files or audio/video, leave encoding unset to work with raw Buffers.
Monitor File Changes
const fs = require('node:fs');
const watcher = fs.watch('./config.json', (eventType, filename) => {
console.log('File change:', eventType, filename);
});
process.on('SIGINT', () => {
watcher.close();
console.log('Monitoring stopped');
});
Promise Style (.then/.catch) Example
If you don't want to use async/await, you can directly chain calls to the Promises returned by fs.promises:
const fs = require('node:fs/promises');
fs.readFile('./input.txt', 'utf8')
.then((text) => {
console.log('Read successful:', text);
return fs.writeFile('./result.txt', text.trim() + '\nProcessed');
})
.then(() => fs.stat('./result.txt'))
.then((stats) => {
console.log('Write completed, file size:', stats.size);
})
.catch((err) => {
console.error('Operation failed:', err);
});
When multiple operations need to run in parallel, use Promise.all:
const fs = require('node:fs/promises');
Promise.all([
fs.readFile('./a.txt', 'utf8'),
fs.readFile('./b.txt', 'utf8'),
fs.readFile('./c.txt', 'utf8'),
])
.then(([a, b, c]) => fs.writeFile('./merged.txt', [a, b, c].join('\n')))
.then(() => console.log('Parallel read and merge completed'))
.catch((err) => console.error('Parallel operation failed:', err));
Tip: when handling a large number of asynchronous file operations, use a task queue to cap concurrency instead of launching everything through Promise.all at once; opening too many file descriptors simultaneously can fail with an EMFILE error.
Comparison of Character Streams and Binary Streams in File Streams
In Java, "character streams (Reader/Writer)" are clearly distinguished from "byte streams (InputStream/OutputStream)". In Node.js, there are no separate character stream classes; all file streams are essentially byte streams (based on Buffer). Whether they behave as "character" streams depends on whether an encoding is set. The following examples demonstrate two common patterns:
Text Stream (Specified Encoding)
const fs = require('node:fs');
const textReader = fs.createReadStream('./poem.txt', {
encoding: 'utf8', // After specifying encoding, the data event directly yields strings
});
textReader.on('data', (chunk) => {
console.log('Text chunk:', chunk);
});
textReader.on('end', () => {
console.log('Text read completed');
});
encoding only affects the form of the data read, it does not change the underlying Buffer reading method. If no encoding is set, the chunk will be a Buffer object.
Binary Stream (Default Buffer)
const fs = require('node:fs');
const binaryReader = fs.createReadStream('./images/logo.png'); // No encoding set
const chunks = [];
binaryReader.on('data', (chunk) => {
chunks.push(chunk);
});
binaryReader.on('end', () => {
const buffer = Buffer.concat(chunks);
console.log('PNG header signature:', buffer.slice(0, 8));
});
For binary data, it is usually processed in Buffer form or written to other writable streams (such as network or compression streams).
Writing Characters and Binary Data
const fs = require('node:fs');
// Write text, specifying UTF-8 encoding
const textWriter = fs.createWriteStream('./output/hello.txt', {
encoding: 'utf8',
});
textWriter.write('你好,世界\n');
textWriter.end();
// Write raw bytes
const binaryWriter = fs.createWriteStream('./output/raw.bin');
binaryWriter.write(Buffer.from([0x00, 0xff, 0x10, 0x7a]));
binaryWriter.end();
Summary: Node.js file streams process bytes by default; by using encoding, they can simulate "character stream" effects. When handling large objects or needing precise control over bytes, keeping data as Buffer is safer.
Buffer Module Explained
Buffer is a block of memory allocated outside the V8 heap that Node.js uses for handling binary data. Common scenarios include file I/O, network communication, encryption, and compression. Buffer is a subclass of Uint8Array, so the two interoperate directly.
- Creation methods: Buffer.from(string[, encoding]), Buffer.from(array | ArrayBuffer), Buffer.alloc(size[, fill[, encoding]]), Buffer.allocUnsafe(size) (skips zero-initialization: faster, but must be filled immediately).
- Common encodings: utf8 (default), base64, hex, latin1, ascii.
- For finer-grained character handling, combine with TextEncoder / TextDecoder.
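One thing TextDecoder adds over buf.toString('utf8') is streaming decoding that never splits a multi-byte character across chunks. A small demonstration (the split point is chosen deliberately):

```javascript
// TextDecoder in streaming mode buffers an incomplete multi-byte
// sequence across chunks, which plain buf.toString('utf8') cannot do.
const bytes = Buffer.from('你好', 'utf8'); // 6 bytes, 3 per character
const decoder = new TextDecoder('utf-8');

// Split mid-character on purpose: byte 3 starts '好' but only its
// first byte is in the first chunk.
const part1 = decoder.decode(bytes.subarray(0, 4), { stream: true }); // '你'
const part2 = decoder.decode(bytes.subarray(4)); // '好'
console.log(part1 + part2); // 你好
```

With toString('utf8') on each chunk separately, the dangling byte would decode to a replacement character instead.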
Creation and Encoding Conversion
const bufUtf8 = Buffer.from('Node.js', 'utf8');
const bufHex = Buffer.from('e4bda0e5a5bd', 'hex'); // “你好”
console.log(bufUtf8); // <Buffer 4e 6f 64 65 2e 6a 73>
console.log(bufHex.toString('utf8')); // 你好
const base64 = bufUtf8.toString('base64');
console.log('Base64:', base64);
console.log('Restore:', Buffer.from(base64, 'base64').toString('utf8'));
Byte-level Writing and Reading
const buf = Buffer.alloc(8);
buf.writeUInt16BE(0x1234, 0); // Big-endian
buf.writeUInt16LE(0x5678, 2); // Little-endian
buf.writeInt32BE(-1, 4);
console.log(buf); // <Buffer 12 34 78 56 ff ff ff ff>
console.log(buf.readUInt16BE(0)); // 4660
console.log(buf.readInt32BE(4)); // -1
Slicing, Copying, and Concatenating
const part1 = Buffer.from('Hello ');
const part2 = Buffer.from('World');
const full = Buffer.concat([part1, part2]);
console.log(full.toString()); // Hello World
const slice = full.slice(6); // Shares memory
console.log(slice.toString()); // World
const copyTarget = Buffer.alloc(5);
full.copy(copyTarget, 0, 6);
console.log(copyTarget.toString()); // World
Buffer and TypedArray Interoperability
const arr = new Uint8Array([1, 2, 3, 4]);
const buf = Buffer.from(arr.buffer); // Shares underlying ArrayBuffer
buf[0] = 99;
console.log(arr[0]); // 99
const view = new Uint32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4);
console.log(view); // Uint32Array(1) [...]
JSON Serialization and base64 Transmission
Buffer implements toJSON by default, so JSON.stringify(buffer) will result in a { type: 'Buffer', data: [...] } structure, which can be directly passed to Buffer.from for deserialization:
const buffer = Buffer.from('你好世界');
const jsonString = JSON.stringify(buffer);
console.log(jsonString); // {"type":"Buffer","data":[228,189,160,229,165,189,228,184,150,231,149,140]}
const jsonObject = JSON.parse(jsonString);
console.log(jsonObject); // { type: 'Buffer', data: [ 228, 189, 160, 229, 165, 189, 228, 184, 150, 231, 149, 140 ] }
const buffer2 = Buffer.from(jsonObject);
console.log(buffer2.toString('utf8')); // 你好世界
When needing to transmit Buffer via a JSON channel, base64 can be used to reduce size (JSON arrays can significantly increase size):
const payload = Buffer.from(JSON.stringify({ id: 1, msg: 'hi' }), 'utf8');
const transport = payload.toString('base64');
// Receiver
const decoded = Buffer.from(transport, 'base64');
console.log(JSON.parse(decoded.toString('utf8'))); // { id: 1, msg: 'hi' }
Note: Buffers created with Buffer.allocUnsafe contain stale memory contents and must be fully written before use. Creating many Buffers repeatedly can add GC pressure; consider reuse or pooling strategies.
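The difference is easy to demonstrate: alloc zero-fills, allocUnsafe does not, so the unsafe variant must be overwritten before anything reads it:

```javascript
// alloc() returns zero-filled memory.
const safe = Buffer.alloc(4);
console.log(safe); // <Buffer 00 00 00 00>

// allocUnsafe() returns whatever bytes happen to be in the (pooled)
// allocation; its initial contents are unspecified and may leak data.
const fast = Buffer.allocUnsafe(4);
fast.fill(0xab); // overwrite every byte before use
console.log(fast); // <Buffer ab ab ab ab>
```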
Node's Network Modules
net Module Overview
- net.createServer(): Creates a TCP server instance (net.Server); the client socket is delivered via the connection event.
- net.createConnection(options) / net.connect(): Client entry point; establishes a net.Socket that actively connects to a server, with options such as host, port, and timeout.
- net.Socket is both a readable and a writable stream; common events include data, end, error, close, and common methods include write(), end(), setEncoding(), setKeepAlive().
- server.address() and server.getConnections(cb) help debug the listening address and connection count.
View Local/Remote Connection Information
const net = require('node:net');
const server = net.createServer((socket) => {
console.log('local port:', socket.localPort);
console.log('local address:', socket.localAddress);
console.log('remote port:', socket.remotePort);
console.log('remote family:', socket.remoteFamily);
console.log('remote address:', socket.remoteAddress);
});
server.listen(8888, () => console.log('server is listening'));
The socket.local* properties describe the address/port the server side of this connection is using, while socket.remote* describe the client, which is handy when debugging multi-client access or troubleshooting NAT issues.
net Getting Started Example