Node
Understanding Node.js from the Asynchronous Programming Paradigm
Node.js's Positioning and Core Philosophy
- Based on the V8 engine + libuv event-driven library, bringing JavaScript from the browser to the server side.
- Uses a single-threaded event loop to handle I/O, maximizing CPU time slices while waiting for I/O, making it particularly suitable for high-concurrency, I/O-intensive scenarios.
- "Don't block the main thread" is the design philosophy: lengthy operations should be delegated to the kernel or thread pool as much as possible, with callback results returning to the event loop.
Event Loop and Task Scheduling
- Phase Division: The event loop processes `timers` (`setTimeout`/`setInterval`), `pending callbacks`, `idle`/`prepare`, `poll` (most I/O callbacks execute in this phase), `check` (`setImmediate`), and `close callbacks` sequentially, phase by phase.
- Microtask Queue: Between callbacks, the microtask queues (`process.nextTick`, Promise callbacks) are drained. `process.nextTick` has the highest priority, followed by Promise microtasks; recursive scheduling should be avoided so the event loop does not starve.
- Thread Pool and Kernel Collaboration: libuv maintains an internal thread pool (4 threads by default) to make blocking tasks like file system and DNS operations asynchronous; network I/O is handed directly to the kernel's event notification mechanisms (epoll/kqueue/IOCP).
- Backpressure and Flow Control: Event-loop-based task scheduling requires the consumer to be aware of the production rate. Node.js provides Stream interfaces (`readable.pause()`/`resume()`, `pipeline()`) to prevent memory blow-ups.
Common Asynchronous API Lineage
- I/O APIs: `fs`, `net`, `http`, `tls`, `dns`, etc., provide callback-style asynchronous interfaces by default, which can be converted to Promises with `require('node:util').promisify`.
- Timer APIs: `setTimeout`, `setInterval`, and `setImmediate` follow the event loop phases and do not guarantee precise timing; delays can occur, especially when the main thread is busy.
- Microtask-Related: `process.nextTick` queues a callback to run before the event loop continues; a Promise's `then`/`catch`/`finally` callbacks execute in the microtask queue; `queueMicrotask` schedules microtasks in a cross-platform way.
- Events and Streams: `EventEmitter` implements publish-subscribe; streams (Readable/Writable/Duplex/Transform) encapsulate event-based backpressure handling and are the preferred model for large files and network transfers.
- Parallelism Supplements: `worker_threads` suits CPU-intensive tasks; `cluster` uses multiple processes sharing one listening port; `child_process` is used when external commands must be called or isolated environments are required.
Evolution of Control Flow Patterns
- Callback Era: Based on the error-first callback convention (`(err, data) => {}`); simple, but prone to callback hell, and the error chain must be propagated manually.
- Promise: Provides a state machine and chainable calls, making asynchronous flows easier to compose; `Promise.all`/`any`/`allSettled` handle batch concurrency control.
- async/await: Syntactic sugar that brings asynchronous code close to a synchronous structure, with errors handled via `try/catch`. Note that `await` suspends only the current async function (the rest of it resumes as a microtask), which makes it a natural fit for sequential logic.
- Higher-Level Composition Patterns: Reactive libraries like RxJS, generator-based flow control (`co`, the `async` library), and iterator-based `for await...of` over async iterables help manage complex data flows.
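A brief sketch contrasting sequential `await` with batch composition via `Promise.allSettled` (illustrative values only):

```javascript
// Sequential awaits vs. batch composition with Promise.allSettled.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function main() {
  // Sequential: each await suspends only this function until the step settles.
  const first = await delay(10, 1);
  const second = await delay(10, first + 1);
  console.log(second); // 2

  // Batch: start both tasks at once and collect every outcome.
  const results = await Promise.allSettled([
    delay(5, 'ok'),
    Promise.reject(new Error('boom')),
  ]);
  console.log(results.map((r) => r.status)); // [ 'fulfilled', 'rejected' ]
  return results;
}

main().catch(console.error);
```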
Design Considerations in Asynchronous Programming
- Error Handling: Capture exceptions uniformly (`domain` is deprecated; `async_hooks`-based context tracking or custom middleware is recommended); asynchronous callbacks must check `err` immediately.
- Resource and Concurrency Control: Use libraries such as `p-limit` or `Bottleneck` to limit concurrency and keep the thread pool from being exhausted; configure `UV_THREADPOOL_SIZE` appropriately.
- Observability: Use `async_hooks` to track asynchronous contexts, combined with `diagnostics_channel` and `perf_hooks` for performance analysis, to surface implicit blocking.
- Avoid Blocking Operations: Synchronous APIs like `fs.readFileSync` and `crypto.pbkdf2Sync` block the event loop; offload CPU-intensive logic to workers or native extensions whenever possible.
- Writing Testable Asynchronous Code: Use the async testing capabilities of `jest`/`mocha`, and watch for unhandled Promise rejections (`unhandledRejection`) and uncaught exceptions (`uncaughtException`).
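To make the concurrency-control point concrete without pulling in `p-limit`, here is a minimal hand-rolled limiter in the same spirit (an illustrative sketch, not a substitute for the library):

```javascript
// A minimal concurrency limiter in the spirit of p-limit (illustrative sketch).
function createLimiter(max) {
  let active = 0;
  const queue = [];

  function next() {
    if (active >= max || queue.length === 0) return;
    active += 1;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active -= 1;
        next();
      });
  }

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}

// Usage: 5 tasks, but never more than 2 in flight at once.
const limit = createLimiter(2);
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

let running = 0;
let peak = 0;
const tasks = Array.from({ length: 5 }, () =>
  limit(async () => {
    running += 1;
    peak = Math.max(peak, running);
    await delay(10);
    running -= 1;
  })
);

Promise.all(tasks).then(() => console.log('peak concurrency:', peak)); // peak concurrency: 2
```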
Typical Application Scenarios and Limitations
- Suitable Scenarios: High-concurrency API gateways, real-time push, chat systems, microservice gateways, isomorphic rendering (frontend/backend), scripts, and CLI tools.
- Unsuitable Scenarios: Heavy CPU computation, image/video encoding/decoding, tightly coupled multi-threaded shared memory scenarios. If necessary, Worker Threads or native modules can be used.
Mastering Node.js's asynchronous programming paradigm comes down to understanding event loop scheduling, composing asynchronous control flows appropriately, and observing asynchronous behavior through the toolchain. Only by keeping the main thread unblocked can Node unleash its full high-concurrency potential.
Node.js Version Management Tools and Package Management Tools
Common Version Management Tools
- nvm (Node Version Manager): The most widely used shell-script-based manager, supporting `nvm install <version>` and `nvm use <version>` for easy switching between projects; Windows requires the separate nvm-windows project.
- n (by TJ Holowaychuk): A lightweight tool installed via npm, with concise commands (`n latest`, `n lts`), suitable for macOS/Linux; it manages Node installation paths through global npm permissions.
- fnm (Fast Node Manager): Written in Rust, offering fast downloads and multi-platform support; pairs `fnm use` with `.node-version` files for automatic switching and is compatible with fish/PowerShell.
- Volta: Positioned as a "toolchain manager", it pins Node and npm/Yarn/pnpm versions simultaneously, which suits team collaboration; `volta pin node@18` writes the version to `package.json`.
- Core Idea: Commit `.nvmrc` or `.node-version` to the project root (or use Volta's `package.json` configuration) so that team members use the same runtime, avoiding inconsistent behavior due to version differences.
Choice and Characteristics of Package Management Tools
- npm: Ships with Node.js, supports Workspaces since v7, and is the default for most of the ecosystem; `npm ci` performs reproducible installs based on `package-lock.json`.
- Yarn:
  - Yarn Classic (v1): Known for parallel installs and `yarn.lock` lock files; provides `workspaces` support for monorepos;
  - Yarn Berry (v2+): Introduces Plug'n'Play (PnP) mode, eliminating `node_modules`, at the cost of extra configuration and IDE support.
- pnpm: Saves disk space and speeds up installs through content-addressable storage, generating a symlinked `node_modules` structure by default; it is naturally suited to multi-package repositories and monorepos, significantly reducing duplicate dependencies.
- Core Command Comparison: Initialization (`npm init`/`yarn init`/`pnpm init`), installing dependencies (`npm install`/`yarn add`/`pnpm add`), lock files (`package-lock.json`/`yarn.lock`/`pnpm-lock.yaml`).
- Corepack: Bundled with Node.js 16.9+; after activation with `corepack enable`, it manages npm, Yarn, and pnpm versions automatically. Declaring the version in the `packageManager` field of `package.json` lets team members unify the toolchain via `corepack install`.
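A sketch of the `packageManager` declaration that Corepack reads (package name and versions are hypothetical examples):

```json
{
  "name": "demo-app",
  "packageManager": "pnpm@9.0.0",
  "engines": {
    "node": ">=20"
  }
}
```

With this in place, running `corepack enable` once makes the declared pnpm version the one invoked for every contributor.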
Best Practices for Version and Dependency Collaboration
- Use the same Node version in CI/CD as locally, ensured via `.nvmrc` + `nvm install` or `volta install`.
- Commit lock files to version control so dependency resolution is consistent across environments; when upgrading dependencies, use `npm update` (or the equivalent) and regenerate the lock file.
- For multi-repository or microservice architectures, combine version managers with monorepo package managers (e.g., pnpm workspaces, Yarn workspaces) to unify dependencies, and use `npm run`/`pnpm run` as a unified entry point for scripts.
- Regularly run `npm audit` or `pnpm audit` to check for security vulnerabilities; for private registries, configure `.npmrc` or `.yarnrc.yml` so authentication information is managed securely.
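For the private-registry point, a hypothetical `.npmrc` sketch; the registry URL and scope are placeholders, and the auth token is injected from the environment rather than committed:

```
; .npmrc - example private-registry setup (URL and scope are placeholders)
registry=https://registry.npmjs.org/
@myorg:registry=https://npm.example.com/
; read the auth token from the environment instead of committing it
//npm.example.com/:_authToken=${NPM_TOKEN}
```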
Node Learning Outline and Key Points
- Node Installation and Configuration, Using npm: Set up the runtime environment from scratch and get familiar with basic `node` and `npm` commands, paving the way for subsequent development.
- Managing Node and npm with nvm: Master multi-version switching and `.nvmrc` configuration to avoid runtime conflicts between projects.
- nvm Configuration and Important Commands: Focus on commands like `nvm install`/`use`/`ls`, and know how to set default versions and mirror sources.
- Node Event and Callback Mechanism: Understand how event-driven programming triggers callbacks, and clarify the `EventEmitter` subscribe/emit flow.
- Node Asynchronous I/O Model: Analyze the libuv thread pool and kernel event notification mechanisms, and understand the performance advantages of non-blocking I/O.
- Node's Single-Threaded Model: Explain the collaboration between a single thread and the event loop, and identify its bottlenecks and suitable scenarios.
- Node Module System: Master CommonJS import/export, module caching, and `require` lookup rules.
- npm Usage: Get familiar with dependency installation, semantic versioning, `npm scripts`, and the package publishing process.
- `package.json` Explained: Understand the role of core fields such as metadata, scripts, and dependency configuration item by item.
- Global vs. Local Installation: Differentiate installation scenarios for CLI tools and project dependencies to avoid permission and pollution issues.
- Important npm Features Explained: Advanced capabilities like `npm ci`, `prune`, and `audit` keep dependencies secure and stable.
- Node Asynchronous Programming Explained: Compare callbacks, Promises, and `async/await`, and master error handling and concurrency control.
- Node Stream Analysis: Learn Readable/Writable/Transform streams and backpressure control for handling large files and network transfers.
- Input and Output: Master file I/O and standard input/output APIs to build CLI or scripting tools.
- Node Network Capabilities: Use the `http`/`https`/`net` modules to build network services and understand how the underlying sockets work.
- Node Console: Utilize the `console` family for quick diagnostics such as debugging, timing, and table output.
- Event Loop Mechanism: Master the event loop phases and microtask execution order to avoid blocking the main thread.
- Node Debugging: Use the built-in debugger, Chrome DevTools, or VS Code breakpoint debugging to pinpoint issues.
- Using the `exports` Object: Distinguish between `exports` and `module.exports`, and understand module export conventions.
- Node File System Manipulation: Proficiently use the `fs` module for reading/writing, watching, permissions, and stream operations.
- Buffer Explained: Master the binary data storage structure and its coordination with encodings and network transmission.
- Node's Error Handling Model: Systematically understand synchronous/asynchronous error capture, global exceptions, and Promise rejection handling.
- Accessing MongoDB with Node: Operate document databases via drivers or ODMs, implementing CRUD and indexing.
- Accessing MySQL with Node: Connect to relational databases, learn connection pooling and transaction control.
- Accessing Redis with Node: Design high-performance services by combining caching, message queues, and other scenarios.
- Middleware Explained: Understand the request processing chain and how to separate reusable logic, writing middleware for frameworks or custom services.
- Node Web Server Explained: Build HTTP services, routing, static resources, and request-response processes from scratch.
- WebSocket Implementation in Node: Master the basic steps for creating long connections and the protocol upgrade process.
- WebSocket Data Transfer: Design message formats, heartbeats, and reconnection strategies to ensure reliable real-time communication.
- Socket.IO Explained: Familiarize yourself with features like rooms and namespaces to accelerate real-time business development.
- Express or KOA Full Feature Explanation: Systematically learn routing, middleware, error handling, templating, and static resource management to solidify Web framework practical skills.
Node.js Module System
CommonJS Basics
Node.js's original module system is based on the CommonJS specification, using `require` for imports and `module.exports` for exports. On first load, each file is wrapped in a function scope (`(function (exports, require, module, __filename, __dirname) { ... })`); the code executes only once and is cached in `require.cache`.
- `module.exports = value` determines the final value exposed by the module; `exports` is merely a shortcut reference to `module.exports` and cannot be reassigned wholesale.
- `require('./foo')` supports relative paths, `require('node:fs')` references built-in modules, and `require('package-name')` looks up `node_modules`.
- `require.main === module` determines whether the current file is the entry script, making it easy to distinguish CLI from library logic.
- Cache sharing: Multiple `require` calls for the same module return the same instance, which suits singleton state such as database connections or configuration.
// counter.js
let count = 0;
function increase() {
count += 1;
return count;
}
module.exports = {
increase,
get value() {
return count;
},
};
// app.js
const counter = require('./counter.js');
console.log(counter.increase()); // 1
console.log(counter.increase()); // 2 - state shared via the require cache
Module Resolution and Package Entry Points
`require` resolves specifiers as follows: absolute and relative paths load directly; for bare names, built-in modules take precedence, then `node_modules` directories are searched from the current directory upward through parent directories. For directories or packages:
- The `main` field in `package.json` points to the CommonJS entry; if an `exports` field exists, it takes precedence and can define subpath exports (e.g., `"./api": "./dist/api.js"`).
- If no entry is specified, Node tries `index.js`, `index.json`, then `index.node`.
- `.json` files are automatically parsed into objects, and `.node` files load native extensions.
- `require.resolve('pkg')` shows the actually resolved path, which helps when debugging multi-level dependencies.
ECMAScript Modules (ESM)
Node 14+ officially supports ESM. Activation methods include naming files as .mjs, or using .js files after setting "type": "module" in package.json. ESM features static analysis, asynchronous loading, and top-level await.
- Uses `import`/`export` syntax, with strict mode enabled by default; `__filename` and `__dirname` no longer exist and can be derived from `import.meta.url` + `fileURLToPath`.
- `import fs from 'node:fs';` imports the module's default export; named imports use `import { readFile } from 'node:fs/promises';`.
- `export default value` defines a default export; `export const name = value` or `export { local as alias }` define named exports.
- ESM resolution also honors the `exports` field, but extensions are no longer auto-completed and must be explicit (e.g., `import './utils.js'`).
// package.json: { "type": "module" }
import { readFile } from 'node:fs/promises';
const text = await readFile(new URL('./README.md', import.meta.url), 'utf-8');
export default text.length;
CommonJS and ESM Interoperability
In the same project, both module formats often need to coexist. Common interoperability methods include:
- Using old modules in ESM: `import legacy from './legacy.cjs'; const { doWork } = legacy;`
- Using new modules in CommonJS: load them with dynamic `import()`, since CommonJS cannot `require()` ESM synchronously (only recent Node versions add limited `require(esm)` support). `createRequire` from `node:module` goes the other direction, giving ESM code a `require` for CommonJS files.
- When importing across formats, everything a CommonJS module exposes is mapped to the ESM `default` export; an ESM default export reaches CommonJS through the `default` property of the namespace object returned by `import()`.
- Avoid partial initialization caused by circular dependencies; if necessary, split shared state into separate modules or defer the calls.
// cjs-wrapper.cjs
const { createRequire } = require('node:module');
// createRequire anchors a require() at this file; it loads CommonJS/JSON,
// not ESM - the ESM module below is loaded via dynamic import().
const requireFromHere = createRequire(__filename);
async function loadMath() {
const { sum } = await import('./math.js'); // ESM
return sum(1, 2);
}
module.exports = { loadMath, config: requireFromHere('./config.json') };
Practical advice: In new projects, use ESM uniformly where possible. When migrating older projects, keep the format explicit (e.g., `.cjs`/`.mjs` suffixes or a package-level `type`), and use the `exports` field to consolidate external interfaces, avoiding deep path coupling.
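A sketch of the consolidated-`exports` advice (package name and file paths are hypothetical):

```json
{
  "name": "my-lib",
  "type": "module",
  "main": "./dist/index.cjs",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    },
    "./api": "./dist/api.js"
  }
}
```

Consumers can then import only the declared entry points (`my-lib` and `my-lib/api`), so internal file moves do not break downstream imports.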
Node.js High-Performance HTTP
Node.js's HTTP server runs as a single-threaded service driven by event-based logic:
// app.js
const http = require('http');
const server = http.createServer(function (request, response) {
response.writeHead(200, { 'content-type': 'text/plain' });
response.end('Hello world~ node.js');
});
server.listen(3000, 'localhost');
console.log('Node server started on port 3000');
server.on('listening', function () {
console.log('Server is listening...');
});
// other server events that can be listened to: 'connection', 'close'
Note: The server example above uses the native `http` module to create a server directly. `response.writeHead` sets the response headers, and the handler immediately returns a plain text string; `server.on('listening')` is used to confirm the port binding. To extend the logic when connections are established or closed, the `connection`, `close`, and other events can also be listened to.
HTTP Client Example
const http = require('http');
let responseData = '';
const req = http.request(
{
host: 'localhost',
port: 3000,
method: 'GET',
},
function (response) {
response.on('data', function (chunk) {
responseData += chunk;
});
response.on('end', function () {
console.log(responseData);
});
}
);
req.end();
Note: The client sends a `GET` request to `localhost:3000` via `http.request`, accumulating the response content in the `data` event handler until the `end` event fires and `responseData` is printed once. Combined with the server example above, this verifies the full path from request initiation to response output.
Request Information Echo Example
The example below, when handling scenarios with request bodies such as POST, collects data sent by the client and concatenates key request-related information before returning it to the browser, facilitating debugging and understanding the workflow of the native http module.
const http = require('http');
const server = http.createServer(function (request, response) {
let data = '';
request.on('data', function (chunk) {
data += chunk;
});
request.on('end', function () {
const method = request.method;
const headers = JSON.stringify(request.headers);
const httpVersion = request.httpVersion;
const requestUrl = request.url;
response.writeHead(200, { 'Content-Type': 'text/html' });
const responseData = `${method}, ${headers}, ${httpVersion}, ${requestUrl}, ${data}`;
response.end(responseData);
});
});
server.listen(3000, function () {
console.log('Node Server started on port 3000');
});
Note: `request.on('data')` receives data chunks continuously. Once the `end` event fires, the complete request body can be read safely and a response sent; in this example the HTTP method, headers, protocol version, URL, and request body are concatenated into a string and returned. In real projects this could instead be structured JSON or processed per business requirements.
URL Module and Common Usage
Node.js provides two sets of URL handling APIs: one is the WHATWG-compliant URL/URLSearchParams classes (recommended), and the other is the legacy url.parse, url.format, and other functions. Common parsing and construction scenarios can directly use the WHATWG version; only when dealing with old code or requiring compatibility with special usages should one fall back to the traditional API.
// WHATWG URL: parse a URL and read its query parameters
const { URL } = require('node:url');
const userUrl = new URL('/users?role=admin&active=true', 'https://example.com');
console.log(userUrl.hostname); // example.com
console.log(userUrl.pathname); // /users
console.log(userUrl.searchParams.get('role')); // admin
console.log(userUrl.searchParams.has('active')); // true
URLSearchParams can also conveniently construct query strings:
const { URLSearchParams } = require('node:url');
const params = new URLSearchParams({ page: 2, pageSize: 20 });
params.append('keyword', 'nodejs');
console.log(params.toString()); // page=2&pageSize=20&keyword=nodejs
If a project is still using the traditional url module, parse/format/resolve can be used for deconstruction and assembly:
const url = require('node:url');
const legacyParsed = url.parse(
'https://foo.com:8080/articles/list?tag=node#summary',
true
);
console.log(legacyParsed.host); // foo.com:8080
console.log(legacyParsed.query.tag); // node
const rebuilt = url.format({
protocol: 'https',
hostname: 'foo.com',
pathname: '/articles/detail',
query: { id: 123 },
});
console.log(rebuilt); // https://foo.com/articles/detail?id=123
console.log(url.resolve('https://foo.com/docs/', '../api')); // https://foo.com/api
Summary: Prefer the object-oriented WHATWG `URL` interface for better readability and natively standard behavior; fall back to older functions like `url.parse` only when maintaining legacy projects or handling special formats.
Querystring Utility Functions
The querystring module was used in early Node.js for serializing and parsing query strings, with an API style leaning towards functional programming. Although URLSearchParams is recommended in modern projects, querystring may still be encountered when dealing with legacy code or compatibility with older services. Core functions include querystring.parse, querystring.stringify, querystring.escape, and querystring.unescape.
const querystring = require('node:querystring');
const query = 'page=2&pageSize=20&keyword=nodejs';
const parsed = querystring.parse(query);
console.log(parsed.page); // '2'
console.log(parsed.keyword); // 'nodejs'
stringify can convert objects into query strings, supporting custom delimiters and encoding functions:
const params = { page: 2, pageSize: 20, keyword: 'node.js 入门' };
const qs = querystring.stringify(params);
console.log(qs); // page=2&pageSize=20&keyword=node.js%20%E5%85%A5%E9%97%A8
const custom = querystring.stringify(params, ';', ':');
console.log(custom); // page:2;pageSize:20;keyword:node.js%20%E5%85%A5%E9%97%A8
With escape/unescape, encoding details can be controlled, commonly used when handling special characters or non-standard encoding scenarios:
const raw = 'name=张三&city=北京';
const escaped = querystring.escape(raw);
console.log(escaped); // name%3D%E5%BC%A0%E4%B8%89%26city%3D%E5%8C%97%E4%BA%AC
console.log(querystring.unescape(escaped)); // name=张三&city=北京
Summary: `querystring` provides a compatibility path for legacy code, covering parsing, serialization, and encoding control; in new projects, `URLSearchParams` is recommended for more consistent behavior and better internationalization support.
Common Debugging Tools in the util Module
node:util collects many helper functions for development and debugging, which can both improve log readability and make older callback APIs easier to compose.
const util = require('node:util');
// util.format formats with placeholders, handy for quickly assembling log lines
const message = util.format('User %s logged in at %d', 'alice', Date.now());
console.log(message); // User alice logged in at 1691392589123
// util.inspect prints objects more readably, with control over depth and colors
const config = {
db: { host: 'localhost', password: 'secret' },
features: ['sso', 'metrics'],
};
console.log(
util.inspect(config, {
depth: null,
colors: true,
showHidden: false,
compact: false,
})
);
// util.inspect.custom lets an object customize how it is printed
const user = {
name: 'alice',
password: 'secret',
[util.inspect.custom]() {
return `User<name=${this.name}>`;
},
};
console.log(user); // User<name=alice>
For older callback functions, `util.promisify`/`util.callbackify` can be used to convert between Promises and callbacks, unifying asynchronous syntax:
const fs = require('node:fs');
const readFileAsync = util.promisify(fs.readFile);
async function loadConfig() {
const content = await readFileAsync('./config.json', 'utf-8');
return JSON.parse(content);
}
const legacyFn = util.callbackify(loadConfig);
legacyFn(function (err, data) {
if (err) {
console.error('loadConfig failed', err);
} else {
console.log('config ready', data);
}
});
When troubleshooting complex issues, `util.debuglog` can be used to create debug logs categorized by namespace; simply set the corresponding NODE_DEBUG environment variable before startup to enable it:
const debug = util.debuglog('app');
function doSomething() {
debug('processing request %d', process.pid);
}
doSomething(); // only prints when started as: NODE_DEBUG=app node app.js
// util.getSystemErrorName maps a numeric errno to a friendly error name
console.log(util.getSystemErrorName(-2)); // 'ENOENT' on POSIX platforms (codes and names are platform-dependent)
Additionally, tools like `util.types` and `util.inherits` often appear in low-level libraries: the former is used for precise determination of structures like Buffer and TypedArray, while the latter assists with ES5-style prototype inheritance.
Summary: During development and debugging, use `util.format`/`util.inspect` to improve log quality (with `colors: true` for colored object output), `util.promisify`/`util.callbackify` to unify asynchronous call styles, and tools like `util.debuglog` and `util.getSystemErrorName` to categorize debug information and error messages.
DNS Module and Common Use Cases
Node.js's built-in node:dns module is responsible for domain name resolution, relying on the libuv thread pool and system resolver at its core. It is commonly used in network scenarios such as service discovery, health checks, and monitoring IP changes. Understanding the following differences is particularly crucial:
- `dns.lookup`: Uses the operating system's resolver (so `/etc/hosts` and the system cache apply); by default it returns only the first record, but `{ all: true, family: 4 }` returns all IPv4/IPv6 addresses. Calls occupy the libuv thread pool, so `UV_THREADPOOL_SIZE` needs attention during intensive resolution.
- `dns.resolve*` series: Sends queries directly to the configured DNS servers over the network, bypassing the local OS resolver and its cache. `resolve4`/`resolve6`/`resolveMx`/`resolveTxt`/`resolveSrv`/`resolveAny`, etc., return structured results for the corresponding record types.
- `dns.reverse` and `dns.lookupService`: The former performs reverse resolution from an IP; the latter resolves an IP + port pair to a hostname and service name (based on `/etc/services`).
- `dns.promises` and `Resolver`: Provide Promise-based APIs and allow custom resolvers (`resolver.setServers(['1.1.1.1'])`), convenient for composition in async/await code.
- Result Order Control: `dns.setDefaultResultOrder('ipv4first')` specifies the IPv4/IPv6 return order to avoid connection timeouts caused by address priority on dual-stack networks.
const dns = require('node:dns');
dns.lookup('nodejs.org', { all: true }, function (err, addresses) {
if (err) {
console.error('lookup failed', err);
return;
}
console.log('system resolver addresses:', addresses);
});
dns.resolve4('nodejs.org', function (err, records) {
if (err) {
console.error('resolve4 failed', err);
return;
}
console.log('A records from the configured DNS servers:', records);
});
const { promises: dnsPromises } = require('node:dns');
async function inspectService(hostname) {
const addresses = await dnsPromises.lookup(hostname, { all: true });
const mxRecords = await dnsPromises.resolveMx(hostname);
const reverseHostnames = await Promise.all(
addresses.map((item) => dnsPromises.reverse(item.address))
);
console.log({ addresses, mxRecords, reverseHostnames });
}
inspectService('example.com').catch(console.error);
Usage strategy: For high-frequency queries, cache or batch resolutions at the business layer to avoid exhausting the thread pool; when a specific DNS server must be used (e.g., an enterprise intranet DNS or a public resolver), prefer `dns.promises.Resolver` combined with timeout and retry strategies to keep the resolution path stable.
Comprehensive Example
Below, a minimal runnable login service demonstrates how to implement modular layering based on Node.js built-in modules, including entry, controller, and service layers.
project/
├── controllers/
│ └── authController.js
├── services/
│ └── userService.js
└── server.js
services/userService.js is responsible for pure business logic, providing a UserService class for validating user credentials:
// services/userService.js
class UserService {
constructor() {
// Simulated in-memory database; replace with a real data source in production
this.users = new Map([
['alice', { password: '123456' }],
['bob', { password: 'password' }],
]);
}
login(username, password) {
const record = this.users.get(username);
if (!record || record.password !== password) {
const error = new Error('Invalid username or password');
error.statusCode = 401;
throw error;
}
// Do not echo the password back to the caller.
return { username };
}
}
module.exports = new UserService();
controllers/authController.js handles HTTP details, calls the service layer, and returns standardized responses:
// controllers/authController.js
const userService = require('../services/userService');
function parseJsonBody(req) {
return new Promise((resolve, reject) => {
let raw = '';
req.setEncoding('utf8');
req.on('data', (chunk) => {
raw += chunk;
if (raw.length > 1e6) {
reject(new Error('Request body too large'));
req.destroy(); // req.connection is deprecated; destroy the request directly
}
});
req.on('end', () => {
try {
resolve(JSON.parse(raw || '{}'));
} catch (err) {
reject(new Error('Request body must be valid JSON'));
}
});
req.on('error', reject);
});
}
async function handleLogin(req, res) {
try {
const { username, password } = await parseJsonBody(req);
const user = userService.login(username, password);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Login successful', user }));
} catch (err) {
const statusCode = err.statusCode || 400;
res.writeHead(statusCode, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: err.message }));
}
}
module.exports = { handleLogin };
The entry file server.js uses the built-in http module to start the service and handle route dispatch:
// server.js
const http = require('node:http');
const { handleLogin } = require('./controllers/authController');
const server = http.createServer((req, res) => {
if (req.method === 'POST' && req.url === '/login') {
handleLogin(req, res);
return;
}
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Not Found' }));
});
const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
console.log(`HTTP server listening on http://localhost:${PORT}`);
});
After running node server.js, you can use curl to send requests and verify the layered logic:
curl -X POST http://localhost:3000/login \
-H 'Content-Type: application/json' \
-d '{"username": "alice", "password": "123456"}'
curl -X POST http://localhost:3000/login \
-H 'Content-Type: application/json' \
-d '{"username": "alice", "password": "wrong"}'
To avoid relying on external tools, a client script using the built-in http module can also be written to complete login requests directly in Node.js:
// client.js
const http = require('node:http');
function login(username, password) {
return new Promise((resolve, reject) => {
const payload = JSON.stringify({ username, password });
const req = http.request(
{
hostname: 'localhost',
port: 3000,
path: '/login',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(payload),
},
},
(res) => {
let raw = '';
res.setEncoding('utf8');
res.on('data', (chunk) => {
raw += chunk;
});
res.on('end', () => {
try {
resolve({ statusCode: res.statusCode, body: JSON.parse(raw || '{}') });
} catch (err) {
reject(err);
}
});
}
);
req.on('error', reject);
req.write(payload);
req.end();
});
}
async function main() {
try {
const success = await login('alice', '123456');
console.log('Login success response:', success);
const failure = await login('alice', 'wrong');
console.log('Login failure response:', failure);
} catch (err) {
console.error('Request failed', err);
}
}
main();
After starting the server, run node client.js to see both successful and failed response results.