For me and my team, the reason for choosing Node.js is simple: development is fast. Front-end engineers who already know JavaScript can get productive quickly, which keeps costs down. Pick an HTTP server library to start a server, choose the right middleware, wire up the request routes, use an ORM library to talk to the database where it makes sense, and basic CRUD is done.
Scenarios for Node.js
Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. This model lets Node.js avoid wasting CPU time waiting for input or output (databases, file systems, web servers, …) to respond. Node.js is therefore well suited to highly concurrent, I/O-intensive scenarios with a small amount of business logic.
Mapped to everyday business: for an internal system that mostly just does CRUD against a database, the server side can be implemented entirely in Node.js.
For online business with modest traffic and simple logic, the server side can also be written entirely in Node.js. For high-traffic, complex projects, Node.js is generally used as the access layer, while backend engineers implement the underlying services, as in the diagram below:
We Are All Writing JS: How Does Node.js Development Differ from Page Development?
Developing pages in the browser means dealing with users: the work is interaction-heavy, and the browser provides a wide range of Web APIs to use. Node.js is mainly oriented toward data: it receives a request and returns specific data. That is the difference in the business path. The real difference, though, lies in the execution model (a term I am coining here). A diagram shows it directly:
When you develop a page, every user gets their own copy of the JS code in their browser. If the code crashes under some condition, it affects only that user, who can recover by refreshing; other users are untouched. In Node.js, by contrast, unless you enable multiple processes, every user's request goes through the same JS code, executed by a single thread. If one user's request triggers an error that crashes the Node.js process, the whole server goes down. Even with a process guard that restarts it, heavy traffic can trigger the error again and again, leaving the server crashing and restarting in a loop and the user experience suffering accordingly.
That, in essence, is the biggest difference between Node.js development and front-end JS development.
Do’s and Don’ts When Developing with Node.js
When a user accesses a Node.js service, if a request gets stuck, the service is slow to return results, or the logic goes wrong and the service crashes, the experience suffers badly. The goal of the server side is to return data quickly and reliably.
Since Node.js is not good at handling complex logic (JavaScript itself does not execute very efficiently), if you use Node.js as an access layer you should avoid putting complex logic in it. A crucial technique for processing data and returning it quickly: use caching.
For example, when using Node for React server-side rendering, the renderToString API involves fairly heavy logic. If the page is complex, executing renderToString in full on every request keeps the thread busy for a long time, increasing response time and reducing the service's throughput. This is where caching becomes essential.
The main way to implement caching is an in-memory cache, which can be built with Map, WeakMap, WeakRef, and similar primitives. Here is a simple example:
const cache = new Map();

router.get('/getContent', async (req, res) => {
  const id = req.query.id;
  // Cache hit: return immediately without touching the backend.
  if (cache.has(id)) {
    return res.send(cache.get(id));
  }
  // Cache miss: fetch, process, store, then return.
  const rsp = await rpc.get(id);
  const content = process(rsp);
  cache.set(id, content);
  return res.send(content);
});
An important issue when using caching is how the in-memory cache gets updated. The simplest approach is a timer that periodically clears the cache, letting the next request repopulate it. Building on the code above:
// Clear the whole cache once a minute; the next request rebuilds it.
setInterval(function () {
  cache.clear();
}, 1000 * 60);
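Clearing the entire cache on a timer is coarse. An alternative sketch is to store an expiry timestamp with each entry and treat stale entries as misses; the `TTL_MS` constant and helper names below are illustrative, not from any particular library:

```javascript
// Per-entry TTL cache sketch. Each entry records when it expires; reads of
// expired entries evict them and report a miss.
const TTL_MS = 60 * 1000; // assumed freshness window: one minute
const cache = new Map();

function cacheSet(key, value, now = Date.now()) {
  cache.set(key, { value, expires: now + TTL_MS });
}

function cacheGet(key, now = Date.now()) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (entry.expires <= now) {
    cache.delete(key); // stale entry: evict it and treat as a miss
    return undefined;
  }
  return entry.value;
}
```

Passing `now` as a parameter keeps the functions easy to test; in production code you would normally just call `Date.now()` inside.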
If the server side is implemented entirely in Node, with Node connecting directly to the database, then, as long as data-freshness requirements are loose and traffic is moderate, a model like the one above can be used, as in the figure below. It reduces pressure on the database and speeds up Node's responses.
It is also important to watch the size of the in-memory cache. If you keep writing new data into it, memory grows without bound and eventually blows up. Consider an LRU (Least Recently Used) policy: dedicate a block of memory as the cache area, and when the cache reaches its size limit, evict the entry that has gone unused the longest. Note also that an in-memory cache is lost entirely whenever the process restarts.
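A minimal LRU cache can be sketched on top of Map, which iterates its keys in insertion order; the class and method names below are illustrative:

```javascript
// LRU cache sketch: Map preserves insertion order, so the first key in the
// Map is always the least recently used entry.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    // Delete and re-insert to mark this entry as most recently used.
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (the Map's first key).
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

In practice you would more likely pull in a battle-tested package than hand-roll this, but the mechanism is the same.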
When the backend business is more complex, or the access layer's traffic and data volume are large, you can use the architecture below, with an independent in-memory cache service: the Node access layer reads data directly from the cache service, and the backend services update the cache service directly.
Of course, the architecture above is the simplest case; in practice there are distributed caching and cache-consistency issues to consider. That is a topic of its own.
Error Handling
Because of the nature of the language, Node services are relatively error-prone, and an error can make the whole service unavailable. Handling errors well therefore matters a great deal.
The most common way to handle errors is try/catch. However, try/catch cannot catch asynchronous errors. Asynchronous operations are ubiquitous in Node.js, and they mostly surface errors in callback functions. Consider an example:
const fs = require('fs');

const readFile = function (path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, (err, data) => {
      if (err) {
        // Thrown inside an async callback: the caller's try/catch never sees it.
        throw err;
      }
      resolve(data);
    });
  });
};
router.get('/xxx', async function (req, res) {
  try {
    const data = await readFile('xxx');
    ...
  } catch (e) {
    ...
    res.send(500);
  }
});
In the code above, the error thrown inside readFile's callback cannot be caught by the catch; it crashes the process instead. If we replace throw err with a call to the executor's reject(err), the rejection propagates through the Promise and the catch handles it.
We can wrap all asynchronous operations in Promises and then use async/await with try/catch to handle errors uniformly.
Even so, some places will inevitably be missed. For those, you can use process-level handlers to catch global errors and prevent the process from exiting outright, which would leave subsequent requests hanging. Sample code:
process.on('uncaughtException', (err) => {
  console.error(`${err.message}\n${err.stack}`);
});

process.on('unhandledRejection', (reason, p) => {
  console.error(`Unhandled Rejection at: Promise ${p} reason: `, reason);
});
For error trapping in Node.js there is also the domain module, but it is deprecated and I have not used it in a project, so I will not expand on it here. In recent years Node.js introduced the async_hooks module; it is still experimental and not recommended for direct use in production. What really improves the efficiency and stability of a Node service is doing the fundamentals well: guard your processes, run multiple processes, alert on errors and fix them promptly, maintain good coding standards, and use an appropriate framework.
Closing Thoughts
This article summarizes what I have learned from a bit more than a year of Node.js practice: Node.js development and front-end web development follow different lines of thinking and have different focal points. I have not been doing Node.js formally for very long, and my understanding of some points is still shallow; this is just my experience so far. Discussion and corrections are welcome.