Performance & diagnostics best practices - call for ideas #256
Title: Monitor customer-facing metrics first. Anything else you monitor (e.g. the event loop) is a cause, not a symptom. Quote from My Philosophy on Alerting:
Examples: we will show how to achieve this in Node.js |
Perhaps debugging can be part of it? This is a good place - https://www.joyent.com/node-js/production/debug |
The express documentation has good tips: https://expressjs.com/en/advanced/best-practice-performance.html We may want to review the current production practices section, as they may overlap or be more appropriate as part of the new section, for example the bullet around serving assets from a CDN rather than Node.js |
Choose the classical for loop over for...in / forEach when iterating arrays. Reasons why: https://stackoverflow.com/questions/500504/why-is-using-for-in-with-array-iteration-a-bad-idea https://stackoverflow.com/questions/43031988/javascript-efficiency-for-vs-foreach/43032526#43032526 |
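A short sketch of the difference, based on the linked Stack Overflow answers (the array is illustrative):

```js
const arr = [10, 20, 30];

// for...in iterates enumerable string keys (including inherited ones), not values,
// which is why it is a bad fit for plain array iteration
for (const key in arr) {
  console.log(key); // '0', '1', '2' - strings, not the elements
}

// The classical indexed loop is explicit, avoids a callback per element,
// and allows early exit with break
for (let i = 0; i < arr.length; i++) {
  console.log(arr[i]);
}
```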
Use a version of Node that ships with the new TurboFan JIT compiler rather than just the older Crankshaft compiler. Reasons why: |
Deal with databases and external APIs in batches, meaning that a developer should favor fetching 100 entities using a single HTTP request over issuing 100 HTTP requests that each fetch a single document. The same goes for database operations: writing and fetching data are faster when done in batches rather than as multiple separate operations. |
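A minimal sketch of the idea; the endpoint, its `ids` query parameter and the helper names are illustrative, and a fetch-capable runtime is assumed:

```js
// Don't: one HTTP round trip per entity
async function getUsersOneByOne(ids) {
  return Promise.all(
    ids.map(id => fetch(`https://api.example.com/users/${id}`).then(res => res.json()))
  );
}

// Do: a single request carrying all ids, assuming the API supports batch reads
async function getUsersBatched(ids) {
  const res = await fetch(`https://api.example.com/users?ids=${ids.join(',')}`);
  return res.json();
}
```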
Copy-pasting from the Reddit discussion:
|
Use factories or constructors to ensure objects are created using the same schema, so that V8 won't have to generate new hidden classes at runtime (a.k.a. polymorphic vs monomorphic functions) |
|
When evaluating alternatives and you need to measure performance, use benchmarking tools like autocannon and benchmark.js, which provide more precise results (micro-timing, many iterations) and better benchmarking throughput (e.g. issuing more calls per second); see the sketch below |
Analyzing and monitoring CPU bottlenecks or memory leaks with Node Clinic/Prometheus |
A few others:
References:
|
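A minimal sketch of the benchmark.js usage mentioned in the list above (the two implementations being compared are illustrative; autocannon plays a similar role for HTTP load testing):

```js
const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();

suite
  .add('classical for loop', () => {
    const arr = [1, 2, 3, 4, 5];
    let sum = 0;
    for (let i = 0; i < arr.length; i++) sum += arr[i];
  })
  .add('forEach', () => {
    const arr = [1, 2, 3, 4, 5];
    let sum = 0;
    arr.forEach(n => { sum += n; });
  })
  // each cycle reports ops/sec with a statistical margin of error
  .on('cycle', event => console.log(String(event.target)))
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();
```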
I think a good performance practice is to use a load balancer for requests, and to distribute the Node.js app across the multiple cores of your CPU (because, as we know, Node is single-threaded). The latter can be achieved extremely easily with PM2, by setting one flag that does the multi-core load balancing automatically. |
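For reference, a minimal sketch of the same idea with the built-in cluster module (PM2's cluster mode automates this):

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // fork one worker per CPU core
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', worker => {
    console.log(`Worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  // each worker runs its own server; incoming connections are distributed by the master
  http.createServer((req, res) => res.end('hello')).listen(3000);
}
```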
@TheHollidayInn Great list! A few remarks/questions:
|
@barretojs First, welcome on board, good to have you here. I'm not sure about this advice, as PM2 is using the cluster module under the hood (I think so at least, does it?), which seems to be slower than a "real" router like nginx or iptables. What do you think? |
@i0natan Np!
|
@i0natan You're right. I wasn't aware of iptables' capability, and the numbers are impressive. I think there's no doubt that for pure performance it distributes the load way better. Thanks for the great in-depth article! |
/remind me in 2 days |
@i0natan set a reminder for Oct 9th 2018 |
I'd definitely +1 using a CDN for static assets - we have this as a production best practice at the moment. I think the performance section and production sections are going to have a lot of potential crossover, so it's a good time to clear up the differences. Again, with gzip & other CPU-intensive tasks, we actually recommend that they are better handled outside of Node for performance reasons. So I guess it's whether we want to adjust our view on this, or, as part of this section, recommend not to use Node. Thoughts? See current bullets 5.3 & 5.11 |
👋 @i0natan, |
Adding one more which exists in our production best practices: set NODE_ENV=production when rendering on the server, as it enables the templating engine's cache |
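A small sketch with Express (assuming a view engine such as Pug is installed); `view cache` is the setting that production mode turns on:

```js
const express = require('express');
const app = express();

app.set('view engine', 'pug');
// In production (NODE_ENV=production) Express enables the view cache by default,
// so templates are compiled once instead of on every render:
// app.set('view cache', true);

app.get('/', (req, res) => res.render('index', { title: 'Home' }));

// start with: NODE_ENV=production node server.js
app.listen(3000);
```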
Also, I highly recommend this video to be included in our bibliography: https://www.youtube.com/watch?v=K8spO4hHMhg&vl=en |
For MySQL Users:
|
@Berkmann18 Great set of advice. And welcome aboard. Why does removing orphaned dependencies (tree shaking) improve performance? What is "Memoising"? @mohemos Welcome & thank you. Adding some of the advice to the pool #302 |
@Berkmann18 Thank you. Example:

```js
// NodeJS
const maths = require('maths');
// ES
import * as maths from 'maths';
```

Only importing what is needed:

```js
// NodeJS
const { add } = require('maths');
// Or
const add = require('maths').add;
// ES
import { add } from 'maths';
import add from 'maths/add';
```

Memoising is where the code does something once and then, instead of doing it again (which would waste resources re-doing the exact same work), it simply reuses the result that was stored in a variable. Example:

```js
const fx = (x, y) => {
  console.log('The result is', doALongCalculation(x, y));
  return doALongCalculation(x, y);
};
```

With memoising:

```js
const fx = (x, y) => {
  const result = doALongCalculation(x, y);
  console.log('The result is', result);
  return result;
};
```

So it's basically where the code memorises some data stored in a variable, instead of re-doing the computation/fetching process again and again. |
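For caching across calls (not just within one call), a small memoisation helper might look like this sketch; `doALongCalculation` is the assumed expensive, pure function from the example above:

```js
const cache = new Map();

function memoisedCalculation(x, y) {
  const key = `${x},${y}`;
  if (!cache.has(key)) {
    // computed only the first time these arguments are seen
    cache.set(key, doALongCalculation(x, y));
  }
  return cache.get(key);
}
```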
@Berkmann18 Thanks for the comprehensive and great explanation. Memoising sounds like a good fit for the General bullet - a bullet which holds all generic practices that are not Node.js-specific. About tree shaking - I'm familiar with it; for frontend the benefit is obvious, but why would it improve backend performance? At run-time all the files are parsed anyway (this is how 'require' works). Do you mean to also include re-bundling the code and packing it into fewer files? I'm looking for the exact part that will boost performance |
@i0natan You're welcome 😄 . |
We should add throttling to the list |
We should add |
Can you provide an example, maybe even using code, and some data/benchmarks that show the impact? We just want to ensure we address popular issues that leave a serious performance footprint |
@VinayaSathyanarayana Thanks for joining the brainstorm :]
Can you provide a (pseudo) code example? Are we concerned because of performance, or because of code readability/errors? |
@i0natan Sure, I've made a repo with benchmarks using only |
@AbdelrahmanHafez For your proposed idea |
Not only for DB purposes: using Promise.all instead of a series of async/await calls should be a performance best practice. I.e. if we need to make 4 API calls with Axios to build our complete response, instead of using await for each one it is better to use a single await with Promise.all().
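A sketch of the difference with Axios (the URLs are illustrative):

```js
const axios = require('axios');

// don't: each await waits for the previous call to finish
async function buildResponseSequential() {
  const users = (await axios.get('https://api.example.com/users')).data;
  const orders = (await axios.get('https://api.example.com/orders')).data;
  const products = (await axios.get('https://api.example.com/products')).data;
  const reviews = (await axios.get('https://api.example.com/reviews')).data;
  return { users, orders, products, reviews };
}

// do: all four requests are in flight at the same time
async function buildResponseParallel() {
  const [users, orders, products, reviews] = await Promise.all([
    axios.get('https://api.example.com/users'),
    axios.get('https://api.example.com/orders'),
    axios.get('https://api.example.com/products'),
    axios.get('https://api.example.com/reviews'),
  ]);
  return { users: users.data, orders: orders.data, products: products.data, reviews: reviews.data };
}
```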
@migramcastillo Makes a lot of sense. Do you want to write that bullet? |
@js-kyle Maybe add this idea to our TOC issue? |
I'd be glad to, in which branch should I make the PR? |
@migramcastillo If you meant to ask what branch to raise your PR against, that would be master. If you meant to ask what branch to make your changes in, feel free to create a new branch in your fork. |
Hello there! 👋 |
Although Promise.all is better than sequential non-dependent promises in most cases, it works best only when the number of calls/promises is known beforehand, or is at least small. With a dynamic number of promises, e.g. batching at runtime, if the number of calls increases significantly it might end up exhausting the whole connection pool, creating deadlocks, or, if they are HTTP calls, spamming the downstream service. Sample Stack Overflow link: https://stackoverflow.com/questions/53964228/how-do-i-perform-a-large-batch-of-promises
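One simple way to cap the number of in-flight promises is to process the work in fixed-size chunks; this is only a sketch, and `fetchItem`/`batchSize` are illustrative:

```js
async function processInBatches(ids, batchSize = 10) {
  const results = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const chunk = ids.slice(i, i + batchSize);
    // only `batchSize` requests are in flight at any time
    const chunkResults = await Promise.all(chunk.map(id => fetchItem(id)));
    results.push(...chunkResults);
  }
  return results;
}
```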
|
When writing an NPM module, it's a good idea to measure the time needed to require it:

```js
console.time('require');
const myModule = require('my-module');
console.timeEnd('require');
```

Reason: this time is added to the overall startup time of every application that uses your module. It doesn't play a big role for long-running servers, but is very important for CLI utilities (pre-commit hooks, linters etc.). A few examples:
How to fix/mitigate:
|
Reduce the number of hidden classes
Benefit from the inline cache and let TurboFan generate optimized assembly code. Use the ESLint sort-keys rule to ensure the same internal hidden classes for your objects. This allows you to fully benefit from inline caching, and TurboFan will be able to compile the JS code into optimized bytecode. If you are interested I can write a section on this topic, providing code examples, performance diffs, etc. Hidden classes and Inline cache |
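A small sketch of the idea (property order decides the hidden class):

```js
// Objects created with the same property order share one hidden class,
// so the call sites that use them stay monomorphic
function createPoint(x, y) {
  return { x, y };
}
const a = createPoint(1, 2);
const b = createPoint(3, 4); // same shape as `a`

// Different property order (or adding properties later) produces a different hidden class
const c = { y: 4, x: 3 };
```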
Prefer monomorphic functions
Prefer the usage of monomorphic functions instead of polymorphic functions. This allows:

Example:

```js
// don't
function add(a, b) {
  return a + b;
}
add(21, 42);
add(21.2, 42.3);

// do
function addInt(a, b) {
  return a + b;
}
function addFloat(a, b) {
  return a + b;
}
addInt(21, 42);
addFloat(21.2, 42.3);
```

If you are interested I can write a section on this topic, providing code examples, performance diffs, etc. Inline Caches with Monomorphism, Polymorphism and Megamorphism |
Avoid dynamic allocation of functions
Avoid declaring functions inside other functions. Functions are objects, and object allocation is costly, as is the resulting garbage collection.

Example:

```js
// don't
function add(a, b) {
  const addInt = (a, b) => {
    return a + b;
  };
  if (typeof a === 'number') {
    return addInt(a, b);
  }
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}

// do
function addInt(a, b) {
  return a + b;
}
function add(a, b) {
  if (typeof a === 'number') {
    return addInt(a, b);
  }
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}
```

If you are interested I can write a section on this topic, providing code examples, performance diffs, etc. |
Avoid accessing process.env repeatedly. Copy and cache it: https://www.reddit.com/r/node/comments/7thtlv/performance_penalty_of_accessing_env_variables/ |
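A minimal sketch of the idea, assuming a small config module that is required once at startup:

```js
// config.js - read process.env once and export a plain object;
// property access on process.env goes through the runtime and is comparatively slow,
// so hot code paths should read from this cached copy instead
const config = {
  port: Number(process.env.PORT) || 3000,
  nodeEnv: process.env.NODE_ENV || 'development',
};

module.exports = config;
```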
@goldbergyoni Are you still looking for contributor on this topic? |
There are really good ideas here! We should add them. I also have some:

```js
// Bad - really large file not getting garbage collected
function run() {
  const reallyLargeFile = readFile();
  const transformed = transformData(reallyLargeFile);
  // do some extra work
}

// Good - really large file getting garbage collected
function run() {
  let transformed;
  // prefer extracting to a function instead of an empty block scope
  {
    const reallyLargeFile = readFile();
    transformed = transformData(reallyLargeFile);
  }
  // do some extra work
}
```
|
Looks like in order to benefit from this change, the "extra work" part
I think these details should be noted - the readers should not follow the advice blindly but should have some understanding of why and when it helps. Also, it would be great to have some proofs or benchmarks to illustrate this idea.
But we've already seen |
We're kicking off the performance best practices section. Any ideas regarding performance best practices, or great bibliography to extract ideas from (YouTube, blog posts, etc.)?
This issue will summarize all the ideas and bibliography and serve as a TOC for the writers. This is also a call for writers: anyone who would like to learn by practicing, discussing and writing.
@sagirk @BrunoScheufler @js-kyle