feat: script to compare fresh install time #103
base: master
Conversation
A quick test to compare the first install time of a module via `ipfs-npm` against `npm`.

License: MIT
Signed-off-by: Oli Evans <oli@tableflip.io>
Today it's 40s rather than 2 mins for `./docker-race.sh`:

```
$ ./docker-race.sh
found ipfs-npm Docker image
---- ipfs-npm flavour ----
👿 Spawning an in-process IPFS node using repo at /root/.jsipfs
...
+ iim@0.6.1
added 415 packages from 782 contributors in 36.717s
🎁 /usr/local/bin/npm exited with code 0
🔏 Updating package-lock.json

real	0m40.804s
user	0m0.033s
sys	0m0.018s

---- npm flavour ----
/usr/local/bin/iim -> /usr/local/lib/node_modules/iim/src/bin.js
+ iim@0.6.1
added 415 packages from 782 contributors in 8.375s

real	0m10.934s
user	0m0.031s
sys	0m0.028s
```
Nice. I wonder if the time of day affects it?
Cool! I've run it multiple times and consistently get around 45s for `ipfs-npm`.

Small ask: we should run the tests in ephemeral containers (see notes below).
prefix relative path to docker build step

Co-Authored-By: Marcin Rataj <lidel@lidel.org>
No need to keep the container around after the test, as it will slowly eat up disk space. If you've already run it multiple times:

- see the last 10 containers via `docker ps -a | head -10`
- remove the image and all stale containers that use it: `docker rmi ipfs-npm -f`

It also makes sense to disable NAT and just use the host network interfaces directly, bringing the test closer to reality. This should run in an ephemeral container with no NAT:

Co-Authored-By: Marcin Rataj <lidel@lidel.org>
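An ephemeral, no-NAT run could look something like this (a sketch, not the actual command from the patch; the `ipfs-npm` image tag and the `iim` test module are taken from the output above):

```shell
# --rm           : remove the container (and its writable layer) when the
#                  test exits, so repeated runs don't slowly eat disk space.
# --network host : skip Docker's NAT and use the host's interfaces directly.
docker run --rm --network host ipfs-npm \
  sh -c 'time ipfs-npm install iim'
```

`--rm` and `--network host` are standard `docker run` flags; the command inside the container is an assumption about how the image is set up.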
License: MIT Signed-off-by: Oli Evans <oli@tableflip.io>
This might be relevant for you: https://github.com/open-services/public-registry-benchmarks

Basically, the project above creates a bunch of different Dockerfiles, one for every combination of a list of CLIs and a list of public registries: https://github.com/open-services/public-registry-benchmarks/tree/master/tests

Then it runs them all a couple of times and logs the time for each one. The average and mean get recorded as well. npm-on-ipfs (the deployed version) is already there and gets run with all the rest of the registries.

You might be able to reuse it somehow to run it within your own infrastructure. Otherwise I could try to focus some time on running it nightly, and you can just use the results as-is for figuring out improvements.
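Averaging repeated runs, as that benchmark repo does, could be sketched like this (a hypothetical helper, not code from either project):

```shell
# average: print the mean of timing samples (in seconds) passed as
# arguments, to three decimal places. Uses awk to sum and divide.
average() {
  printf '%s\n' "$@" | awk '{ s += $1 } END { printf "%.3f\n", s / NR }'
}

# e.g. mean of three observed ipfs-npm runs
average 40.804 45.1 43.2
```

Feeding each `real` time from repeated `docker-race.sh` runs into something like this would smooth out time-of-day variation.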
That is very cool @victorb. I'm assuming you currently run it ad hoc when you want to update the stats? Running it nightly or even weekly would be rad, then we could see any improvements in published releases. I'll take a look and see what I can use to improve this test too.
@olizilla indeed. I just opened an issue about running it each night: open-services/public-registry-benchmarks#1
Update: open-services/public-registry-benchmarks now runs the benchmarks once a day, updates the README with the latest results, and stores each report in a directory in the git repository. If you can't figure out how to add a new test case for the benchmarks, I'd be happy to help you.
A script to repeatably compare the first install time of a module via `ipfs-npm` against `npm`.

We know `ipfs-npm` is likely to be slower on first install, but I want to do whatever we can to bring that time down, as it's the first experience folks will have with it.

We create a docker image with `ipfs-npm` in it, then time how long an install takes there, then run the same install again in a stock `node` docker image via `npm`. Both will have empty caches for the run. Example output above.

License: MIT
Signed-off-by: Oli Evans <oli@tableflip.io>
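The race the description outlines could be sketched roughly like this (assumptions: the local image is tagged `ipfs-npm`, `node` is the stock image, and `iim` is the module under test, as in the output above; this is not the actual `docker-race.sh`):

```shell
#!/bin/sh
# Rough sketch of the comparison. Each `docker run --rm` starts a fresh
# container, so both flavours install with empty caches.
set -e

echo '---- ipfs-npm flavour ----'
time docker run --rm ipfs-npm ipfs-npm install iim

echo '---- npm flavour ----'
time docker run --rm node npm install iim
```

Using `time` on the whole `docker run` includes container start-up in both measurements, so the overhead cancels out when comparing the two flavours.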