Replies: 3 comments
-
Hi @mdhaisne, thank you for your interest! You have good questions here; I'll try to answer to the best of my ability.
The underlying SPEC power model is based on the physical characteristics of the machine, so RAM, for example, should reflect the bare-metal machine, as should all the other parameters. Ultimately these are the characteristics that determine how much power is being pulled.
Where it gets tricky is the CPU utilization: as you've noticed, you don't get an accurate reading from within a Docker container. We track the CPU utilization by looking at the /proc/stat file. This is the code we use specifically, if you want to take a look:
However, in a Docker container, that file only has information for the processes running inside of that container. So, for example, if /proc/stat shows 100% utilization, that doesn't mean the machine is actually running at 100% utilization, only the processes attributed to the Docker container (and therefore the actual power draw is less than what it would be at a true 100%).
This is where the vhost ratio comes in: we estimate what percentage of the full machine we are actually seeing when we look in this file. So in your case, I believe the correct vhost-ratio should be 4/16. Even if cores are shared between runners, what matters is the ratio of what each VM sees compared to the total in the bare-metal machine.
I hope this helps!
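To make the mechanism concrete, here is a minimal sketch (not the project's actual code; the function names are illustrative) of how CPU utilization can be derived from two samples of the aggregate `cpu` line in `/proc/stat`, and how a vhost ratio of 4/16 would scale the reading:

```python
def parse_proc_stat():
    """Read the aggregate 'cpu' line from /proc/stat and
    return (busy, total) jiffie counters."""
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]  # drop 'cpu' label
    idle = values[3] + values[4]  # idle + iowait fields
    total = sum(values)
    return total - idle, total

def utilization(prev, cur):
    """CPU utilization between two (busy, total) samples, as a fraction 0..1."""
    busy = cur[0] - prev[0]
    total = cur[1] - prev[1]
    return busy / total if total else 0.0

# Inside a VM/container that only sees part of the machine, the reading is
# scaled by the vhost ratio -- here 4 VM cores out of 16 bare-metal cores:
VHOST_RATIO = 4 / 16
# machine_level_estimate = utilization(prev, cur) * VHOST_RATIO
```

Two calls to `parse_proc_stat()` a short sleep apart would give the `prev` and `cur` samples; the point is that the raw fraction only describes what the container sees, so the vhost ratio maps it back onto the bare-metal machine.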
-
I can maybe provide some more details, especially on the first question you answered. The tool has no limitation in general when being used inside of Docker. As @dan-mm mentioned correctly, it operates on the CPU utilization and then creates an ML-model-based estimation from that. However, the value it can see in the procfs may not reflect the whole machine. If you do not mount anything different in the procfs than the default, then the container can be assumed to be transparent.
Example: Assume you have 5 cores per VM. If you start a Docker container, even with a CPU restriction, how many cores are seen in the procfs depends mostly on how you configured the VMs.
From what I understand, the cores are not pinned but floating? Or are they pinned but oversubscribed? In any case: I understand that cores are not used exclusively. This is what our model at the moment cannot correctly attribute. The way to go, if your VM orchestrator is reporting that, would be to account for the steal time in /proc/stat. As @dan-mm said, the CPU utilization is what the model operates on.
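For illustration, steal time is the eighth value on the aggregate `cpu` line of `/proc/stat` (see `proc(5)`). A rough sketch of accounting for it could look like this — assuming the hypervisor actually reports steal time to the guest; the helper names are made up for this example, not taken from the tool:

```python
# Field order of the aggregate 'cpu' line in /proc/stat (Linux).
PROC_STAT_FIELDS = (
    "user", "nice", "system", "idle", "iowait",
    "irq", "softirq", "steal", "guest", "guest_nice",
)

def parse_cpu_line(line):
    """Map the aggregate 'cpu' line of /proc/stat to named jiffie counters."""
    values = [int(v) for v in line.split()[1:]]
    return dict(zip(PROC_STAT_FIELDS, values))

def utilization_with_steal(prev, cur):
    """Utilization between two samples. Stolen time is counted as time the
    physical CPU was busy (serving other guests), and also reported
    separately so it can be attributed."""
    delta = {k: cur[k] - prev[k] for k in prev}
    total = sum(delta.values())
    busy = total - delta["idle"] - delta["iowait"]
    return {
        "utilization": busy / total if total else 0.0,
        "steal_share": delta["steal"] / total if total else 0.0,
    }
```

With floating, non-exclusive cores, a non-zero `steal_share` is exactly the part of the physical CPU's activity that the VM's own process accounting cannot see.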
-
Thank you @dan-mm and @ArneTR for your answers! Really helpful!
Indeed, the cores are floating. I don't have any more questions for now; I'll come back with more if I need!! I'll close this discussion as I believe I got everything. And maybe I'll open a PR someday, we'll see :) Thanks again! Best regards
-
Hello everyone!
I'm a beginner here and still learning about this project! First of all, congratulations on the amazing work you are doing here! Really interesting, and lots of things to learn! I'm not an expert in DevOps, CI, or virtualization.
I'm looking into testing/enabling your solution in my CI/CD (using https://github.com/green-coding-solutions/eco-ci-energy-estimation), but there are some points that I'm not sure I fully understand.
Context
I'm running a GitHub Actions CI through a Docker container on a self-hosted runner in a VM. The Docker image is ubuntu, but not the ubuntu-latest image provided by GitHub.
There is no hyperthreading enabled, so each core is physical rather than logical.
The bare-metal machine (Intel Core) hosts several VMs, but we can assume that they all share the same specs.
Fortunately, each VM has a single runner, so multiple containers are not expected to run in parallel in a VM.
I've read the topic at green-coding-solutions/cloud-energy#4 about VMs, and I understood that discussions were still ongoing.
Questions
RAM
There is a RAM parameter, but it is unclear whether this is the bare-metal RAM or the VM-allocated RAM. I guessed that it was bare-metal, but I may be wrong.
Thanks for your time and your work!
Best regards