Memory leak in Valetudo? #39
After looking at the code for 20 minutes, I still have no idea what could be responsible for the memory leak. However, I've noticed that on every request the robot and charger images are read from flash for no reason, so I guess that whole part will be rewritten at some point. |
Essentially, all HA is doing is calling … The only real difference between HA and using curl is the rate of the requests: normally HA will do this every 30 seconds; I tuned it up to every 10 seconds. I wouldn't think that would cause an issue.
As a side note: having a further look into the HA settings, I've enabled … |
After looking into it further, it's strange that I am seeing the issue at all; after all, the upstart script has an oom score of 1000, so it should in theory handle itself. I am using an older firmware. |
Is this still happening? Did you try remote debugging and taking a heap dump? |
I am still seeing the issue. I'll try to look into the heap dump soon.
I'll also look into updating the firmware, as I'm using an older one. |
I just rooted my rockrobo v1 with the latest firmwarebuilder (FW v11_003194, --disable-xiaomi), dummycloud_0.1 and Valetudo 0.9 via rrcc. I really like the newly gained power over my vacuum, but I observed the same behaviour as @dugite-code: basically every map-related call increases the memory footprint of the Valetudo process.

I did some measurements in a notebook to stress Valetudo by doing the same API call 1000 times in a row; requests was used to interact with the Valetudo API, and the output of ps (via paramiko) for the process info. The memory footprint steadily increases to up to 60% before the process terminates when triggering a new map with … Simply fetching the same png 1000 times also steadily increases the footprint, but only by 0.5 percentage points. Doing the same for …

I don't have any experience with node.js apps, but I'll try to use the techniques described here to get a clue why the garbage collection fails: https://www.nearform.com/blog/how-to-self-detect-a-memory-leak-in-node/ |
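For reference, a minimal sketch of the kind of measurement loop described above (requests hammering the same API call, paramiko reading ps on the robot) might look like the following. The hostname vacuum, root SSH access, and the MAP_URL path are placeholder assumptions for illustration, not values taken from this thread:

```python
# Sketch only: "vacuum", root SSH login, and MAP_URL are illustrative placeholders.
import requests
import paramiko

MAP_URL = "http://vacuum/api/map/latest"  # placeholder path for the map call being stressed

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("vacuum", username="root")

def valetudo_mem():
    # Pull the Valetudo line out of ps; the column layout depends on the BusyBox build.
    _, stdout, _ = ssh.exec_command("ps | grep [V]aletudo")
    return stdout.read().decode().strip()

for i in range(1000):
    requests.get(MAP_URL, timeout=10)  # the same API call, 1000 times in a row
    if i % 100 == 0:
        print(i, valetudo_mem())

ssh.close()
```

Printing the ps line every hundred requests is enough to see whether the Valetudo process's memory keeps growing between batches.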
Tracing the memory leak is a lot harder than I thought. Stumbling blocks:

Debugging on the robot: …

Debugging on the host: …

Using … Do the devs have any input for me on how to debug this properly, what to look for? |
I guess this should be fixed with https://github.com/Hypfer/Valetudo/releases/tag/0.2.2 |
Nope, still an issue, although it seems like Valetudo is not the only software experiencing it. |
Doesn't happen with Node 8 it seems. |
Testing Node 11.12 shows no signs of the leak, so I guess nodejs/node#23862 is the culprit. Fixed in Node 11.10. |
Awesome, with the map via MQTT it looks like it's now working as expected.

Update: Frustratingly, after re-enabling Valetudo it again caused the vacuum to unprovision at the 4am reboot. I am certain it's not the memory leak, though: unlike before, where the vacuum locked up due to low memory, this time it was just unprovisioned. After testing this I might set up the private provisioning from dustcloud, as with MQTT I don't really need miio any more. |
Great work! |
I discovered a small issue when using Home-Assistant to poll the map too often (every 10 seconds, to get a sort of live map): the /tmp directory filled up and the vacuum locked up. A reboot fixes the issue, obviously. I had done other things on the vacuum in the past, so I might have had less space available than normal.

A quick work-around was to throw this into cron; once a minute it keeps only the newest file in /tmp/maps and deletes the rest:
*/1 * * * * cd /tmp/maps && ls -t /tmp/maps | tail -n +2 | xargs rm --