I am using devilbox to run 15 local sites for development, and some of those sites have lots of little files (user uploads).
Whenever this script runs, I can hear my CPU fan spin up and down constantly as this script traverses every single directory and sub-directory.
Using the time command, I measured the find command at almost 2s per run. I have 807983 files and folders; at least, that's the number of lines the find command returns.
I would like to propose changing the find command to use the -maxdepth 1 option so that it doesn't uselessly traverse every single subdirectory, since we really only care about the folders directly underneath the /shared/httpd directory (specifically in devilbox). Using the maxdepth option reduces the command's runtime to just 0.022s, which is a significant saving, and could also reduce the amount of disk I/O required.
I kind of suspect this script could reduce the lifespan of an SSD because of how much it is looping through all of my files and folders, every few seconds for 8+ hours a day.
And if we really want to optimize, we could also pass the -type d option so the find command returns only folders instead of both files and folders. This could possibly also replace the need for the two grep statements, since from what I understand those are really only used for extracting the base folders.
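The combined change could look something like the sketch below. This is hypothetical: I haven't seen exactly how the script's grep filters work, so the real invocation may need adjusting, and the demo runs against a throwaway directory tree rather than /shared/httpd.

```shell
#!/bin/sh
# -mindepth 1 excludes the starting directory itself,
# -maxdepth 1 stops find from descending into subdirectories,
# -type d keeps only folders (no files).

# Demonstrate on a temporary tree instead of /shared/httpd:
root=$(mktemp -d)
mkdir -p "$root/site1/uploads" "$root/site2"
touch "$root/notes.txt"

# Lists only the project folders directly under $root;
# uploads/ and notes.txt are never visited.
find "$root" -mindepth 1 -maxdepth 1 -type d

rm -rf "$root"
```

If only the folder names (not full paths) are needed, piping through `xargs -n1 basename` or using GNU find's `-printf '%f\n'` might stand in for the grep statements, depending on what they currently extract.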
I can make a PR if you would like to implement this change.