Hello All,

I have configured Nextcloud 13 on Ubuntu 16.04, and I am taking a MySQL dump of the Nextcloud database every day.

The dump taken on Sunday (22 July 2018, IST) is around 18.6 GB, but the dump taken the next day, Monday (23 July 2018), is only 1.3 GB.

We have also found a major difference in the number of "INSERT INTO oc_filecache" statements: Sunday's (22 July 2018) dump contains around 12000 of them, whereas in Monday's (23 July 2018) dump the count has drastically dropped to around 1000.
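For reference, this is how such counts can be reproduced with grep against each dump file. The tiny sample dump below is only a stand-in so the command is runnable as shown; substitute your real daily dump filenames:

```shell
# Create a tiny stand-in dump so the command below can be tried as-is;
# in practice, point grep at the real daily dump files instead.
printf 'INSERT INTO `oc_filecache` VALUES (1);\nINSERT INTO `oc_filecache` VALUES (2);\nINSERT INTO `oc_users` VALUES (3);\n' > sample-dump.sql

# grep -c prints the number of matching lines, i.e. the INSERT count.
grep -c 'INSERT INTO `oc_filecache`' sample-dump.sql   # prints 2
```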
Does Nextcloud automatically do some kind of cleanup? Does it delete some entries regularly?

Is it possible to restore my Nextcloud instance using the current dump (i.e. the 1.8 GB dump)?

Could anyone please help me with this?
Thanks
Manjunath
If you haven't experienced any other issues since the size drop, I suppose the automatic background-job table cleanup ran and removed a lot of stale entries.
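If you want to trigger that kind of cleanup yourself, the occ tool has a files:cleanup command that removes orphaned oc_filecache rows. A minimal sketch, assuming a default install under /var/www/nextcloud and the www-data web-server user (adjust both to your setup):

```shell
# Remove orphaned file cache entries; occ must be run as the web server
# user. The path and user below are assumptions for a stock Ubuntu setup.
sudo -u www-data php /var/www/nextcloud/occ files:cleanup
```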