Hi,
I have a LeoFS cluster with 3 storage nodes, 2 managers, and 3 gateways (2 REST and 1 NFS). It is brand new. I injected some files (23 million, to be exact), then tried to drop a bucket and create a new one with the same name, and now I have discovered some problems. On the storage nodes I see a lot of these messages:
```
[W] s3storage1@10.10.10.89 2017-04-11 14:59:52.643720 +0200 1491915592 leo_storage_replicator:loop/6 209 [{method,delete},{key,<<"mep/20x20/48/480550.jpg">>},{cause,timeout}]
[W] s3storage1@10.10.10.89 2017-04-11 14:59:53.642517 +0200 1491915593 leo_storage_replicator:replicate/5 121 [{method,delete},{key,<<"mep/20x20/48/480550.jpg">>},{cause,timeout}]
[E] s3storage1@10.10.10.89 2017-04-11 14:59:53.642749 +0200 1491915593 leo_storage_handler_object:delete/3 502 [{from,gateway},{method,del},{key,<<"mep/20x20/48/480550.jpg">>},{req_id,0},{cause,"Replicate failure"}]
[W] s3storage1@10.10.10.89 2017-04-11 15:00:24.751580 +0200 1491915624 leo_storage_replicator:loop/6 209 [{method,delete},{key,<<"mep/20x20/48/480550.jpg">>},{cause,timeout}]
[W] s3storage1@10.10.10.89 2017-04-11 15:00:25.750497 +0200 1491915625 leo_storage_replicator:replicate/5 121 [{method,delete},{key,<<"mep/20x20/48/480550.jpg">>},{cause,timeout}]
[E] s3storage1@10.10.10.89 2017-04-11 15:00:25.750697 +0200 1491915625 leo_storage_handler_object:delete/3 502 [{from,gateway},{method,del},{key,<<"mep/20x20/48/480550.jpg">>},{req_id,0},{cause,"Replicate failure"}]
```
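For reference, the drop/recreate itself was nothing unusual: a plain bucket removal followed by creating a bucket with the same name. The snippet below is illustrative only (s3cmd is shown just as an example client, and the bucket name is inferred from the object keys in the log above):

```
# Illustrative only -- any S3-compatible client doing the equivalent operations.
# The bucket name "mep" is inferred from the object keys in the log above.
s3cmd rb s3://mep   # drop the bucket
s3cmd mb s3://mep   # recreate a bucket with the same name
```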
And the deletion queue does not shrink:

```
leofs-adm mq-stats s3storage1@10.10.10.89
              id                |    state    | number of msgs | batch of msgs  |    interval    |                 description
--------------------------------+-------------+----------------+----------------+----------------+---------------------------------------------
 leo_delete_dir_queue           | idling      | 0              | 1600           | 500            | remove directories
 leo_comp_meta_with_dc_queue    | idling      | 0              | 1600           | 500            | compare metadata w/remote-node
 leo_sync_obj_with_dc_queue     | idling      | 0              | 1600           | 500            | sync objs w/remote-node
 leo_recovery_node_queue        | idling      | 0              | 1600           | 500            | recovery objs of node
 leo_async_deletion_queue       | idling      | 122885         | 1600           | 500            | async deletion of objs
 leo_rebalance_queue            | idling      | 0              | 1600           | 500            | rebalance objs
 leo_sync_by_vnode_id_queue     | idling      | 0              | 1600           | 500            | sync objs by vnode-id
 leo_per_object_queue           | idling      | 0              | 1600           | 500            | recover inconsistent objs
```
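A quick way to watch whether that counter moves at all is just a shell loop around the same `leofs-adm mq-stats` call shown above (node name as above; nothing else is assumed):

```
# Poll the async deletion queue size once a minute on one storage node.
while true; do
    date
    leofs-adm mq-stats s3storage1@10.10.10.89 | grep leo_async_deletion_queue
    sleep 60
done
```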
The `leofs-adm whereis` command times out on this type of object, and the NFS gateway is unable to retrieve any object of any type.
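To be concrete, these are the checks I mean (the `whereis` call below, on one of the keys from the log above, is the one that times out; `leofs-adm status` is just the standard cluster overview and is not quoted in this report):

```
# whereis on one of the affected keys (taken from the log above) -- this times out
leofs-adm whereis mep/20x20/48/480550.jpg

# overall node/ring state (standard leofs-adm subcommand, output not included here)
leofs-adm status
```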
Any idea how to diagnose this?
Regards,
Claude
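As you have already closed this issue, you might not be interested in what was going on; however, just in case, I'd like to share the root problem that may have caused what you described above.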