We've recently added the `hotreload.vshard.test_rebalancer` test (#1565), but it appears to be flaky.
Reproduced at least twice on Tarantool 1.10.11-0-gf0b0e7ecf
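For context, the traceback below points at a `retrying` assertion at `vshard_test.lua:194` that expects 2000 but sees 1500. A minimal sketch of what such an assertion typically looks like with luatest's `t.helpers.retrying` is shown here; `count_active_buckets` and the timeout value are illustrative, not the actual test source:

```lua
local t = require('luatest')

-- Hypothetical reconstruction of the failing check: wait until all
-- 2000 buckets are active after the rebalancer run. In the flaky run
-- the retry window expires while only 1500 buckets are active.
t.helpers.retrying({timeout = 5}, function()
    t.assert_equals(count_active_buckets(), 2000)
end)
```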
```
=========================================================

Failed tests:
-------------

1) hotreload.vshard.test_rebalancer
./test/hotreload/vshard_test.lua:194: expected: 2000, actual: 1500
stack traceback:
./test/hotreload/vshard_test.lua:194: in function 'retrying'
./test/hotreload/vshard_test.lua:192: in function 'hotreload.vshard.test_rebalancer'
...
[C]: in function 'xpcall'

Captured stdout:
SA-1 | 2021-10-07 17:13:32.702 [3508] main/139/main roles.lua:464 W> Reloading roles ...
SA-1 | 2021-10-07 17:13:32.702 [3508] main/139/main I> Instance state changed: RolesConfigured -> ReloadingRoles
SA-1 | 2021-10-07 17:13:32.702 [3508] main/139/main I> Starting reconfiguration of replica 59490f1c-3664-48b1-b4d9-3f8ef51e3580
SA-1 | 2021-10-07 17:13:32.702 [3508] main/139/main I> Resigning from the replicaset master role...
SA-1 | 2021-10-07 17:13:32.702 [3508] main/139/main I> Box has been configured
SA-1 | 2021-10-07 17:13:32.703 [3508] main/149/lua I> Old replicaset and replica objects are outdated.
SA-1 | 2021-10-07 17:13:32.703 [3508] main/139/main I> GC stopped
SA-1 | 2021-10-07 17:13:32.703 [3508] main/139/main I> Recovery stopped
SA-1 | 2021-10-07 17:13:32.703 [3508] main/139/main I> Resigned from the replicaset master role
SA-1 | 2021-10-07 17:13:32.703 [3508] main/139/main I> Rebalancer location has changed to nil
SA-1 | 2021-10-07 17:13:32.706 [3508] main/139/main I> disconnected from localhost:13302
SA-1 | 2021-10-07 17:13:32.706 [3508] main/139/main I> Unsetting global "__module_vshard_registry"
SA-1 | 2021-10-07 17:13:32.707 [3508] main/139/main I> Killing fiber "main" (137)
SA-1 | 2021-10-07 17:13:32.707 [3508] main/139/main I> Killing fiber "localhost:13304 (net.box)" (147)
SA-1 | 2021-10-07 17:13:32.707 [3508] main/139/main I> Killing fiber "localhost:13302 (net.box)" (148)
SA-1 | 2021-10-07 17:13:32.707 [3508] main/139/main I> Removing HTTP route "/custom-post" (POST)
SA-1 | 2021-10-07 17:13:32.707 [3508] main/139/main I> Removing HTTP route "/custom-get" (GET)
SA-1 | 2021-10-07 17:13:32.720 [3508] main/139/main I> Starting reconfiguration of replica 59490f1c-3664-48b1-b4d9-3f8ef51e3580
SA-1 | 2021-10-07 17:13:32.720 [3508] main/150/lua I> Old replicaset and replica objects are outdated.
SA-1 | 2021-10-07 17:13:32.724 [3508] main/139/main roles.lua:490 W> Roles reloaded successfully
SA-1 | 2021-10-07 17:13:32.724 [3508] main/139/main I> Instance state changed: ReloadingRoles -> BoxConfigured
SA-1 | 2021-10-07 17:13:32.724 [3508] main/139/main I> Instance state changed: BoxConfigured -> ConfiguringRoles
SA-1 | 2021-10-07 17:13:32.724 [3508] main/139/main I> Failover disabled
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> Reconfiguring vshard.storage...
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> Starting reconfiguration of replica 59490f1c-3664-48b1-b4d9-3f8ef51e3580
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> I am master
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> Taking on replicaset master role...
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> Box has been configured
SA-1 | 2021-10-07 17:13:32.726 [3508] main/151/lua I> Old replicaset and replica objects are outdated.
SA-1 | 2021-10-07 17:13:32.726 [3508] main/152/lua I> gc_bucket_f has been started
SA-1 | 2021-10-07 17:13:32.726 [3508] main/153/lua I> recovery_f has been started
SA-1 | 2021-10-07 17:13:32.726 [3508] main/139/main I> Took on replicaset master role
SA-1 | 2021-10-07 17:13:32.726 [3508] main/154/lua I> rebalancer_f has been started
SA-1 | 2021-10-07 17:13:32.729 [3508] main/139/main I> --- init({is_master = true})
SA-1 | 2021-10-07 17:13:32.729 [3508] main/139/main I> --- apply_config({is_master = true})
SA-1 | 2021-10-07 17:13:32.729 [3508] main/139/main I> Roles configuration finished
SA-1 | 2021-10-07 17:13:32.729 [3508] main/139/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SB-1 | 2021-10-07 17:13:32.730 [3525] main/118/main roles.lua:464 W> Reloading roles ...
SB-1 | 2021-10-07 17:13:32.730 [3525] main/118/main I> Instance state changed: RolesConfigured -> ReloadingRoles
SB-1 | 2021-10-07 17:13:32.730 [3525] main/118/main I> Starting reconfiguration of replica 9b86d895-3965-4be0-9460-cec927c72e2c
SB-1 | 2021-10-07 17:13:32.730 [3525] main/118/main I> Resigning from the replicaset master role...
SB-1 | 2021-10-07 17:13:32.730 [3525] main/118/main I> Box has been configured
SB-1 | 2021-10-07 17:13:32.731 [3525] main/143/lua I> Old replicaset and replica objects are outdated.
SB-1 | 2021-10-07 17:13:32.731 [3525] main/118/main I> GC stopped
SB-1 | 2021-10-07 17:13:32.731 [3525] main/118/main I> Recovery stopped
SB-1 | 2021-10-07 17:13:32.731 [3525] main/118/main I> Resigned from the replicaset master role
SB-1 | 2021-10-07 17:13:32.737 [3525] main/118/main I> Unsetting global "get_uuid"
SB-1 | 2021-10-07 17:13:32.737 [3525] main/118/main I> Unsetting global "__module_vshard_registry"
SB-1 | 2021-10-07 17:13:32.739 [3525] main/118/main I> Killing fiber "main" (142)
SB-1 | 2021-10-07 17:13:32.739 [3525] main/118/main I> Killing fiber "main" (136)
SB-1 | 2021-10-07 17:13:32.739 [3525] main/118/main I> Removing HTTP route "/custom-post" (POST)
SB-1 | 2021-10-07 17:13:32.739 [3525] main/118/main I> Removing HTTP route "/custom-get" (GET)
SB-1 | 2021-10-07 17:13:32.752 [3525] main/118/main I> Starting reconfiguration of replica 9b86d895-3965-4be0-9460-cec927c72e2c
SB-1 | 2021-10-07 17:13:32.752 [3525] main/144/lua I> Old replicaset and replica objects are outdated.
SB-1 | 2021-10-07 17:13:32.756 [3525] main/118/main roles.lua:490 W> Roles reloaded successfully
SB-1 | 2021-10-07 17:13:32.756 [3525] main/118/main I> Instance state changed: ReloadingRoles -> BoxConfigured
SB-1 | 2021-10-07 17:13:32.756 [3525] main/118/main I> Instance state changed: BoxConfigured -> ConfiguringRoles
SB-1 | 2021-10-07 17:13:32.756 [3525] main/118/main I> Failover disabled
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main I> Reconfiguring vshard.storage...
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main I> Starting reconfiguration of replica 9b86d895-3965-4be0-9460-cec927c72e2c
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main I> I am master
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main I> Taking on replicaset master role...
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main I> connecting to 2 replicas
SB-1 | 2021-10-07 17:13:32.758 [3525] main/118/main C> failed to connect to 2 out of 2 replicas
SB-1 | 2021-10-07 17:13:32.759 [3525] main/118/main C> leaving orphan mode
SB-1 | 2021-10-07 17:13:32.759 [3525] main/118/main I> set 'replication' configuration option to ["admin@localhost:13305","admin@localhost:13304"]
SB-1 | 2021-10-07 17:13:32.759 [3525] main/118/main I> Box has been configured
SB-1 | 2021-10-07 17:13:32.759 [3525] main/149/lua I> Old replicaset and replica objects are outdated.
SB-1 | 2021-10-07 17:13:32.759 [3525] main/150/lua I> gc_bucket_f has been started
SB-1 | 2021-10-07 17:13:32.759 [3525] main/151/lua I> recovery_f has been started
SB-1 | 2021-10-07 17:13:32.759 [3525] main/118/main I> Took on replicaset master role
SB-1 | 2021-10-07 17:13:32.762 [3525] main/118/main I> --- init({is_master = true})
SB-1 | 2021-10-07 17:13:32.762 [3525] main/118/main I> --- apply_config({is_master = true})
SB-1 | 2021-10-07 17:13:32.762 [3525] main/118/main I> Roles configuration finished
SB-1 | 2021-10-07 17:13:32.762 [3525] main/118/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SB-1 | 2021-10-07 17:13:32.762 [3525] main/148/applier/admin@localhost:13305 I> remote master 6cdd9796-ad2e-4bae-a730-367380b1ac0b at 127.0.0.1:13305 running Tarantool 1.10.11
SB-1 | 2021-10-07 17:13:32.762 [3525] main/148/applier/admin@localhost:13305 I> authenticated
SB-1 | 2021-10-07 17:13:32.763 [3525] main/148/applier/admin@localhost:13305 I> subscribed
SB-1 | 2021-10-07 17:13:32.763 [3525] main/148/applier/admin@localhost:13305 I> remote vclock {1: 1820} local vclock {1: 1820}
SB-1 | 2021-10-07 17:13:32.763 [3525] main/148/applier/admin@localhost:13305 C> leaving orphan mode
SA-1 | 2021-10-07 17:13:32.762 [3508] main/155/localhost:13304 (net.box) I> connected to localhost:13304
SB-2 | 2021-10-07 17:13:32.762 [3533] main/117/main I> subscribed replica 9b86d895-3965-4be0-9460-cec927c72e2c at fd 101, aka 127.0.0.1:13305, peer of 127.0.0.1:49908
SB-2 | 2021-10-07 17:13:32.762 [3533] main/117/main I> remote vclock {1: 1820} local vclock {1: 1820}
SB-2 | 2021-10-07 17:13:32.763 [3533] relay/127.0.0.1:49908/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13305/00000000000000000040.xlog'
SB-2 | 2021-10-07 17:13:32.763 [3533] relay/127.0.0.1:49908/101/main I> done `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13305/00000000000000000040.xlog'
R-1 | 2021-10-07 17:13:32.764 [3500] main/171/http/127.0.0.1:57056 twophase.lua:497 W> Updating config clusterwide...
R-1 | 2021-10-07 17:13:32.765 [3500] main/171/http/127.0.0.1:57056 twophase.lua:373 W> (2PC) patch_clusterwide upload phase...
SB-1 | 2021-10-07 17:13:32.766 [3525] main/147/applier/admin@localhost:13304 I> remote master 9b86d895-3965-4be0-9460-cec927c72e2c at 127.0.0.1:13304 running Tarantool 1.10.11
SB-1 | 2021-10-07 17:13:32.767 [3525] main/147/applier/admin@localhost:13304 C> leaving orphan mode
SA-1 | 2021-10-07 17:13:32.770 [3508] main/156/localhost:13302 (net.box) I> connected to localhost:13302
SA-1 | 2021-10-07 17:13:32.770 [3508] main/154/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
R-1 | 2021-10-07 17:13:32.775 [3500] main/171/http/127.0.0.1:57056 twophase.lua:386 W> (2PC) patch_clusterwide prepare phase...
SB-2 | 2021-10-07 17:13:32.775 [3533] main/131/main I> Schema validation skipped because the instance isn't a leader
SA-2 | 2021-10-07 17:13:32.776 [3516] main/131/main I> Schema validation skipped because the instance isn't a leader
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:395 W> Prepared for patch_clusterwide at localhost:13301
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:395 W> Prepared for patch_clusterwide at localhost:13302
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:395 W> Prepared for patch_clusterwide at localhost:13303
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:395 W> Prepared for patch_clusterwide at localhost:13304
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:395 W> Prepared for patch_clusterwide at localhost:13305
R-1 | 2021-10-07 17:13:32.783 [3500] main/171/http/127.0.0.1:57056 twophase.lua:419 W> (2PC) patch_clusterwide commit phase...
SB-2 | 2021-10-07 17:13:32.784 [3533] main/131/main I> Backup of active config created: "/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13305/config.backup"
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main I> Instance state changed: RolesConfigured -> ConfiguringRoles
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main I> Failover disabled
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main I> Reconfiguring vshard.storage...
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main I> Starting reconfiguration of replica 6cdd9796-ad2e-4bae-a730-367380b1ac0b
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main I> connecting to 2 replicas
SB-2 | 2021-10-07 17:13:32.785 [3533] main/131/main C> failed to connect to 2 out of 2 replicas
SB-2 | 2021-10-07 17:13:32.786 [3533] main/131/main C> leaving orphan mode
SB-2 | 2021-10-07 17:13:32.786 [3533] main/131/main I> set 'replication' configuration option to ["admin@localhost:13305","admin@localhost:13304"]
SB-2 | 2021-10-07 17:13:32.786 [3533] main/131/main I> Box has been configured
SB-2 | 2021-10-07 17:13:32.786 [3533] main/146/lua I> Old replicaset and replica objects are outdated.
SA-2 | 2021-10-07 17:13:32.786 [3516] main/131/main I> Backup of active config created: "/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13303/config.backup"
SA-2 | 2021-10-07 17:13:32.787 [3516] main/131/main I> Instance state changed: RolesConfigured -> ConfiguringRoles
SA-2 | 2021-10-07 17:13:32.787 [3516] main/131/main I> Failover disabled
SA-2 | 2021-10-07 17:13:32.788 [3516] main/131/main I> Reconfiguring vshard.storage...
SA-2 | 2021-10-07 17:13:32.788 [3516] main/131/main I> Starting reconfiguration of replica cbd1842d-c439-461f-bde0-a8d137c001e8
SA-2 | 2021-10-07 17:13:32.788 [3516] main/131/main I> connecting to 2 replicas
SA-2 | 2021-10-07 17:13:32.788 [3516] main/131/main C> failed to connect to 2 out of 2 replicas
SA-2 | 2021-10-07 17:13:32.789 [3516] main/131/main C> leaving orphan mode
SA-2 | 2021-10-07 17:13:32.789 [3516] main/131/main I> set 'replication' configuration option to ["admin@localhost:13303","admin@localhost:13302"]
SA-2 | 2021-10-07 17:13:32.789 [3516] main/131/main I> Box has been configured
SA-2 | 2021-10-07 17:13:32.790 [3516] main/137/lua I> Old replicaset and replica objects are outdated.
SA-1 | 2021-10-07 17:13:32.786 [3508] main/139/main I> Backup of active config created: "/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13302/config.backup"
SA-1 | 2021-10-07 17:13:32.786 [3508] main/139/main I> Instance state changed: RolesConfigured -> ConfiguringRoles
SA-1 | 2021-10-07 17:13:32.787 [3508] main/139/main I> Failover disabled
SA-1 | 2021-10-07 17:13:32.787 [3508] main/139/main I> Reconfiguring vshard.storage...
SA-1 | 2021-10-07 17:13:32.787 [3508] main/139/main I> Starting reconfiguration of replica 59490f1c-3664-48b1-b4d9-3f8ef51e3580
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main I> I am master
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main I> connecting to 2 replicas
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main C> failed to connect to 2 out of 2 replicas
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main C> leaving orphan mode
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main I> set 'replication' configuration option to ["admin@localhost:13303","admin@localhost:13302"]
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main I> Box has been configured
SA-1 | 2021-10-07 17:13:32.788 [3508] main/161/lua I> Old replicaset and replica objects are outdated.
SA-1 | 2021-10-07 17:13:32.788 [3508] main/139/main I> Wakeup rebalancer
SA-1 | 2021-10-07 17:13:32.796 [3508] main/139/main I> --- apply_config({is_master = true})
SA-1 | 2021-10-07 17:13:32.796 [3508] main/139/main I> Roles configuration finished
SA-1 | 2021-10-07 17:13:32.796 [3508] main/139/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SA-1 | 2021-10-07 17:13:32.796 [3508] relay/127.0.0.1:37222/101/main coio.cc:370 !> SystemError unexpected EOF when reading from socket, called on fd 80, aka 127.0.0.1:13302, peer of 127.0.0.1:37222: Broken pipe
SA-1 | 2021-10-07 17:13:32.796 [3508] relay/127.0.0.1:37222/101/main C> exiting the relay loop
SA-1 | 2021-10-07 17:13:32.796 [3508] main/159/applier/admin@localhost:13302 I> remote master 59490f1c-3664-48b1-b4d9-3f8ef51e3580 at 127.0.0.1:13302 running Tarantool 1.10.11
SA-1 | 2021-10-07 17:13:32.797 [3508] main/159/applier/admin@localhost:13302 C> leaving orphan mode
SA-1 | 2021-10-07 17:13:32.800 [3508] main/154/vshard.rebalancer I> Rebalance routes are sent. Schedule next wakeup after 10.000000 seconds
R-1 | 2021-10-07 17:13:32.793 [3500] main/168/main I> Backup of active config created: "/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13301/config.backup"
R-1 | 2021-10-07 17:13:32.794 [3500] main/168/main I> Instance state changed: RolesConfigured -> ConfiguringRoles
R-1 | 2021-10-07 17:13:32.794 [3500] main/168/main I> Failover disabled
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Reconfiguring vshard-router/default ...
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Starting router configuration
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Calling box.cfg()...
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> {"read_only":false}
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Box has been configured
R-1 | 2021-10-07 17:13:32.795 [3500] main/182/lua I> Old replicaset and replica objects are outdated.
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> --- apply_config({is_master = true})
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Roles configuration finished
R-1 | 2021-10-07 17:13:32.795 [3500] main/168/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SB-1 | 2021-10-07 17:13:32.797 [3525] relay/127.0.0.1:60240/101/main coio.cc:370 !> SystemError unexpected EOF when reading from socket, called on fd 99, aka 127.0.0.1:13304, peer of 127.0.0.1:60240: Broken pipe
SB-1 | 2021-10-07 17:13:32.797 [3525] relay/127.0.0.1:60240/101/main C> exiting the relay loop
SB-1 | 2021-10-07 17:13:32.797 [3525] main/154/vshard.rebalancer_applier I> Apply rebalancer routes with 1 workers:
SB-1 | ---
SB-1 | d24d570f-1967-4af1-8108-e1b1e8f925bb: 500
SB-1 | ...
SB-1 |
SB-1 | 2021-10-07 17:13:32.797 [3525] main/154/vshard.rebalancer_applier I> Rebalancer workers have started, wait for their termination
SB-1 | 2021-10-07 17:13:32.798 [3525] main/118/main I> Backup of active config created: "/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13304/config.backup"
SB-1 | 2021-10-07 17:13:32.798 [3525] main/118/main I> Instance state changed: RolesConfigured -> ConfiguringRoles
SB-1 | 2021-10-07 17:13:32.798 [3525] main/118/main I> connecting to 2 replicas
SB-1 | 2021-10-07 17:13:32.799 [3525] main/118/main C> failed to connect to 2 out of 2 replicas
SB-1 | 2021-10-07 17:13:32.799 [3525] main/118/main C> leaving orphan mode
SB-1 | 2021-10-07 17:13:32.799 [3525] main/118/main I> set 'replication' configuration option to ["admin@localhost:13304","admin@localhost:13305"]
SB-1 | 2021-10-07 17:13:32.800 [3525] main/118/main I> Failover disabled
SB-1 | 2021-10-07 17:13:32.802 [3525] main/118/main I> Reconfiguring vshard.storage...
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main I> Starting reconfiguration of replica 9b86d895-3965-4be0-9460-cec927c72e2c
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main I> I am master
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main I> connecting to 2 replicas
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main C> failed to connect to 2 out of 2 replicas
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main C> leaving orphan mode
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main I> set 'replication' configuration option to ["admin@localhost:13305","admin@localhost:13304"]
SB-1 | 2021-10-07 17:13:32.803 [3525] main/118/main I> Box has been configured
SB-1 | 2021-10-07 17:13:32.804 [3525] main/165/lua I> Old replicaset and replica objects are outdated.
SB-1 | 2021-10-07 17:13:32.807 [3525] main/118/main I> --- apply_config({is_master = true})
SB-1 | 2021-10-07 17:13:32.807 [3525] main/118/main I> Roles configuration finished
SB-1 | 2021-10-07 17:13:32.807 [3525] main/118/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SB-1 | 2021-10-07 17:13:32.811 [3525] main/163/applier/admin@localhost:13304 I> remote master 9b86d895-3965-4be0-9460-cec927c72e2c at 127.0.0.1:13304 running Tarantool 1.10.11
SB-1 | 2021-10-07 17:13:32.811 [3525] main/163/applier/admin@localhost:13304 C> leaving orphan mode
SB-1 | 2021-10-07 17:13:32.812 [3525] main/164/applier/admin@localhost:13305 I> remote master 6cdd9796-ad2e-4bae-a730-367380b1ac0b at 127.0.0.1:13305 running Tarantool 1.10.11
SB-1 | 2021-10-07 17:13:32.812 [3525] main/164/applier/admin@localhost:13305 I> authenticated
SB-1 | 2021-10-07 17:13:32.812 [3525] main/164/applier/admin@localhost:13305 I> subscribed
SB-1 | 2021-10-07 17:13:32.812 [3525] main/164/applier/admin@localhost:13305 I> remote vclock {1: 1820} local vclock {1: 1821}
SB-1 | 2021-10-07 17:13:32.813 [3525] main/118/main I> subscribed replica 6cdd9796-ad2e-4bae-a730-367380b1ac0b at fd 92, aka 127.0.0.1:13304, peer of 127.0.0.1:60322
SB-1 | 2021-10-07 17:13:32.813 [3525] main/118/main I> remote vclock {1: 1820} local vclock {1: 1821}
SB-1 | 2021-10-07 17:13:32.814 [3525] main/156/localhost:13302 (net.box) I> connected to localhost:13302
SB-1 | 2021-10-07 17:13:32.814 [3525] relay/127.0.0.1:60322/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13304/00000000000000001540.xlog'
SB-1 | 2021-10-07 17:13:32.814 [3525] relay/127.0.0.1:60322/101/main I> done `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13304/00000000000000001540.xlog'
SB-1 | 2021-10-07 17:13:32.815 [3525] main/164/applier/admin@localhost:13305 C> leaving orphan mode
SB-1 | 2021-10-07 17:13:32.815 [3525] main/155/vshard.rebalancer_worker_1 init.lua:2228 E> Error during rebalancer routes applying: receiver d24d570f-1967-4af1-8108-e1b1e8f925bb, error {"type":"ShardingError","name":"OBJECT_IS_OUTDATED","message":"Object is outdated after module reload\/reconfigure. Use new instance.","code":20}
SB-1 | 2021-10-07 17:13:32.815 [3525] main/155/vshard.rebalancer_worker_1 I> Can not finish transfers to d24d570f-1967-4af1-8108-e1b1e8f925bb, skip to next round
SB-1 | 2021-10-07 17:13:32.815 [3525] main/154/vshard.rebalancer_applier I> Rebalancer routes are applied
SB-2 | 2021-10-07 17:13:32.799 [3533] main/131/main I> --- apply_config({is_master = false})
SB-2 | 2021-10-07 17:13:32.799 [3533] main/131/main I> Roles configuration finished
SB-2 | 2021-10-07 17:13:32.799 [3533] main/131/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SB-2 | 2021-10-07 17:13:32.801 [3533] relay/127.0.0.1:49908/101/main coio.cc:370 !> SystemError unexpected EOF when reading from socket, called on fd 101, aka 127.0.0.1:13305, peer of 127.0.0.1:49908: Broken pipe
SB-2 | 2021-10-07 17:13:32.801 [3533] relay/127.0.0.1:49908/101/main C> exiting the relay loop
SB-2 | 2021-10-07 17:13:32.801 [3533] main/145/applier/admin@localhost:13305 I> remote master 6cdd9796-ad2e-4bae-a730-367380b1ac0b at 127.0.0.1:13305 running Tarantool 1.10.11
SB-2 | 2021-10-07 17:13:32.801 [3533] main/145/applier/admin@localhost:13305 C> leaving orphan mode
SB-2 | 2021-10-07 17:13:32.812 [3533] main/144/applier/admin@localhost:13304 I> remote master 9b86d895-3965-4be0-9460-cec927c72e2c at 127.0.0.1:13304 running Tarantool 1.10.11
SB-2 | 2021-10-07 17:13:32.812 [3533] main/144/applier/admin@localhost:13304 I> authenticated
SB-2 | 2021-10-07 17:13:32.812 [3533] main/131/main I> subscribed replica 9b86d895-3965-4be0-9460-cec927c72e2c at fd 101, aka 127.0.0.1:13305, peer of 127.0.0.1:49972
SB-2 | 2021-10-07 17:13:32.812 [3533] main/131/main I> remote vclock {1: 1821} local vclock {1: 1820}
SB-2 | 2021-10-07 17:13:32.813 [3533] main/144/applier/admin@localhost:13304 I> subscribed
SB-2 | 2021-10-07 17:13:32.813 [3533] main/144/applier/admin@localhost:13304 I> remote vclock {1: 1821} local vclock {1: 1820}
SB-2 | 2021-10-07 17:13:32.814 [3533] main/144/applier/admin@localhost:13304 C> leaving orphan mode
SB-2 | 2021-10-07 17:13:32.815 [3533] relay/127.0.0.1:49972/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13305/00000000000000001820.xlog'
SA-2 | 2021-10-07 17:13:32.800 [3516] main/131/main I> --- apply_config({is_master = false})
SA-2 | 2021-10-07 17:13:32.800 [3516] main/131/main I> Roles configuration finished
SA-2 | 2021-10-07 17:13:32.800 [3516] main/131/main I> Instance state changed: ConfiguringRoles -> RolesConfigured
SA-2 | 2021-10-07 17:13:32.801 [3516] relay/127.0.0.1:49984/101/main coio.cc:370 !> SystemError unexpected EOF when reading from socket, called on fd 83, aka 127.0.0.1:13303, peer of 127.0.0.1:49984: Broken pipe
SA-2 | 2021-10-07 17:13:32.801 [3516] relay/127.0.0.1:49984/101/main C> exiting the relay loop
SA-2 | 2021-10-07 17:13:32.802 [3516] main/135/applier/admin@localhost:13302 I> remote master 59490f1c-3664-48b1-b4d9-3f8ef51e3580 at 127.0.0.1:13302 running Tarantool 1.10.11
SA-2 | 2021-10-07 17:13:32.802 [3516] main/136/applier/admin@localhost:13303 I> remote master cbd1842d-c439-461f-bde0-a8d137c001e8 at 127.0.0.1:13303 running Tarantool 1.10.11
SA-2 | 2021-10-07 17:13:32.802 [3516] main/117/main I> subscribed replica 59490f1c-3664-48b1-b4d9-3f8ef51e3580 at fd 88, aka 127.0.0.1:13303, peer of 127.0.0.1:50130
SA-2 | 2021-10-07 17:13:32.802 [3516] main/117/main I> remote vclock {1: 1566} local vclock {1: 1566}
SA-2 | 2021-10-07 17:13:32.808 [3516] main/136/applier/admin@localhost:13303 C> leaving orphan mode
SA-2 | 2021-10-07 17:13:32.808 [3516] main/135/applier/admin@localhost:13302 I> authenticated
SA-2 | 2021-10-07 17:13:32.809 [3516] relay/127.0.0.1:50130/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13303/00000000000000000040.xlog'
SA-2 | 2021-10-07 17:13:32.809 [3516] relay/127.0.0.1:50130/101/main I> done `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13303/00000000000000000040.xlog'
SA-2 | 2021-10-07 17:13:32.811 [3516] main/135/applier/admin@localhost:13302 I> subscribed
SA-2 | 2021-10-07 17:13:32.811 [3516] main/135/applier/admin@localhost:13302 I> remote vclock {1: 1566} local vclock {1: 1566}
SA-2 | 2021-10-07 17:13:32.816 [3516] main/135/applier/admin@localhost:13302 C> leaving orphan mode
SA-2 | 2021-10-07 17:13:32.816 [3516] relay/127.0.0.1:50130/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13303/00000000000000001566.xlog'
SA-1 | 2021-10-07 17:13:32.801 [3508] main/160/applier/admin@localhost:13303 I> remote master cbd1842d-c439-461f-bde0-a8d137c001e8 at 127.0.0.1:13303 running Tarantool 1.10.11
SA-1 | 2021-10-07 17:13:32.802 [3508] main/160/applier/admin@localhost:13303 I> authenticated
SA-1 | 2021-10-07 17:13:32.802 [3508] main/160/applier/admin@localhost:13303 I> subscribed
SA-1 | 2021-10-07 17:13:32.802 [3508] main/160/applier/admin@localhost:13303 I> remote vclock {1: 1566} local vclock {1: 1566}
SA-1 | 2021-10-07 17:13:32.811 [3508] main/160/applier/admin@localhost:13303 C> leaving orphan mode
SA-1 | 2021-10-07 17:13:32.811 [3508] main/139/main I> subscribed replica cbd1842d-c439-461f-bde0-a8d137c001e8 at fd 76, aka 127.0.0.1:13302, peer of 127.0.0.1:37370
SA-1 | 2021-10-07 17:13:32.811 [3508] main/139/main I> remote vclock {1: 1566} local vclock {1: 1566}
SA-1 | 2021-10-07 17:13:32.814 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
SA-1 | 2021-10-07 17:13:32.815 [3508] relay/127.0.0.1:37370/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13302/00000000000000000040.xlog'
SA-1 | 2021-10-07 17:13:32.816 [3508] relay/127.0.0.1:37370/101/main I> done `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13302/00000000000000000040.xlog'
SA-1 | 2021-10-07 17:13:32.816 [3508] relay/127.0.0.1:37370/101/main I> recover from `/tmp/tmp.cartridge.Icc3tiwj9yjI/localhost-13302/00000000000000001566.xlog'
R-1 | 2021-10-07 17:13:32.808 [3500] main/171/http/127.0.0.1:57056 twophase.lua:428 W> Committed patch_clusterwide at localhost:13301
R-1 | 2021-10-07 17:13:32.809 [3500] main/171/http/127.0.0.1:57056 twophase.lua:428 W> Committed patch_clusterwide at localhost:13302
R-1 | 2021-10-07 17:13:32.809 [3500] main/171/http/127.0.0.1:57056 twophase.lua:428 W> Committed patch_clusterwide at localhost:13303
R-1 | 2021-10-07 17:13:32.809 [3500] main/171/http/127.0.0.1:57056 twophase.lua:428 W> Committed patch_clusterwide at localhost:13304
R-1 | 2021-10-07 17:13:32.809 [3500] main/171/http/127.0.0.1:57056 twophase.lua:428 W> Committed patch_clusterwide at localhost:13305
R-1 | 2021-10-07 17:13:32.809 [3500] main/171/http/127.0.0.1:57056 twophase.lua:573 W> Clusterwide config updated successfully
SA-1 | 2021-10-07 17:13:32.916 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
SA-1 | 2021-10-07 17:13:33.016 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
SA-1 | 2021-10-07 17:13:33.118 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
SA-1 | 2021-10-07 17:13:33.219 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
R-1 | 2021-10-07 17:13:33.304 [3500] main/162/vshard.failover._static_router I> module is reloaded, restarting
R-1 | 2021-10-07 17:13:33.304 [3500] main/162/vshard.failover._static_router I> failover_f has been started
R-1 | 2021-10-07 17:13:33.305 [3500] main/162/vshard.failover._static_router I> New replica localhost:13304(admin@localhost:13304) for replicaset(uuid="3420b2e2-9647-4ca1-b1b7-71f83560e725", master=localhost:13304(admin@localhost:13304))
```
R-1 | 2021-10-07 17:13:33.305 [3500] main/162/vshard.failover._static_router I> New replica localhost:13302(admin@localhost:13302) for replicaset(uuid="d24d570f-1967-4af1-8108-e1b1e8f925bb", master=localhost:13302(admin@localhost:13302)) 2021-10-07T17:19:08.6020031Z R-1 | 2021-10-07 17:13:33.305 [3500] main/162/vshard.failover._static_router I> All replicas are ok 2021-10-07T17:19:08.6021578Z R-1 | 2021-10-07 17:13:33.305 [3500] main/162/vshard.failover._static_router I> Failovering step is finished. Schedule next after 1.000000 seconds 2021-10-07T17:19:08.6023184Z SA-1 | 2021-10-07 17:13:33.320 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6024643Z R-1 | 2021-10-07 17:13:33.328 [3500] main/163/vshard.discovery._static_router I> module is reloaded, restarting 2021-10-07T17:19:08.6026055Z R-1 | 2021-10-07 17:13:33.328 [3500] main/163/vshard.discovery._static_router I> discovery_f has been started 2021-10-07T17:19:08.6027962Z R-1 | 2021-10-07 17:13:33.349 [3500] main/163/vshard.discovery._static_router I> Updated replicaset(uuid="3420b2e2-9647-4ca1-b1b7-71f83560e725", master=localhost:13304(admin@localhost:13304)) buckets: was 1, became 1000 2021-10-07T17:19:08.6030272Z R-1 | 2021-10-07 17:13:33.350 [3500] main/163/vshard.discovery._static_router I> Updated replicaset(uuid="d24d570f-1967-4af1-8108-e1b1e8f925bb", master=localhost:13302(admin@localhost:13302)) buckets: was 0, became 999 2021-10-07T17:19:08.6032384Z R-1 | 2021-10-07 17:13:33.350 [3500] main/163/vshard.discovery._static_router I> Start aggressive discovery, 1001 buckets are unknown. 
Discovery works with 1 seconds interval 2021-10-07T17:19:08.6034107Z SA-1 | 2021-10-07 17:13:33.422 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6035597Z SA-1 | 2021-10-07 17:13:33.523 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6037073Z SA-1 | 2021-10-07 17:13:33.624 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6038559Z SA-1 | 2021-10-07 17:13:33.726 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6040034Z SA-1 | 2021-10-07 17:13:33.827 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6041517Z SA-1 | 2021-10-07 17:13:33.929 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6042996Z SA-1 | 2021-10-07 17:13:34.030 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6044483Z SA-1 | 2021-10-07 17:13:34.131 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6045958Z SA-1 | 2021-10-07 17:13:34.232 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6047319Z R-1 | 2021-10-07 17:13:34.305 [3500] main/162/vshard.failover._static_router I> All replicas are ok 2021-10-07T17:19:08.6048717Z SA-1 | 2021-10-07 17:13:34.333 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6050957Z R-1 | 2021-10-07 17:13:34.372 [3500] main/163/vshard.discovery._static_router I> Updated replicaset(uuid="3420b2e2-9647-4ca1-b1b7-71f83560e725", master=localhost:13304(admin@localhost:13304)) buckets: was 1000, became 1500 2021-10-07T17:19:08.6053618Z R-1 | 2021-10-07 17:13:34.372 [3500] 
main/163/vshard.discovery._static_router I> Updated replicaset(uuid="d24d570f-1967-4af1-8108-e1b1e8f925bb", master=localhost:13302(admin@localhost:13302)) buckets: was 999, became 1500 2021-10-07T17:19:08.6055731Z R-1 | 2021-10-07 17:13:34.372 [3500] main/163/vshard.discovery._static_router I> Discovery enters idle mode, all buckets are known. Discovery works with 10 seconds interval now 2021-10-07T17:19:08.6057424Z SA-1 | 2021-10-07 17:13:34.435 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6058919Z SA-1 | 2021-10-07 17:13:34.536 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6060415Z SA-1 | 2021-10-07 17:13:34.637 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6061900Z SA-1 | 2021-10-07 17:13:34.738 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6063374Z SA-1 | 2021-10-07 17:13:34.839 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6064859Z SA-1 | 2021-10-07 17:13:34.940 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6066336Z SA-1 | 2021-10-07 17:13:35.041 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6067809Z SA-1 | 2021-10-07 17:13:35.142 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6069301Z SA-1 | 2021-10-07 17:13:35.243 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6070780Z SA-1 | 2021-10-07 17:13:35.344 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6072248Z SA-1 | 2021-10-07 17:13:35.446 [3508] main/154/vshard.rebalancer I> Some 
buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6073714Z SA-1 | 2021-10-07 17:13:35.547 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6075190Z SA-1 | 2021-10-07 17:13:35.648 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6076651Z SA-1 | 2021-10-07 17:13:35.749 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6078132Z SA-1 | 2021-10-07 17:13:35.851 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6079608Z SA-1 | 2021-10-07 17:13:35.952 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6081089Z SA-1 | 2021-10-07 17:13:36.053 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6082563Z SA-1 | 2021-10-07 17:13:36.154 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6084043Z SA-1 | 2021-10-07 17:13:36.256 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6085517Z SA-1 | 2021-10-07 17:13:36.358 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6086985Z SA-1 | 2021-10-07 17:13:36.458 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6089777Z SA-1 | 2021-10-07 17:13:36.560 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6091508Z SA-1 | 2021-10-07 17:13:36.660 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6093271Z SA-1 | 2021-10-07 17:13:36.762 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 
2021-10-07T17:19:08.6094742Z SA-1 | 2021-10-07 17:13:36.863 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6096214Z SA-1 | 2021-10-07 17:13:36.964 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6097682Z SA-1 | 2021-10-07 17:13:37.065 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6099163Z SA-1 | 2021-10-07 17:13:37.166 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6100679Z SA-1 | 2021-10-07 17:13:37.267 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6102154Z SA-1 | 2021-10-07 17:13:37.368 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6103626Z SA-1 | 2021-10-07 17:13:37.469 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6105095Z SA-1 | 2021-10-07 17:13:37.570 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6106568Z SA-1 | 2021-10-07 17:13:37.672 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6108048Z SA-1 | 2021-10-07 17:13:37.774 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6109455Z SB-1 | 2021-10-07 17:13:37.799 [3525] main/151/vshard.recovery I> Starting sending buckets recovery step 2021-10-07T17:19:08.6111154Z SB-1 | 2021-10-07 17:13:37.799 [3525] main/151/vshard.recovery I> Bucket 1 is sending local and receiving on replicaset d24d570f-1967-4af1-8108-e1b1e8f925bb, waiting 2021-10-07T17:19:08.6112907Z SB-1 | 2021-10-07 17:13:37.799 [3525] main/151/vshard.recovery I> Finish bucket recovery step, 0 sending buckets are recovered among 1 
2021-10-07T17:19:08.6114378Z SA-1 | 2021-10-07 17:13:37.815 [3508] main/153/vshard.recovery I> Starting receiving buckets recovery step 2021-10-07T17:19:08.6117747Z SA-1 | 2021-10-07 17:13:37.816 [3508] main/153/vshard.recovery I> Finish bucket recovery step, 1 receiving buckets are recovered among 1 2021-10-07T17:19:08.6119319Z SA-1 | 2021-10-07 17:13:37.876 [3508] main/154/vshard.rebalancer I> Some buckets are not active, retry rebalancing later 2021-10-07T17:19:08.6120024Z 2021-10-07T17:19:08.6120284Z 2021-10-07T17:19:08.6121126Z Ran 365 tests in 341.705 seconds, 364 successes, 1 fail, 7 skipped
Closed in #1669