
msgps value too low, cannot sync #624

Closed
uniftyitadmin opened this issue Dec 2, 2021 · 13 comments
Labels
question Further information is requested

Comments


uniftyitadmin commented Dec 2, 2021

System information

Geth version: 1.1.5.
OS & Version: Linux Debian 11
Commit hash

Expected behaviour

mgasps should be much higher

Actual behaviour

mgasps is too low

t=2021-12-02T07:48:03+0000 lvl=info msg="Deep froze chain segment" blocks=6 elapsed=29.162ms number=12,920,769 hash=0x3e4e5a6d216ca653733ea40af53d2dd953ee693ce38b74cf3be63a728ef7b437
t=2021-12-02T07:48:09+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=692 mgas=99.507 elapsed=11.647s mgasps=8.543 number=13,010,770 hash=0x1d9611e4c668362bdceab4dead0a4c724d7a5fd91c05f2fc75a1e7cc89fe738f age=4d6h47m dirty="325.92 MiB"
t=2021-12-02T07:48:19+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=658 mgas=99.514 elapsed=10.222s mgasps=9.735 number=13,010,771 hash=0x56ef2fcf1354d4f455742c1b9f16e8826d1ee9db45a9b01b8812715db5ba4d40 age=4d6h47m dirty="336.31 MiB"
t=2021-12-02T07:48:29+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=534 mgas=84.786 elapsed=9.182s mgasps=9.233 number=13,010,772 hash=0xa221a26ea1a948a9e616bb98e55eb11d9206de90bbedec946b302ad19c3dfd9a age=4d6h47m dirty="345.40 MiB"
t=2021-12-02T07:48:40+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=676 mgas=99.499 elapsed=10.967s mgasps=9.072 number=13,010,773 hash=0x841f6530d7988bb6608a6050907b95bb91c493eaf974ccfefd9022ca85eaf837 age=4d6h47m dirty="355.85 MiB"
t=2021-12-02T07:48:50+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=653 mgas=99.500 elapsed=10.306s mgasps=9.654 number=13,010,774 hash=0x7a432a989395295a90a79b0add3d7598b33da2b2b14cdcd6286598f579d63377 age=4d6h47m dirty="366.51 MiB"
t=2021-12-02T07:49:00+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=680 mgas=99.517 elapsed=10.631s mgasps=9.361 number=13,010,775 hash=0x9ee63d1b5b512cf3f8954a7819260277bed99560789cda9b16379c6c6cbe44ed age=4d6h47m dirty="376.83 MiB"
t=2021-12-02T07:49:03+0000 lvl=info msg="Deep froze chain segment" blocks=6 elapsed=14.823ms number=12,920,775 hash=0x3c1430714f8f2d0e7232a3b5e195678f8dc8e521b28591e483e1bad3ca96b86e
t=2021-12-02T07:49:10+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=687 mgas=99.461 elapsed=9.467s mgasps=10.506 number=13,010,776 hash=0xd3866ab567330c3be8ea94cc5fb8c4b88e06279101a43cbe11624ab77170d0d7 age=4d6h47m dirty="386.97 MiB"
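The slowdown can be quantified directly from log lines like the ones above: each "Imported new chain segment" entry carries an `mgasps=` field. A minimal sketch for extracting and averaging it (this only assumes the key=value format shown above; the regex and variable names are mine, not part of geth):

```python
import re

# Two log lines copied from the output above.
log_lines = [
    't=2021-12-02T07:48:09+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=692 mgas=99.507 elapsed=11.647s mgasps=8.543 number=13,010,770',
    't=2021-12-02T07:48:19+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=658 mgas=99.514 elapsed=10.222s mgasps=9.735 number=13,010,771',
]

# Pull out the mgasps value from each line and average it.
pattern = re.compile(r'mgasps=([0-9.]+)')
values = [float(pattern.search(line).group(1)) for line in log_lines]
avg = sum(values) / len(values)
print(f"average mgasps: {avg:.3f}")  # 9.139 for these two lines
```

Running this over a longer tail of the log gives a quick feel for whether throughput is trending up or down over time.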

My hardware (Hetzner AX101):
AMD Ryzen 9 5950X (12 cores)
48 GB RAM
3 TB NVME (Gen4) - currently 1.1 TB/3 TB full

I used the latest snapshot (the node was installed on 28.11., so the latest snapshot at that time was 26112021). At the start I was a few days behind, but now I am more than 4 days behind.

config.toml and geth.service files:
config.toml.txt
geth.service.txt

Does anyone have, or has anyone had, the same problem, and is there a solution? Please help!


bonhardt commented Dec 2, 2021

Well, there are multiple issues in your comment.

First, the AX101 does indeed have a Ryzen 5950X, but it has 16 cores, not 12, and comes with 128 GB RAM by default. So either you have a serious issue with your config, or it is not the machine you think you have. If it is a virtual machine sharing the Ryzen, that is probably your issue.

Second, there is no such thing as msgps.
There are mgas and mgasps:
mgas is how much gas is used in the block (millions of gas).
mgasps is a performance indicator (millions of gas per second): how much gas your node can process in one second.
Since each block lately contains nearly 100 mgas, a node running at exactly 100 mgasps would process each block in 1 second. You want mgasps as high as possible.
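The relationship described above reduces to simple arithmetic. A back-of-the-envelope sketch, assuming ~100 mgas per block (as in the logs) and BSC's roughly 3-second block interval (my assumption, not stated in this thread):

```python
# How fast must a node import to keep up with the chain?
block_gas_mgas = 100.0   # ~100 million gas per block, as seen in the logs
block_time_s = 3.0       # assumed BSC target block interval

# Break-even throughput: anything below this and the node falls behind.
min_mgasps = block_gas_mgas / block_time_s
print(f"break-even: {min_mgasps:.1f} mgasps")  # 33.3

# At the ~9 mgasps reported in this issue, each block takes:
observed_mgasps = 9.0
seconds_per_block = block_gas_mgas / observed_mgasps
print(f"import time per block: {seconds_per_block:.1f}s")  # 11.1s per 3s block
```

So at ~9 mgasps the node spends roughly 11 seconds importing a block the network produces every ~3 seconds, which is why the sync gap keeps growing.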

@uniftyitadmin
Author

Sorry, I meant mgasps (corrected it). I have an AX101, but it is virtualised using Proxmox VE. I allocated 12 cores and 48 GB to the BSC node and tracked its performance via the Proxmox GUI; it never exceeded or bottlenecked on the allocated resources.

So you are saying that my mgasps is actually low? Is there a way to increase it?


bonhardt commented Dec 2, 2021

Yes, you need it as high as possible. I don't know how you allocated the storage in Proxmox; if it goes through some container image file, that may limit the IOPS of the drive. I would definitely not use a VM for a BSC node.

I have an AX101 without a VM and it runs perfectly fine:

INFO [12-02|13:42:47.245] Imported new chain segment blocks=1 txs=520 mgas=63.692 elapsed=220.987ms mgasps=288.214 number=13,130,801 hash=3218ab..1d024f dirty=1021.70MiB
INFO [12-02|13:42:47.680] Imported new chain segment blocks=1 txs=459 mgas=63.465 elapsed=364.181ms mgasps=174.267 number=13,130,802 hash=c49514..d0ff16 dirty=1.00GiB
INFO [12-02|13:42:50.879] Imported new chain segment blocks=1 txs=494 mgas=69.312 elapsed=547.379ms mgasps=126.626 number=13,130,803 hash=39b6d7..2d9fad dirty=1.00GiB
INFO [12-02|13:42:57.314] Imported new chain segment blocks=1 txs=958 mgas=99.443 elapsed=749.938ms mgasps=132.602 number=13,130,804 hash=a7bb5f..f454de dirty=1.01GiB

@uniftyitadmin
Author

Thanks, I will try that as my last hope.
Also, with atop I am getting this output:
[screenshot: atop output showing the disk busy]
The disk is busy all the time. Do you have the same thing, or is this an actual problem with my setup?

@uniftyitadmin uniftyitadmin changed the title msgps value too high, cannot sync msgps value too low, cannot sync Dec 3, 2021

keefel commented Dec 3, 2021

It seems storage performance is the bottleneck. What are your disk specifications? IOPS and latency are quite important.

@keefel keefel added the question Further information is requested label Dec 3, 2021
@uniftyitadmin
Author

It seems storage performance is the bottleneck. What are your disk specifications? IOPS and latency are quite important.

[screenshot: disk specifications]


keefel commented Dec 3, 2021

As you are using a virtual machine, the result here may not be precise. Could you run a performance test with the fio tool? That will give you the real performance data.

@uniftyitadmin
Author

Output of fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75:

Jobs: 1 (f=1): [m(1)][100.0%][r=92.5MiB/s,w=29.9MiB/s][r=23.7k,w=7651 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=27184: Fri Dec 3 09:35:52 2021
read: IOPS=23.6k, BW=92.0MiB/s (96.5MB/s)(3070MiB/33367msec)
bw ( KiB/s): min=85872, max=104256, per=100.00%, avg=94339.64, stdev=4405.83, samples=66
iops : min=21468, max=26064, avg=23584.83, stdev=1101.44, samples=66
write: IOPS=7871, BW=30.7MiB/s (32.2MB/s)(1026MiB/33367msec); 0 zone resets
bw ( KiB/s): min=28664, max=35664, per=100.00%, avg=31524.67, stdev=1554.12, samples=66
iops : min= 7166, max= 8916, avg=7881.09, stdev=388.51, samples=66
cpu : usr=1.96%, sys=97.88%, ctx=3176, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=3070MiB (3219MB), run=33367-33367msec
WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=1026MiB (1076MB), run=33367-33367msec

Disk stats (read/write):
dm-7: ios=821473/262064, merge=0/0, ticks=648428/246204, in_queue=894632, util=100.00%, aggrios=824111/262873, aggrmerge=0/0, aggrticks=648236/246216, aggrin_queue=894452, aggrutil=100.00%
dm-4: ios=824111/262873, merge=0/0, ticks=648236/246216, in_queue=894452, util=100.00%, aggrios=412063/131436, aggrmerge=0/0, aggrticks=323934/123046, aggrin_queue=446980, aggrutil=100.00%
dm-2: ios=18/0, merge=0/0, ticks=12/0, in_queue=12, util=0.24%, aggrios=824149/263159, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md1: ios=824149/263159, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=411969/262964, aggrmerge=113/193, aggrticks=323031/199291, aggrin_queue=61278, aggrutil=100.00%
nvme1n1: ios=467318/262964, merge=121/193, ticks=351902/152574, in_queue=1940, util=99.82%
nvme0n1: ios=356621/262964, merge=105/193, ticks=294161/246009, in_queue=120616, util=100.00%
dm-3: ios=824108/262873, merge=0/0, ticks=647856/246092, in_queue=893948, util=100.00%

I am using Proxmox LXC container.
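The "ios" and "ticks" counters in the fio disk stats above can be turned into a rough average time per I/O, since ticks is the cumulative milliseconds spent on I/O (the same semantics as /proc/diskstats). A sketch using the dm-4 numbers reported above (note these averages include queueing time, so they overstate pure device latency):

```python
# Rough average I/O time from the fio disk stats above (dm-4 line):
# ios=824111 reads / 262873 writes; ticks=648236 / 246216 ms.
read_ios, write_ios = 824111, 262873
read_ticks_ms, write_ticks_ms = 648236, 246216

avg_read_ms = read_ticks_ms / read_ios     # total read time / read count
avg_write_ms = write_ticks_ms / write_ios  # total write time / write count
print(f"avg read:  {avg_read_ms:.2f} ms")   # 0.79 ms
print(f"avg write: {avg_write_ms:.2f} ms")  # 0.94 ms
```

Sub-millisecond averages per request would be acceptable for SATA, but for a Gen4 NVMe stack they suggest significant overhead from the dm-* layers, consistent with the ~10x IOPS gap versus the bare-metal run below.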


bonhardt commented Dec 3, 2021

Just to compare, here is my output of the same test on the same type of AX101 server, but without a VM. Your IOPS is a fraction of my speed.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][-.-%][r=1136MiB/s,w=381MiB/s][r=291k,w=97.5k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1179264: Fri Dec 3 11:39:57 2021
read: IOPS=290k, BW=1135MiB/s (1190MB/s)(3070MiB/2706msec)
bw ( MiB/s): min= 1128, max= 1138, per=100.00%, avg=1134.62, stdev= 3.60, samples=5
iops : min=288950, max=291354, avg=290463.80, stdev=919.87, samples=5
write: IOPS=97.1k, BW=379MiB/s (398MB/s)(1026MiB/2706msec); 0 zone resets
bw ( KiB/s): min=385056, max=391944, per=100.00%, avg=388702.80, stdev=2772.13, samples=5
iops : min=96264, max=97986, avg=97175.60, stdev=693.12, samples=5
cpu : usr=19.19%, sys=80.70%, ctx=290, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=1135MiB/s (1190MB/s), 1135MiB/s-1135MiB/s (1190MB/s-1190MB/s), io=3070MiB (3219MB), run=2706-2706msec
WRITE: bw=379MiB/s (398MB/s), 379MiB/s-379MiB/s (398MB/s-398MB/s), io=1026MiB (1076MB), run=2706-2706msec

Disk stats (read/write):
md2: ios=726492/243041, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=392960/131328, aggrmerge=0/0, aggrticks=37613/1284, aggrin_queue=0, aggrutil=94.62%
nvme0n1: ios=392855/131433, merge=0/0, ticks=37892/1290, in_queue=0, util=94.62%
nvme1n1: ios=393065/131223, merge=0/0, ticks=37334/1278, in_queue=0, util=94.62%

@uniftyitadmin
Author

OK, thanks @bonhardt, I will do a bare-metal install. Can you share your setup (OS, RAID, etc.)?

@xiangjie256329

@uniftyitadmin how is bare metal working out? Which version are you using?


RumeelHussainbnb commented Dec 21, 2021

Please check #338
Please proceed to join our Discord channel for more discussion at https://discord.com/invite/binancesmartchain

@Classical1956

Hi, can you share a link to the server you purchased?
