Upgrade storage for mainnet fleet #184
Comments
You can find more examples of me using it, just need to click the …
Ticket created: #351756
Asked about the 4TB price, and about 3TB (if they have it).
4TB will cost twice that, will go ahead.

Pros:
Cons:
Disks are installed. linux-04 has a different slot.
IH had an issue with the disks. They fixed it on linux-01 and I was able to set them up. It was done with approximately these actions (see the sketch below):
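The actual command listing didn't survive this extract; as a rough, non-authoritative sketch, setting up a fresh disk on one of these HPE boxes could look like the following. The controller slot `0`, the drive address `1I:1:3`, and the device name `/dev/sdc` are all assumptions for illustration, not values from the thread.

```sh
# Check that the controller sees the newly installed disk (slot number is an assumption)
ssacli ctrl slot=0 pd all show status

# Create a logical drive on the new physical drive
# (drive address 1I:1:3 is hypothetical; real addresses come from the status output)
ssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0

# Format the resulting block device and mount it (device name is an assumption)
mkfs.ext4 /dev/sdc
mkdir -p /data
mount /dev/sdc /data
```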
Need to fine-tune a bit.
BTW, if docs or support asks about using a different tool, here is the timeline:
Done, disks are set up with these commands:
Something is missing before the second command. I didn't research much and just put the command above. One finding: there's an extra device that looks like some 256MB USB drive. ChatGPT says it can have something to do with the HP iLO (Integrated Lights-Out) LUN (Logical Unit Number). I have no idea what that is.
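For what it's worth, a device like that can usually be identified from the block device listing. A minimal check might look like this; nothing here is from the original thread, and the `/dev/sdd` path is hypothetical:

```sh
# List block devices with transport and model to spot the odd one out;
# an iLO virtual USB device typically shows up with TRAN=usb and a ~256MB size
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL,MOUNTPOINT

# Optionally query the suspect device's vendor/model details
udevadm info --query=all --name=/dev/sdd | grep -i -e vendor -e model
```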
Thanks for getting this done.
It's about time we increase the storage available for both Docker containers (Geth) and Systemd services (Beacon Nodes).

The current layout involves a single logical volume per single physical volume (SSD), configured in the controller.
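For context, the current one-logical-drive-per-SSD layout can be inspected from the OS, assuming the HPE `ssacli` utility is installed and the controller sits in slot 0 (both assumptions for this sketch):

```sh
# Show controllers with their logical and physical drive configuration;
# expect one logical drive per physical SSD in the current layout
ssacli ctrl all show config

# More detail on a specific controller (slot number is an assumption)
ssacli ctrl slot=0 ld all show detail
```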
The migration to RAID0 logical volumes spanning two SSDs with the HPE Smart Array utility is documented here:
https://docs.infra.status.im/general/hp_smart_array_raid.html
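I haven't cross-checked the linked doc here, but the controller-side part of that migration presumably boils down to something like the following. The slot, logical drive number, and drive addresses are placeholders, not values taken from the doc:

```sh
# Remove the existing single-SSD logical drive (ld number is hypothetical)
ssacli ctrl slot=0 ld 2 delete forced

# Re-create it as one RAID0 logical drive striped across the two SSDs
ssacli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4 raid=0
```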
The steps for the migration of each host will look like this (see the command sketch below):

1. Copy the `/data` files to a temporary migration SSD.
2. Delete the `/data` logical volume and re-create it with two physical volumes (SSDs) as one RAID0 logical volume.
3. Copy the files back to the new `/data` volume.
4. Repeat the same process for the `/docker` volume.

I would recommend creating a single support ticket to order 2 extra SSDs of the same type for all `nimbus.mainnet` hosts, and then managing the migration of each host in the comments of that ticket.
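As a non-authoritative illustration of steps 1-4 above, the host-side part of one volume's migration could look like this. The service name, device paths, and mount points are assumptions for the sketch, not values from this issue:

```sh
# 1. Stop the services writing to /data, then copy everything off
systemctl stop beacon-node-mainnet    # service name is an assumption
mount /dev/sde /mnt/migration         # temporary migration SSD; device path is an assumption
rsync -aHAX --info=progress2 /data/ /mnt/migration/

# 2. Unmount /data and re-create the logical drive as one RAID0 LD
#    on the controller (see the ssacli sketch earlier in this issue)
umount /data

# 3. Format the new striped device, mount it, and copy the files back
mkfs.ext4 /dev/sdb                    # device name is an assumption
mount /dev/sdb /data
rsync -aHAX --info=progress2 /mnt/migration/ /data/

# Bring the services back up, then repeat the same for /docker
systemctl start beacon-node-mainnet
```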