This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

RAM and open files very high #6516

Closed
carver opened this issue Sep 13, 2017 · 10 comments

Labels
F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. M4-core ⛓ Core client code / Rust. P5-sometimesoon 🌲 Issue is worth doing soon. Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known.
Milestone
1.9

Comments

@carver

carver commented Sep 13, 2017

I'm running:

  • Parity version: 1.7.0
  • Operating system: Xubuntu 17.04
  • And installed: via *.deb from parity.io website

I'm running with command:
parity --auto-update all --base-path /mnt/ssd-drive --no-ui --no-ws --no-jsonrpc --no-dapps --db-compaction ssd --geth --log-file /var/log/parity.log

My RAM usage grows to 12+ GB over the course of a day, and the number of open files clocked in at 792204 (after increasing the default open-file limit, of course).

I've tried various settings, like --cache-size 1024, to try to limit RAM use. Nothing has changed.

I'm doing lots of contract reads, if that's related.


I'm having some other problems, so I'll include them just in case it helps identify a cluster of issues:

  • peers trend to 0 over time and stay there. The only way I can stay synced is with a hardcoded geth peer on my local network (NTP shows me at 50-100ms offset)
  • UPnP isn't working. My geth client already reserved 30303, but I expect it to still work and just broadcast a different external port.

I'd really rather use parity because that initial sync is so fast. This and the peers->0 issue are holding me back.

@carver
Author

carver commented Sep 14, 2017

Interesting: lsof -p $MY_PARITY_PID shows that the vast majority of the open handles are for $BASE/jsonrpc.ipc.

I'm making a lot of requests over IPC, and each one does open a connection, but AFAICT the code is closing all of them: https://github.com/pipermerriam/web3.py/blob/master/web3/providers/ipc.py#L34

Maybe parity is not letting go of the open file handle when the client closes the stream?
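
For reference, the client-side pattern is roughly this: open a fresh connection to jsonrpc.ipc for every JSON-RPC request and close it right after the response. A minimal sketch with the raw socket module (not the actual web3.py provider code; the path is just my --base-path):

import json
import socket

IPC_PATH = "/mnt/ssd-drive/jsonrpc.ipc"  # wherever --base-path puts jsonrpc.ipc

def ipc_request(method, params=None):
    # One request = one connection: connect, send, read, close.
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params or [],
    }).encode("utf-8")
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(IPC_PATH)
        sock.sendall(payload)
        response = sock.recv(4096)  # fine for small responses like this one
    finally:
        sock.close()  # the client end is closed here; the server should drop its end too
    return json.loads(response.decode("utf-8"))

print(ipc_request("eth_blockNumber"))

If the server kept its side of each of these short-lived connections around, the handle count would climb with every request.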

@5chdn 5chdn added the Z1-question 🙋‍♀️ Issue is a question. Closer should answer. label Sep 14, 2017
@5chdn
Contributor

5chdn commented Sep 14, 2017

I'm doing lots of contract reads, if that's related.

What exactly are you doing? 12 GB of memory is either a memory leak (unlikely) or related to whatever you are doing there :)

The same could apply to the number of open files, so please try to share more details.

peers trend to 0 over time and stay there. The only way I can stay synced is with a hardcoded geth peer on my local network (NTP shows me at 50-100ms offset)

How long have you been running this node? Can you try again after removing ~/.local/share/io.parity.ethereum/chains/ethereum/network/nodes.json ?

UPnP isn't working. My geth client already reserved 30303, but I expect it to still work and just broadcast a different external port.

Have you seen the convenience options? Try --ports-shift 1:

Convenience Options:
  -c --config CONFIG           Specify a filename containing a configuration file.
                               (default: $BASE/config.toml)
  --ports-shift SHIFT          Add SHIFT to all port numbers Parity is listening on.
                               Includes network port and all servers (RPC, WebSockets, UI, IPFS, SecretStore).
                               (default: 0)

@carver
Author

carver commented Sep 14, 2017

What exactly are you doing?

I'm doing a bunch of mapping lookups back to back. So maybe 10-20k reads, pause a bit, then run it again.

It sure looks like it's keeping an open file handle for every request to the IPC. I sampled this while my reader wasn't running:

$ lsof -p $(pgrep parity) | awk '{$3=""; $4=""; $8=""; print}' | sort | uniq -c | sort -n | tail -n 4
      3 parity 21011   CHR 136,3 0t0  /dev/pts/3
     12 parity 21011   FIFO 0,10 0t0  pipe
     46 parity 21011   sock 0,8 0t0  protocol: TCP
  86370 parity 21011   unix 0x0000000000000000 0t0  $BASE/jsonrpc.ipc type=STREAM

Then ran the reader again until completion. Then counted the open files again:

$ lsof -p $(pgrep parity) | awk '{$3=""; $4=""; $8=""; print}' | sort | uniq -c | sort -n | tail -n 4
      3 parity 21011   CHR 136,3 0t0  /dev/pts/3
     12 parity 21011   FIFO 0,10 0t0  pipe
     43 parity 21011   sock 0,8 0t0  protocol: TCP
 104069 parity 21011   unix 0x0000000000000000 0t0  $BASE/jsonrpc.ipc type=STREAM

~18k new file handles opened, permanently.
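
(For anyone who wants to script this check, here is a rough Python equivalent of the sampling above. It just shells out to lsof and pgrep, so both need to be installed, and it assumes a single parity process.)

import subprocess

def count_ipc_handles(pid, needle="jsonrpc.ipc"):
    # Count the open handles whose lsof line mentions the IPC socket.
    out = subprocess.check_output(["lsof", "-p", str(pid)]).decode()
    return sum(1 for line in out.splitlines() if needle in line)

pid = int(subprocess.check_output(["pgrep", "parity"]).decode().split()[0])
before = count_ipc_handles(pid)
# ... run the reader to completion here ...
after = count_ipc_handles(pid)
print("jsonrpc.ipc handles: %d -> %d" % (before, after))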


peers trend to 0 over time and stay there

How long have you been running this node?

Maybe a week. I'm pretty sure this is just a side effect of hitting the open file limit, because I have plenty of peers for the first few hours. Let's punt on this until the file limit issue is resolved.


Have you seen the convenience options? Try --ports-shift 1

Excellent, thanks. I tried it, but my node ID still shows my local network IP, and my router doesn't show a UPnP entry (it does for geth):

Public node URL: enode://a<snip>0@192.168.<snip>:30304

The UPnP issue seems unrelated, so I'm happy to open a separate issue.

@5chdn 5chdn added F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. M4-core ⛓ Core client code / Rust. P5-sometimesoon 🌲 Issue is worth doing soon. Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known. and removed Z1-question 🙋‍♀️ Issue is a question. Closer should answer. labels Sep 15, 2017
@5chdn
Contributor

5chdn commented Sep 15, 2017

Thanks for sharing the details. I'll try to reproduce the IPC/open-files issue without web3.py soon, to confirm whether this is a Parity problem.

@mikhail-manuilov

Running Parity of various versions in Kubernetes. Memory usage grows with the version number: 1.6.8 uses 2.5-3.5 GB, 1.7.0 uses 6-8.2 GB, 1.7.2 uses 8-11.7 GB. These nodes carry almost no load, 3 connections to the JSON-RPC at most. What's happening?

@carver
Author

carver commented Sep 27, 2017

@5chdn any luck reproducing?

@5chdn 5chdn added this to the 1.9 milestone Oct 5, 2017
@5chdn
Contributor

5chdn commented Oct 18, 2017

Hey @carver, sorry for not getting back to you yet.

Could you look into #6575? Could you try to reuse the IPC connection instead of creating new ones?

@iFA88

iFA88 commented Oct 18, 2017

If you like:
https://github.com/iFA88/web3.py/blob/master/web3/providers/ipc.py

This is slightly reworked: it keeps the IPC socket open, and with that you can do 1000+ requests/s.
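
The gist of it, as a minimal sketch (the idea, not the code linked above; the response framing is naive and the path is just an example):

import json
import socket

class PersistentIPC(object):
    # Open the unix socket once and reuse it for every request.
    def __init__(self, ipc_path):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(ipc_path)
        self.request_id = 0

    def request(self, method, params=None):
        self.request_id += 1
        payload = json.dumps({
            "jsonrpc": "2.0",
            "id": self.request_id,
            "method": method,
            "params": params or [],
        }).encode("utf-8")
        self.sock.sendall(payload)
        raw = b""
        while True:
            raw += self.sock.recv(4096)
            try:
                # Naive framing: keep reading until one complete JSON object parses.
                return json.loads(raw.decode("utf-8"))
            except ValueError:
                continue

    def close(self):
        self.sock.close()

ipc = PersistentIPC("/mnt/ssd-drive/jsonrpc.ipc")  # one handle for the whole run
for _ in range(1000):
    ipc.request("eth_blockNumber")
ipc.close()

With one long-lived socket the server only ever sees one connection, so the open-handle count stays flat no matter how many requests you send.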

@5chdn
Contributor

5chdn commented Oct 18, 2017

Thanks for sharing!

@5chdn 5chdn closed this as completed Oct 18, 2017
carver added 3 commits to ethereum/web3.py that referenced this issue Oct 19, 2017
@carver
Author

carver commented Nov 22, 2017

FYI, persistent IPC connections were released in web3.py v3.16.3 and v4.0.0-beta.1
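
With that, the usual pattern is to construct the provider once and reuse it for every call, so all requests share a single open socket. Roughly (assuming the v4-style API and an example ipc path):

from web3 import Web3, IPCProvider

# One provider instance, so every request reuses the same IPC connection
w3 = Web3(IPCProvider("/mnt/ssd-drive/jsonrpc.ipc"))

for _ in range(10000):
    w3.eth.blockNumber  # repeated reads over a single open socket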
