
Large memory usage on Windows 7, freezes on Fedora 24 #2247

Closed
CsatiZoltan opened this issue Nov 8, 2016 · 27 comments

Comments

@CsatiZoltan

JabRef 3.6 consumes a lot of memory under Windows 7 (more than 500 MB, see the attached screenshot) and sometimes freezes; when it does, it keeps allocating more and more memory without bound (I killed the process after it reached 1500 MB). On Fedora 24 it freezes much more often. Earlier versions were completely stable on Windows 7.
[screenshot: jabref]

@stefan-kolb
Member

Thank you for your report 👍
This should be fixed in current master. Please try the latest build from http://builds.jabref.org/master.

@stefan-kolb stefan-kolb added the status: waiting-for-feedback The submitter or other users need to provide more information about the issue label Nov 8, 2016
@CsatiZoltan
Author

After installing the latest build from your link, it didn't get any better. When I tried to close it, it didn't exit, and the allocated memory was not freed (see the screenshot). I had to kill the process.
[screenshot: capture]

@stefan-kolb
Member

Can you give us some steps to reproduce the memory leak?

@CsatiZoltan
Author

I just opened it and added an article. Here is the .bib file.
SaddlePoint.zip

@stefan-kolb
Member

Ah, thanks, great! We will investigate this!

@grimes2
Contributor

grimes2 commented Nov 8, 2016

500 MB of memory is normal for JabRef. Your system seems to have only 2 GB of RAM.

@CsatiZoltan
Author

No, my system has 16 GB of RAM.

@Siedlerchr
Member

Siedlerchr commented Nov 8, 2016

Do you have the 64bit java runtime environment installed?

@koppor
Member

koppor commented Nov 8, 2016

Refs #2166 #2175

@tobiasdiez
Member

I can at least confirm the large memory footprint. After opening a normal database (mine has 500 entries and a few groups), JabRef uses only about 200 MB of RAM. Now open the entry editor and run through a few entries (using the arrow keys, so that a new entry editor is created for each entry). Result: over 1 GB of RAM usage. By the way, the same database under 3.6 needed less than 100 MB of RAM.

@stefan-kolb
Member

stefan-kolb commented Nov 9, 2016

[heap usage screenshot]

  • The spikes correspond to running through the entries.
  • Strangely enough, the heap grows continuously without any interaction with the program.
    [heap usage screenshot]

As long as I keep the entry editor closed nothing happens to the RAM.

@stefan-kolb stefan-kolb added this to the v3.7 milestone Nov 9, 2016
@grimes2
Contributor

grimes2 commented Nov 9, 2016

JabRef 3.7-dev--snapshot--2016-11-08--master--fffad83
windows 10 10.0 amd64
Java 1.8.0_111

I can't reproduce this. For me, the issue was fixed by #2175.

Does this issue occur in BibTeX mode or in BibLaTeX mode?

@matthiasgeiger
Member

Both.

@CsatiZoltan
Author

Using JabRef 3.7-dev--snapshot--2016-11-08--master--fffad83 with OpenJDK 1.8.0_111 under Fedora 24 seems to have solved the problem. Doing the same procedure as @tobiasdiez, i.e.

I can at least confirm the large memory footprint. After opening a normal database (mine has 500 entries and a few groups), JabRef uses only about 200 MB of RAM. Now open the entry editor and run through a few entries (using the arrow keys, so that a new entry editor is created for each entry). Result: over 1 GB of RAM usage. By the way, the same database under 3.6 needed less than 100 MB of RAM.

does not increase the memory usage on Fedora.

@matthiasgeiger matthiasgeiger added the bug Confirmed bugs or reports that are very likely to be bugs label Nov 10, 2016
@matthiasgeiger
Member

Okay, I investigated this a bit, but I'm not sure what conclusions to draw.
Have a look at the following heap diagram:

[heap diagram: grabbed_20161111-093629]

The first spikes (unmarked) are produced by constantly switching from entry to entry by holding down the cursor key while the entry editor is closed.

Opening the entry editor and slowly switching from one entry to the next produces the footprint marked blue: more space is allocated, but it is quickly garbage-collected.

With the entry editor open, holding down the cursor key produces the footprint marked red: due to the high CPU load, the newly created EntryEditors are not garbage-collected, so more and more memory is allocated. (Side note: this is massively improved if no DatePickers are created!)

Performing a manual GC (green) frees most allocated memory.

Thus, there is no "memory leak" in the strict sense, as all resources are eventually freed.

However, there is another strange effect I was not able to pin down: without any interaction, more and more heap space is used:

[heap diagram: grabbed_20161111-094723]

This is also not a "real" memory leak, as the memory will be freed upon garbage collection.

Note: this behavior is not new; all JabRef versions since at least 2.10 have done this. I was not able to find the reason for it...

To conclude: JabRef's large memory consumption is not nice, but it should not be a big issue by itself. It would be great if someone could track down this strange constant growth of the heap space. However, I assume something strange is happening in some AWT thread that we cannot easily fix...
Thus: this should not be a blocker for 3.7, but we should consider removing those buggy DatePickers...
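The pattern described above (objects piling up under load but disappearing after a manual GC) can be reproduced outside JabRef with a small sketch; the class name and allocation sizes below are made up for illustration and are not JabRef code:

```java
// HeapDemo.java -- illustrative only, not part of JabRef.
public class HeapDemo {

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        // Simulate many short-lived editor-like objects (~50 MB total).
        byte[][] chunks = new byte[50][];
        for (int i = 0; i < 50; i++) {
            chunks[i] = new byte[1 << 20];
        }
        long before = usedHeap();

        chunks = null;   // drop all references, like closing the EntryEditors
        System.gc();     // request a manual GC (only a hint to the JVM)

        long after = usedHeap();
        System.out.println("used before GC: " + before / 1024 + " KB");
        System.out.println("used after GC:  " + after / 1024 + " KB");
    }
}
```

Under light load the "after" number typically drops sharply, matching the green marker in the diagram; under sustained CPU load the collection is deferred and the heap keeps growing first.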

@grimes2
Contributor

grimes2 commented Nov 11, 2016

Ref #2176

@matthiasgeiger matthiasgeiger added type: enhancement and removed bug Confirmed bugs or reports that are very likely to be bugs status: waiting-for-feedback The submitter or other users need to provide more information about the issue labels Nov 11, 2016
@matthiasgeiger matthiasgeiger modified the milestones: v3.8, v3.7 Nov 11, 2016
@lenhard
Member

lenhard commented Dec 5, 2016

Now that #2176 is fixed by #2340, we can close this issue as well, can't we?

After all, replacing the date picker was all we decided to do about this?

@matthiasgeiger
Member

More or less, yes.

I'll check whether this is now changed with LGoodDatePicker.

@stefan-kolb stefan-kolb added the status: waiting-for-feedback The submitter or other users need to provide more information about the issue label Dec 6, 2016
@matthiasgeiger
Member

No change with LGoodDatePicker: opening an EntryEditor and then cycling through the main table by holding down the cursor key creates massive CPU load and memory usage, which is eventually garbage-collected.

However, this is not the real issue here. The constantly growing used heap space (even while JabRef is just idling) will potentially lead to more and more memory consumption, as the JVM increases (and decreases) the heap size after each automatic GC. And since the whole heap space is assigned to the JVM, its full size is reported as "used" by the OS.

As already written above: I could not find the reason for the constant changes in the used heap space. Perhaps someone else is able to find the reason for this...
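For anyone trying to chase this idle growth without a profiler attached, a minimal heap logger might look like the sketch below; the class name, sample count, and interval are arbitrary assumptions, not JabRef code:

```java
// HeapSampler.java -- a minimal used/total heap logger, not part of JabRef.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 5; i++) {
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.printf("sample %d: used=%d KB, total=%d KB%n",
                    i, used / 1024, rt.totalMemory() / 1024);
            Thread.sleep(200); // arbitrary sampling interval
        }
    }
}
```

Run alongside an idling application, a steadily climbing "used" column between collections would reproduce the effect described above.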

@matthiasgeiger matthiasgeiger removed the status: waiting-for-feedback The submitter or other users need to provide more information about the issue label Dec 6, 2016
@lenhard
Member

lenhard commented Dec 6, 2016

Ok, what a pity. To proceed, two things:

  1. If we actually want to improve memory usage, we need a reproducible and easily executable benchmark that we can optimize against. Otherwise, it is hard to test a potential solution or to monitor JabRef's memory consumption over time. I guess you do not want to redo this analysis every time we try something out... @matthiasgeiger or @tobiasdiez, do you see potential for implementing a memory test with JMH that allows us to reproduce the problem here?

  2. Since this is about unused heap space, we may play around with different command line args, as we did in the case of string deduplication. However, a reproducible benchmark we can use to assess progress (see point 1) would be beneficial first. Regarding command line args for reducing unused heap space, see http://stackoverflow.com/q/38295692/1127892. The suggestion there is to use -XX:+UseG1GC.
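As a sanity check for experiments with such flags, the running JVM can report which collectors are actually active, so a flag experiment can be verified programmatically. A minimal, generic sketch (the class name is made up; this is not JabRef code):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfo {
    public static void main(String[] args) {
        // With -XX:+UseG1GC this typically prints
        // "G1 Young Generation" and "G1 Old Generation";
        // with the default parallel collector, "PS Scavenge" and "PS MarkSweep".
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " (collections so far: " + gc.getCollectionCount() + ")");
        }
    }
}
```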

@lenhard
Member

lenhard commented Dec 9, 2016

So, I did some preliminary testing with the -XX:+UseG1GC arg and a very small 40-entry database: here are the results when cycling through the main table:

Current master without additional args
[heap graph: without additional args]

Current master with args:
[heap graph: with -XX:+UseG1GC]

Note that with the args the total heap is much smaller and stays constant. I say we test this some more with a gigantic database, and if it works there as well, go for it.

Documentation of the garbage collector I set is here: http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html. We are currently using the default, which resolves to -XX:+UseParallelGC.

@lenhard
Member

lenhard commented Dec 9, 2016

And another nice summary of the available garbage collectors: http://blog.takipi.com/garbage-collectors-serial-vs-parallel-vs-cms-vs-the-g1-and-whats-new-in-java-8/

Note that the G1GC comes with the string deduplication that we use anyway.
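Putting both flags together, a launch command could look like the following sketch; the jar name is hypothetical and depends on the actual build:

```shell
# Hypothetical launch command; replace JabRef.jar with the actual jar name/path.
# Note: -XX:+UseStringDeduplication is a G1-only feature on Java 8,
# so it requires -XX:+UseG1GC.
java -XX:+UseG1GC -XX:+UseStringDeduplication -jar JabRef.jar
```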

@lenhard
Member

lenhard commented Dec 9, 2016

And here is some data for cycling through a 6500-entry database:

With G1GC (initial growth when I opened the database):
[heap graph: 6500 entries, with G1GC]

With the default parallel GC:
[heap graph: 6500 entries, default parallel GC]

@lenhard
Member

lenhard commented Dec 9, 2016

Decision at the dev call:

  • Give @matthiasgeiger some more time to look into this.
  • If it works for him, switch to the G1GC.

@koppor
Member

koppor commented Jan 20, 2017

Thank you for reporting this issue. We think that this is already fixed in our development version, and consequently the change will be included in the next release.

We would like to ask you to use a development build from https://builds.jabref.org/master and report back if it works for you.

@koppor koppor closed this as completed Jan 20, 2017
@lenhard
Member

lenhard commented Jan 20, 2017

Just to note: We switched to G1GC, which is integrated in master now.

@andselisk

andselisk commented May 16, 2017

This problem persists for me too: Windows 7 64-bit, JabRef 3.8.2 64-bit installed via Chocolatey, along with the required JRE 8. No matter how big the *.bib file is, upon opening any database file JabRef quickly starts to consume RAM (I have 8 GB) and becomes slow and unresponsive as it reaches around 1.5 GB of memory consumption (after a minute or so). Neither resetting preferences, nor installing the 32/64-bit builds of the stable 3.8.2 release, nor the latest snapshots (4.0.0, 15-05-2017) from http://builds.jabref.org/master/ was of any help.

It's been mentioned above that 500 MB for an average database is normal for JabRef. Well, I strongly disagree. A reference manager should not consume that much memory for what it delivers. Zotero, with about 5K entries containing attachments and a full-text index of all *.pdf files, notes, tags, and all the bells and whistles, rarely consumes more than 200 MB of RAM on the same computer, and it practically never becomes slow or unresponsive.

P.S. I'm back on JabRef 2.10.0; this version doesn't have any of these high-RAM issues. I tried a test database with 1K entries, and it stays steady at 160 MB of RAM with no freezes whatsoever.

Update: I thought it might have something to do with Windows 7, so I checked once I got a second laptop. On a freshly installed Windows 10 Pro 64-bit with the latest JRE 8.0.131 and JabRef 3.8.2 (the only two programs installed), JabRef somehow manages to take over 1.3 GB of RAM and makes the CPU throttle after adding a single entry from a DOI. I would conclude that the current STABLE JabRef version is severely broken, and the upcoming dev versions still do not fix this issue.
