
Stop caching the result of numpy.concatenate in LazyFrames #332

Merged: 1 commit, Oct 25, 2018

Conversation

@muupan (Member) commented Oct 16, 2018

This change saves a lot of memory. See openai/baselines#295.

- [ ] Check that it does not affect scores
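For context, here is a minimal sketch (hypothetical, not the actual chainerrl/baselines code) of what this PR does: the pre-PR LazyFrames cached the output of `numpy.concatenate`, so every stored transition kept its own dense concatenated copy alive; dropping the cache re-runs the concatenation on each access, so replay-buffer entries only hold references to the shared per-step frames.

```python
import numpy as np

class LazyFrames:
    """Simplified LazyFrames-like wrapper (illustrative sketch only).

    Holds the individual frames and concatenates them only when the
    array is actually needed, without caching the dense result.
    """

    def __init__(self, frames):
        self._frames = frames  # list of per-step observation arrays

    def __array__(self, dtype=None):
        # Concatenate on every access instead of storing the result.
        # Memory usage is then dominated by the frames themselves,
        # which overlapping stacked observations share.
        out = np.concatenate(self._frames, axis=0)
        if dtype is not None:
            out = out.astype(dtype)
        return out

# Two stacked observations sharing a frame, as in a frame-stacking wrapper
f1, f2, f3 = (np.zeros((1, 84, 84), dtype=np.uint8) for _ in range(3))
obs_a = LazyFrames([f1, f2])
obs_b = LazyFrames([f2, f3])  # shares f2 with obs_a
print(np.asarray(obs_a).shape)  # (2, 84, 84)
```

The trade-off is recomputing the concatenation on every access, which matches the small duration increase reported below.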

@muupan (Member, Author) commented Oct 24, 2018

I compared the results of `examples/ale/train_dqn_ale.py --eval-interval 1000000 --prioritized --lr 6.25e-5 --env BreakoutNoFrameskip-v4` before and after this PR. It seems this PR does not affect the scores.

- Maximum memory usage: 29,890 MB before this PR; 9,896 MB after
- Duration: 123:46:46 before this PR; 129:01:28 after

(attached image: evaluation score comparison plot)

@muupan muupan changed the title [WIP] Stop caching the result of numpy.concatenate in LazyFrames Stop caching the result of numpy.concatenate in LazyFrames Oct 24, 2018
@toslunar toslunar self-assigned this Oct 24, 2018
@toslunar (Member) left a comment:


LGTM

@toslunar toslunar merged commit 8809b65 into chainer:master Oct 25, 2018
@muupan muupan deleted the reduce-memory-usage branch October 25, 2018 03:58
@muupan muupan modified the milestone: v0.5 Nov 13, 2018
@muupan muupan added the bug label Nov 13, 2018