ERROR: Out of GPU memory #174

Open
smart-fr opened this issue Feb 8, 2023 · 9 comments

smart-fr commented Feb 8, 2023

The 10th training iteration for my game crashes with the following error.
This happens not only on my PC with a 16 GB RTX 3080 Laptop GPU, but also on a cloud VM with a 40 GB A100 GPU.
Could this be related to #1 (comment)?
I am not quite sure how to work around this one.

Starting iteration 10

  Starting self-play

        Progress: 100%|████████████████████████████████████████████████| Time: 0:38:08     

    Generating 4 samples per second on average
    Average exploration depth: 2.8
    MCTS memory footprint per worker: 564.22MB
    Experience buffer size: 88,950 (83,510 distinct boards)
ERROR: Out of GPU memory trying to allocate 256.000 KiB
Effective GPU memory usage: 100.00% (16.000 GiB/16.000 GiB)
Memory pool usage: 15.235 GiB (15.281 GiB reserved)
Stacktrace:
  [1] macro expansion
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\pool.jl:411 [inlined]
  [2] macro expansion
    @ .\timing.jl:382 [inlined]
  [3] #_alloc#174
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\pool.jl:404 [inlined]
  [4] #alloc#173
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\pool.jl:389 [inlined]
  [5] alloc
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\pool.jl:383 [inlined]
  [6] CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}(#unused#::UndefInitializer, dims::NTuple{4, Int64})
    @ CUDA C:\Users\smart\.julia\packages\CUDA\BbliS\src\array.jl:42
  [7] CuArray
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\array.jl:291 [inlined]
  [8] adapt_storage(#unused#::CUDA.CuArrayAdaptor{CUDA.Mem.DeviceBuffer}, xs::Array{Float32, 4})
    @ CUDA C:\Users\smart\.julia\packages\CUDA\BbliS\src\array.jl:543
  [9] adapt_structure
    @ C:\Users\smart\.julia\packages\Adapt\0zP2x\src\Adapt.jl:57 [inlined]
 [10] adapt
    @ C:\Users\smart\.julia\packages\Adapt\0zP2x\src\Adapt.jl:40 [inlined]
 [11] #cu#197
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\array.jl:595 [inlined]
 [12] cu
    @ C:\Users\smart\.julia\packages\CUDA\BbliS\src\array.jl:595 [inlined]
 [13] adapt_storage
    @ C:\Users\smart\.julia\packages\Flux\OxB4x\src\functor.jl:98 [inlined]
 [14] adapt_structure
    @ C:\Users\smart\.julia\packages\Adapt\0zP2x\src\Adapt.jl:57 [inlined]
 [15] adapt
    @ C:\Users\smart\.julia\packages\Adapt\0zP2x\src\Adapt.jl:40 [inlined]
 [16] #164
    @ C:\Users\smart\.julia\packages\Flux\OxB4x\src\functor.jl:203 [inlined]
 [17] ExcludeWalk
    @ C:\Users\smart\.julia\packages\Functors\dFhrk\src\walks.jl:92 [inlined]
 [18] (::Functors.CachedWalk{Functors.ExcludeWalk{Functors.DefaultWalk, Flux.var"#164#165", typeof(Flux._isleaf)}, Functors.NoKeyword})(::Function, ::Array{Float32, 4})
    @ Functors C:\Users\smart\.julia\packages\Functors\dFhrk\src\walks.jl:132
 [19] fmap
    @ C:\Users\smart\.julia\packages\Functors\dFhrk\src\maps.jl:1 [inlined]
 [20] #fmap#27
    @ C:\Users\smart\.julia\packages\Functors\dFhrk\src\maps.jl:11 [inlined]
 [21] gpu
    @ C:\Users\smart\.julia\packages\Flux\OxB4x\src\functor.jl:203 [inlined]
 [22] convert_input
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\networks\flux.jl:62 [inlined]     
 [23] #1
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\networks\network.jl:100 [inlined] 
 [24] map (repeats 2 times)
    @ .\tuple.jl:224 [inlined]
 [25] map(::Function, ::NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}})
    @ Base .\namedtuple.jl:219
 [26] convert_input_tuple
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\networks\network.jl:99 [inlined]  
 [27] (::AlphaZero.var"#116#118"{ResNet})(b::NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}})
    @ AlphaZero C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\learning.jl:111
 [28] iterate
    @ .\generator.jl:47 [inlined]
 [29] collect_to!(dest::Vector{NamedTuple{(:W, :X, :A, :P, :V), Tuple{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}}}, itr::Base.Generator{MLUtils.DataLoader{NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}}, Random._GLOBAL_RNG, Val{nothing}}, AlphaZero.var"#116#118"{ResNet}}, offs::Int64, st::Tuple{Base.Generator{UnitRange{Int64}, MLUtils.var"#38#40"}, Int64})
    @ Base .\array.jl:845
 [30] collect_to_with_first!(dest::Vector{NamedTuple{(:W, :X, :A, :P, :V), Tuple{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}}}, v1::NamedTuple{(:W, :X, :A, :P, :V), Tuple{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}}, itr::Base.Generator{MLUtils.DataLoader{NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}}, Random._GLOBAL_RNG, Val{nothing}}, AlphaZero.var"#116#118"{ResNet}}, st::Tuple{Base.Generator{UnitRange{Int64}, MLUtils.var"#38#40"}, Int64})
    @ Base .\array.jl:823
 [31] collect(itr::Base.Generator{MLUtils.DataLoader{NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}}, Random._GLOBAL_RNG, Val{nothing}}, AlphaZero.var"#116#118"{ResNet}})
    @ Base .\array.jl:797
 [32] map(f::Function, A::MLUtils.DataLoader{NamedTuple{(:W, :X, :A, :P, :V), Tuple{Matrix{Float32}, Array{Float32, 4}, Matrix{Float32}, Matrix{Float32}, Matrix{Float32}}}, Random._GLOBAL_RNG, Val{nothing}})
    @ Base .\abstractarray.jl:2961
 [33] AlphaZero.Trainer(gspec::AlphaZero.Examples.BonbonRectangle.GameSpec, network::ResNet, samples::Vector{AlphaZero.TrainingSample{NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}}, params::LearningParams; test_mode::Bool)
    @ AlphaZero C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\learning.jl:110
 [34] Trainer
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\learning.jl:98 [inlined]
 [35] macro expansion
    @ .\timing.jl:463 [inlined]
 [36] learning_step!(env::Env{AlphaZero.Examples.BonbonRectangle.GameSpec, ResNet, NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}, handler::Session{Env{AlphaZero.Examples.BonbonRectangle.GameSpec, ResNet, NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}})
    @ AlphaZero C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\training.jl:207
 [37] macro expansion
    @ .\timing.jl:463 [inlined]
 [38] macro expansion
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\report.jl:267 [inlined]
 [39] train!(env::Env{AlphaZero.Examples.BonbonRectangle.GameSpec, ResNet, NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}, handler::Session{Env{AlphaZero.Examples.BonbonRectangle.GameSpec, ResNet, NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}})
    @ AlphaZero C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\training.jl:327
 [40] resume!(session::Session{Env{AlphaZero.Examples.BonbonRectangle.GameSpec, ResNet, NamedTuple{(:board, :impact, :actions_hook, :curplayer), Tuple{StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, UInt8, 256}, StaticArraysCore.SMatrix{16, 16, Tuple{UInt8, UInt8}, 256}, UInt8}}}})
    @ AlphaZero.UserInterface C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\ui\session.jl:318
 [41] train(e::Experiment; args::Base.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:save_intermediate,), Tuple{Bool}}})
    @ AlphaZero.Scripts C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\scripts\scripts.jl:26
 [42] #train#15
    @ C:\Projets\BonbonRectangle\IA\dev\AlphaZero.jl\src\scripts\scripts.jl:28 [inlined]   
 [43] top-level scope
    @ none:1

smart-fr commented Feb 9, 2023

I was able to resolve this by reducing the memory buffer size to 40,000 samples, as suggested by @EngrStudent in #116 (comment).

This raises the question: do you still have plans to add an option to store memory buffer samples on disk?

jonathan-laurent (Owner) commented

A large memory buffer should normally not cause GPU out-of-memory errors.
Indeed, the memory buffer is stored in CPU RAM.
What could cause a GPU OOM is too large a batch size, but that is unlikely to be the issue here if the problem only arises after ten iterations.

Therefore, I see two possibilities:

  1. This is another one of those mysterious CUDA.jl memory leaks that have been bothering me from the start (Connect Four training must be restarted about every 24 hours due to an OOM error #1), although the situation seemed to improve progressively with successive CUDA.jl versions. Maybe reducing the memory buffer size only has an impact for incidental reasons, such as the GC happening to run at different times.
  2. Maybe AlphaZero.jl is doing something suboptimal when batching training samples, making memory consumption depend on the total number of samples (it shouldn't). I doubt it, but it is worth having a look (see the sketch below).
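
To illustrate the second possibility, here is a minimal sketch of the general pattern only (this is not AlphaZero.jl's actual code; data stands for any dataset of training arrays and the batch size is a placeholder). If every batch is moved to the GPU up front, peak GPU memory grows with the total number of samples; moving one batch at a time keeps it roughly proportional to a single batch.

    using Flux, MLUtils  # assumed available, as in AlphaZero.jl's Flux backend

    # Pattern A: eagerly convert every batch -- GPU memory grows with the dataset size.
    batches_on_gpu = map(Flux.gpu, MLUtils.DataLoader(data, batchsize=64))

    # Pattern B: convert one batch at a time -- GPU memory stays bounded by one batch
    # (plus whatever the CUDA.jl allocator keeps cached).
    for batch in MLUtils.DataLoader(data, batchsize=64)
        batch_gpu = Flux.gpu(batch)
        # ... run one training step on batch_gpu ...
    end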

smart-fr (Author) commented

A good compromise for my game seems to be batch_size=16 (yes, only 16) and mem_buffer_size=PLSchedule([0], [80_000]). Any higher value for either parameter (let alone both) leads to a GPU OOM, curious as that seems.
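
For reference, these two settings live in AlphaZero.jl's parameter structs, along the lines of the params.jl files shipped with the example games. The fragment below only shows those two fields; every other (required) LearningParams and Params field is elided, so it is not a complete configuration on its own.

    using AlphaZero

    learning = LearningParams(
        batch_size = 16,   # smaller batches reduce peak GPU memory per training step
        # ... remaining LearningParams fields as in the game's params.jl ...
    )

    params = Params(
        learning = learning,
        mem_buffer_size = PLSchedule([0], [80_000]),  # cap on retained self-play samples
        # ... remaining Params fields (self_play, arena, num_iters, ...) ...
    )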

Re: "it is worth having a look" (whether AlphaZero.jl is doing something suboptimal when batching training samples, making memory consumption depend on the total number of samples): how would I go about checking this?

smart-fr (Author) commented

Unfortunately, my agent seems to have stopped improving despite training for more than 50 iterations, and I suspect this is because the limited memory buffer size (80,000) prevents MCTS from retaining enough experience, while the NN doesn't quite compensate through "intuition".
If there is a chance that storing the samples on disk would allow a larger mem_buffer_size, I am definitely interested!

jonathan-laurent (Owner) commented

Once again, I do not think storing samples on disk is going to solve any problem here since you are encountering GPU OOMs. Implementing this is also not a priority right now. If you really want this feature though, it should not be too hard to implement and this may even be a nice opportunity for a contribution. :-)

smart-fr (Author) commented

Thank you, I understand that this wouldn't solve my problem.

Maybe I should instead look into whether AlphaZero.jl is doing something suboptimal when batching training samples, making memory consumption depend on the total number of samples. Do you have an intuition that would help me get started?
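
Perhaps something along the following lines would be a starting point. This is only a sketch: it assumes a REPL session where gspec, network, the samples vector and the LearningParams object (here called learning_params) are already loaded from the session, and it reuses the AlphaZero.Trainer constructor visible in the stack trace above. The idea is to build the trainer on increasingly large subsets of the buffer and watch GPU memory after each construction.

    using AlphaZero, CUDA

    for n in (10_000, 20_000, 40_000, 80_000)
        subset = samples[1:min(n, length(samples))]
        trainer = AlphaZero.Trainer(gspec, network, subset, learning_params)
        CUDA.memory_status()   # GPU usage should not grow with n if batching is sound
        trainer = nothing
        GC.gc(); CUDA.reclaim()
    end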

jonathan-laurent (Owner) commented

Can you try running some training where you restart Julia after each iteration?
If the problem is related to memory leaks, it should disappear.
Otherwise, this would point to the second category of problems I mentioned.
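
A minimal sketch of how such restarts could be scripted is given below. The experiment name "bonbon-rectangle" is hypothetical; this assumes that Scripts.train resumes from the saved session directory on each launch and that each launch is limited to a single iteration (for instance via the num_iters parameter).

    # Relaunch Julia for each chunk of training so that all GPU state is
    # released between iterations.
    for i in 1:50
        run(`julia --project -e 'using AlphaZero; Scripts.train("bonbon-rectangle")'`)
    end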

smart-fr (Author) commented

Will do and report here.

smart-fr (Author) commented

When the problem arises, it appears even after a fresh restart of Julia.
