
Low volatile GPU-Util but high GPU Memory Usage #59

Open
deipss opened this issue Nov 11, 2019 · 2 comments

deipss commented Nov 11, 2019

When I run MLP.py, GMF.py, or NeuMF.py on my GPU (GTX 1080), the GPU memory is fully used while volatile GPU-Util stays around 20%, and training on the GPU is actually slower than on the CPU (Intel i5).

I guess the problem is caused by

        # Training        
        hist = model.fit([np.array(user_input), np.array(item_input)], #input
                         np.array(labels), # labels 
                         batch_size=batch_size, epochs=1, verbose=0, shuffle=True)

In the code above, np.array(user_input), np.array(item_input), and np.array(labels) run on the CPU, not the GPU. Each epoch therefore stalls on CPU-side data preparation, so GPU utilization cannot rise.
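
One way to check this is to time the data-preparation step and the fit call separately. Below is a minimal sketch, assuming the per-epoch training loop from NeuMF.py (where get_train_instances, train, num_negatives, model, and batch_size are already defined):

        import time
        import numpy as np

        # Time one epoch's data preparation and training separately.
        t0 = time.time()
        user_input, item_input, labels = get_train_instances(train, num_negatives)
        ui = np.array(user_input)
        ii = np.array(item_input)
        lb = np.array(labels)
        t1 = time.time()

        hist = model.fit([ui, ii], lb,
                         batch_size=batch_size, epochs=1, verbose=0, shuffle=True)
        t2 = time.time()
        print('data prep: %.1fs, model.fit: %.1fs' % (t1 - t0, t2 - t1))

If the first number dominates, the GPU is simply idle waiting for input, and speeding up the array construction matters more than anything inside the model.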


jsmatte commented Mar 26, 2020

@deipss any suggestions on how to solve this issue? I came to the same realization and am trying to find a workaround. Any help is much appreciated!

edervishaj commented

I have experienced the same slower training times with a GPU. Profiling the run with py-spy shows that a big chunk of the training time is spent building user_input, item_input, and labels.
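
In the repository, get_train_instances builds those lists with a Python loop that draws one negative sample at a time. Below is a hedged sketch of a vectorized alternative: it assumes train is a scipy.sparse.dok_matrix as loaded by Dataset.py, samples all negatives in bulk with NumPy, and resamples only the candidates that collide with observed interactions. The name get_train_instances_vectorized is hypothetical, not part of the repo:

        import numpy as np

        def get_train_instances_vectorized(train, num_negatives):
            # Assumes `train` is a scipy.sparse.dok_matrix of user-item interactions.
            num_items = train.shape[1]
            positives = np.array(list(train.keys()), dtype=np.int64)
            users, pos_items = positives[:, 0], positives[:, 1]

            # Encode (user, item) pairs as single integers for fast membership tests.
            pos_codes = users * num_items + pos_items

            # Draw all candidate negatives at once, then resample collisions.
            neg_users = np.repeat(users, num_negatives)
            neg_items = np.random.randint(num_items, size=neg_users.size)
            bad = np.isin(neg_users * num_items + neg_items, pos_codes)
            while bad.any():
                neg_items[bad] = np.random.randint(num_items, size=bad.sum())
                bad[bad] = np.isin(neg_users[bad] * num_items + neg_items[bad],
                                   pos_codes)

            user_input = np.concatenate([users, neg_users])
            item_input = np.concatenate([pos_items, neg_items])
            labels = np.concatenate([np.ones(users.size, dtype=np.float32),
                                     np.zeros(neg_users.size, dtype=np.float32)])
            return user_input, item_input, labels

The returned NumPy arrays can be passed to model.fit directly, which also removes the per-epoch np.array() conversions from the original training loop.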
