Hi, I enable GPU memory growth as follows, which worked well in plain TensorFlow:

```python
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
```
But in this project it still occupies all of the memory on every GPU.
How can I fix it?
A temporary workaround is below:

```python
import tensorflow as tf

# Create a throwaway session with allow_growth set BEFORE importing
# tensorflow_fold, so the first CUDA context is created with growth enabled.
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
sess = tf.Session(config=tf_config, graph=None)

import tensorflow_fold as td
```
and then create another session with your graph and the same tf_config, like:
```python
with tf.Session(config=tf_config, graph=your_graph) as sess:
    # your code for training or testing
```
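Since the report is that memory gets claimed on *all* GPUs, a complementary mitigation (independent of `allow_growth`, and not specific to tensorflow_fold) is to restrict which devices the process can see before TensorFlow initializes CUDA. A minimal sketch, where `"0"` is an example device index:

```python
import os

# Must be set BEFORE TensorFlow (or tensorflow_fold) is imported; once the
# CUDA context exists it may already span every visible GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf        # import only after the variable is set
# import tensorflow_fold as td
```

This does not change how much memory is taken on the visible GPU, but it keeps the other GPUs untouched, which can be combined with the `allow_growth` workaround above.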
I think this is an inherent bug that the owner should check and fix.