TF will allocate all available memory on each visible GPU if not told otherwise. Here are 5 ways to stick to just one (or a few) GPUs.

Bash solution. Set CUDA_VISIBLE_DEVICES=0,1 in your terminal/console before starting python or jupyter notebook:

    CUDA_VISIBLE_DEVICES=0,1 python script.py

Python solution. Run the next 2 lines of code before constructing a session:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

Automated solution. The method below will automatically detect GPU devices that are not used by other scripts and set CUDA_VISIBLE_DEVICES for you. You have to call mask_unused_gpus before constructing a session. It will filter out GPUs by current memory usage. This way you can run multiple instances of your script at once without changing your code or setting console parameters.

    import subprocess as sp
    import os

    def mask_unused_gpus(leave_unmasked=1):
        # GPUs with less free memory than this (in MiB) count as busy
        ACCEPTABLE_AVAILABLE_MEMORY = 1024
        COMMAND = "nvidia-smi --query-gpu=memory.free --format=csv"

        try:
            _output_to_list = lambda x: x.decode('ascii').split('\n')
            # Drop the CSV header row and the trailing blank line
            memory_free_info = _output_to_list(sp.check_output(COMMAND.split()))[1:-1]
            memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)]
            available_gpus = [i for i, x in enumerate(memory_free_values)
                              if x > ACCEPTABLE_AVAILABLE_MEMORY]

            if len(available_gpus) < leave_unmasked:
                raise ValueError('Found only %d usable GPUs in the system' % len(available_gpus))
            os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, available_gpus[:leave_unmasked]))
        except Exception as e:
            print('"nvidia-smi" is probably not installed. GPUs are not masked.', e)

Limitations: if you start multiple scripts at once it might cause a collision, because memory is not allocated immediately when you construct a session. In case that is a problem for you, you can use a randomized version as in the original source code: mask_busy_gpus().

TensorFlow 2.0 suggests yet another method:

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # Restrict TensorFlow to only use the first GPU
        try:
            tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
        except RuntimeError as e:
            # Visible devices must be set at program startup
            print(e)

Session config. TensorFlow/Keras also allows you to specify the GPU to be used with the session config:

    config = tf.ConfigProto()
    config.gpu_options.visible_device_list = "0,1"

I can recommend this only if setting an environment variable is not an option, because it tends to be the least reliable of all methods, especially with Keras.

With Keras, these 2 functions allow the selection of CPU or GPU and, in the case of GPU, the fraction of memory that will be used. You can modify the GPU options settings by adding at the beginning of your python script:

    import os
    from keras.backend.tensorflow_backend import set_session

    gpu_options = tf.GPUOptions(visible_device_list="0")
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
    set_session(sess)

You can get the list of available GPUs by typing the command nvidia-smi at the terminal prompt. "0" is here the name of the GPU you want to use.
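A minimal, self-contained sketch of the environment-variable approach: the key point (stated in the bash and python solutions above) is that CUDA_VISIBLE_DEVICES must be set before TensorFlow initializes CUDA, i.e. before the tensorflow import. The GPU ids used here are illustrative.

```python
import os

# Must run before `import tensorflow` — CUDA reads this variable once,
# when the process first initializes a CUDA context.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only GPUs 0 and 1

# Setting it to an invalid id such as "-1" is a common way to hide
# every GPU and force CPU-only execution.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Setting the variable after TensorFlow has already touched the GPUs has no effect, which is why the article recommends doing it in the shell or at the very top of the script.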
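The parsing step inside mask_unused_gpus can be exercised without a GPU by feeding it captured nvidia-smi output. In this sketch, parse_free_mib is a hypothetical helper name and the sample string mimics the CSV format produced by `nvidia-smi --query-gpu=memory.free --format=csv` (the values are made up):

```python
def parse_free_mib(csv_text):
    """Parse `nvidia-smi --query-gpu=memory.free --format=csv` output
    into a list of free-memory values (MiB), one entry per GPU."""
    lines = csv_text.strip().split('\n')[1:]         # drop the CSV header row
    return [int(line.split()[0]) for line in lines]  # "11178 MiB" -> 11178

# Sample output for a two-GPU machine (illustrative values only)
sample = "memory.free [MiB]\n11178 MiB\n402 MiB"
free = parse_free_mib(sample)
# GPUs with more than 1024 MiB free count as available, as in the article
available = [i for i, m in enumerate(free) if m > 1024]
print(available)  # [0]
```

Testing the parser on a fixed string like this makes it easy to check the header-skipping and unit-stripping logic before running against a real machine.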