
GPU OOM: Hyperparameter Tuning Loop With Varying Models

I'm grid-searching hyperparameters with itertools.product() and overwriting the model variable on each loop iteration. However, on the 2nd iteration, training crashes with a GPU out-of-memory error.
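The question's code is not reproduced here, but a loop along the following lines matches the described setup. This is a minimal sketch only; build_model(), the hyperparameter grid, and the dummy data are hypothetical stand-ins.

```python
import itertools
import numpy as np
import tensorflow as tf

# Dummy data standing in for the real training set.
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

def build_model(units, learning_rate):
    """Build and compile a small Keras model for one hyperparameter combination."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy")
    return model

# Grid search: the model variable is overwritten on every iteration.
for units, lr in itertools.product([128, 256], [1e-3, 1e-4]):
    model = build_model(units, lr)
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    # With large models, the 2nd iteration can hit GPU OOM because the
    # previous model's GPU memory has not been released yet.
```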

Solution 1:

It seems like there are 2 possible causes:

  1. Memory is not released after training the previous network
  2. The given model is simply too big

For the first case, see the answers to "Keras: release memory after finish training process".
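A sketch of that approach, reusing the hypothetical build_model(), x, and y from the sketch above: drop the Python reference to the old model, clear the Keras session, and force garbage collection before building the next model.

```python
import gc
import itertools
import tensorflow as tf

for units, lr in itertools.product([128, 256], [1e-3, 1e-4]):
    model = build_model(units, lr)
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)

    # Release the previous model's graph and weights before the next iteration.
    del model
    tf.keras.backend.clear_session()
    gc.collect()
```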

For the second case, try decreasing the batch_size in your data generator and see whether that fixes the problem. Alternatively, train on multiple GPUs or change the architecture so that the model fits into memory.
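A sketch of those two mitigations, again assuming the hypothetical build_model() and dummy data from above: a smaller batch size reduces the activation memory needed per training step, and tf.distribute.MirroredStrategy is one way to bring in multiple GPUs by splitting each batch across the available devices (it will not help if the weights alone exceed a single GPU's memory).

```python
import tensorflow as tf

# Replicate training across all visible GPUs; each GPU processes a slice of the batch.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_model(units=256, learning_rate=1e-3)

# Smaller batch size, e.g. 8 instead of 32, to lower per-step memory usage.
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```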
