The software is available for download here.
This library simplifies the workflow of training multiple DNN models with the Keras and Theano frameworks and automatically presents the results. It is especially useful for hyperparameter tuning, where you need to train many models to find the best-performing one.
Kex wraps all defined models into one experiment, trains them, and saves all configuration files and results into a single folder, so you can easily compare which model performed best.
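Kex's actual API is not shown here, but the one-experiment-folder idea can be sketched in plain Python. Everything below (`run_experiment`, the config fields, the folder layout) is a hypothetical illustration of the workflow, not Kex's real interface:

```python
import json
import tempfile
from pathlib import Path

def run_experiment(model_configs, out_dir):
    """Hypothetical sketch: for each configured model, save its
    configuration and its result side by side so runs are easy
    to compare afterwards."""
    out = Path(out_dir)
    results = {}
    for name, cfg in model_configs.items():
        model_dir = out / name
        model_dir.mkdir(parents=True, exist_ok=True)
        # Persist the configuration next to the result.
        (model_dir / "config.json").write_text(json.dumps(cfg))
        # Stand-in for real Keras training; here we just fake a
        # final loss derived from the learning rate.
        loss = cfg["learning_rate"] * 10
        (model_dir / "result.json").write_text(json.dumps({"loss": loss}))
        results[name] = loss
    return results

configs = {
    "small": {"layers": 2, "learning_rate": 0.01},
    "large": {"layers": 4, "learning_rate": 0.001},
}
with tempfile.TemporaryDirectory() as d:
    results = run_experiment(configs, d)
    best = min(results, key=results.get)
    print(best)  # the model with the lowest recorded loss
```

Because every model's config and result live under one folder, comparing runs is a matter of reading a handful of small JSON files.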
- The configuration of all models resides in a single file, so you can configure every model at once.
- Takes a data generator as input, so it can handle large amounts of data.
- Data preparation runs beforehand on the CPU and training on the GPU, so your GPU's speed is the only bottleneck.
- Uses multiple threads to run models on multiple GPUs, so you can train several models at once.
WARNING: it does not spread a single model across multiple GPUs; it spreads multiple models across multiple GPUs.
- It handles GPU memory errors. If you mistakenly define a model that won't fit in your GPU's memory, the program won't crash; it continues with the next model, so you don't have to start over.
- It can resume an experiment after an external interruption such as a machine shutdown, so you won't have to start over.
- It ensures reproducible results as long as the model definition and initial weights have not changed.
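The crash-resilience behaviors above (skipping models that exhaust GPU memory, resuming a partially finished experiment) can be sketched conceptually. All names here (`train_all`, the `.done` marker files, `too_big`) are hypothetical illustrations, not Kex's real code:

```python
import tempfile
from pathlib import Path

def train_all(models, done_dir):
    """Hypothetical sketch: train each model in turn, skipping any
    that fail with a memory error and any already marked as done,
    so a re-run resumes where the previous run stopped."""
    done = Path(done_dir)
    done.mkdir(parents=True, exist_ok=True)
    outcomes = {}
    for name, train_fn in models.items():
        marker = done / f"{name}.done"
        if marker.exists():             # finished in an earlier run
            outcomes[name] = "resumed"
            continue
        try:
            train_fn()
        except MemoryError:             # model too big for GPU memory:
            outcomes[name] = "skipped"  # record it and move on, don't crash
            continue
        marker.touch()                  # mark completion for resume support
        outcomes[name] = "trained"
    return outcomes

def too_big():
    raise MemoryError("model does not fit in GPU memory")

with tempfile.TemporaryDirectory() as d:
    first = train_all({"a": lambda: None, "big": too_big}, d)
    second = train_all({"a": lambda: None, "big": too_big}, d)
print(first)   # {'a': 'trained', 'big': 'skipped'}
print(second)  # {'a': 'resumed', 'big': 'skipped'}
```

The second call simulates restarting after a shutdown: the marker files let the run pick up where it left off instead of retraining finished models.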
This project is licensed under the terms of the MIT license.