FreezeOut
A technique to accelerate neural network training by progressively freezing layers.
211 stars
8 watching
31 forks
Language: Python
Last commit: over 6 years ago
Topics: deep-learning, densenet, machine-learning, memes, neural-networks, pytorch, vgg16, wide-residual-networks
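The core idea is to give each layer its own learning-rate schedule that anneals to zero at a staggered time, freezing layers bottom-up so the backward pass can skip them once they stop training. Below is a minimal PyTorch sketch of that idea; the layer spacing, `t0`, and cosine schedule here are simplified assumptions for illustration, not the repository's exact implementation.

```python
import math
import torch
import torch.nn as nn

# Minimal sketch of FreezeOut-style progressive freezing.
# Assumption: freeze times are spaced linearly from t0 * total_iters
# (first layer) up to total_iters (last layer).
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

total_iters = 10_000
base_lr = 0.1
t0 = 0.5  # fraction of training after which the first layer freezes

# One optimizer parameter group per layer, each carrying its freeze iteration.
layers = [m for m in model if isinstance(m, nn.Linear)]
groups = []
for i, layer in enumerate(layers):
    t_i = total_iters * (t0 + (1 - t0) * i / (len(layers) - 1))
    groups.append({"params": list(layer.parameters()),
                   "lr": base_lr,
                   "freeze_iter": t_i})
optimizer = torch.optim.SGD(groups, lr=base_lr, momentum=0.9)

for it in range(total_iters):
    for group in optimizer.param_groups:
        t_i = group["freeze_iter"]
        if it < t_i:
            # Cosine-anneal this layer's learning rate to zero at its freeze time.
            group["lr"] = 0.5 * base_lr * (1 + math.cos(math.pi * it / t_i))
        elif group["lr"] != 0.0:
            # Schedule finished: freeze the layer. Because layers freeze
            # bottom-up, autograd can stop the backward pass at the first
            # unfrozen layer, which is where the training speedup comes from.
            group["lr"] = 0.0
            for p in group["params"]:
                p.requires_grad_(False)
    # ... forward pass, loss.backward(), optimizer.step() go here ...
```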
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
|  | An experimental method for efficiently searching neural network architectures using a single training run | 489 |
|  | Automates the search for optimal neural network configurations in deep learning applications | 468 |
|  | An implementation of a multi-layer neural network in Python, allowing users to train and use the network for classification tasks | 5 |
|  | A DSL and toolkit for designing and optimizing deep neural networks in Haskell | 702 |
|  | Trains artificial neural networks using the genetic algorithm | 241 |
|  | An implementation of a novel neural network training method that builds and trains networks one layer at a time | 66 |
|  | A Python framework for building deep learning models with optimized encoding layers and batch normalization | 2,044 |
|  | Enables the creation of smaller neural network models through efficient pruning and quantization techniques | 2,083 |
|  | A distributed learning framework that enables peer-to-peer parameter averaging and asynchronous training of deep neural networks | 53 |
|  | Improves the performance of deep neural networks by selectively stopping training at different stages | 29 |
|  | An artificial neural network library for rapid prototyping and extension in Haskell | 378 |
|  | A PyTorch framework simplifying neural network training with automated boilerplate code and callback utilities | 572 |
|  | A deep learning framework on top of PyTorch for building neural networks | 61 |
|  | An implementation of a deep neural network architecture in PyTorch | 833 |
|  | An implementation of a deep neural network architecture using boosting theory to improve its performance | 5 |