icml2019_zeno
Fault-tolerant training method
An implementation of distributed stochastic gradient descent with suspicion-based fault tolerance (Zeno, ICML 2019).
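
In the Zeno scheme this repository is based on, the server evaluates each worker's candidate gradient on a small validation batch and computes a "stochastic descendant" score: roughly, how much the validation loss drops after applying that gradient, minus a penalty on the gradient's magnitude. The lowest-scoring (most suspicious) gradients are discarded before averaging. The sketch below illustrates that filtering rule in plain NumPy on a toy quadratic objective; the function names, hyperparameters, and toy setup are illustrative assumptions and do not reflect this repository's actual API.

```python
# A minimal NumPy sketch of suspicion-based gradient filtering in the style of
# Zeno (Xie et al., ICML 2019). The function names, the quadratic toy loss, and
# the parameter values are illustrative assumptions, not the repository's API.
import numpy as np


def zeno_score(loss_fn, params, grad, lr, rho):
    """Stochastic descendant score of one candidate gradient:
    loss(params) - loss(params - lr * grad) - rho * ||grad||^2,
    where loss_fn is evaluated on a small validation batch held by the server."""
    return loss_fn(params) - loss_fn(params - lr * grad) - rho * np.dot(grad, grad)


def zeno_aggregate(loss_fn, params, grads, lr=0.1, rho=0.001, num_faulty=1):
    """Rank worker gradients by score and average all but the
    `num_faulty` lowest-scoring (most suspicious) ones."""
    scores = np.array([zeno_score(loss_fn, params, g, lr, rho) for g in grads])
    keep = np.argsort(scores)[num_faulty:]  # drop the most suspicious gradients
    return np.mean([grads[i] for i in keep], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0, 0.5])  # optimum of the toy quadratic loss
    params = np.zeros(3)
    loss_fn = lambda w: 0.5 * np.sum((w - target) ** 2)

    for step in range(50):
        true_grad = params - target
        # Honest workers send noisy copies of the true gradient; one faulty
        # worker sends an adversarially flipped, scaled-up gradient.
        grads = [true_grad + 0.05 * rng.standard_normal(3) for _ in range(4)]
        grads.append(-10.0 * true_grad)
        params -= 0.1 * zeno_aggregate(loss_fn, params, grads, num_faulty=1)

    print("estimated parameters:", params)  # should approach `target`
```

In this toy run the flipped gradient from the faulty worker receives a strongly negative score, so it is dropped at every step and the parameters still converge toward `target` despite the attack.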
14 stars
2 watching
5 forks
Language: Python
Last commit: almost 6 years ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | An approach and implementation for distributed training of machine learning models on iOS devices using a central server. | 8 |
| | A tool to test the vulnerability of machine learning models to adversarial attacks. | 562 |
| | An implementation of a federated learning algorithm for compositional pairwise risk optimization. | 2 |
| | A tool for training federated learning models with adaptive gradient balancing to handle class imbalance in multi-client scenarios. | 14 |
| | Combats heterogeneity in federated learning by combining adversarial training with client-wise slack during aggregation. | 28 |
| | A deep learning method for optimizing convolutional neural networks by reducing computational cost while improving regularization and inference efficiency. | 18 |
| | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques. | 219 |
| | A federated semi-supervised learning approach that improves model performance on multiple datasets by leveraging random sampling consensus. | 47 |
| | Assesses generalization of multi-agent reinforcement learning algorithms to novel social situations. | 637 |
| | An implementation of a neural network training method that builds and trains networks one layer at a time. | 66 |
| | An implementation of cross-silo federated learning with adaptability to statistical heterogeneity. | 12 |
| | An algorithm to improve convergence rates and protect privacy in federated learning by addressing catastrophic forgetting during local training. | 26 |
| | An implementation of a method to invert gradients in federated learning, potentially revealing sensitive client data. | 39 |
| | A method for personalizing models in federated learning settings with adaptive differential privacy to improve performance and robustness. | 57 |
| | A Python implementation of a distributed machine learning framework for training neural networks on multiple GPUs. | 6 |