fednar
FedOpt Algorithm
A Python implementation of a federated optimization algorithm with normalized annealing regularization (FedNAR).
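As a rough illustration of the idea, the sketch below shows what a single local update with normalized annealing regularization might look like: the combined gradient-plus-weight-decay update is norm-clipped against a bound that follows an annealing schedule across communication rounds. The PyTorch usage, the function name `fednar_local_step`, and the linear schedule are assumptions for illustration, not this repository's actual interface.

```python
# Minimal sketch of a FedNAR-style local update (illustrative assumptions, not the repo's API).
import torch


def fednar_local_step(model, loss_fn, batch, lr=0.01, weight_decay=1e-3,
                      round_idx=0, total_rounds=100, max_norm0=1.0):
    """One local SGD step in which the combined (gradient + weight decay)
    update is clipped to a norm bound that anneals over communication rounds."""
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    model.zero_grad()
    loss.backward()

    # Assumed annealing schedule: the norm bound shrinks linearly with the
    # round index. The actual form and direction of annealing may differ.
    max_norm = max_norm0 * max(1.0 - round_idx / total_rounds, 0.1)

    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            update = p.grad + weight_decay * p       # gradient plus weight-decay term
            norm = update.norm()
            if norm > max_norm:                      # normalize (clip) the combined update
                update = update * (max_norm / (norm + 1e-12))
            p.add_(update, alpha=-lr)                # SGD step with the clipped update
    return loss.item()
```

In a full federated loop, each client would run several such local steps per round before the server aggregates the resulting models (e.g. by FedAvg-style weighted averaging).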
6 stars
1 watching
0 forks
Language: Python
last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | An implementation of algorithms for nonconvex federated learning optimization | 8 |
| | An implementation of a federated optimization algorithm for distributed machine learning | 6 |
| | An implementation of a federated learning algorithm for handling heterogeneous data | 6 |
| | An implementation of a federated averaging algorithm with an extrapolation approach to speed up distributed machine learning training on client-held data | 9 |
| | An implementation of Bayesian network structure learning with continuous optimization for federated learning | 10 |
| | An implementation of a federated learning algorithm to reduce local learning bias and improve convergence on heterogeneous data | 25 |
| | An implementation of federated learning with prototype-based methods across heterogeneous clients | 134 |
| | A modular JAX implementation of federated learning via posterior averaging for decentralized optimization | 50 |
| | An optimization framework designed to address heterogeneity in federated learning across distributed networks | 655 |
| | A federated learning algorithm that adapts to non-IID data by decoupling and correcting for local drift | 81 |
| | An implementation of a federated learning algorithm for compositional pairwise risk optimization | 2 |
| | Improves generalization in federated learning by seeking flat minima through optimization techniques | 82 |
| | Provides code for a federated learning algorithm to optimize machine learning models in a distributed setting | 14 |
| | An investigation of the convergence of a federated learning algorithm on non-IID (non-identically and independently distributed) data | 255 |
| | An implementation of federated learning with sparse training and readjustment mechanisms to reduce communication overhead while maintaining model performance | 29 |