FATE-LLM
A framework for collaborative, privacy-preserving training of large language models: federated learning for LLMs.
166 stars
18 watching
28 forks
Language: Python
last commit: 11 months ago
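The core idea behind federated LLM training, as described above, is that each party fine-tunes locally and shares only model updates, never raw data; a coordinator then aggregates the updates. A minimal sketch of the standard FedAvg aggregation step is below — all names are illustrative and do not reflect FATE-LLM's actual API:

```python
# Illustrative sketch of federated averaging (FedAvg), the aggregation
# step used in federated learning: parties share weight updates, not data.
# Function and variable names here are hypothetical, not FATE-LLM's API.

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client weight vectors, weighted by data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += w * (size / total)
    return avg

# Two parties with different local dataset sizes contribute local updates;
# the party with more data gets proportionally more influence.
party_a = [1.0, 2.0]   # weights after local fine-tuning on party A's data
party_b = [3.0, 4.0]   # weights after local fine-tuning on party B's data
global_weights = fedavg([party_a, party_b], client_sizes=[100, 300])
print(global_weights)  # -> [2.5, 3.5]
```

In practice the aggregated "weights" for an LLM would be full parameter tensors or parameter-efficient adapters (e.g. LoRA deltas), but the weighting logic is the same.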
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Provides tools and APIs for designing, scheduling, and running federated machine learning jobs in a secure and efficient manner. | 3 |
| | An end-to-end federated learning workflow platform for managing data and models across multiple parties. | 53 |
| | Documentation repository for a community-driven project focused on federated AI technology development and governance. | 25 |
| | A high-performance serving system for federated learning models, supporting online algorithms, real-time inference, and model management. | 138 |
| | An infrastructure tool for managing and securing collaborative data networks across organizations. | 30 |
| | A visualization tool for federated learning that helps monitor and improve models during training. | 101 |
| | A collection of tools and tests for evaluating the performance of federated machine learning systems. | 1 |
| | Develops and evaluates federated learning algorithms for personalizing machine learning models across heterogeneous client data distributions. | 157 |
| | Investigates transfer learning in federated learning by guiding the last layer with pre-trained models. | 7 |
| | An implementation of Bayesian network structure learning with continuous optimization for federated learning. | 10 |
| | An API that provides a unified interface to multiple large language models for chat and fine-tuning. | 79 |
| | A framework for personalized federated learning that balances fairness and robustness in decentralized machine learning systems. | 138 |
| | A framework for serving large language models through a robust and efficient API. | 909 |
| | Enables secure collaboration on data among multiple parties while protecting privacy and security. | 5,750 |
| | An approach to heterogeneous federated learning that allows model training on diverse devices with varying resources. | 61 |