SPACE
Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning

A client contribution evaluator: a framework for evaluating the contribution of individual clients in federated learning systems.
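For readers new to the topic, below is a minimal sketch of what "client contribution evaluation" can look like in its simplest leave-one-out form. This is an illustrative assumption, not the SPACE algorithm: the function names (`fed_avg`, `leave_one_out_contributions`) and the toy utility are hypothetical stand-ins for a real aggregation rule and validation metric.

```python
# Illustrative sketch only: a generic leave-one-out contribution score for
# federated learning clients. This is NOT the SPACE method; it just shows the
# kind of per-client "contribution" quantity such frameworks estimate.
import numpy as np

def fed_avg(updates, weights=None):
    """Weighted average of client model updates (FedAvg-style aggregation)."""
    updates = np.asarray(updates, dtype=float)
    if weights is None:
        weights = np.ones(len(updates))
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * updates).sum(axis=0) / weights.sum()

def leave_one_out_contributions(updates, utility):
    """Contribution of client i = utility(all clients) - utility(all but i)."""
    full = utility(fed_avg(updates))
    scores = []
    for i in range(len(updates)):
        rest = [u for j, u in enumerate(updates) if j != i]
        scores.append(full - utility(fed_avg(rest)))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.ones(4)                        # stand-in for the "ideal" model
    # Three honest clients plus one noisy, low-quality contributor.
    updates = [target + 0.1 * rng.standard_normal(4) for _ in range(3)]
    updates.append(rng.standard_normal(4))
    utility = lambda w: -np.linalg.norm(w - target)  # proxy for validation accuracy
    for i, s in enumerate(leave_one_out_contributions(updates, utility)):
        print(f"client {i}: contribution {s:+.4f}")
```

Note that this naive approach re-aggregates and re-evaluates once per client; as its title suggests, SPACE instead targets contribution evaluation in a single round.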
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | An evaluation suite providing multiple-choice questions for foundation models across various disciplines, with tools for assessing model performance. | 1,650 |
| | An open-source benchmark and evaluation tool for assessing the performance of multimodal large language models on embodied decision-making tasks. | 99 |
| | A benchmarking framework for evaluating large multimodal models, providing rigorous metrics and an efficient evaluation pipeline. | 22 |
| | Evaluates and improves large multimodal models through in-context learning. | 21 |
| | An interactive environment for evaluating code within a running program. | 1,806 |
| | Evaluates foundation models on human-centric tasks with diverse exams and question types. | 714 |
| | A build-time code evaluation tool for JavaScript. | 127 |
| | An online platform offering evaluation and certification of digital skills. | 6 |
| | An implementation of a fully convolutional instance-aware semantic segmentation framework using CUDA. | 1,567 |
| | Proposes a method for selecting a diverse subset of clients in federated learning to improve convergence and fairness. | 29 |
| | Evaluates and compares the performance of multimodal large language models on various tasks. | 56 |
| | An adaptive federated learning framework for heterogeneous clients with resource constraints. | 30 |
| | An implementation of Fair and Consistent Federated Learning in Python. | 20 |
| | An evaluation tool for retrieval-augmented generation methods. | 141 |
| | Scripts and instructions for reproducing experiments on efficient federated learning via guided participant selection. | 126 |