offsite-tuning

Foundation Model Adapter

An open-source project that enables private and efficient adaptation of large foundation models to downstream tasks without requiring access to the full model weights.

Offsite-Tuning: Transfer Learning without Full Model

GitHub stats:

368 stars
8 watching
39 forks
Language: Python
Last commit: 12 months ago
Topics: deep-learning, transfer-learning
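
The method behind this project, as described in the paper named above, splits the network into two parts: small trainable adapters at the shallow and deep ends, and a frozen, lossily compressed emulator standing in for the middle layers, so a data owner can fine-tune on private data without ever holding the full weights. Below is a minimal PyTorch sketch of that split; the class and names (`OffsiteTuningSketch`, `adapter_head`, `emulator`, `adapter_tail`, `hidden`) are hypothetical illustrations, not the repository's actual API.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the offsite-tuning recipe (hypothetical names, not the
# project's API): the model owner shares small trainable "adapters" (e.g. the
# first and last few blocks) plus a frozen, compressed "emulator" of the middle
# blocks; the data owner fine-tunes only the adapters and returns them.

class OffsiteTuningSketch(nn.Module):
    def __init__(self, adapter_head: nn.Module, emulator: nn.Module, adapter_tail: nn.Module):
        super().__init__()
        self.adapter_head = adapter_head   # trainable shallow blocks
        self.emulator = emulator           # frozen stand-in for the full middle blocks
        self.adapter_tail = adapter_tail   # trainable deep blocks
        for p in self.emulator.parameters():
            p.requires_grad = False        # frozen, but gradients still flow through it

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter_tail(self.emulator(self.adapter_head(x)))

# Hypothetical usage with placeholder sizes: only adapter parameters are optimized.
hidden = 256
model = OffsiteTuningSketch(
    adapter_head=nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
    emulator=nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden)),
    adapter_tail=nn.Linear(hidden, hidden),
)
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

After local fine-tuning, only the adapter weights need to be sent back to the model owner, who plugs them into the uncompressed model for serving.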

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| deepset-ai/farm | An open-source framework for adapting representation models to various tasks and industries | 1,741 |
| google-research/flan | A repository providing tools and datasets to fine-tune language models for specific tasks | 1,474 |
| wenkehuang/rethinkfl | Improves federated learning performance by incorporating domain knowledge and regularization to adapt models across diverse domains | 91 |
| locuslab/e2e-model-learning | Develops an approach to learning probabilistic models in stochastic optimization problems | 200 |
| wasidennis/adaptsegnet | Implements a deep learning-based approach to adapting semantic segmentation models from one domain to another | 849 |
| baowenxuan/atp | An implementation of adaptive test-time personalization for federated learning in deep neural networks | 16 |
| chrisallenming/ltc-msda | An implementation of a knowledge aggregation method for adapting to multiple domains using a graph-based framework | 68 |
| bupt-ai-cz/meta-selflearning | Develops a method to improve performance on computer vision tasks by adapting models to new domains and data sources through meta-learning and self-learning techniques | 199 |
| openai/lm-human-preferences | Training methods and tools for fine-tuning language models using human preferences | 1,229 |
| roboflow/maestro | A tool to streamline fine-tuning of multimodal models for vision-language tasks | 1,386 |
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction tuning methods | 528 |
| spandan-madan/pytorch_fine_tuning_tutorial | Provides guidance on fine-tuning pre-trained models for image classification tasks using PyTorch | 279 |
| shi-labs/vcoder | An adapter for improving large language models at object-level perception tasks with auxiliary perception modalities | 261 |
| ebhy/budgetml | Simplifies deployment of machine learning models to production-ready endpoints with minimal configuration and cost | 1,338 |
| lge-arc-advancedai/auptimizer | Automates the model building and deployment process by optimizing hyperparameters and compressing models for edge computing | 200 |