nllb-tuning
Language Model Tuner
This is an experimental project for fine-tuning the NLLB (No Language Left Behind) model on a specific dataset and evaluating its performance on translation tasks.
7 stars · 0 watching · 0 forks · Language: Python · Last commit: 4 months ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A repository providing tools and datasets to fine-tune language models for specific tasks | 1,484 |
| | A toolkit for fine-tuning pre-trained language models with knowledge graph representations to improve performance on entity typing and relation classification tasks | 1,413 |
| | A codebase for fine-tuning the ChatGLM-6b language model using low-rank adaptation (LoRA) and providing fine-tuned weights | 727 |
| | An open-source implementation of a vision-language instructed large language model | 513 |
| | A shared task for fine-tuning large language models to answer questions and generate responses in Ukrainian | 13 |
| | A lightweight, multilingual language model with a long context length | 920 |
| | Trains a language model using a RoBERTa architecture on high-quality Polish text data | 33 |
| | Source files and training scripts for language models | 12 |
| | Training methods and tools for fine-tuning language models using human preferences | 1,240 |
| | Improves the safety and helpfulness of large language models by fine-tuning them on safety-critical tasks | 47 |
| | A lemmatization tool for natural language processing | 146 |
| | An integration of memory-based natural language processing modules for Dutch | 75 |
| | A pre-trained language model designed to leverage linguistic features and outperform comparable baselines on Chinese natural language understanding tasks | 202 |
| | A benchmarking framework for large language models | 81 |
| | A tool to evaluate and track the performance of large language model (LLM) experiments | 2,233 |