ToolBench
Tool learning platform
A platform for training, serving, and evaluating large language models for tool use.
[ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning.
5k stars
49 watching
430 forks
Language: Python
Last commit: 12 months ago
Linked from 1 awesome list
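For context, the "tool learning" this platform targets means training a model to answer a request by emitting structured API calls, executing them, and reading the results back before answering. The snippet below is a minimal, hypothetical sketch of that loop; the `TOOLS` registry, `fake_model` stand-in, and JSON call format are illustrative assumptions, not the ToolBench API.

```python
# Minimal, hypothetical sketch of a tool-use loop (not the ToolBench API).
# The model is prompted to either emit a JSON tool call or a final answer;
# tool results are appended as observations until an answer is produced.
import json

# Hypothetical tool registry: tool name -> callable
TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "sunny", "temp_c": 21},
}

def fake_model(prompt: str) -> str:
    """Stand-in for a tool-tuned LLM: returns a tool call, then a final answer."""
    if "Observation:" not in prompt:
        return json.dumps({"tool": "get_weather", "arguments": {"city": "Paris"}})
    return "Final answer: It is sunny and 21°C in Paris."

def run(query: str, max_steps: int = 3) -> str:
    prompt = f"Question: {query}"
    for _ in range(max_steps):
        output = fake_model(prompt)
        if output.startswith("Final answer:"):
            return output
        call = json.loads(output)                         # model emitted a tool call
        result = TOOLS[call["tool"]](**call["arguments"])  # execute the call
        prompt += f"\nObservation: {json.dumps(result)}"   # feed the result back
    return "No answer within step budget."

if __name__ == "__main__":
    print(run("What's the weather in Paris?"))
```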
Related projects:
| Description | Stars |
|---|---|
| Tools and platform for building and extending large language models | 2,907 |
| An open-source implementation of a large bilingual language model pre-trained on vast amounts of text data | 7,672 |
| Provides a unified framework to test generative language models on various evaluation tasks | 7,200 |
| A live training platform for large-scale deep learning models, allowing community participation and collaboration in model development and deployment | 511 |
| A curated collection of high-quality datasets for training large language models | 2,708 |
| An interactive software framework built on large language models to facilitate collaborative development and task-oriented interactions among multiple agents | 25,916 |
| Fine-tuned language models trained on mixed-quality data | 5,273 |
| An open-source toolkit for pretraining and fine-tuning large language models | 2,732 |
| A curated list of resources to help developers navigate the landscape of large language models and their applications in NLP | 9,551 |
| A unified framework for evaluating large language models' performance and robustness in various scenarios | 2,487 |
| An open platform for training, serving, and evaluating large language models used in chatbots | 37,269 |
| Generates instruction-following data with GPT-4 to fine-tune large language models for real-world tasks | 4,244 |
| Enables large language models to interact with external APIs using natural language queries | 11,564 |
| Developing and pretraining a GPT-like large language model from scratch | 35,405 |
| An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy | 5,775 |