gorilla
API toolkit
Enables large language models to interact with external APIs using natural language queries
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
11k stars
99 watching
998 forks
Language: Python
Last commit: 6 days ago
Linked from 3 awesome lists
Topics: api, api-documentation, chatgpt, claude-api, gpt-4-api, llm, openai-api, openai-functions
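Gorilla's core task is function calling: given a natural-language request and machine-readable descriptions of the available APIs, the model returns a structured call rather than free-form text. The sketch below illustrates that interaction pattern through an OpenAI-compatible client; the endpoint URL, API key, and model name are placeholder assumptions, not values documented by the project.

```python
# A minimal sketch, assuming an OpenAI-compatible server hosting a
# function-calling model (e.g. a locally served Gorilla checkpoint).
# The base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="EMPTY",                      # placeholder; many local servers ignore it
)

# JSON Schema description of the external API the model may call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gorilla-openfunctions-v2",  # hypothetical model name on the server
    messages=[{"role": "user", "content": "What's the weather in Berkeley right now?"}],
    tools=tools,
)

# Instead of prose, the model answers with a structured call to the declared API.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

The tool schema is an ordinary JSON Schema description of the external API; the same pattern extends to any number of tools the model is allowed to choose between.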
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| gorilla-llm/gorilla-cli | An AI-powered command-line interface that generates potential commands based on user input and suggests the best course of action | 1,297 |
| explodinggradients/ragas | A toolkit for evaluating and optimizing Large Language Model applications with data-driven insights | 7,233 |
| openbmb/toolbench | A platform for training, serving, and evaluating large language models with tool-use capabilities | 4,843 |
| sgl-project/sglang | A framework for serving large language models and vision models with an efficient runtime and a flexible interface | 6,082 |
| microsoft/semantic-kernel | An SDK that integrates LLMs with conventional programming languages to create AI-powered applications | 21,946 |
| nlpxucan/wizardlm | Pre-trained large language models fine-tuned to follow complex instructions using an evolutionary instruction framework | 9,268 |
| freedomintelligence/llmzoo | A platform providing data, models, and evaluation benchmarks for large language models to promote accessibility and democratization of AI technology | 2,934 |
| mlabonne/llm-course | A comprehensive course and resource package on building and deploying Large Language Models (LLMs) | 39,120 |
| scisharp/llamasharp | A C#/.NET library to efficiently run Large Language Models (LLMs) on local devices | 2,673 |
| tmc/langchaingo | A Go implementation of LangChain for generating text with large language models | 4,635 |
| ianarawjo/chainforge | An environment for battle-testing prompts to Large Language Models (LLMs) and evaluating response quality and performance | 2,334 |
| mooler0410/llmspracticalguide | A curated list of resources to help developers navigate the landscape of large language models and their applications in NLP | 9,489 |
| google/big-bench | A benchmark that evaluates the capabilities of large language models across a diverse set of tasks and measures their performance | 2,868 |
| optimalscale/lmflow | A toolkit for fine-tuning large language models and providing efficient inference capabilities | 8,273 |
| confident-ai/deepeval | A framework for evaluating large language models | 3,669 |