gpt4all
LLM client
GPT4All: Run Local LLMs on Any Device. An open-source project, with Python bindings, for running Large Language Models (LLMs) locally; available for commercial use.
71k stars
649 watching
8k forks
Language: C++
last commit: 2 months ago
Linked from 5 awesome lists
ai-chat, llm-inference
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A command-line tool and workflow manager for interacting with large language models like ChatGPT/GPT4. | 3,674 |
| | A conversational AI system allowing secure local interactions with documents using various open-source models and frameworks. | 20,163 |
| | Enables LLM inference with minimal setup and high performance on various hardware platforms. | 69,185 |
| | An open-source toolkit for pretraining and fine-tuning large language models. | 2,732 |
| | An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy. | 5,775 |
| | Enables users to interact with large language models within Neovim for tasks like code optimization and translation. | 65 |
| | Developing and pretraining a GPT-like Large Language Model from scratch. | 35,405 |
| | An implementation of a large language model using the nanoGPT architecture. | 6,013 |
| | An inference and serving engine for large language models. | 31,982 |
| | Optimizes large language model inference on limited GPU resources. | 5,446 |
| | A Python-based framework for serving large language models with low latency and high scalability. | 2,691 |
| | A semantic cache designed to reduce the cost and improve the speed of LLM API calls by storing responses. | 7,293 |
| | A framework for building enterprise LLM-based applications using small, specialized models. | 8,303 |
| | A plugin that integrates Large Language Models into Neovim for natural language generation and conversation. | 176 |
| | A command-line tool using AI-powered language models to generate shell commands and code snippets. | 9,933 |