Baichuan-7B
Language Model
A large-scale 7B-parameter pretrained language model developed by BaiChuan-Inc., with strong performance on standard benchmarks for natural language understanding and generation.
6k stars
67 watching
505 forks
Language: Python
last commit: 7 months ago
topics: artificial-intelligence, ceval, chatgpt, chinese, gpt-4, huggingface, large-language-models, llama, mmlu, natural-language-processing
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| Sharing technical knowledge and practical experience on large language models | 11,871 |
| A comprehensive NLP and LLM library that provides an easy-to-use interface for a wide range of tasks, including text classification, neural search, question answering, information extraction, and more. | 12,224 |
| A collection of pre-trained language models for Chinese text processing and dialogue generation. | 3,034 |
| A language model designed to surpass the capabilities of GPT-3.5-Turbo on various tasks such as text generation, tool calling, and long-text processing. | 7,209 |
| A tool for efficiently fine-tuning large language models across multiple architectures and methods. | 36,219 |
| Develops and deploys large language models for natural language processing tasks, including text generation, question answering, and more. | 2,247 |
| Develops and releases large language models with significant training data and competitive performance on various benchmarks. | 2,977 |
| A comprehensive benchmarking platform for large language models, evaluating their performance across various capabilities and providing rankings and detailed results. | 3,063 |
| An instruction-following Chinese LLaMA-based model project aimed at training and fine-tuning models on specific hardware configurations for efficient deployment. | 4,152 |
| Develops large multimodal models for various computer vision tasks including image and video analysis | 3,099 |
| A system that uses large language and vision models to generate and process visual instructions | 20,683 |
| A Chinese finance-focused large language model fine-tuning framework | 596 |
| Optimizes large language model inference on limited GPU resources | 5,446 |
| Tuning a large language model on consumer hardware using low-rank adaptation | 18,710 |
| Provides pre-trained and instruction-tuned Llama 3 language models and tools for loading and running inference | 27,527 |