GLM-130B
Bilingual Language Model
An open-source bilingual (English and Chinese) language model with 130 billion parameters, pre-trained on a large corpus of text data.
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
8k stars
99 watching
608 forks
Language: Python
Last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A general-purpose language model pre-trained with an autoregressive blank-filling objective and designed for various natural language understanding and generation tasks. | 3,207 |
| | A platform for training, serving, and evaluating large language models to enable tool-use capabilities. | 4,888 |
| | Generates large language model outputs in high-throughput mode on single GPUs. | 9,236 |
| | Large-scale dialogue data and models for training chatbots and conversational AI systems. | 2,276 |
| | Optimizes large language model inference on limited GPU resources. | 5,446 |
| | A framework for training and serving large language models using JAX/Flax. | 2,428 |
| | An efficient large language model inference engine leveraging consumer-grade GPUs on PCs. | 8,011 |
| | Large language models and chat capabilities based on pre-trained Chinese models. | 14,797 |
| | A collection of pre-trained language models and optimization techniques for efficient natural language processing. | 3,039 |
| | Tools and a platform for building and extending large language models. | 2,907 |
| | Provides an online UI for deploying large language models based on LangChain and ChatGLM. | 3,194 |
| | Compiles and organizes key papers on pre-trained language models, providing a resource for developers and researchers. | 3,331 |
| | A fast serving framework for large language models and vision-language models. | 6,551 |
| | A Python-based framework for serving large language models with low latency and high scalability. | 2,691 |
| | Guides software developers on how to effectively use and build systems around large language models like GPT-4. | 8,487 |