MMCU
Chinese understanding benchmark
A benchmark for measuring large language models' understanding across massive multitask Chinese datasets
MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING
87 stars
2 watching
12 forks
Language: Python
last commit: 11 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| A high-performance language model designed to excel in tasks like natural language understanding, mathematical computation, and code generation | 182 |
| An implementation of a large language model for Chinese text processing, focusing on an MoE (Mixture of Experts) architecture and incorporating a vast vocabulary. | 645 |
| Develops a large-scale dataset and benchmark for training multimodal chart understanding models using large language models. | 87 |
| An evaluation suite to assess language models' performance on multiple-choice questions | 93 |
| Evaluates and benchmarks large language models' video understanding capabilities | 121 |
| Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models. | 806 |
| This project makes a large language model accessible for research and development | 1,245 |
| A benchmarking framework for large language models | 81 |
| Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| This project provides pre-trained models and tools for natural language understanding (NLU) and generation (NLG) tasks in Chinese. | 439 |
| A benchmark for evaluating large language models in multiple languages and formats | 93 |
| Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| Trains and evaluates a Chinese language model using adversarial training on a large corpus. | 140 |
| Large-scale language model with improved performance on NLP tasks through distributed training and efficient data processing | 591 |
| An evaluation benchmark for OCR capabilities in large multimodal models. | 484 |