QA-CLIP
Chinese CLIP model
Provides Chinese vision-language (CLIP) models with state-of-the-art performance for image-text retrieval and zero-shot classification tasks.
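Typical usage of such a model is image-text matching: encode an image and a set of Chinese captions, then rank the captions by similarity. The sketch below is a minimal, hedged example using the Hugging Face `transformers` ChineseCLIP API; the checkpoint name `OFA-Sys/chinese-clip-vit-base-patch16` is the public Chinese-CLIP baseline and stands in here only for illustration, since QA-CLIP's own checkpoints and loading interface may differ.

```python
# Minimal sketch: zero-shot image-text matching with a Chinese CLIP-style model.
# Assumes the Hugging Face `transformers` ChineseCLIP classes; the checkpoint below
# is the public Chinese-CLIP baseline used as a stand-in, not QA-CLIP itself.
import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

model_name = "OFA-Sys/chinese-clip-vit-base-patch16"  # stand-in checkpoint
model = ChineseCLIPModel.from_pretrained(model_name)
processor = ChineseCLIPProcessor.from_pretrained(model_name)

image = Image.open("example.jpg")            # any local image
texts = ["一只猫", "一只狗", "一辆汽车"]      # candidate Chinese captions ("a cat", "a dog", "a car")

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores; a softmax over the text candidates gives
# zero-shot classification probabilities for the image.
probs = outputs.logits_per_image.softmax(dim=-1)
print({t: float(p) for t, p in zip(texts, probs[0])})
```

The same encoders can be used for retrieval by precomputing text or image embeddings and ranking by cosine similarity instead of taking a softmax.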
51 stars
3 watching
5 forks
Language: Python
Last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | This project makes a large language model accessible for research and development. | 1,245 |
| | This project provides code for training image question answering models using stacked attention networks and convolutional neural networks. | 108 |
| | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models. | 806 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks. | 230 |
| | A pre-trained Chinese language model with a modest parameter count, designed to be accessible and useful for researchers with limited computing resources. | 18 |
| | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture. | 1,652 |
| | A collection of Python-related questions and answers from Stack Overflow, with some answers translated into Chinese. | 856 |
| | Evaluates and compares the performance of various CLIP-like models on different tasks and datasets. | 632 |
| | An efficient framework for end-to-end learning on image-text and video-text tasks. | 709 |
| | Evaluates and benchmarks large language models' video understanding capabilities. | 121 |
| | An implementation of a large language model for Chinese text processing, built on a Mixture-of-Experts (MoE) architecture and incorporating a large vocabulary. | 645 |
| | Evaluates and aligns the values of Chinese large language models with safety and responsibility standards. | 481 |
| | Provides pre-trained Chinese language models based on the ELECTRA framework for natural language processing tasks. | 1,405 |
| | Measures large language models' massive multitask understanding on Chinese datasets. | 87 |
| | Tools and codebase for training neural question answering models on multiple paragraphs of text data. | 435 |