XVERSE-MoE-A4.2B

Mixture-of-Experts Model

XVERSE-MoE-A4.2B is a multilingual large language model developed by XVERSE Technology Inc. It uses a mixture-of-experts architecture and is fine-tuned for tasks such as conversation, question answering, and natural language understanding.

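To make the "mixture-of-experts" term concrete, here is a minimal sketch of generic top-k expert routing with NumPy. This is an illustration of the general technique, not XVERSE's actual implementation; the parameter names (`W_gate`, `experts`, `top_k`) and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Hypothetical parameters: a gating matrix plus one weight matrix per expert.
W_gate = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs by the
    softmax-normalized gate scores (generic sparse MoE, not XVERSE's code)."""
    logits = x @ W_gate                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, top[t]]
        weights = np.exp(scores) / np.exp(scores).sum()  # softmax over selected
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])      # weighted expert outputs
    return out

tokens = rng.standard_normal((3, d_model))
y = moe_layer(tokens)
print(y.shape)  # (3, 8)
```

Because only `top_k` of the `n_experts` weight matrices run per token, a sparse MoE model like this one activates far fewer parameters per forward pass than its total parameter count suggests (hence "A4.2B" activated parameters).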

GitHub

36 stars
5 watching
6 forks
Language: Python
Last commit: 7 months ago

Related projects:

Repository Description Stars
xverse-ai/xverse-moe-a36b Develops and publishes large multilingual language models with a mixture-of-experts architecture. 36
xverse-ai/xverse-13b A large language model developed to support multiple languages and applications 649
xverse-ai/xverse-65b A large language model developed by XVERSE Technology Inc. using transformer architecture and fine-tuned on diverse data sets for various applications. 132
xverse-ai/xverse-v-13b A large multimodal model for visual question answering, trained on a dataset of 2.1B image-text pairs and 8.2M instruction sequences. 77
xverse-ai/xverse-7b A multilingual large language model developed by XVERSE Technology Inc. 50
pku-yuangroup/moe-llava Develops a neural network architecture for multi-modal learning with large vision-language models 1,980
antoine77340/mixture-of-embedding-experts An open-source PyTorch implementation of the Mixture-of-Embedding-Experts model for video-text retrieval tasks. 118
ieit-yuan/yuan2.0-m32 A high-performance language model designed to excel in tasks like natural language understanding, mathematical computation, and code generation 180
skyworkai/skywork-moe A high-performance mixture-of-experts model with innovative training techniques for language processing tasks 126
sergioburdisso/pyss3 A Python package implementing an interpretable machine learning model for text classification with visualization tools 336
deepseek-ai/deepseek-moe A large language model with improved efficiency and performance compared to similar models 1,006
shi-labs/cumo A method for scaling multimodal large language models by combining multiple experts and fine-tuning them together 134
shawn-ieitsystems/yuan-1.0 Large-scale language model with improved performance on NLP tasks through distributed training and efficient data processing 591
yfzhang114/slime Develops large multimodal models for high-resolution understanding and analysis of text, images, and other data types. 137
byungkwanlee/moai Improves performance of vision language tasks by integrating computer vision capabilities into large language models 311