lynx-llm
Multimodal LLM framework
A framework for training GPT-4-style language models with multimodal inputs using large datasets and pre-trained models.
paper: https://arxiv.org/abs/2307.02469
page: https://lynx-llm.github.io/
231 stars
8 watching
8 forks
Language: Python
last commit: over 1 year ago
topics: research
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A framework that enables large language models to process and understand multimodal inputs from various sources such as images and speech | 308 |
| | A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,568 |
| | A multi-modal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks | 73 |
| | A large-scale language model for the scientific domain, trained on the RedPajama arXiv split | 125 |
| | An end-to-end image captioning system that uses large multi-modal models and provides tools for training, inference, and demo usage | 1,849 |
| | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability | 77 |
| | A framework for training and fine-tuning multimodal language models on various data types | 601 |
| | A framework for building versatile multimodal large language models with synergistic comprehension and creation capabilities | 402 |
| | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
| | An open-source implementation of a vision-language instructed large language model | 513 |
| | A tool for training and fine-tuning large language models using advanced techniques | 387 |
| | A benchmarking framework for large language models | 81 |
| | A PyTorch-based framework for training large language models in parallel on multiple devices | 679 |
| | A framework for managing and testing large language models to evaluate their performance and optimize user experiences | 451 |
| | A lightweight framework for building agent-based applications using LLMs and transformer architectures | 1,924 |