DDCoT
Prompting library
 This implementation provides tools and methods for multimodal reasoning in language models through prompting.
[NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
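As a rough illustration of the duty-distinct idea behind the paper, the sketch below builds two prompt strings: a first stage that asks the LLM to decompose a question into sub-questions and mark visually dependent ones as "Uncertain", and a second stage that integrates visual sub-answers (from a separate VQA model) into a final answer. This is a hypothetical sketch of the prompting pattern, not this repository's actual API; all function names are invented for illustration.

```python
# Hypothetical sketch of duty-distinct chain-of-thought prompting.
# Not this repository's API: function names and prompt wording are
# illustrative assumptions.

def decomposition_prompt(question: str, context: str) -> str:
    """First stage: ask the LLM to split duties, deferring visual
    sub-questions ('Uncertain') to a vision model instead of guessing."""
    return (
        "Given the question, decompose it into sub-questions.\n"
        "For sub-questions that require visual information to answer, "
        "reply 'Uncertain' instead of guessing.\n\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Sub-questions and sub-answers:"
    )


def integration_prompt(question: str, sub_qa: list) -> str:
    """Second stage: merge LLM reasoning with the visual sub-answers
    (supplied by a VQA model) into one prompt for the final answer."""
    qa_lines = "\n".join(f"Q: {q}\nA: {a}" for q, a in sub_qa)
    return (
        "Using the sub-questions and answers below, reason step by step "
        "and answer the original question.\n\n"
        f"{qa_lines}\n\n"
        f"Original question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    stage1 = decomposition_prompt(
        "Which animal in the picture is larger?",
        "A photo shows two animals side by side.",
    )
    stage2 = integration_prompt(
        "Which animal in the picture is larger?",
        [("What animals are in the picture?", "An elephant and a dog.")],
    )
    print(stage1)
    print(stage2)
```

In the full pipeline, the "Uncertain" sub-questions from stage one would be routed to a visual question-answering model, and its answers fed back through the second prompt.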
35 stars
2 watching
1 fork
 
Language: Python 
Last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
|  | An interactive control system for text generation in multi-modal language models | 135 |
|  | Provides syntactic sugar for writing custom LangChain prompts and chains, making it easier to write more Pythonic code | 228 |
|  | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
|  | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
|  | Client library for interacting with the Gremlin Server protocol in Python | 28 |
|  | A fast and simple library for multi-class and multi-label classification | 65 |
|  | A comprehensive collection of customizable prompts for Generative Pre-trained Transformers (GPTs) designed specifically for educational use | 77 |
|  | Evaluating and improving large multimodal models through in-context learning | 21 |
|  | A template-based text parsing library | 353 |
|  | A large language model designed to understand and generate instructions with accompanying visual content | 360 |
|  | Provides a benchmarking framework and dataset for evaluating the performance of large language models in text-to-image tasks | 30 |
|  | Enables collaboration between Celery tasks and multi-tenancy in Django applications | 183 |
|  | A dynamic shell prompt that displays information in ASCII format, including username, session type, Git branch and status, exit status, and virtual environment | 182 |
|  | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
|  | A dataset and tools package designed to support the training and evaluation of large language models for molecular biology tasks | 255 |