DDCoT
Prompting library
This repository provides prompting-based tools and methods for multimodal reasoning in language models.
[NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
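The paper's core idea is to make the chain of thought "duty-distinct": the language model handles question decomposition and reasoning, while a visual expert answers the sub-questions that depend on the image. The sketch below illustrates that two-stage flow under stated assumptions; `call_llm` and `call_vqa` are hypothetical user-supplied callables for illustration only, not functions exported by this repository.

```python
# Minimal sketch of a duty-distinct chain-of-thought prompt, based on the
# general idea of the DDCoT paper. This is NOT the repository's actual API;
# `call_llm` and `call_vqa` are hypothetical stand-ins you would supply.

DECOMPOSE_TEMPLATE = """Question: {question}

Break the question into sub-questions. Answer a sub-question only if it can
be answered without looking at the image; otherwise reply "Uncertain" so a
visual expert can answer it from the image.

Sub-questions and answers:"""


def ddcot_style_answer(question, image, call_llm, call_vqa):
    """Two-stage prompting: the LLM decomposes, a VQA model fills visual gaps.

    call_llm(prompt) -> str and call_vqa(image, question) -> str are
    user-supplied callables (assumed interfaces, for illustration only).
    """
    # Stage 1: the language model's duty - decompose and flag uncertainty.
    rationale = call_llm(DECOMPOSE_TEMPLATE.format(question=question))

    # Stage 2: the visual expert's duty - resolve sub-questions flagged
    # "Uncertain" by querying the image.
    resolved_lines = []
    for line in rationale.splitlines():
        if "uncertain" in line.lower() and "?" in line:
            sub_q = line.split("?")[0].strip() + "?"
            line = f"{sub_q} {call_vqa(image, sub_q)}"
        resolved_lines.append(line)

    # Final pass: let the LLM integrate the completed rationale into an answer.
    final_prompt = (
        f"Question: {question}\n"
        "Rationale:\n" + "\n".join(resolved_lines) + "\n"
        "Answer:"
    )
    return call_llm(final_prompt)
```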
35 stars
2 watching
1 fork
Language: Python
last commit: 10 months ago

Related projects:
Repository | Description | Stars |
---|---|---|
dvlab-research/prompt-highlighter | An interactive control system for text generation in multi-modal language models | 135 |
ju-bezdek/langchain-decorators | Provides syntactic sugar for writing custom LangChain prompts and chains, making it easier to write more Pythonic code | 228 |
ailab-cvc/seed | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
davebshow/gremlinclient | Client library for interacting with the Gremlin Server protocol in Python | 28 |
mwydmuch/napkinxc | A fast and simple library for multi-class and multi-label classification | 65 |
ncwilson78/system-prompt-library | A comprehensive collection of customizable prompts for Generative Pre-trained Transformers (GPTs) designed specifically for educational use | 77 |
mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 21 |
dmulyalin/ttp | A template-based text parsing library | 353 |
dcdmllm/cheetah | A large language model designed to understand and generate instructions with accompanying visual content | 360 |
uw-madison-lee-lab/cobsat | Provides a benchmarking framework and dataset for evaluating the performance of large language models in text-to-image tasks | 30 |
maciej-gol/tenant-schemas-celery | Integrates Celery task execution with schema-based multi-tenancy in Django applications | 183 |
agkozak/polyglot | A dynamic shell prompt that displays username, session type, Git branch and status, exit status, and virtual environment information in ASCII | 182 |
multimodal-art-projection/omnibench | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
zjunlp/mol-instructions | A dataset and tools package designed to support the training and evaluation of large language models for molecular biology tasks | 255 |