DDCoT
Prompting library
This implementation provides tools and methods for multimodal reasoning in language models through prompting.
[NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
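The core idea of duty-distinct prompting is to split responsibilities: the language model decomposes the question and reasons over it, while sub-questions that require visual recognition are flagged rather than guessed, so a vision model can answer them in a later stage. A minimal sketch of such a prompt builder is below; the function name and prompt wording are illustrative assumptions, not the repository's actual API.

```python
# Hedged sketch of a DDCoT-style prompt builder. The function name and
# exact instruction text are illustrative, not this repository's API.

def build_ddcot_prompt(question: str, context: str = "") -> str:
    """Compose a duty-distinct chain-of-thought prompt: the LLM must
    decompose the question and mark visually grounded sub-questions
    as 'Uncertain' instead of hallucinating their answers."""
    parts = [
        "Given the question below, decompose it into sub-questions.",
        "For any sub-question that requires information from the image,",
        "do not guess: answer it with 'Uncertain' so a vision model",
        "can resolve it in a later step.",
    ]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    return "\n".join(parts)

prompt = build_ddcot_prompt("Which object in the image is magnetic?")
```

The flagged sub-questions would then be routed to a visual question-answering model, and its answers spliced back into the chain of thought before the final answer is generated.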
33 stars
2 watching
1 fork
Language: Python
last commit: 8 months ago

Related projects:
Repository | Description | Stars |
---|---|---|
dvlab-research/prompt-highlighter | An interactive control system for text generation in multi-modal language models | 132 |
ju-bezdek/langchain-decorators | Provides syntactic sugar for writing custom LangChain prompts and chains, making it easier to write more pythonic code. | 228 |
ailab-cvc/seed | An implementation of a multimodal language model with capabilities for comprehension and generation | 576 |
ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 315 |
davebshow/gremlinclient | Client library for interacting with the Gremlin Server protocol in Python | 28 |
mwydmuch/napkinxc | A fast and simple library for multi-class and multi-label classification | 64 |
ncwilson78/system-prompt-library | A comprehensive collection of customizable prompts for Generative Pre-trained Transformers (GPTs) designed specifically for educational use. | 65 |
mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 20 |
dmulyalin/ttp | A template-based text parsing library | 349 |
dcdmllm/cheetah | A large language model designed to understand and generate instructions with accompanying visual content | 356 |
uw-madison-lee-lab/cobsat | Provides a benchmarking framework and dataset for evaluating the performance of large language models in text-to-image tasks | 28 |
maciej-gol/tenant-schemas-celery | Integrates Celery task execution with multi-tenant Django applications. | 179 |
agkozak/polyglot | A dynamic shell prompt that displays various information in ASCII format, including username, session type, Git branch and status, exit status, and virtual environment information. | 181 |
multimodal-art-projection/omnibench | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously. | 14 |
zjunlp/mol-instructions | A dataset and tools package designed to support the training and evaluation of large language models for molecular biology tasks | 252 |