Woodpecker
Hallucination corrector
A method to correct hallucinations in multimodal large language models
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
613 stars
15 watching
29 forks
Language: Python
last commit: 6 months ago
Topics: hallucination, hallucinations, large-language-models, llm, mllm, multimodal-large-language-models, multimodality
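The paper describes Woodpecker as a training-free, post-hoc pipeline with five stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction. The sketch below is a minimal illustration of that flow under stated assumptions, not the repository's actual API; every function name and the toy keyword logic are hypothetical, and a real deployment would plug in an MLLM, an object detector, and a VQA model at the marked points.

```python
# Hypothetical sketch of Woodpecker's five-stage, training-free correction
# pipeline. Names and signatures are illustrative assumptions, not the
# repository's API.

from dataclasses import dataclass, field


@dataclass
class Evidence:
    """Visual facts gathered by expert models (e.g., detected objects)."""
    claims: list[str] = field(default_factory=list)


def extract_key_concepts(answer: str) -> list[str]:
    # Stage 1: pull the main objects mentioned in the MLLM's answer.
    # A real implementation prompts an LLM; this stub keys on title-case words.
    return [w.strip(".,") for w in answer.split()
            if w.istitle() and len(w.strip(".,")) > 1]


def formulate_questions(concepts: list[str]) -> list[str]:
    # Stage 2: turn each concept into a verification question.
    return [f"Is there a {c.lower()} in the image?" for c in concepts]


def validate_with_experts(questions: list[str], detections: set[str]) -> Evidence:
    # Stage 3: answer each question with expert models (detector / VQA).
    # Here `detections` stands in for an object detector's output.
    ev = Evidence()
    for q in questions:
        obj = q.removeprefix("Is there a ").removesuffix(" in the image?")
        ev.claims.append(f"{obj}: {'present' if obj in detections else 'absent'}")
    return ev


def generate_visual_claims(ev: Evidence) -> str:
    # Stage 4: consolidate the evidence into a structured knowledge string.
    return "; ".join(ev.claims)


def correct(answer: str, knowledge: str) -> str:
    # Stage 5: rewrite the answer against the evidence. The real system asks
    # an LLM to edit; this toy version just drops absent objects.
    absent = {c.split(":")[0] for c in knowledge.split("; ") if "absent" in c}
    kept = [w for w in answer.split() if w.strip(".,").lower() not in absent]
    return " ".join(kept)


if __name__ == "__main__":
    answer = "A Dog and a Frisbee on the grass."
    detections = {"dog"}  # pretend the detector found only a dog
    questions = formulate_questions(extract_key_concepts(answer))
    knowledge = generate_visual_claims(validate_with_experts(questions, detections))
    print(correct(answer, knowledge))
```

Running the sketch drops "Frisbee" from the answer because the stub detector reports only a dog; the actual pipeline instead prompts an LLM to rewrite the answer coherently against the gathered visual evidence.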
Related projects:
Repository | Description | Stars |
---|---|---|
1zhou-wang/memvr | An implementation of a method to mitigate hallucinations in multimodal large language models using visual retracing | 27 |
x-plug/mplug-halowl | Evaluates and mitigates hallucinations in multimodal large language models | 80 |
amazon-science/refchecker | Automates fine-grained hallucination detection in large language model outputs | 306 |
lalbj/pai | Reduces hallucinations in large vision-language models by intervening in their internal workings at inference time | 74 |
yfzhang114/llava-align | Debiasing techniques to minimize hallucinations in large visual language models | 74 |
opendatalab/ha-dpo | A framework that mitigates hallucinations in large vision-language models through hallucination-aware data construction and preference optimization | 67 |
tianyi-lab/hallusionbench | An image-context reasoning benchmark designed to challenge large vision-language models and help improve their accuracy | 254 |
yiyangzhou/lure | Analyzes and mitigates object hallucination in large vision-language models to improve their accuracy and reliability | 135 |
junyangwang0410/haelm | A framework for detecting hallucinations in large vision-language models | 17 |
billchan226/halc | A PyTorch implementation of an object hallucination reduction method built on various decoding algorithms | 70 |
yuqifan1117/hallucidoctor | Provides tools and frameworks to mitigate hallucinatory toxicity in visual instruction data, allowing researchers to fine-tune MLLMs on specific datasets | 41 |
fuxiaoliu/lrv-instruction | A research project that mitigates hallucinations in large multi-modal models via robust instruction tuning | 259 |
assafbk/mocha_code | A unified framework and benchmark for detecting and mitigating hallucinations in open-vocabulary image captioning models | 12 |
bcdnlp/faithscore | Evaluates answers generated by large vision-language models to assess hallucinations | 26 |
bronyayang/halle_control | Controlling object hallucination in large multimodal models | 28 |