HalluciDoctor
Data processing framework
This project provides tools and frameworks to mitigate hallucinatory toxicity in visual instruction data, allowing researchers to fine-tune multimodal large language models (MLLMs) on specific datasets.
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024)
41 stars
1 watching
0 forks
Language: Python
Last commit: 4 months ago
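To give a concrete sense of the kind of cleaning such a framework performs, here is a minimal sketch that drops visual instruction samples whose answers mention objects not grounded in the image. It is illustrative only: the `Sample` fields and helper names are hypothetical, not HalluciDoctor's actual API.

```python
# Minimal sketch of hallucination filtering for visual instruction data.
# The Sample fields and function names below are hypothetical, not
# HalluciDoctor's actual API.
from dataclasses import dataclass

@dataclass
class Sample:
    image_id: str
    instruction: str
    answer: str
    grounded_objects: set[str]   # objects verified in the image (e.g., by a detector)
    mentioned_objects: set[str]  # objects the answer text refers to

def is_hallucinated(sample: Sample) -> bool:
    # An answer is flagged when it mentions objects absent from the image.
    return not sample.mentioned_objects <= sample.grounded_objects

def clean(dataset: list[Sample]) -> list[Sample]:
    # Keep only samples whose answers are fully grounded in the image.
    return [s for s in dataset if not is_hallucinated(s)]
```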
Related projects:

Repository | Description | Stars |
---|---|---|
fuxiaoliu/lrv-instruction | A research project focused on mitigating hallucinations in large multimodal models by improving instruction tuning through robust training methods | 255 |
opendatalab/ha-dpo | A framework that mitigates hallucinations in multimodal large language models through hallucination-aware data construction and direct preference optimization (DPO) | 65 |
bradyfu/woodpecker | A training-free method that corrects hallucinations in text generated by multimodal large language models | 611 |
x-plug/mplug-halowl | A toolkit for evaluating and mitigating hallucinations in multimodal large language models | 79 |
1zhou-wang/memvr | An implementation of a visual-retracing method that mitigates hallucinations in multimodal large language models by revisiting visual evidence during inference | 27 |
yiyangzhou/lure | A method for analyzing and mitigating object hallucination in large vision-language models to improve their accuracy and reliability | 134 |
junyangwang0410/haelm | A framework for detecting hallucinations in the outputs of large vision-language models | 17 |
tianyi-lab/hallusionbench | An image-context reasoning benchmark designed to challenge large vision-language models and help improve their accuracy | 243 |
billchan226/halc | A PyTorch implementation of an object-hallucination reduction method built around various decoding algorithms | 69 |
damo-nlp-sg/vcd | An approach that reduces object hallucinations in large vision-language models by contrasting output distributions from original and distorted visual inputs (see the sketch after this table) | 209 |
yfzhang114/llava-align | Debiasing techniques that minimize hallucinations in large vision-language models | 71 |
lalbj/pai | A training-free method that reduces hallucinations in large vision-language models by intervening in their internal inference process | 67 |
sarababakn/mfcl-neurips23 | A framework for mitigating catastrophic forgetting in federated learning for vision tasks using data synthesis from past distributions | 15 |
openmoss/halluqa | A benchmark for evaluating hallucinations in Chinese large language models on question-answering tasks | 109 |
bcdnlp/faithscore | A fine-grained metric for evaluating hallucinations in answers generated by large vision-language models | 25 |
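As a concrete illustration of the contrastive-decoding idea behind damo-nlp-sg/vcd above, the sketch below combines next-token logits from an original and a distorted image. The `(1 + alpha) * original - alpha * distorted` weighting follows the general form of visual contrastive decoding, but the function names and the way logits are obtained here are assumptions, not VCD's actual code.

```python
import numpy as np

def contrastive_next_token(logits_original: np.ndarray,
                           logits_distorted: np.ndarray,
                           alpha: float = 1.0) -> int:
    # Boost tokens supported by the real image and penalize tokens the model
    # would emit anyway from a distorted image, where language priors dominate.
    contrastive = (1 + alpha) * logits_original - alpha * logits_distorted
    return int(np.argmax(contrastive))

# Hypothetical usage: `model.logits(image, prefix)` is an assumed interface.
# next_id = contrastive_next_token(model.logits(image, prefix),
#                                  model.logits(distort(image), prefix))
```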