HA-DPO

Hallucination fixer

A framework that mitigates hallucination in large vision-language models (LVLMs) by fine-tuning them with hallucination-aware direct preference optimization, training on preference data that contrasts accurate and hallucinated responses.

Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization

GitHub

73 stars
4 watching
6 forks
Language: Python
Last commit: 12 months ago
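
For context, HA-DPO trains the model with a direct preference optimization (DPO) objective on pairs of preferred (non-hallucinated) and rejected (hallucinated) responses. Below is a minimal PyTorch sketch of the standard DPO loss that this style of training builds on; the function name and arguments are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of preference pairs.

    Each tensor holds the summed token log-probabilities of the preferred
    (non-hallucinated) or rejected (hallucinated) response under the
    trainable policy or the frozen reference model.
    """
    # How much more (or less) likely each response is under the policy
    # than under the frozen reference model.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # Widen the margin between preferred and rejected responses,
    # scaled by beta (the implicit KL-penalty strength).
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()


# Toy usage with random numbers standing in for real model log-probabilities.
torch.manual_seed(0)
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(f"DPO loss: {loss.item():.4f}")
```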

Related projects:

Repository | Description | Stars
1zhou-wang/memvr | Mitigates hallucinations in large language models using visual re-tracing | 28
lalbj/pai | Reduces hallucinations in large language models by intervening in their inference process | 83
bradyfu/woodpecker | Corrects hallucinations in multimodal large language models without retraining | 617
yuqifan1117/hallucidoctor | Tools and frameworks for mitigating hallucinatory toxicity in visual instruction data, allowing researchers to fine-tune MLLMs on specific datasets | 41
openmoss/halluqa | Evaluation framework for assessing large language models on question-answering tasks with hallucination detection | 111
openkg-org/easydetect | Framework for detecting and mitigating hallucinations in multimodal large language models | 48
junyangwang0410/haelm | Framework for detecting hallucinations in large language models | 17
fuxiaoliu/lrv-instruction | Mitigates hallucinations in large multi-modal models through more robust instruction tuning | 262
billchan226/halc | Object hallucination reduction method implemented in PyTorch with various decoding algorithms | 72
x-plug/mplug-halowl | Evaluates and mitigates hallucinations in multimodal large language models | 82
damo-nlp-sg/vcd | Reduces object hallucinations in large vision-language models by contrasting output distributions from original and distorted visual inputs | 222
yfzhang114/llava-align | Debiasing techniques that minimize hallucinations in large vision-language models | 75
assafbk/mocha_code | Unified framework and benchmark for detecting and mitigating hallucinations in open-vocabulary image captioning models | 13
amazon-science/refchecker | Automates fine-grained hallucination detection in large language model outputs | 325
tianyi-lab/hallusionbench | Image-context reasoning benchmark designed to challenge large vision-language models and improve their accuracy | 259