OPERA
Token penalty
A decoding-time method that alleviates hallucination in multi-modal large language models by penalizing over-trust and retrospectively re-allocating tokens during generation
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
287 stars
2 watching
26 forks
Language: Python
last commit: 3 months ago
Topics: chatbot, chatgpt, gpt-4, large-multimodal-models, llama, multimodal, vision-language-learning, vision-language-model
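At a glance, OPERA intervenes during beam-search decoding: it inspects the model's recent self-attention and penalizes candidates whose last few generated tokens all concentrate on a single earlier "anchor" token, a knowledge-aggregation pattern the paper links to hallucination; when every candidate is heavily penalized, retrospection-allocation rolls decoding back to that anchor and re-selects. The sketch below illustrates only the penalty term in NumPy; the function names, `window`, and `alpha` are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def overtrust_penalty(attn: np.ndarray, window: int = 5) -> float:
    """Score the knowledge-aggregation pattern for one beam candidate.

    attn: (seq_len, seq_len) lower-triangular self-attention matrix from a
    late decoder layer. The penalty is the largest column-wise product of
    the attention that the `window` most recent tokens pay to any single
    earlier token: a high value means recent generation keeps re-attending
    to one anchor token instead of the visual context (over-trust).
    """
    seq_len = attn.shape[0]
    if seq_len <= window:
        return 0.0
    recent = attn[-window:, : seq_len - window]  # recent rows -> earlier columns
    col_products = np.prod(recent, axis=0)       # aggregation strength per anchor
    return float(col_products.max())

def rescore_beams(logprobs: np.ndarray, attns: list[np.ndarray],
                  alpha: float = 1.0) -> np.ndarray:
    """Subtract the scaled over-trust penalty from each beam's log-probability."""
    penalties = np.array([overtrust_penalty(a) for a in attns])
    return logprobs - alpha * penalties
```

The rollback step (retrospection-allocation) and the attention scaling used in the paper are omitted here for brevity; see the repository for the full implementation.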
Related projects:
Repository | Description | Stars |
---|---|---|
tianyi-lab/hallusionbench | An image-context reasoning benchmark designed to challenge large vision-language models and help improve their accuracy | 243 |
x-plug/mplug-halowl | Evaluates and mitigates hallucinations in multimodal large language models | 79 |
fuxiaoliu/lrv-instruction | Mitigates hallucination in large multi-modal models through robust visual instruction tuning | 255 |
bradyfu/woodpecker | A method to correct hallucinations in multimodal large language models during text generation | 611 |
yiyangzhou/lure | Analyzing and mitigating object hallucination in large vision-language models to improve their accuracy and reliability | 134 |
yuezih/less-is-more | Mitigates multimodal hallucination by improving end-of-sequence (EOS) decisions through selective supervision of training data | 31 |
rucaibox/pope | An evaluation framework for detecting object hallucinations in vision-language models | 179 |
shi-labs/vcoder | An adapter that improves multimodal large language models on object-level perception tasks using auxiliary perception modalities | 261 |
jshilong/gpt4roi | Training and deploying large language models on computer vision tasks using region-of-interest inputs | 506 |
yxuansu/tacl | Improves pre-trained language models by encouraging an isotropic and discriminative distribution of token representations | 92 |
yfzhang114/llava-align | Debiasing techniques to minimize hallucinations in large visual language models | 71 |
openmoss/halluqa | A benchmark for evaluating hallucination in Chinese large language models through question answering | 109 |
1zhou-wang/memvr | An implementation of a method that mitigates hallucinations in multimodal large language models via memory-space visual retracing | 27 |