vl-interp
Hallucination mitigation
Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations", a method for interpreting and editing vision-language representations to reduce hallucinations in image captions.
46 stars
5 watching
5 forks
Language: Python
Last commit: 3 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | PyTorch implementation of video captioning, combining deep learning and computer vision techniques. | 402 |
| | A PyTorch implementation of visual-semantic embedding methods for image-caption retrieval. | 492 |
| | A PyTorch implementation of a framework for generating captions with styles for images and videos. | 63 |
| | An implementation of Self-critical Sequence Training for Image Captioning and related techniques. | 998 |
| | A PyTorch implementation of image captioning models via scene graph decomposition. | 96 |
| | A Python-based framework for training and testing image captioning models using PyTorch. | 1,458 |
| | A PyTorch toolbox for supporting research and development of domain adaptation, generalization, and semi-supervised learning methods in computer vision. | 1,236 |
| | An implementation of an object hallucination reduction method using a PyTorch framework and various decoding algorithms. | 72 |
| | A PyTorch implementation of an enhanced vision-language model. | 93 |
| | An implementation of semantic image synthesis via adversarial learning using PyTorch. | 145 |
| | An implementation of an image-to-image translation algorithm using deep learning and PyTorch. | 428 |
| | Improves performance on vision-language tasks by integrating computer vision capabilities into large language models. | 314 |
| | A PyTorch implementation of a deep learning model for inpainting images using contextual information. | 366 |
| | Analyzes and mitigates object hallucination in large vision-language models to improve their accuracy and reliability. | 136 |
| | Improves the performance of large language models by intervening in their internal workings to reduce hallucinations. | 83 |