image-paragraph-captioning
Caption generator
Trains image paragraph captioning models to generate diverse and accurate captions
[EMNLP 2018] Training for Diversity in Image Paragraph Captioning
90 stars
6 watching
23 forks
Language: Python
last commit: about 5 years ago

Related projects:
Repository | Description | Stars |
---|---|---|
eladhoffer/captiongen | A PyTorch-based tool for generating captions from images | 128 |
datamllab/mitigating_gender_bias_in_captioning_system | An investigation of gender bias in image captioning systems, with a dataset and a new model design to mitigate it | 13 |
luoweizhou/vlp | A project for pre-training models to support image captioning and question answering tasks. | 412 |
contextualai/lens | Enhances language models to generate text based on visual descriptions of images | 351 |
ibm/max-image-caption-generator | An image caption generation system built on deep neural network models. | 84 |
apple2373/chainer-caption | An image caption generation system using a neural network architecture with pre-trained models. | 64 |
fengyang0317/unsupervised_captioning | An unsupervised image captioning framework that allows generating captions from images without paired data. | 215 |
kacky24/stylenet | A PyTorch implementation of a framework for generating captions with styles for images and videos. | 63 |
chapternewscu/image-captioning-with-semantic-attention | A deep learning model for generating image captions with semantic attention | 51 |
rmokady/clip_prefix_caption | An approach to image captioning that leverages the CLIP model and fine-tunes a language model without requiring additional supervision or object annotation. | 1,315 |
deeprnn/image_captioning | This implementation allows users to generate captions from images using a neural network model with visual attention. | 786 |
jamespark3922/adv-inf | A method for generating and evaluating video captions using adversarial inference, trained on large datasets of text and multimedia features. | 34 |
cshizhe/asg2cap | An image caption generation model that uses abstract scene graphs for fine-grained control over caption generation | 200 |
mansimov/text2image | A model that generates images from natural language descriptions by iteratively drawing patches while attending to relevant words. | 592 |
anonymousanoy/fohe | Automates the process of generating multiple rewritten image captions by fine-tuning large vision-language models | 7 |
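The main repository above trains paragraph captioning models for diversity. Below is a minimal, hypothetical sketch (not code from this repository) of one common way to encourage that kind of diversity at decoding time: blocking next tokens that would repeat an already-generated trigram. The toy scoring function stands in for a real image-conditioned decoder, and every name and value here is illustrative.

```python
# Illustrative only: NOT code from this repository. Sketches repeated-trigram
# blocking, a common decoding-time heuristic for keeping paragraph captions diverse.
import numpy as np

def block_repeated_trigrams(scores, generated, penalty=float("inf")):
    """Penalize any next token that would repeat a trigram already in `generated`.

    `scores` is a 1-D array of per-token scores from the decoder;
    `generated` is the list of token ids produced so far.
    """
    scores = scores.copy()
    if len(generated) >= 3:
        seen = set(zip(generated, generated[1:], generated[2:]))  # existing trigrams
        prefix = tuple(generated[-2:])
        for tok in range(len(scores)):
            if prefix + (tok,) in seen:
                scores[tok] -= penalty  # hard block; use a finite value for a soft penalty
    return scores

def greedy_decode(step_scores, max_len=20):
    """Greedy decoding loop; `step_scores(generated)` stands in for a real
    image-conditioned decoder that returns one score per vocabulary token."""
    generated = []
    for _ in range(max_len):
        scores = block_repeated_trigrams(step_scores(generated), generated)
        generated.append(int(np.argmax(scores)))
    return generated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=50)  # fixed scores: a repetition-prone toy "decoder"
    print(greedy_decode(lambda generated: base))
```

Without the penalty, the toy decoder above would emit the same top-scoring token at every step; with trigram blocking it is forced onto alternative tokens once a trigram would repeat.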