CLIP_prefix_caption
A simple image captioning method that leverages the CLIP model and fine-tunes a language model, without requiring additional supervision or object annotations.
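The core idea can be sketched in a few lines: a small mapping network turns a CLIP image embedding into a short sequence of "prefix" embeddings that a language model can condition on when generating the caption. The sketch below is a minimal, hypothetical illustration of that mapping step only; the dimensions, weights, and network shape are illustrative assumptions, not the repository's actual configuration, and a stand-in random projection replaces a trained model.

```python
import numpy as np

# Illustrative sizes (assumptions): CLIP ViT-B/32 image embeddings are
# 512-dimensional, GPT-2's hidden size is 768, and we feed the language
# model a prefix of 10 pseudo-token embeddings.
CLIP_DIM, LM_DIM, PREFIX_LEN = 512, 768, 10

# Stand-in for a trained mapping network: a single random linear layer
# with a tanh nonlinearity (a real model would learn these weights).
rng = np.random.default_rng(0)
W = rng.standard_normal((CLIP_DIM, LM_DIM * PREFIX_LEN)) * 0.02
b = np.zeros(LM_DIM * PREFIX_LEN)

def map_clip_to_prefix(clip_embed: np.ndarray) -> np.ndarray:
    """Map a batch of CLIP embeddings to language-model prefix embeddings.

    (batch, CLIP_DIM) -> (batch, PREFIX_LEN, LM_DIM)
    """
    h = np.tanh(clip_embed @ W + b)
    return h.reshape(-1, PREFIX_LEN, LM_DIM)

# Example: a batch of 4 fake CLIP embeddings becomes 4 prefixes of
# 10 embeddings each, ready to be prepended to the language model input.
fake_clip = rng.standard_normal((4, CLIP_DIM))
prefix = map_clip_to_prefix(fake_clip)
print(prefix.shape)  # (4, 10, 768)
```

During training, only the mapping network (and optionally the language model) is updated, which is why no object annotations or extra supervision are needed beyond image–caption pairs.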
1k stars · 7 watching · 220 forks
Language: Jupyter Notebook
Last commit: 9 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A deep learning model for generating image captions with semantic attention | 51 |
| | An unsupervised image captioning framework that generates captions from images without paired data | 215 |
| | A project for pre-training models to support image captioning and question answering tasks | 416 |
| | An image caption generation system using a neural network architecture with pre-trained models | 64 |
| | Trains image paragraph captioning models to generate diverse and accurate captions | 90 |
| | Enhances language models to generate text based on visual descriptions of images | 352 |
| | An implementation that generates captions from images using a neural network model with visual attention | 790 |
| | Automated captioning and transcription tool for video and audio files | 74 |
| | Automates the generation of multiple rewritten image captions by fine-tuning large vision-language models | 8 |
| | An image caption generation model that uses abstract scene graphs for fine-grained control over caption generation | 200 |
| | A pretraining approach that uses semantically dense captions to learn visual representations and improve image understanding tasks | 556 |
| | A Python-based framework for training and testing image captioning models using PyTorch | 1,458 |
| | An image caption generation system utilizing machine learning models and deep neural networks | 84 |
| | A PyTorch implementation of a framework for generating styled captions for images and videos | 63 |
| | A PyTorch implementation of image captioning models via scene graph decomposition | 96 |