PMC-VQA
Medical image understanding toolkit
A medical visual question-answering dataset and toolkit for training models to understand medical images and instructions.
PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images, spanning a variety of imaging modalities and diseases.
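For readers wiring the dataset into an evaluation loop, here is a minimal sketch of a multiple-choice VQA record and an exact-match accuracy metric. The field names are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class VQAPair:
    """One multiple-choice VQA record (hypothetical field layout)."""
    image_path: str       # e.g. a figure extracted from a PMC article
    question: str
    choices: list[str]    # the candidate answers
    answer: str           # label of the correct choice, e.g. "A"

def accuracy(predictions: list[str], pairs: list[VQAPair]) -> float:
    """Exact-match accuracy of predicted choice labels against gold labels."""
    if not pairs:
        return 0.0
    correct = sum(pred == pair.answer for pred, pair in zip(predictions, pairs))
    return correct / len(pairs)
```

With records in this shape, evaluating a model reduces to collecting its predicted choice labels and calling `accuracy` on them.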
174 stars
3 watching
11 forks
Language: Python
last commit: 8 months ago

Related projects:
Repository | Description | Stars |
---|---|---|
cadene/vqa.pytorch | A PyTorch implementation of visual question answering with multimodal representation learning. | 716 |
zcyang/imageqa-san | This project provides code for training image question answering models using stacked attention networks and convolutional neural networks. | 107 |
milvlg/prophet | An implementation of a two-stage framework designed to prompt large language models with answer heuristics for knowledge-based visual question answering tasks. | 267 |
gt-vision-lab/vqa_lstm_cnn | A Visual Question Answering model using a deeper LSTM and normalized CNN architecture. | 376 |
akirafukui/vqa-mcb | A software framework for training and deploying multimodal visual question answering models using compact bilinear pooling. | 222 |
hengyuan-hu/bottom-up-attention-vqa | An implementation of a VQA system using bottom-up attention, aiming to improve the efficiency and speed of visual question answering tasks. | 754 |
jnhwkim/nips-mrn-vqa | This project presents a neural network model designed to answer visual questions by combining question and image features in a residual learning framework. | 39 |
jayleicn/tvqa | A PyTorch implementation of a video question answering system based on the TVQA dataset. | 172 |
mlpc-ucsd/bliva | A multimodal LLM designed to handle text-rich visual questions. | 269 |
open-mmlab/mmaction | An open-source toolbox for action understanding from video data using PyTorch. | 1,863 |
viame/viame | A comprehensive computer vision toolkit with tools and algorithms for video and image analytics in multiple environments. | 288 |
guoyang9/unk-vqa | A VQA dataset with unanswerable questions designed to test the limits of large models' knowledge and reasoning abilities. | 2 |
danmcduff/iphys-toolbox | MATLAB implementations of algorithms for non-contact physiological measurement using low-cost cameras. | 192 |
hms-dbmi/viv | A toolkit for interactive visualization of high-resolution bioimaging data. | 286 |
hyeonwoonoh/vqa-transfer-externaldata | Tools and scripts for training and evaluating a visual question answering model using transfer learning from an external data source. | 20 |