LRV-Instruction

[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning


246 stars
11 watching
13 forks
Language: Python
Last commit: 7 months ago
Topics: chatgpt, evaluation, evaluation-metrics, foundation-models, gpt, gpt-4, hallucination, iclr, iclr2024, llama, llava, multimodal, object-detection, prompt-engineering, vicuna, vision, vision-and-language, vqa