ChronoMagic-Bench
Video generation benchmark
A benchmark and dataset for evaluating text-to-video generation models' ability to generate coherent and varied metamorphic time-lapse videos.
[NeurIPS 2024 D&B Spotlight] ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation
187 stars
3 watching
14 forks
Language: Python
last commit: 5 days ago
Linked from 1 awesome list
Tags: aigc, benchmark, dataset, diffusion-models, evaluation, evaluation-kit, gen-ai, metamorphic-video-generation, open-sora-plan, text-to-video, time-lapse, time-lapse-dataset, video-generation
Related projects:
| Repository | Description | Stars |
|---|---|---|
| pku-yuangroup/magictime | Tools and models for generating time-lapse videos from text prompts | 1,303 |
| pku-yuangroup/video-bench | Evaluates and benchmarks large language models' video understanding capabilities | 117 |
| kaihuatang/scene-graph-benchmark.pytorch | A PyTorch implementation of Scene Graph Generation methods with support for visualization and evaluation on custom images | 1,075 |
| pku-yuangroup/open-sora-dataset | A large video dataset collected from various open-source websites for use in computer vision and multimedia applications | 94 |
| aliaksandrsiarohin/video-preprocessing | Tools for preprocessing videos for various datasets, including video cropping and annotation | 519 |
| huaxiuyao/wild-time | A benchmark of in-the-wild distribution shifts over time for evaluating machine learning models | 61 |
| pkhungurn/talking-head-anime-demo | Creates anime characters with realistic head movements from single images or webcam feeds using deep learning and computer vision techniques | 1,999 |
| pku-yuangroup/languagebind | Extends pretrained models to handle multiple modalities by aligning language and video representations | 723 |
| openai/procgen | A benchmark for evaluating reinforcement learning agent performance on procedurally generated game-like environments | 1,021 |
| jshilong/gpt4roi | Training and deploying large language models on computer vision tasks using region-of-interest inputs | 506 |
| tianyi-lab/hallusionbench | An image-context reasoning benchmark designed to challenge large vision-language models and help improve their accuracy | 243 |
| bradyfu/video-mme | An evaluation framework providing a comprehensive benchmark of large language models' capabilities in video analysis | 406 |
| jaywongwang/densevideocaptioning | An implementation of a dense video captioning model with attention-based fusion and context gating | 148 |
| laomao0/bin | Software to interpolate blurry video frames and enhance image quality | 210 |
| mbzuai-oryx/video-chatgpt | A video conversation model that generates meaningful dialogue about videos using large vision and language models | 1,213 |