zero123plus
Image generator
A multi-view image generation model that takes a single image as input and produces a consistent set of views of the same object from different camera perspectives (see the usage sketch below).
Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
2k stars
29 watching
127 forks
Language: Python
last commit: about 1 year ago
Topics: 3d, 3d-graphics, aigc, diffusers, diffusion-models, image-to-3d, research-project, text-to-3d
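Because the project is distributed as a diffusers custom pipeline (note the diffusers and diffusion-models topics above), inference fits in a few lines. The snippet below is a minimal sketch, assuming the Hugging Face model IDs `sudo-ai/zero123plus-v1.1` and `sudo-ai/zero123plus-pipeline` published by the upstream project, a CUDA-capable GPU, and a local `input.png`; check the repository README for the current model version and recommended settings.

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

# Load Zero123++ as a diffusers custom pipeline.
# Model IDs are assumed from the upstream project; newer versions may exist.
pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",
    custom_pipeline="sudo-ai/zero123plus-pipeline",
    torch_dtype=torch.float16,
)

# Use an Euler-ancestral scheduler with trailing timestep spacing,
# as suggested in the project's documentation.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)
pipeline.to("cuda")

# Condition on a single RGB(A) image of the object.
cond = Image.open("input.png")

# The pipeline returns one image containing a fixed grid of novel views.
result = pipeline(cond, num_inference_steps=75).images[0]
result.save("multiview_output.png")
```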
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A Python library for running state-of-the-art diffusion models, supporting text-to-image and image-to-image generation. | 399 |
| | An open-source project that generates 3D mesh models from single images in under a minute. | 1,582 |
| | A Blender add-on that integrates Stable Diffusion for AI-generated image rendering. | 1,111 |
| | An end-to-end neural network model that generates images from scene graphs by processing the input graph through a series of neural network layers. | 1,302 |
| | A service for generating new images by mixing the content of one input image with the style of another. | 51 |
| | An autoregressive text-to-image model that generates photorealistic images from text prompts and leverages advances in large language models. | 1,554 |
| | A generative model for 3D multi-object scenes built on a network architecture inspired by autoencoders and generative adversarial networks. | 103 |
| | A system for generating 3D meshes from input images using learned implicit representations. | 805 |
| | Generates detailed images by combining multiple diffusion processes focused on different regions of the image. | 418 |
| | An integrated framework for training custom generative AI models. | 246 |
| | Automates large batches of AI-generated artwork locally using GPU acceleration. | 633 |
| | An image generation system built around CLIP and GAN techniques. | 1,030 |
| | A Discord bot and interface for the Stable Diffusion image generation tool. | 280 |
| | An AI-powered plugin for Krita that enables img2img generation using Stable Diffusion models. | 445 |
| | A model that generates image patches from natural language descriptions by iteratively drawing and attending to relevant words. | 594 |