awesome-implicit-representations

A curated list of resources on implicit neural representations.

Awesome Implicit Neural Representations / Disclaimer

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
MetaSDF: Meta-Learning Signed Distance Functions
Implicit Neural Representations with Periodic Activation Functions
Inferring Semantic Information with 3D Neural Scene Representations
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering

Colabs

Implicit Neural Representations with Periodic Activation Functions shows how to fit images, audio signals, and even solve simple Partial Differential Equations with the SIREN architecture
Neural Radiance Fields (NeRF) shows how to fit a neural radiance field, allowing novel view synthesis of a single 3D scene
MetaSDF & MetaSiren shows how you can leverage gradient-based meta-learning to generalize across neural implicit representations
Neural Descriptor Fields shows how you can leverage globally conditioned neural implicit representations as self-supervised correspondence learners, enabling robotics imitation tasks

Papers / Implicit Neural Representations of Geometry

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation (Park et al. 2019); a minimal sketch of the shared coordinate-to-SDF MLP pattern follows this list
Occupancy Networks: Learning 3D Reconstruction in Function Space (Mescheder et al. 2019)
IM-Net: Learning Implicit Fields for Generative Shape Modeling (Chen et al. 2018)
SAL: Sign Agnostic Learning of Shapes from Raw Data (Atzmon et al. 2019) shows how we may learn SDFs from raw data (i.e., without ground-truth signed distance values)
Implicit Geometric Regularization for Learning Shapes (Gropp et al. 2020) likewise learns SDFs from raw data by regularizing the network's spatial gradient via an Eikonal term
Local Implicit Grid Representations for 3D Scenes (Jiang et al. 2020), along with concurrent work, proposed hybrid voxel-grid/implicit representations to fit large-scale 3D scenes
Implicit Neural Representations with Periodic Activation Functions (Sitzmann et al. 2020) demonstrates how we may parameterize room-scale 3D scenes via a single implicit neural representation by leveraging sinusoidal activation functions
Neural Unsigned Distance Fields for Implicit Function Learning (Chibane et al. 2020) proposes to learn unsigned distance fields from raw point clouds, doing away with the requirement of water-tight surfaces
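
The papers above share a common core pattern: a coordinate MLP maps a 3D query point, optionally concatenated with a per-shape latent code, to a signed distance or occupancy value. Below is a minimal PyTorch sketch of this pattern in a DeepSDF-style auto-decoder setting; layer widths, the latent-code size, and the dummy training data are illustrative assumptions, not any paper's exact configuration.

```python
# Minimal DeepSDF-style sketch (PyTorch). Shapes, sizes, and the auto-decoder
# setup are illustrative assumptions, not the papers' exact configurations.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),            # signed distance to the surface
        )

    def forward(self, xyz, latent):
        # xyz: (N, 3) query points, latent: (latent_dim,) per-shape code
        z = latent.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, z], dim=-1))

# Auto-decoder training step: the latent code is a free variable optimized
# jointly with the network weights against ground-truth SDF samples.
model = SDFNet()
latent = nn.Parameter(torch.zeros(256))
opt = torch.optim.Adam(list(model.parameters()) + [latent], lr=1e-4)

xyz = torch.rand(1024, 3) * 2 - 1          # dummy query points in [-1, 1]^3
sdf_gt = torch.rand(1024, 1) * 0.2 - 0.1   # dummy ground-truth distances
loss = (model(xyz, latent) - sdf_gt).abs().mean()
opt.zero_grad(); loss.backward(); opt.step()
```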

Papers / Implicit representations of Geometry and Appearance / From 2D supervision only (“inverse graphics”)

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations proposed to learn an implicit representation of 3D geometry and appearance given only 2D images, via a differentiable ray-marcher, and generalizes across 3D scenes for reconstruction from a single image via hyper-networks. This was demonstrated for single-object scenes, but also for simple room-scale scenes (see talk)
Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision (Niemeyer et al. 2020) replaces the LSTM-based ray-marcher of SRNs with a fully-connected neural network and analytical gradients, enabling easy extraction of the final 3D geometry
Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) proposes positional encodings, volumetric rendering & ray-direction conditioning for high-quality reconstruction of single scenes, and has spawned a large amount of follow-up work on volumetric rendering of 3D implicit representations; for a curated list of NeRF follow-up work specifically, see the awesome-NeRF list linked at the end of this page. A minimal volume-rendering sketch follows this list
SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images (Lin et al. 2020) demonstrates how we may train Scene Representation Networks from a single observation only
PixelNeRF (Yu et al. 2020) proposes to condition a NeRF on local features lying on camera rays, extracted from context images, as proposed in PIFu (see "from 3D supervision")
Multiview neural surface reconstruction by disentangling geometry and appearance (Yariv et al. 2020) demonstrates sphere-tracing with positional encodings for reconstruction of complex 3D scenes, and proposes a surface normal and view-direction dependent rendering network for capturing view-dependent effects
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering (Sitzmann et al. 2021) proposes to represent 3D scenes via their 360-degree light field parameterized as a neural implicit representation
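
As a rough illustration of the volumetric rendering used by NeRF and much of the follow-up work above, here is a minimal PyTorch sketch of positional encoding and alpha-compositing along a single ray. The sample counts, near/far bounds, and the dummy `field` callable are illustrative assumptions, and the sketch omits details such as hierarchical sampling and view-direction conditioning.

```python
# Minimal NeRF-style volume rendering along one ray (PyTorch). All constants
# and the dummy field are illustrative assumptions, not the paper's config.
import math
import torch

def positional_encoding(x, n_freqs=10):
    # Map each coordinate to a bank of sin/cos features at increasing frequencies.
    feats = [x]
    for k in range(n_freqs):
        feats += [torch.sin((2.0 ** k) * math.pi * x),
                  torch.cos((2.0 ** k) * math.pi * x)]
    return torch.cat(feats, dim=-1)

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    # field: callable mapping encoded 3D points -> (rgb, sigma)
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction            # (n_samples, 3) points along the ray
    rgb, sigma = field(positional_encoding(pts))     # colors (n_samples, 3), densities (n_samples,)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)          # per-segment opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                          # contribution of each sample to the pixel
    return (weights[:, None] * rgb).sum(dim=0)       # alpha-composited pixel color

# Dummy field for illustration: color from the first encoded features, constant density.
field = lambda enc: (torch.sigmoid(enc[:, :3]), torch.ones(enc.shape[0]))
color = render_ray(field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```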

Papers / Implicit representations of Geometry and Appearance / From 3D supervision

PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization (Saito et al. 2019) first introduced the concept of conditioning an implicit representation on local features extracted from context images. Follow-up work achieves photo-realistic, real-time re-rendering
Texture Fields: Learning Texture Representations in Function Space (Oechsle et al.)

Papers / Implicit representations of Geometry and Appearance / For dynamic scenes

Occupancy flow: 4d reconstruction by learning particle dynamics (Niemeyer et al. 2019) first proposed a space-time implicit representation by parameterizing a 4D warp field with a neural network; a minimal warp-field sketch follows this list
D-NeRF: Neural Radiance Fields for Dynamic Scenes
Deformable Neural Radiance Fields
Neural Radiance Flow for 4D View Synthesis and Video Processing
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
Space-time Neural Irradiance Fields for Free-Viewpoint Video
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video
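
Several of the dynamic-scene papers above follow a warp-field recipe: a deformation network maps a point and a time stamp to an offset into a shared canonical frame, where a static implicit representation is queried. The PyTorch sketch below illustrates this idea; all architectural details are illustrative assumptions.

```python
# Minimal warp-field sketch (PyTorch): deform (x, t) into a canonical frame and
# query a static implicit representation there. Architectures are illustrative.
import torch
import torch.nn as nn

deform = nn.Sequential(                 # (x, y, z, t) -> 3D offset into canonical space
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
canonical = nn.Sequential(              # static scene representation in canonical space
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 4),                  # e.g. RGB + density
)

def query(xyz, t):
    # xyz: (N, 3) points, t: (N, 1) time stamps
    offset = deform(torch.cat([xyz, t], dim=-1))
    return canonical(xyz + offset)

out = query(torch.rand(1024, 3), torch.rand(1024, 1))
```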

Papers / Symmetries in Implicit Neural Representations

Vector Neurons: A General Framework for SO(3)-Equivariant Networks (Deng et al. 2021) makes conditional implicit neural representations equivariant to SO(3), enabling the learning of a rotation-equivariant shape space and subsequent reconstruction of 3D geometry of single objects in unseen poses

Papers / Hybrid implicit / explicit (condition implicit on local features)

Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion
Local Implicit Grid Representations for 3D Scenes
Convolutional Occupancy Networks
Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
Neural Sparse Voxel Fields applies a similar concept to neural radiance fields
PixelNeRF (Yu et al. 2020) proposes to condition a NeRF on local features lying on camera rays, extracted from context images, as proposed in PIFu (see "from 3D supervision"); the sketch after this list illustrates the general local-conditioning pattern with an explicit feature grid
Local Deep Implicit Functions for 3D Shape
PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations
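
A minimal PyTorch sketch of the local-conditioning pattern behind these hybrid approaches: an explicit feature grid is trilinearly interpolated at the query point, and the interpolated feature conditions a coordinate MLP. The grid resolution, feature size, and decoder below are illustrative assumptions; papers such as PIFu and PixelNeRF instead sample features from 2D image planes along camera rays.

```python
# Minimal hybrid explicit/implicit sketch (PyTorch): interpolate a learnable
# feature grid at the query point and decode with an MLP. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicit(nn.Module):
    def __init__(self, feat_dim=32, grid_res=16):
        super().__init__()
        # Explicit part: a learnable 3D grid of latent features.
        self.grid = nn.Parameter(torch.randn(1, feat_dim, grid_res, grid_res, grid_res) * 0.01)
        # Implicit part: an MLP decoding (coordinate, local feature) -> occupancy logit.
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points in [-1, 1]^3
        grid_pts = xyz.view(1, -1, 1, 1, 3)                        # (1, N, 1, 1, 3)
        feats = F.grid_sample(self.grid, grid_pts, align_corners=True)
        feats = feats.reshape(self.grid.shape[1], -1).t()          # (N, feat_dim)
        return self.decoder(torch.cat([xyz, feats], dim=-1))       # (N, 1)

model = LocalImplicit()
occ = model(torch.rand(2048, 3) * 2 - 1)   # query a batch of random points
```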

Papers / Learning correspondence with Neural Implicit Representations

Inferring Semantic Information with 3D Neural Scene Representations leverages features learned by Scene Representation Networks for weakly supervised semantic segmentation of 3D objects
Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation leverages features learned by occupancy networks to establish correspondence, used for robotics imitation learning; a minimal descriptor-extraction sketch follows this list
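
As a rough illustration of the correspondence idea, the PyTorch sketch below queries a (presumed pretrained) implicit network at 3D points and uses its intermediate activations as point descriptors for matching. The network, its (omitted) shape conditioning, and the matching rule are illustrative assumptions rather than either paper's exact pipeline.

```python
# Minimal sketch: use an implicit network's intermediate activations as
# per-point descriptors and match points across two shapes (PyTorch).
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.l1 = nn.Linear(3, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, xyz, return_features=False):
        h1 = torch.relu(self.l1(xyz))
        h2 = torch.relu(self.l2(h1))
        if return_features:
            # Concatenated activations act as a per-point descriptor.
            return torch.cat([h1, h2], dim=-1)
        return self.out(h2)

net = OccupancyNet()                      # assumed pretrained for occupancy
query = torch.rand(1, 3)                  # a point of interest on object A
candidates = torch.rand(4096, 3)          # candidate points on object B
d_query = net(query, return_features=True)
d_cand = net(candidates, return_features=True)
match = candidates[torch.cdist(d_query, d_cand).argmin()]   # best-matching point
```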

Papers / Robotics Applications

3D Neural Scene Representations for Visuomotor Control learns latent state space for robotics tasks using neural rendering, and subsequently expresses policies in that latent space
Full-Body Visual Self-Modeling of Robot Morphologies uses neural implicit geometry representation for learning a robot self-model, enabling space occupancy queries for given joint angles
Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation leverages neural fields & vector neurons as an object-centric representation that enables imitation learning of pick-and-place tasks, generalizing across SE(3) poses

Papers / Generalization & Meta-Learning with Neural Implicit Representations

Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization (Saito et al. 2019) proposed to locally condition implicit representations on ray features extracted from context images
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations (Sitzmann et al. 2019) proposed meta-learning via hypernetworks
MetaSDF: Meta-Learning Signed Distance Functions (Sitzmann et al. 2020) proposed gradient-based meta-learning for implicit neural representations; a minimal inner-loop sketch follows this list
SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images (Lin et al. 2020) shows how to learn 3D implicit representations from single-image supervision only
Learned Initializations for Optimizing Coordinate-Based Neural Representations (Tancik et al. 2020) explored gradient-based meta-learning for NeRF
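
A minimal PyTorch sketch of the gradient-based (MAML-style) meta-learning used in this line of work: a shared initialization is specialized to a new signal with a few differentiable inner-loop gradient steps, and the initialization is updated through those steps in the outer loop. The tiny MLP, step sizes, and dummy data are illustrative assumptions; MetaSDF and Tancik et al. differ in details such as per-parameter learning rates.

```python
# Minimal MAML-style inner/outer loop for fitting implicit representations (PyTorch).
import torch

def init_params(sizes=(2, 64, 64, 1)):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        params += [torch.randn(n_out, n_in) * (1.0 / n_in ** 0.5), torch.zeros(n_out)]
    return [p.requires_grad_() for p in params]

def forward(params, x):
    for i in range(0, len(params) - 2, 2):
        x = torch.relu(x @ params[i].t() + params[i + 1])
    return x @ params[-2].t() + params[-1]

def inner_adapt(params, coords, targets, steps=3, lr=1e-2):
    # A few gradient steps specialize the shared initialization to one signal;
    # create_graph=True keeps the adaptation differentiable for the outer loop.
    fast = [p for p in params]
    for _ in range(steps):
        loss = ((forward(fast, coords) - targets) ** 2).mean()
        grads = torch.autograd.grad(loss, fast, create_graph=True)
        fast = [p - lr * g for p, g in zip(fast, grads)]
    return fast

# Outer loop (one step, dummy data): meta-learn an initialization such that a
# few inner steps fit a new signal well.
params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-4)
coords, targets = torch.rand(256, 2), torch.rand(256, 1)
fast = inner_adapt(params, coords, targets)
meta_loss = ((forward(fast, coords) - targets) ** 2).mean()
meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
```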

Papers / Fitting high-frequency detail with positional encoding & periodic nonlinearities

Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) proposed positional encodings
Implicit Neural Representations with Periodic Activation Functions (Sitzmann et al. 2020) proposed implicit representations with periodic nonlinearities; a minimal sine-layer sketch follows this list
Fourier features let networks learn high frequency functions in low dimensional domains (Tancik et al. 2020) explores positional encodings in an NTK framework
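
A minimal PyTorch sketch of a SIREN-style sine layer: the frequency factor omega_0 = 30 and the uniform weight initialization follow the scheme described in the SIREN paper, while the network widths are illustrative assumptions.

```python
# Minimal SIREN-style layer and network (PyTorch). Widths are illustrative.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, n_in, n_out, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(n_in, n_out)
        with torch.no_grad():
            # SIREN initialization: wider for the first layer, scaled by omega_0 otherwise.
            bound = 1.0 / n_in if is_first else (6.0 / n_in) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A SIREN mapping 2D pixel coordinates to RGB; the last layer is plain linear.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),
)
rgb = siren(torch.rand(1024, 2) * 2 - 1)
```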

Papers / Implicit Neural Representations of Images

Compositional Pattern Producing Networks: A Novel Abstraction of Development (Stanley 2007) first proposed to parameterize images implicitly via neural networks; a minimal image-fitting sketch follows this list
Implicit Neural Representations with Periodic Activation Functions (Sitzmann et al. 2020) proposed to generalize across implicit representations of images via hypernetworks
X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation (Bemana et al. 2020) parameterizes the Jacobian of pixel position with respect to view, time, illumination, etc. to naturally interpolate images
Learning Continuous Image Representation with Local Implicit Image Function (Chen et al. 2020) represents images via an implicit function conditioned on local features, enabling continuous super-resolution
Alias-Free Generative Adversarial Networks (StyleGAN3) uses a FiLM-conditioned MLP as an image GAN
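
A minimal PyTorch sketch of the basic idea shared by these works: an MLP maps normalized pixel coordinates to RGB and is fit to a single image, yielding a continuous image representation. The image size, network, and training schedule are illustrative assumptions; a plain ReLU MLP like this one captures only low frequencies, which is what the positional-encoding and periodic-activation papers above address.

```python
# Minimal implicit image representation (PyTorch): fit (x, y) -> RGB to one image.
import torch
import torch.nn as nn

H, W = 64, 64
image = torch.rand(H, W, 3)                       # stand-in for a real image

ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)    # (H*W, 2) pixel coordinates
targets = image.reshape(-1, 3)                            # (H*W, 3) pixel colors

mlp = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(200):
    loss = ((mlp(coords) - targets) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

reconstruction = mlp(coords).reshape(H, W, 3)     # continuous image, sampled on the grid
```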

Papers / Composing implicit neural representations

GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields (Niemeyer et al. 2021)
Object-centric Neural Rendering (Guo et al. 2020)
Unsupervised Discovery of Object Radiance Fields (Yu et al. 2021)

Papers / Implicit Representations for Partial Differential Equations & Boundary Value Problems

Implicit Geometric Regularization for Learning Shapes (Gropp et al. 2020) learns SDFs by enforcing the Eikonal equation as a loss term; a minimal Eikonal-regularizer sketch follows this list
Implicit Neural Representations with Periodic Activation Functions (Sitzmann et al. 2020) proposes to leverage the periodic sine as an activation function, enabling the parameterization of functions with non-trivial higher-order derivatives and the solution of complicated PDEs
AutoInt: Automatic Integration for Fast Neural Volume Rendering (Lindell et al. 2020)
MeshfreeFlowNet: Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework (Jiang et al. 2020) performs super-resolution for spatio-temporal flow functions using local implicit representations, with auxiliary PDE losses
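
A minimal PyTorch sketch of an Eikonal regularizer in the spirit of Gropp et al.: the loss pushes the norm of the spatial gradient of the predicted distance toward 1 at randomly sampled points, alongside a data term on raw surface samples. The network, sampling scheme, and loss weighting are illustrative assumptions.

```python
# Minimal Eikonal regularization sketch (PyTorch). Network and weights are illustrative.
import torch
import torch.nn as nn

sdf_net = nn.Sequential(
    nn.Linear(3, 256), nn.Softplus(beta=100),
    nn.Linear(256, 256), nn.Softplus(beta=100),
    nn.Linear(256, 1),
)

def eikonal_loss(net, n_points=1024):
    x = (torch.rand(n_points, 3) * 2 - 1).requires_grad_(True)
    d = net(x)
    # Spatial gradient of the predicted distance w.r.t. the query points.
    grad = torch.autograd.grad(d, x, grad_outputs=torch.ones_like(d), create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

surface_pts = torch.rand(512, 3) * 2 - 1          # stand-in for raw point-cloud samples
loss = sdf_net(surface_pts).abs().mean() + 0.1 * eikonal_loss(sdf_net)
loss.backward()
```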

Papers / Generative Adversarial Networks with Implicit Representations / For 3D

Generative Radiance Fields for 3D-Aware Image Synthesis (Schwarz et al. 2020)
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis (Chan et al. 2020)
Unconstrained Scene Generation with Locally Conditioned Radiance Fields (DeVries et al. 2021) leverages a hybrid implicit-explicit representation: a classic convolutional GAN generates a 2D floorplan feature grid, and a 3D neural implicit representation is conditioned on these features, enabling the generation of room-scale 3D scenes
Alias-Free Generative Adversarial Networks (StyleGAN3) uses a FiLM-conditioned MLP as an image GAN; a minimal FiLM-conditioning sketch follows this list
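
A minimal PyTorch sketch of FiLM conditioning as used in pi-GAN-style generators: a latent code is mapped to per-layer scales and shifts that modulate the activations of a coordinate MLP with sine nonlinearities. Widths, the mapping network, and the output parameterization are illustrative assumptions.

```python
# Minimal FiLM-conditioned coordinate MLP (PyTorch). Sizes are illustrative.
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    def __init__(self, n_in, n_out, latent_dim):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)
        self.film = nn.Linear(latent_dim, 2 * n_out)   # predicts per-layer (scale, shift)

    def forward(self, x, z):
        scale, shift = self.film(z).chunk(2, dim=-1)
        return torch.sin(scale * self.linear(x) + shift)

class FiLMGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.layers = nn.ModuleList([
            FiLMLayer(3, 256, latent_dim),
            FiLMLayer(256, 256, latent_dim),
        ])
        self.out = nn.Linear(256, 4)                   # e.g. RGB + density

    def forward(self, xyz, z):
        h = xyz
        for layer in self.layers:
            h = layer(h, z)
        return self.out(h)

gen = FiLMGenerator()
rgb_sigma = gen(torch.rand(1024, 3) * 2 - 1, torch.randn(128))
```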

Papers / Generative Adversarial Networks with Implicit Representations / For 2D

Adversarial Generation of Continuous Images (Skorokhodov et al. 2020)
Learning Continuous Image Representation with Local Implicit Image Function (Chen et al. 2020)
Image Generators with Conditionally-Independent Pixel Synthesis (Anokhin et al. 2020)
Alias-Free GAN (Karras et al. 2021)

Papers / Image-to-image translation

Spatially-Adaptive Pixelwise Networks for Fast Image Translation (Shaham et al. 2020) leverages a hybrid implicit-explicit representation for fast high-resolution image-to-image translation

Papers / Articulated representations

NASA: Neural Articulated Shape Approximation (Deng et al. 2020) represents an articulated object as a composition of local, deformable implicit elements

Talks

Vincent Sitzmann: Implicit Neural Scene Representations (Scene Representation Networks, MetaSDF, Semantic Segmentation with Implicit Neural Representations, SIREN)
Andreas Geiger: Neural Implicit Representations for 3D Vision (Occupancy Networks, Texture Fields, Occupancy Flow, Differentiable Volumetric Rendering, GRAF)
Gerard Pons-Moll: Shape Representations: Parametric Meshes vs Implicit Functions
Yaron Lipman: Implicit Neural Representations
awesome-NeRF: a curated list of implicit-representation papers focused specifically on neural radiance fields (NeRF)
