Apple AI researchers introduce GAUDI: a generative model that captures the distribution of complex and realistic 3D scenes

Advances in 3D generative models are desperately needed if learning systems are to understand and construct 3D spaces. The researchers pay homage to Antoni Gaudí, after whom their approach is named, citing his remark that "the creation continues incessantly through the media of man." They are interested in generative models that can capture the distribution of 3D scenes and then render views of scenes sampled from that learned distribution. Extending such generative models to conditional inference problems could dramatically improve a variety of machine learning and computer vision tasks, for example by sampling plausible scene realizations consistent with a written description or a partial observation.

Generative Adversarial Networks (GANs) have recently been used for the generative modeling of 3D objects and scenes, with the generator parameterizing a radiance field: a function that takes the coordinates of a point in 3D space (and, typically, a viewing direction) and returns a density scalar and an RGB value for that point. Images can be created from the radiance field produced by the model by plugging the queried 3D points into the volume rendering equation and projecting them onto any 2D camera view. Beyond generation itself, such models would be beneficial in SLAM, model-based reinforcement learning, and 3D content development.
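To make the rendering step concrete, here is a minimal sketch of the volume rendering quadrature that turns density and color samples along a camera ray into a pixel color. The `toy_field` function is a hypothetical stand-in for a learned radiance field, not GAUDI's actual decoder:

```python
import torch

def render_ray(field, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Composite a pixel color along one ray: C = sum_i T_i * alpha_i * c_i."""
    t = torch.linspace(near, far, n_samples)           # depths sampled along the ray
    pts = origin + t[:, None] * direction              # (n_samples, 3) query points
    sigma, rgb = field(pts)                            # density (n,) and color (n, 3)
    delta = (far - near) / (n_samples - 1)             # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)            # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)  # transmittance after each sample
    trans = torch.cat([torch.ones(1), trans[:-1]])     # shift: T_i depends on samples < i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)         # composited RGB for the pixel

def toy_field(pts):
    """Toy radiance field: a soft red sphere of radius 1 centered at the origin."""
    sigma = 10.0 * torch.relu(1.0 - pts.norm(dim=-1))
    rgb = torch.tensor([1.0, 0.0, 0.0]).expand(len(pts), 3)
    return sigma, rgb

pixel = render_ray(toy_field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(pixel)  # mostly red: the ray passes through the sphere
```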

While effective on small or simple 3D datasets, such as single objects or a limited number of indoor scenes, these approaches rely on data with a canonical coordinate system. GANs are difficult to train on data for which no canonical coordinate system exists, as is the case for 3D scenes, and they suffer from training pathologies such as mode collapse. Moreover, when modeling distributions of 3D objects, camera poses are typically assumed to be sampled from a distribution shared across objects (usually over SO(3)). This is not the case when modeling distributions of scenes.

This is because each scene induces its own distribution of viable camera poses, determined by the structure and location of walls and other objects. Moreover, for scenes this distribution can include arbitrary poses in SE(3). The point becomes clearer when camera poses are viewed as a trajectory through the scene. In GAUDI, each trajectory, i.e. a collection of posed images of a 3D scene, is converted into a latent representation that disentangles the camera path from the radiance field that represents the 3D scene. The researchers treat these latent representations as free parameters and pose an optimization problem in which a reconstruction objective optimizes the latent representation of each trajectory.
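As a rough illustration of the free-parameter approach, the sketch below jointly optimizes a table of per-trajectory latents and a decoder under a reconstruction loss. The decoder and the targets are placeholders (in the paper, the targets are rendered views and camera poses, decoded by separate networks), and the small noise perturbation hints at the denoising objective described in the contributions; this is not GAUDI's actual architecture:

```python
import torch

n_traj, latent_dim, target_dim = 1000, 64, 128

# One learnable latent per trajectory, treated as a free parameter
# rather than predicted by an encoder that pools over views.
latents = torch.nn.Parameter(0.01 * torch.randn(n_traj, latent_dim))

# Placeholder decoder standing in for GAUDI's pose and radiance-field decoders.
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, target_dim),
)

# Stand-in reconstruction targets for each trajectory.
targets = torch.randn(n_traj, target_dim)

opt = torch.optim.Adam([latents, *decoder.parameters()], lr=1e-3)
for step in range(200):
    idx = torch.randint(0, n_traj, (32,))                  # minibatch of trajectories
    z = latents[idx] + 0.01 * torch.randn(32, latent_dim)  # small denoising perturbation
    loss = torch.nn.functional.mse_loss(decoder(z), targets[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```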

This simple training method scales to thousands of trajectories. Interpreting the latent representation of each trajectory as a free parameter also makes it easy to handle a large and variable number of views per trajectory, without requiring a complex encoder architecture to pool over many views. After optimizing the latent representations for an observed empirical distribution of trajectories, the researchers fit a generative model over the set of latents. In the unconditional case, the model samples radiance fields from the prior it has learned and can create new scenes by interpolating in latent space. In the conditional case, it generates radiance fields compatible with conditioning variables available at training time, such as images or text prompts.
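The sketch below illustrates unconditional generation and latent-space interpolation under simplifying assumptions: GAUDI fits a denoising diffusion model over the optimized latents, while a standard Gaussian stands in for that prior here, and `decode_scene` is a hypothetical decoding step:

```python
import torch

def interpolate(z_a, z_b, n_steps=5):
    """Linearly interpolate between two trajectory latents."""
    alphas = torch.linspace(0.0, 1.0, n_steps).unsqueeze(-1)  # (n_steps, 1)
    return (1.0 - alphas) * z_a + alphas * z_b                # (n_steps, latent_dim)

latent_dim = 64
# Placeholder prior samples; GAUDI would draw these from its learned diffusion prior.
z_a, z_b = torch.randn(latent_dim), torch.randn(latent_dim)
path = interpolate(z_a, z_b)  # latents for a smooth sequence of scenes

# Each latent along `path` would then be decoded into a radiance field and rendered,
# e.g.: scenes = [decode_scene(z) for z in path]
```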

These contributions can be summarized as follows:

  • They scale 3D scene generation to thousands of indoor scenes containing hundreds of thousands of images, without encountering mode collapse or canonical-orientation issues during training.
  • They develop a novel denoising optimization objective that identifies latent representations which simultaneously, and in disentangled form, describe a radiance field and the camera poses.
  • The method achieves state-of-the-art generation performance across a variety of datasets.
  • The method supports both unconditional and conditional generation, conditioned on text or images.

The code implementation is available on Apple’s GitHub repository.

This article is a research summary written by Marktechpost staff based on the research paper 'GAUDI: A Neural Architect for Immersive 3D Scene Generation'. All credit for this research goes to the researchers on this project. Check out the paper and the GitHub link.



James G. Williams is a consultant intern in content writing at Marktechpost.