NVIDIA is bringing a host of advancements in rendering, simulation, and generative AI to SIGGRAPH 2024, the premier computer graphics conference, taking place July 28-August 1 in Denver.
More than 20 NVIDIA Research papers feature innovations advancing synthetic data generators and inverse rendering tools that can help train next-generation models. NVIDIA's AI research is making simulation better by boosting image quality and unlocking new ways to create 3D representations of real or imagined worlds.
The papers focus on diffusion models for visual generative AI, physics-based simulation, and increasingly realistic AI-powered rendering. They include two Best Paper award winners and collaborations with universities in the United States, Canada, China, Israel, and Japan, as well as with researchers at companies such as Adobe and Roblox.
These initiatives will help create tools that developers and businesses can use to generate complex virtual objects, characters, and environments with synthetic data generation. These creations can then be used to tell powerful visual stories, help scientists understand natural phenomena, or support the simulation-based training of robots and autonomous vehicles.
Diffusion models improve texture painting and text-to-image generation
Diffusion models, a popular tool for transforming text prompts into images, can help artists, designers, and other creators quickly generate visual elements for storyboards or production, reducing the time it takes to bring ideas to life.
Two NVIDIA-authored papers are advancing the capabilities of these generative AI models.
ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images with a consistent main character, an essential capability for narrative use cases such as illustrating a comic strip or developing a storyboard. The researchers' approach introduces a technique called subject-driven shared attention, which reduces the time it takes to generate consistent images from 13 minutes to about 30 seconds.
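The article doesn't include code, but the core idea of shared attention, letting each generated image attend to the subject's tokens in the other images of a batch, can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration; the function name, tensor shapes, and masking scheme are assumptions for illustration, not the authors' implementation.

```python
import torch

def shared_self_attention(q, k, v, subject_mask):
    """Toy batch-shared self-attention with a subject mask.

    q, k, v: (batch, tokens, dim) tensors from a diffusion model's
    self-attention layer; subject_mask: (batch, tokens) bool marking
    which tokens belong to the shared subject.
    """
    b, t, dim = q.shape
    # Pool the subject tokens from every image in the batch...
    shared_k = k[subject_mask].unsqueeze(0).expand(b, -1, -1)
    shared_v = v[subject_mask].unsqueeze(0).expand(b, -1, -1)
    # ...and let each image attend to its own tokens plus that pool,
    # nudging all images toward the same subject appearance.
    k_ext = torch.cat([k, shared_k], dim=1)
    v_ext = torch.cat([v, shared_v], dim=1)
    attn = torch.softmax(q @ k_ext.transpose(1, 2) / dim**0.5, dim=-1)
    return attn @ v_ext
```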
NVIDIA researchers last year won the Best in Show award at SIGGRAPH's Real-Time Live event for AI models that turn text or image prompts into custom textured materials. This year, they're presenting a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, enabling artists to paint in real time with complex textures based on any reference image.
Driving the development of physics-based simulation
Graphics researchers are bridging the gap between physical objects and their virtual representations with physically-based simulation—a range of techniques for making digital objects and characters move the same way they would in the real world.
Several NVIDIA research papers present advances in the field, including SuperPADL, a project that tackles the challenge of simulating complex human motions from text prompts.
Using a combination of reinforcement learning and supervised learning, the researchers demonstrated how the SuperPADL framework can be trained to reproduce the motions of more than 5,000 skills, and it can run in real time on a consumer-grade NVIDIA GPU.
Another NVIDIA paper introduces a neural physics method that applies AI to learn how objects, whether represented as a 3D mesh, a NeRF, or a solid shape generated by a text-to-3D model, would behave as they move through an environment.
A paper co-authored with researchers at Carnegie Mellon University develops a new kind of renderer, one that, instead of modeling physical light, can perform thermal, electrostatic, and fluid-mechanics analyses. Named one of the top five papers at SIGGRAPH, the method is easy to parallelize and doesn't require cumbersome model cleanup, offering new opportunities to speed up engineering design cycles.
In one example, the renderer performs a thermal analysis of the Mars Curiosity rover, where keeping temperatures within a specific range is critical to mission success.
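The article doesn't detail the method, but Monte Carlo "rendering-style" PDE solvers in this family are often built on the walk-on-spheres estimator, which solves a Laplace (steady-state heat) problem by taking random jumps toward the boundary; every walk is independent, which is what makes such solvers trivially parallel. Here is a minimal sketch on a disk-shaped domain, with all names and the setup assumed for illustration:

```python
import numpy as np

def walk_on_spheres(x0, boundary_value, R=1.0, eps=1e-3, n_walks=2000):
    """Estimate the solution of the Laplace equation (steady-state
    heat) at interior point x0 of a disk of radius R centered at the
    origin, given a temperature boundary_value(p) on the rim."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):           # each walk is independent: trivially parallel
        x = np.array(x0, dtype=float)
        while True:
            r = R - np.linalg.norm(x)  # distance to the nearest boundary point
            if r < eps:                # close enough: read off the boundary value
                total += boundary_value(x / np.linalg.norm(x) * R)
                break
            # Jump to a uniform point on the largest circle around x
            # that stays inside the domain.
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

# Example: rim held at 1.0 on the right half and 0.0 on the left.
print(walk_on_spheres([0.3, 0.0], lambda p: 1.0 if p[0] > 0 else 0.0))
```

Because no walk depends on any other, the estimator needs no mesh of the interior, which mirrors the article's point about skipping cumbersome model cleanup.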
Additional simulation papers introduce a more efficient technique for modeling hair strands and a pipeline that accelerates fluid simulation by 10x.
Raising the bar for rendering realism and diffraction simulation
Another set of papers authored by NVIDIA introduces new techniques for modeling visible light up to 25 times faster and simulating diffraction effects (such as those used in radar simulation to train self-driving cars) up to 1,000 times faster.
A paper by NVIDIA and University of Waterloo researchers tackles free-space diffraction, an optical phenomenon in which light spreads or bends around the edges of objects. The team's method can be integrated with path-tracing workflows to increase the efficiency of simulating diffraction in complex scenes, offering up to a 1,000x speedup. Beyond modeling visible light, it could also be used to simulate the longer wavelengths of radar, sound, or radio waves.
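For readers unfamiliar with the effect being simulated, textbook Fraunhofer theory predicts how light spreads after passing an aperture. The snippet below evaluates the classic single-slit intensity pattern purely as an illustration of diffraction; it is unrelated to the paper's far more general edge-based method.

```python
import numpy as np

def single_slit_intensity(theta, slit_width, wavelength, i0=1.0):
    """Fraunhofer intensity behind a single slit:
    I(theta) = I0 * (sin(beta) / beta)**2, beta = pi * a * sin(theta) / lambda."""
    beta = np.pi * slit_width * np.sin(theta) / wavelength
    # np.sinc(x) computes sin(pi * x) / (pi * x), so pass beta / pi.
    return i0 * np.sinc(beta / np.pi) ** 2

# Green light (550 nm) through a 10-micron slit, near the optical axis.
angles = np.linspace(-0.2, 0.2, 9)
print(single_slit_intensity(angles, slit_width=10e-6, wavelength=550e-9))
```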
Path tracing samples numerous paths, light rays bouncing in multiple directions across a scene, to create a photorealistic image. Two SIGGRAPH papers improve the sampling quality of ReSTIR, a path-tracing algorithm first introduced by NVIDIA and Dartmouth College researchers at SIGGRAPH 2020 that has been key to bringing path tracing to games and other real-time rendering products.
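At ReSTIR's core is resampled importance sampling: draw many cheap candidate samples, then keep one with probability proportional to how well it matches the integrand. Here is a minimal, hypothetical sketch of that building block (the names and helper signatures are assumptions, not NVIDIA's implementation):

```python
import random

def ris_sample(draw_candidate, target, m=32, rng=random):
    """Resampled importance sampling: draw m cheap candidates, keep one
    with probability proportional to target(x) / p(x).

    draw_candidate() returns (x, p), with p the source density at x;
    target(x) is the unnormalized desired density, e.g. unshadowed
    light contribution. Returns the kept sample and its contribution
    weight W; the ReSTIR papers reuse these weighted reservoirs
    across neighboring pixels and frames.
    """
    xs, ws = [], []
    for _ in range(m):
        x, p = draw_candidate()
        xs.append(x)
        ws.append(target(x) / p)          # resampling weight
    total = sum(ws)
    # Pick one candidate proportionally to its weight.
    r, acc, chosen = rng.random() * total, 0.0, xs[-1]
    for x, w in zip(xs, ws):
        acc += w
        if acc >= r:
            chosen = x
            break
    return chosen, total / (m * target(chosen))
```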
One of these papers, a collaboration with the University of Utah, shares a new way to reuse calculated paths that increases the effective sample count by up to 25 times, significantly improving image quality. The other improves sample quality by randomly mutating a subset of a light's path, which helps denoising algorithms perform better and produces fewer visual artifacts in the final render.
Teaching AI to think in 3D
NVIDIA researchers are also showcasing multi-purpose AI tools for 3D rendering and design at SIGGRAPH.
One paper presents fVDB, a GPU-optimized framework for 3D deep learning that matches the scale of the real world. The fVDB framework provides AI infrastructure for the large spatial scale and high resolution of city-scale 3D models and NeRFs, as well as for segmentation and reconstruction of large-scale point clouds.
A Best Technical Paper award winner, written in collaboration with Dartmouth College researchers, presents a theory for representing how 3D objects interact with light, unifying a diverse spectrum of appearances into a single model.
And a collaboration with the University of Tokyo, the University of Toronto, and Adobe Research introduces an algorithm that generates smooth, space-filling curves on 3D meshes in real time. Where previous methods took hours, this framework runs in seconds and gives users a high degree of control over the output, enabling interactive design.
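As a reminder of what a space-filling curve is, the classic 2D Hilbert curve visits every cell of a grid exactly once while keeping nearby indices spatially close; the paper's contribution is generating smooth curves of this kind directly on 3D meshes, interactively. A textbook index-to-coordinate sketch, shown only to illustrate the concept:

```python
def hilbert_d2xy(order, d):
    """Map index d along a 2D Hilbert curve covering a 2**order x
    2**order grid to (x, y) cell coordinates (classic construction)."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# A 16x16 curve: 256 cells, each visited exactly once.
points = [hilbert_d2xy(4, d) for d in range(256)]
```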
NVIDIA at SIGGRAPH
Learn more about NVIDIA at SIGGRAPH, with special events including a fireside chat between NVIDIA founder and CEO Jensen Huang and Lauren Goode, senior writer at WIRED, on the impact of robotics and AI on industrial digitalization.
NVIDIA researchers will also take part in NVIDIA OpenUSD Day, a full-day event showcasing how developers and industry leaders are adopting and evolving OpenUSD to build AI-enabled 3D pipelines.
NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. See more of their latest work.