
SIGGRAPH 2018 Papers: Machine Learning, Graphics, and Rendering


While the SIGGRAPH 2018 talks and exhibitor sessions were dominated by ray tracing, research was skewed toward machine learning.


The papers selected below skew even more heavily toward machine learning: this is my personal list of papers of interest, and I'm trying to deepen my understanding of the intersection of graphics and machine learning.

Favorite Papers

The following are my favorite papers from the conference. I generally chose them because the result, the performance, or the intuition behind the paper impressed me.

  • [ link video ] Non-Stationary Texture Synthesis by Adversarial Expansion. There was some buzz around the conference regarding this paper, as it beautifully synthesizes textures with non-trivial patterns.
  • [ link video ] Noise2Noise: Learning Image Restoration without Clean Data. Originally presented at ICML, Noise2Noise made multiple appearances in SIGGRAPH 2018 talks. The key intuition is that a network can be trained on pairs of noisy images (rather than noisy + clean pairs) and still learn a function that maps a noisy image to a clean one; a minimal training sketch follows this list. The result is an order-of-magnitude reduction in the computational cost of generating datasets for image-denoising research. Further, the approach generalizes to non-zero-mean corruption, such as text overlays and other image artifacts.
  • [ link video ] Efficient Rendering of Layered Materials Using an Atomic Decomposition with Statistical Operators. Layered materials rendered efficiently in Unity using a great deal of heavy math. The paper's performance looks great; however, the technique has not yet been adopted for the Unity HDRP due to performance concerns (as stated by Sébastien Lagarde in the Advances in Real-Time Rendering course).
  • [ link ] Deep Convolutional Priors for Indoor Scene Synthesis. Iteratively builds a prior distribution over where objects might be placed in a room. I added this to my favorites list for its potential practical applications.
  • [ link video ] tempoGAN: A Temporally Coherent Volumetric GAN for Super-Resolution Fluid Flow. Restricted to 4x upsampling, but produces a temporally stable result with aesthetically pleasing artifacts. It reduces simulation time from hours to single-digit minutes, brings the time complexity down to linear scaling, and enables parallel execution of the simulation.
  • [ link ] Single-Image SVBRDF Capture with a Rendering-Aware Deep Network. Generates material maps from a single cell-phone photo. Quality was improved by using a differentiable renderer to formulate the loss in rendered-image space while retaining the ability to backpropagate through the network (see the loss sketch after this list).
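
As a rough illustration of the Noise2Noise idea, here is a minimal, hedged training sketch in PyTorch. The `TinyDenoiser` network and the `train_step` helper are stand-ins of my own, not the paper's architecture; the point is only that both the input and the target of the L2 loss are independently noisy versions of the same image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in CNN; the paper uses a much deeper U-Net-style network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_step(model, optimizer, noisy_input, noisy_target):
    # Key point: noisy_target is a *second* noisy realization of the same
    # underlying image, not a clean reference. With zero-mean noise and an
    # L2 loss, the expected minimizer is still the clean image.
    optimizer.zero_grad()
    loss = F.mse_loss(model(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```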
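The rendering-aware loss from the single-image SVBRDF paper can be sketched in the same spirit. The `render_lambertian` function below is a deliberately oversimplified, hypothetical stand-in for the paper's differentiable renderer (which handles full SVBRDFs, including specular terms); it only shows the structure of comparing predicted and ground-truth material maps in rendered-image space so that gradients flow back into the prediction network.

```python
import torch

def render_lambertian(albedo, normals, light_dir):
    # albedo, normals: (B, 3, H, W) tensors; light_dir: unit-length (3,) tensor.
    n_dot_l = (normals * light_dir.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)
    return albedo * n_dot_l.clamp(min=0.0)

def rendering_loss(pred_maps, gt_maps, light_dirs):
    # Average L1 difference between renderings of the predicted and
    # ground-truth maps under several light directions.
    loss = 0.0
    for light in light_dirs:
        loss = loss + (render_lambertian(*pred_maps, light)
                       - render_lambertian(*gt_maps, light)).abs().mean()
    return loss / len(light_dirs)
```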

Papers Fast Forward Picks

The SIGGRAPH app had a new feature this year that let attendees flag papers during the papers fast forward. I flagged the following papers as future reading material.

  • [ link ] Progressive Parameterizations
  • [ link ] Deep Exemplar-based Colorization
  • [ link video ] Non-Stationary Texture Synthesis by Adversarial Expansion
  • [ link ] Laplacian Kernel Splatting for Efficient Depth-of-Field and Motion Blur Synthesis or Reconstruction
  • [ link ] Deep Appearance Models for Face Rendering
  • [ link ] Neural Best-Buddies: Sparse Cross-Domain Correspondence
  • [ link ] Deep Convolutional Priors for Indoor Scene Synthesis
  • [ link ] Point Convolutional Neural Networks by Extension Operators
  • [ link pdf ] Learning Local Shape Descriptors from Part Correspondences with Multi-View Convolutional Networks
  • [ link video ] Efficient Rendering of Layered Materials Using an Atomic Decomposition with Statistical Operators
  • [ link ] Gaussian Material Synthesis
  • [ link ] Appearance Modeling via Proxy-To-Image Alignment
  • [ link ] Example-Based Turbulence Style Transfer
  • [ link ] Modeling n-Symmetry Vector Fields Using Higher-Order Energies
  • Water Surface Wavelets
  • [ link video ] tempoGAN: A Temporally Coherent Volumetric GAN for Super-Resolution Fluid Flow
  • [ link ] Instant 3D Photography
  • [ link ] Full 3D Reconstruction of Transparent Objects
  • [ link ] Optimal Cone Singularities for Conformal Flattening
  • [ link ] Spoke-Darts for High-Dimensional Blue Noise Sampling
  • [ link ] FontCode: Embedding Information in Text Documents using Glyph Perturbation
  • [ link ] What Characterizes Personalities of Graphic Designs
  • [ link ] Scale-Aware Black-and-White Abstraction of 3D Shapes
  • [ link ] Fast and Deep Deformation Approximations
  • [ link ] Denoising with Kernel Prediction and Asymmetric Loss Functions
  • [ link ] Deep Image-Based Relighting from Optimal Sparse Samples
  • [ link ] Efficient Reflectance Capture Using an Autoencoder
  • [ link ] Single-Image SVBRDF Capture with a Rendering-Aware Deep Network
  • [ link ] Autocomplete 3D Sculpting
  • [ link ] Differentiable Programming for Image Processing and Deep Learning in Halide
  • [ link ] A High-Performance Software Graphics Pipeline Architecture for the GPU
  • [ link ] Learning Basketball Dribbling Skills Using Trajectory Optimization and Deep RL
  • [ link ] DeepMimic: Example-Guided Deep RL of Physics-Based Character Skills
  • [ link ] Learning Symmetric and Low-Energy Locomotion
  • [ link video ] Mode-Adaptive Neural Networks for Quadruped Motion Control
  • [ link ] Semi-Supervised Co-Analysis of 3D Shape Styles from Projected Lines
  • [ link ] Predictive and Generative Neural Networks for Object Functionality
  • [ link ] P2P-NET: Bidirectional Point Displacement Net for Shape Transform
  • [ link video ] Robust Solving of Optical Motion Capture Data by Denoising
  • [ link video ] ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations
