Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity

Adding noise is easy; what about denoising? Diffusion is easy; what about reversing a diffusion? We provide a fine-grained analysis of the diffuse-then-denoise process and identify a notion of multi-scale curvature complexity that determines the success or failure modes of probabilistic diffusion models.

November 2024 · Tengyuan Liang, Kulunu Dharmakeerthi, Takuya Koriyama
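
To make the diffuse-then-denoise process concrete, here is a minimal sketch, not the paper's analysis: a one-dimensional Gaussian target (an illustrative assumption) is noised by an Ornstein-Uhlenbeck diffusion toward N(0, 1), then recovered by simulating the time-reversed SDE. The score is available in closed form only because the toy target is Gaussian; in practice it must be learned.

```python
# Toy diffuse-then-denoise: exact-score reverse SDE for a Gaussian target.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5            # assumed toy target N(mu, sigma^2)
T, n_steps, n_samples = 5.0, 500, 10_000
dt = T / n_steps

def score(t, x):
    # Exact score of the forward marginal p_t = N(m_t, v_t) under
    # dX = -X dt + sqrt(2) dW, which relaxes toward N(0, 1).
    m_t = mu * np.exp(-t)
    v_t = sigma**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return -(x - m_t) / v_t

# Denoise: start from (approximately) the stationary law N(0, 1) and run
# the reverse-time SDE  dY = [Y + 2 * score(T - s, Y)] ds + sqrt(2) dB
# with Euler-Maruyama steps.
y = rng.normal(size=n_samples)
for k in range(n_steps):
    s = k * dt
    drift = y + 2.0 * score(T - s, y)
    y = y + drift * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_samples)

print(f"recovered mean {y.mean():.3f} (target {mu}), "
      f"std {y.std():.3f} (target {sigma})")
```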

Reversible Gromov-Monge Sampler for Simulation-Based Inference

Motivated by seminal work on distances and isomorphisms between metric measure spaces, we propose a new notion, the Reversible Gromov-Monge (RGM) distance, and study how it can be used to design new transform samplers for simulation-based inference.

September 2021 · YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang
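
The following is a toy sketch in the spirit of Gromov-Monge distance matching, not the paper's RGM estimator: fit an affine map T(x) = Ax + b so that mapped reference samples reproduce the target's pairwise-distance profile, then push fresh reference draws through T as a transform sampler. The affine family, the sorted-distance loss, and the added mean-matching term (pairwise distances cannot see translation) are all illustrative assumptions.

```python
# Toy transform sampler via pairwise-distance matching.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                  # reference samples
Y = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.5],
                                          [0.0, 1.0]]) + 1.0   # "simulator" target samples

def sorted_pdists(Z):
    # Sorted pairwise distances: a translation-invariant summary of the
    # empirical metric measure structure.
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    return np.sort(D[np.triu_indices(len(Z), k=1)])

target_profile, target_mean = sorted_pdists(Y), Y.mean(axis=0)

def loss(theta):
    A, b = theta[:4].reshape(2, 2), theta[4:]
    TX = X @ A.T + b
    # Distance matching pins down shape; the mean term pins down location.
    return (np.mean((sorted_pdists(TX) - target_profile) ** 2)
            + np.sum((TX.mean(axis=0) - target_mean) ** 2))

theta0 = np.concatenate([np.eye(2).ravel(), np.zeros(2)])
res = minimize(loss, theta0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
A, b = res.x[:4].reshape(2, 2), res.x[4:]

# Transform sampler: push fresh reference draws through the fitted map.
fresh = rng.normal(size=(5000, 2)) @ A.T + b
```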

How Well Generative Adversarial Networks Learn Distributions

This paper studies the rates of convergence for learning distributions implicitly with the adversarial framework of Generative Adversarial Networks (GANs), a framework that subsumes Wasserstein, Sobolev, and MMD GANs, as well as the Generalized/Simulated Method of Moments (GMM/SMM), as special cases. We study a wide range of parametric and nonparametric target distributions under a host of objective evaluation metrics, and investigate how to obtain valid statistical guarantees for GANs through the lens of regularization.

December 2017 · Tengyuan Liang
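
Since MMD GANs are among the named special cases, here is a minimal sketch of the unbiased kernel MMD² estimator, one of the adversarial (integral probability metric) evaluation criteria in this line of work. The Gaussian kernel, bandwidth, and toy samples are illustrative assumptions, not choices made in the paper.

```python
# Unbiased kernel MMD^2 between two samples (Gaussian kernel assumed).
import numpy as np

def gaussian_gram(a, b, bandwidth=1.0):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    m, n = len(x), len(y)
    kxx, kyy = gaussian_gram(x, x, bandwidth), gaussian_gram(y, y, bandwidth)
    kxy = gaussian_gram(x, y, bandwidth)
    # Drop diagonal terms so the estimator is unbiased for MMD^2.
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2.0 * kxy.mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 2))
fake = rng.normal(loc=0.3, size=(500, 2))   # stand-in for generator output
print(f"MMD^2 estimate: {mmd2_unbiased(real, fake):.4f}")
```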