No-Regret Generative Modeling via Parabolic Monge-Ampère PDE

We introduce a novel generative modeling framework called the parabolic Monge-Ampère PDE sampler. We establish theoretical guarantees for generative modeling through the lens of no-regret analysis, demonstrating that the iterates converge to the optimal Brenier map under a variety of step-size schedules. We derive a new Evolution Variational Inequality connecting geometry, transportation cost, and regret.
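
As a point of reference (this is the textbook parabolic flow; the paper's exact formulation may differ), the parabolic Monge-Ampère PDE for transporting a source density $f$ to a target density $g$ evolves a potential $u$ by

$$
\partial_t u(x,t) \;=\; \log \det D^2 u(x,t) \;-\; \log \frac{f(x)}{g(\nabla u(x,t))},
$$

whose stationary points solve the elliptic Monge-Ampère equation $g(\nabla u)\,\det D^2 u = f$, so that $\nabla u$ recovers the Brenier map pushing $f$ forward onto $g$.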

April 2025 · Nabarun Deb, Tengyuan Liang

Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity

Adding noise is easy; what about denoising? Diffusion is easy; what about reverting a diffusion? We provide a fine-grained analysis of the diffuse-then-denoise process. We discover a notion of multi-scale curvature complexity that collectively determines the success or failure modes of probabilistic diffusion models.
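
For context, the standard diffuse-then-denoise pair (generic to score-based diffusion models, not notation from this paper) runs an Ornstein-Uhlenbeck process forward and its time reversal backward:

$$
dX_t = -X_t\,dt + \sqrt{2}\,dB_t, \qquad dY_t = \big(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\big)\,dt + \sqrt{2}\,d\bar{B}_t,
$$

where $p_t$ denotes the law of $X_t$. The reverse drift requires the score $\nabla \log p_t$, and the analysis concerns when this reversal can be carried out accurately.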

November 2024 · Tengyuan Liang, Kulunu Dharmakeerthi, Takuya Koriyama

Online Learning to Transport via the Minimal Selection Principle

Motivated by robust dynamic resource allocation in operations research, we study the Online Learning to Transport (OLT) problem, where the decision variable is a probability measure, an infinite-dimensional object. We draw connections among online learning, optimal transport, and partial differential equations through an insight called the minimal selection principle, originally studied in the Wasserstein gradient flow setting by Ambrosio et al. (2005).
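
To give the flavor of the principle (stated here in its classical Hilbert-space form; the OLT setting lifts it to probability measures), a gradient flow of a convex, possibly nonsmooth functional $F$ moves along the subgradient of least norm:

$$
\dot{x}(t) = -\partial^{\circ} F(x(t)), \qquad \partial^{\circ} F(x) := \operatorname*{arg\,min}_{v \in \partial F(x)} \|v\|,
$$

that is, among all admissible descent directions in the subdifferential $\partial F(x)$, the dynamics selects the minimal one.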

February 2022 · Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Christopher Ryan

Reversible Gromov-Monge Sampler for Simulation-Based Inference

Motivated by seminal work on distances and isomorphisms between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers for simulation-based inference.
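
Schematically (in our own notation, which need not match the paper's definition), a Gromov-Monge-type discrepancy between metric measure spaces $(\mathcal{X}, d_{\mathcal{X}}, \mu)$ and $(\mathcal{Y}, d_{\mathcal{Y}}, \nu)$ measures how well a transport map preserves pairwise distances:

$$
\inf_{T \,:\, T_{\#}\mu = \nu} \iint \big| d_{\mathcal{X}}(x, x') - d_{\mathcal{Y}}(T(x), T(x')) \big| \, d\mu(x)\, d\mu(x');
$$

the reversible variant, roughly speaking, couples a forward map with a backward map, which makes it a natural criterion for training transform samplers.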

September 2021 · YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang

How Well Generative Adversarial Networks Learn Distributions

This paper studies the rates of convergence for learning distributions implicitly with the adversarial framework and Generative Adversarial Networks (GANs), which subsume Wasserstein GANs, Sobolev GANs, MMD GANs, and the Generalized/Simulated Method of Moments (GMM/SMM) as special cases. We study a wide range of parametric and nonparametric target distributions under a host of objective evaluation metrics. We investigate how to obtain valid statistical guarantees for GANs through the lens of regularization.
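
The common scaffold behind these variants is the integral probability metric (IPM) induced by a discriminator class $\mathcal{F}$ (standard notation, not specific to the paper):

$$
d_{\mathcal{F}}(\mu, \nu) = \sup_{f \in \mathcal{F}} \Big\{ \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \Big\};
$$

taking $\mathcal{F}$ to be the 1-Lipschitz functions yields the Wasserstein-1 distance, a Sobolev ball yields Sobolev GAN, and an RKHS unit ball yields MMD, with GAN training minimizing $d_{\mathcal{F}}$ between the empirical distribution and the generator's distribution.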

December 2017 · Tengyuan Liang