Biography (Third-Person Narrative)

Tengyuan Liang is Professor of Econometrics and Statistics, and Applied AI at the Booth School of Business, and the JP Gan Professor in the Wallman Society of Fellows at the University of Chicago. He builds mathematical theories for machine learning — theories that reveal when and why learning algorithms work — and creates principled methods for their reliable application in business and economics. He received a National Science Foundation CAREER Grant from the Division of Mathematical Sciences for his research program on the interpolation regime, which extends the limits of statistical learning theory. His contributions to overparametrized models, generative models, and causal inference have shaped research across disciplines, appearing in the premier venues of statistics (Annals of Statistics, JRSSB, JASA, Biometrika), economics (Econometrica), and machine learning (JMLR, COLT). He has served as an Associate Editor for the Journal of the American Statistical Association and for Operations Research, and on the Editorial Board of the Journal of Machine Learning Research.

Technical Contributions

His research has established how implicit regularization governs generalization in overparametrized models, from kernel machines to boosting classifiers to neural networks. His work has built statistical and computational theories for generative models — GANs, denoising diffusions, and PDE samplers — through the lens of transport maps and stochastic dynamics. His frameworks for causal inference have brought machine learning tools to uncertainty visualization, experimental design, and policy evaluation.


What I Study

Why should we trust the synthetic data, decisions, and predictions produced by AI? The answer starts with the right mathematical language — one that lives in the data distributions these systems generate, transform, and learn from.

A statistician and machine learning theorist, I pursue three research programs to develop this answer:

The Distributional Regime: Generative Models

How do generative models learn to sample from distributions they have never fully observed? Can distributional geometry, dynamics, and shrinkage explain what they can — and cannot — generate?

The Causal Shift: Design and Inference

Designing an experiment is choosing a distribution; inferring a cause is extracting invariance amid distributional shifts. How do we do both — across individuals, populations, and time — to drive consequential decisions?

The Interpolation Regime: Overparametrization

Why do overparametrized models generalize when classical distribution-free theory says they shouldn’t? How does implicit regularization turn a curse into a blessing for predictive models?

What unifies these programs is a thesis: the deepest questions in AI — what it can generate, what it can infer, why it generalizes — are, at root, distributional questions. The mathematical theory of distributions — geometry, dynamics, and robustness — is what I develop to answer them.


Selected Work

The Distributional Regime: Generative Models, Shrinkage, and Denoising

More on The Distributional Regime
  • T. Liang, K. Dharmakeerthi, T. Koriyama.
    “Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity.”
    Transactions on Machine Learning Research, 2026.

  • T. Liang, S. Sen, P. Sur.
    “High-Dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models.”
    Information and Inference: A Journal of the IMA, 12(4):2720–2752, 2023.

  • W. Guo, Y. Hur, T. Liang, C. Ryan.
    “Online Learning to Transport via the Minimal Selection Principle.”
    Conference on Learning Theory, PMLR 178:4085–4109, 2022.

  • T. Liang, J. Stokes.
    “Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks.”
    International Conference on Artificial Intelligence and Statistics, PMLR 89:907–915, 2019.

  • B. Tzen, T. Liang, M. Raginsky.
    “Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability.”
    Conference on Learning Theory, PMLR 75:857–875, 2018.

The Causal Shift: Design and Inference

More on The Causal Shift
  • K. Dharmakeerthi, Y. Hur, T. Liang.
    “Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction.”
    Journal of the American Statistical Association (Theory and Methods), 2026+.

  • Y. Hur, T. Liang.
    “Detecting Weak Distribution Shifts via Displacement Interpolation.”
    Journal of Business & Economic Statistics, 43(1):178–190, 2025.

  • T. Liang.
    “Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria.”
    Journal of Machine Learning Research, 25(140):1–27, 2024.

The Interpolation Regime: Overparametrization and Regularization

More on The Interpolation Regime
  • T. Liang, H. Tran-Bach.
    “Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks.”
    Journal of the American Statistical Association (Theory and Methods), 117(539):1324–1337, 2022.

  • X. Dou, T. Liang.
    “Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits.”
    Journal of the American Statistical Association (Theory and Methods), 116(535):1507–1520, 2021.

  • T. Liang, A. Rakhlin, X. Zhai.
    “On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels.”
    Conference on Learning Theory, PMLR 125:2683–2711, 2020.

