Biography (Third-Person Narrative)
Tengyuan Liang is Professor of Econometrics and Statistics, and Applied AI at the Booth School of Business, and the JP Gan Professor in the Wallman Society of Fellows at the University of Chicago. He builds mathematical theories for machine learning that reveal when and why learning algorithms work, and he creates principled methods for their reliable application in business and economics. He received a National Science Foundation CAREER Grant from the Division of Mathematical Sciences for his research program on the interpolation regime, which extends the limits of statistical learning theory. His contributions to overparametrized models, generative models, and causal inference have shaped research across disciplines, appearing in the premier venues of statistics (Annals of Statistics, JRSSB, JASA, Biometrika), economics (Econometrica), and machine learning (JMLR, COLT). He has served as an Associate Editor for the Journal of the American Statistical Association and for Operations Research, and on the Editorial Board of the Journal of Machine Learning Research.
Technical Contributions
His research has established how implicit regularization governs generalization in overparametrized models, from kernel machines to boosting classifiers to neural networks. His work has built statistical and computational theories for generative models — GANs, denoising diffusions, and PDE samplers — through the lens of transport maps and stochastic dynamics. His frameworks for causal inference have brought machine learning tools to uncertainty visualization, experimental design, and policy evaluation.
What I Study
Why should we trust the synthetic data, decisions, and predictions produced by AI? The answer starts with the right mathematical language — one that lives in the data distributions these systems generate, transform, and learn from.
A statistician and machine learning theorist, I pursue three research programs to develop this answer:
The Distributional Regime: Generative Models
How do generative models learn to sample from distributions they have never fully observed? Can distributional geometry, dynamics, and shrinkage explain what they can — and cannot — generate?
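One standard formalization (a sketch of the transport-map lens, not the only framing): a generative model learns a map T pushing a simple source law rho, say a standard Gaussian, onto the data law mu,
\[
T_\# \rho = \mu, \qquad \text{i.e.} \quad \int \varphi(T(z))\, \mathrm{d}\rho(z) = \int \varphi(x)\, \mathrm{d}\mu(x) \quad \text{for all test functions } \varphi,
\]
so that sampling reduces to drawing \( z \sim \rho \) and outputting \( T(z) \); GANs fit such a map adversarially, while denoising diffusions realize it through stochastic dynamics.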
The Causal Shift: Design and Inference
Designing an experiment is choosing a distribution; inferring a cause is extracting invariance amid distributional shifts. How do we do both — across individuals, populations, and time — to drive consequential decisions?
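As a minimal illustration of the first clause (the textbook special case, not the general program): randomizing a binary treatment \( W_i \sim \mathrm{Bernoulli}(1/2) \) independently of the potential outcomes \( (Y_i(1), Y_i(0)) \) is precisely a choice of distribution, one under which the difference in means
\[
\hat{\tau} = \bar{Y}_{W=1} - \bar{Y}_{W=0}
\]
is an unbiased estimate of the average treatment effect \( \mathbb{E}[Y(1) - Y(0)] \).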
The Interpolation Regime: Overparametrization
Why do overparametrized models generalize when classical distribution-free theory says they shouldn’t? How does implicit regularization turn a curse into a blessing for predictive models?
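For concreteness, one canonical instance (the kernel setting of the selected work below): the minimum-norm interpolant
\[
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{H}} \|f\|_{\mathcal{H}} \quad \text{subject to } f(x_i) = y_i, \ i = 1, \dots, n,
\]
with closed form \( \hat{f}(x) = K(x, X) K(X, X)^{-1} y \) when the kernel matrix is invertible, fits the training data exactly, yet the choice of norm acts as an implicit regularizer.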
What unifies these programs is a thesis: the deepest questions in AI — what it can generate, what it can infer, why it generalizes — are, at root, distributional questions. The mathematical theory of distributions — geometry, dynamics, and robustness — is what I develop to answer them.
Selected Work
The Distributional Regime: Shrinkage and Denoising
T. Liang.
“Distributional Shrinkage I: Universal Denoiser Beyond Tweedie’s Formula.”
arXiv:2511.09500, 2025.
T. Liang.
“Distributional Shrinkage II: Higher-Order Scores Encode Brenier Map.”
arXiv:2512.09295, 2025.
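For context (classical background, not a summary of the papers above): Tweedie's formula states that for \( Y = \theta + \varepsilon \) with \( \varepsilon \sim \mathcal{N}(0, \sigma^2 I) \), the posterior-mean denoiser depends on the data only through the marginal density \( m \) of \( Y \),
\[
\mathbb{E}[\theta \mid Y = y] = y + \sigma^2 \nabla \log m(y),
\]
a first-order score correction; the titles above point beyond this formula, to higher-order scores.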
The Distributional Regime: Generative Models
T. Liang.
“How Well Generative Adversarial Networks Learn Distributions.”
Journal of Machine Learning Research, 22(228):1–41, 2021.
Y. Hur, W. Guo, T. Liang.
“Reversible Gromov-Monge Sampler for Simulation-Based Inference.”
SIAM Journal on Mathematics of Data Science, 6(2):283–310, 2024.
N. Deb, T. Liang.
“No-Regret Generative Modeling via Parabolic Monge-Ampère PDE.”
The Annals of Statistics, 2026+.
More on The Distributional Regime
T. Liang, K. Dharmakeerthi, T. Koriyama.
“Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity.”
Transactions on Machine Learning Research, 2026.
T. Liang, S. Sen, P. Sur.
“High-Dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models.”
Information and Inference: A Journal of the IMA, 12(4):2720–2752, 2023.
W. Guo, Y. Hur, T. Liang, C. Ryan.
“Online Learning to Transport via the Minimal Selection Principle.”
Conference on Learning Theory, PMLR 178:4085–4109, 2022.
T. Liang, J. Stokes.
“Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks.”
International Conference on Artificial Intelligence and Statistics, PMLR 89:907–915, 2019.
B. Tzen, T. Liang, M. Raginsky.
“Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability.”
Conference on Learning Theory, PMLR 75:857–875, 2018.
The Causal Shift: Design and Inference
M. H. Farrell, T. Liang, S. Misra.
“Deep Neural Networks for Estimation and Inference.”
Econometrica, 89(1):181–213, 2021.
T. Liang, B. Recht.
“Randomization Inference When N Equals One.”
Biometrika, 112(2):1–23, 2025.
W. Guo, T. Liang, P. Toulis.
“Gaussianized Design Optimization for Covariate Balance in Randomized Experiments.”
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2026+.
Extended abstract in ACM Conference on Economics and Computation, 918–918, 2025.
More on The Causal Shift
K. Dharmakeerthi, Y. Hur, T. Liang.
“Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction.”
Journal of the American Statistical Association (Theory and Methods), 2026+.
Y. Hur, T. Liang.
“Detecting Weak Distribution Shifts via Displacement Interpolation.”
Journal of Business & Economic Statistics, 43(1):178–190, 2025.
T. Liang.
“Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria.”
Journal of Machine Learning Research, 25(140):1–27, 2024.
The Interpolation Regime: Overparametrization and Regularization
T. Liang, A. Rakhlin.
“Just Interpolate: Kernel Ridgeless Regression Can Generalize.”
The Annals of Statistics, 48(3):1329–1347, 2020.
T. Liang, P. Sur.
“A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-L1-Norm Interpolated Classifiers.”
The Annals of Statistics, 50(3):1669–1695, 2022.
T. Liang.
“Universal Prediction Band via Semi-Definite Programming.”
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 84(4):1558–1580, 2022.
T. Liang, B. Recht.
“Interpolating Classifiers Make Few Mistakes.”
Journal of Machine Learning Research, 24(20):1–27, 2023.
More on The Interpolation Regime
T. Liang, H. Tran-Bach.
“Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks.”
Journal of the American Statistical Association (Theory and Methods), 117(539):1324–1337, 2022.
X. Dou, T. Liang.
“Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits.”
Journal of the American Statistical Association (Theory and Methods), 116(535):1507–1520, 2021.
T. Liang, A. Rakhlin, X. Zhai.
“On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels.”
Conference on Learning Theory, PMLR 125:2683–2711, 2020.