Biography (Third-Person Narrative)
Tengyuan Liang is the JP Gan Professor of Econometrics and Statistics, and Applied AI in the Wallman Society of Fellows at the University of Chicago Booth School of Business. He builds mathematical theories for modern AI — theories that reveal when and why these systems work — and creates principled tools for their reliable application in business and economics. His work on the interpolation regime, generative models, and causal inference spans journals across statistics, machine learning, economics, and applied mathematics. He received a National Science Foundation CAREER Award from the Division of Mathematical Sciences for his work on modern statistical learning paradigms. He has served as an Associate Editor for the Journal of the American Statistical Association and Operations Research, and on the Editorial Board of the Journal of Machine Learning Research.
What I Study (First-Person Narrative)
I am a statistician and machine learning theorist.
Why should we trust the predictions, decisions, and synthetic data produced by modern AI? My research builds the mathematical foundations to answer this question — theories that reveal when and why modern learning systems work, and principled tools for when they don’t. These foundations have direct consequences: they shape how predictive and generative models are validated, how experiments are designed in business and economics, and how uncertainty is communicated to decision-makers.
My work has established how implicit regularization governs generalization in overparametrized models, from kernel machines to boosting to neural networks. I build statistical and computational foundations for generative models — GANs, denoising diffusions, and PDE samplers — through the lens of transport maps and stochastic dynamics. I also develop machine learning methods for causal inference and experimental design, and rigorous frameworks for quantifying and visualizing uncertainty.
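One standard fact that anchors the implicit-regularization story (a textbook identity, not any single paper's result): in overparametrized linear regression with $p > n$ and $\mathrm{rank}(X) = n$, gradient descent on the least-squares loss initialized at zero converges to the minimum-$\ell_2$-norm interpolant,

\[
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p} \|\beta\|_2 \quad \text{s.t.} \quad X\beta = y, \qquad \hat{\beta} \;=\; X^{\top}(X X^{\top})^{-1} y,
\]

so the optimization algorithm itself selects a regularized solution even though the objective carries no explicit penalty.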
My current research programs: generative models (geometry and dynamics), causal learning (design and inference), and the interpolation regime (overparametrization and regularization).
“Quasi-Random” Samples of My Work
Generative Models: Geometry and Dynamics
T. Liang. “Distributional Shrinkage II: Higher-Order Scores Encode Brenier Map.” arXiv:2512.09295, 2025.
T. Liang. “Distributional Shrinkage I: Universal Denoiser Beyond Tweedie’s Formula.” arXiv:2511.09500, 2025.
N. Deb, T. Liang. “No-Regret Generative Modeling via Parabolic Monge-Ampère PDE.” arXiv:2504.09279, 2025.
T. Liang, K. Dharmakeerthi, T. Koriyama. “Denoising Diffusions with Optimal Transport: Localization, Curvature, and Multi-Scale Complexity.” Transactions on Machine Learning Research, 2026.
Y. Hur, W. Guo, T. Liang. “Reversible Gromov-Monge Sampler for Simulation-Based Inference.” SIAM Journal on Mathematics of Data Science, 6(2):283–310, 2024.
T. Liang, S. Sen, P. Sur. “High-Dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models.” Information and Inference: A Journal of the IMA, 12(4):2720–2752, 2023.
W. Guo, Y. Hur, T. Liang, C. Ryan. “Online Learning to Transport via the Minimal Selection Principle.” Conference on Learning Theory, PMLR 178:4085–4109, 2022.
B. Tzen, T. Liang, M. Raginsky. “Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability.” Conference on Learning Theory, PMLR 75:857–875, 2018.
T. Liang. “How Well Generative Adversarial Networks Learn Distributions.” Journal of Machine Learning Research, 22(228):1–41, 2021.
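For orientation on the denoising entries above: the classical Tweedie formula, which the “Distributional Shrinkage” papers take as their point of departure, expresses the posterior-mean denoiser through the score of the noisy marginal. If $Y = X + \sigma Z$ with $Z \sim \mathcal{N}(0, I_d)$ independent of $X$, and $p_\sigma$ denotes the density of $Y$, then

\[
\mathbb{E}[X \mid Y = y] \;=\; y + \sigma^{2}\, \nabla_y \log p_\sigma(y).
\]

How the papers move beyond this first-order identity is their subject; the formula itself is classical.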
Causal Learning: Design and Inference
W. Guo, T. Liang, P. Toulis. “Gaussianized Design Optimization for Covariate Balance in Randomized Experiments.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), forthcoming, 2026.
- Extended abstract in ACM Conference on Economics and Computation, 918–918, 2025.
T. Liang, B. Recht. “Randomization Inference When N Equals One.” Biometrika, 112(2):1–23, 2025.
Y. Hur, T. Liang. “Detecting Weak Distribution Shifts via Displacement Interpolation.” Journal of Business & Economic Statistics, 43(1):178–190, 2025.
T. Liang. “Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria.” Journal of Machine Learning Research, 25(140):1–27, 2024.
M. H. Farrell, T. Liang, S. Misra. “Deep Neural Networks for Estimation and Inference.” Econometrica, 89(1):181–213, 2021.
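As generic background for the design entries above (deliberately not a description of the Gaussianized method, whose details are in the paper): a standard way to quantify covariate balance between treatment and control groups is the Mahalanobis imbalance statistic from the rerandomization literature,

\[
M \;=\; (\bar{x}_T - \bar{x}_C)^{\top}\, \widehat{\mathrm{cov}}(\bar{x}_T - \bar{x}_C)^{-1}\, (\bar{x}_T - \bar{x}_C),
\]

where small $M$ indicates well-balanced assignments; design optimization chooses or re-draws randomizations to control such imbalance measures.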
The Interpolation Regime: Overparametrization and Regularization
T. Liang, B. Recht. “Interpolating Classifiers Make Few Mistakes.” Journal of Machine Learning Research, 24(20):1–27, 2023.
T. Liang. “Universal Prediction Band via Semi-Definite Programming.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 84(4):1558–1580, 2022.
T. Liang, P. Sur. “A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-L1-Norm Interpolated Classifiers.” The Annals of Statistics, 50(3):1669–1695, 2022.
T. Liang, H. Tran-Bach. “Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks.” Journal of the American Statistical Association (Theory and Methods), 117(539):1324–1337, 2022.
X. Dou, T. Liang. “Training Neural Networks as Learning Data-Adaptive Kernels: Provable Representation and Approximation Benefits.” Journal of the American Statistical Association (Theory and Methods), 116(535):1507–1520, 2021.
T. Liang, A. Rakhlin, X. Zhai. “On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels.” Conference on Learning Theory, PMLR 125:2683–2711, 2020.
T. Liang, A. Rakhlin. “Just Interpolate: Kernel Ridgeless Regression Can Generalize.” The Annals of Statistics, 48(3):1329–1347, 2020.
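The object at the center of the last two entries, stated for orientation and assuming the kernel matrix $K(X, X)$ is invertible: the minimum-norm kernel interpolant, equivalently the ridgeless ($\lambda \to 0$) limit of kernel ridge regression,

\[
\hat{f} \;=\; \arg\min_{f \in \mathcal{H}} \|f\|_{\mathcal{H}} \quad \text{s.t.} \quad f(x_i) = y_i, \; i = 1, \dots, n, \qquad \hat{f}(x) \;=\; K(x, X)\, K(X, X)^{-1} y,
\]

which fits the training data exactly and yet, in high dimensions, can still generalize.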