Hur, Y. and Liang, T. (2024a) ‘A Convexified Matching Approach to Imputation and Individualized Inference’. arXiv. Available at: http://arxiv.org/abs/2407.05372 (Accessed: 8 July 2024).
Hur, Y., Guo, W. and Liang, T. (2024) ‘Reversible Gromov–Monge Sampler for Simulation-Based Inference’, SIAM Journal on Mathematics of Data Science, pp. 283–310. Available at: https://doi.org/10.1137/23M1550384.
Dharmakeerthi, K., Hur, Y. and Liang, T. (2024) ‘Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction’. arXiv. Available at: http://arxiv.org/abs/2406.15904 (Accessed: 7 July 2024).
Liang, T. (2024) ‘Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria’, Journal of Machine Learning Research, 25(140), pp. 1–27.
Hur, Y. and Liang, T. (2024b) ‘Detecting Weak Distribution Shifts via Displacement Interpolation’, Journal of Business & Economic Statistics, 0(0), pp. 1–13. Available at: https://doi.org/10.1080/07350015.2024.2335957.
Liang, T. and Recht, B. (2023a) ‘Randomization Inference When N Equals One’. arXiv. Available at: http://arxiv.org/abs/2310.16989 (Accessed: 7 July 2024).
Liang, T., Sen, S. and Sur, P. (2023) ‘High-dimensional asymptotics of Langevin dynamics in spiked matrix models’, Information and Inference: A Journal of the IMA, 12(4), pp. 2720–2752. Available at: https://doi.org/10.1093/imaiai/iaad042.
Liang, T. and Recht, B. (2023b) ‘Interpolating Classifiers Make Few Mistakes’, Journal of Machine Learning Research, 24(20), pp. 1–27.
Liang, T. (2022) ‘Universal prediction band via semi-definite programming’, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 84(4), pp. 1558–1580. Available at: https://doi.org/10.1111/rssb.12542.
Guo, W., Hur, Y., Liang, T. and Ryan, C. (2022) ‘Online learning to transport via the minimal selection principle’, in P.-L. Loh and M. Raginsky (eds) Proceedings of the thirty fifth conference on learning theory. COLT, London, United Kingdom: PMLR (Proceedings of machine learning research), pp. 4085–4109. Available at: https://proceedings.mlr.press/v178/guo22a.html.
Liang, T. and Sur, P. (2022) ‘A precise high-dimensional asymptotic theory for boosting and minimum-ℓ1-norm interpolated classifiers’, The Annals of Statistics, 50(3). Available at: https://doi.org/10.1214/22-AOS2170.
Liang, T. and Tran-Bach, H. (2022) ‘Mehler’s formula, branching process, and compositional kernels of deep neural networks’, Journal of the American Statistical Association, 117(539), pp. 1324–1337. Available at: https://doi.org/10.1080/01621459.2020.1853547.
Farrell, M.H., Liang, T. and Misra, S. (2021a) ‘Deep Learning for Individual Heterogeneity: An Automatic Inference Framework’. arXiv. Available at: http://arxiv.org/abs/2010.14694 (Accessed: 7 July 2024).
Farrell, M.H., Liang, T. and Misra, S. (2021b) ‘Deep neural networks for estimation and inference’, Econometrica, 89(1), pp. 181–213. Available at: https://doi.org/10.3982/ECTA16901.
Liang, T. (2021) ‘How Well Generative Adversarial Networks Learn Distributions’, Journal of Machine Learning Research, 22(228), pp. 1–41.
Dou, X. and Liang, T. (2021) ‘Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits’, Journal of the American Statistical Association, 116(535), pp. 1507–1520. Available at: https://doi.org/10.1080/01621459.2020.1745812.
Liang, T., Rakhlin, A. and Zhai, X. (2020) ‘On the multiple descent of minimum-norm interpolants and restricted lower isometry of kernels’, in J. Abernethy and S. Agarwal (eds) Proceedings of the thirty third conference on learning theory. COLT, PMLR (Proceedings of machine learning research), pp. 2683–2711. Available at: http://proceedings.mlr.press/v125/liang20a.html.
Liang, T. and Rakhlin, A. (2020) ‘Just interpolate: Kernel “Ridgeless” regression can generalize’, The Annals of Statistics, 48(3), pp. 1329–1347. Available at: https://doi.org/10.1214/19-AOS1849.
Cai, T.T., Liang, T. and Rakhlin, A. (2020) ‘Weighted message passing and minimum energy flow for heterogeneous stochastic block models with side information’, Journal of Machine Learning Research, 21(11), pp. 1–34.
Liang, T. (2019) ‘Estimating Certain Integral Probability Metric (IPM) is as Hard as Estimating under the IPM’. arXiv. Available at: http://arxiv.org/abs/1911.00730 (Accessed: 7 July 2024).
Liang, T., Poggio, T., Rakhlin, A. and Stokes, J. (2019) ‘Fisher-rao metric, geometry, and complexity of neural networks’, in K. Chaudhuri and M. Sugiyama (eds) The 22nd international conference on artificial intelligence and statistics. AISTATS, Naha, Japan: PMLR (Proceedings of machine learning research), pp. 888–896. Available at: http://proceedings.mlr.press/v89/liang19a.html.
Liang, T. and Stokes, J. (2019) ‘Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks’, in K. Chaudhuri and M. Sugiyama (eds) The 22nd international conference on artificial intelligence and statistics. AISTATS, Naha, Japan: PMLR (Proceedings of machine learning research), pp. 907–915. Available at: http://proceedings.mlr.press/v89/liang19b.html.
Liang, T. and Su, W.J. (2019) ‘Statistical inference for the population landscape via moment-adjusted stochastic gradients’, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81(2), pp. 431–456. Available at: https://doi.org/10.1111/rssb.12313.
Tzen, B., Liang, T. and Raginsky, M. (2018) ‘Local optimality and generalization guarantees for the Langevin algorithm via empirical metastability’, in S. Bubeck, V. Perchet, and P. Rigollet (eds) Proceedings of the 31st conference on learning theory. COLT, Stockholm, Sweden: PMLR (Proceedings of machine learning research), pp. 857–875. Available at: http://proceedings.mlr.press/v75/tzen18a.html.
Kale, S., Karnin, Z., Liang, T. and Pál, D. (2017) ‘Adaptive feature selection: Computationally efficient online sparse linear regression under RIP’, in D. Precup and Y.W. Teh (eds) Proceedings of the 34th international conference on machine learning. ICML, Sydney, Australia: PMLR (Proceedings of machine learning research), pp. 1780–1788. Available at: http://proceedings.mlr.press/v70/kale17a.html.
Cai, T.T., Liang, T. and Rakhlin, A. (2017) ‘Computational and statistical boundaries for submatrix localization in a large noisy matrix’, The Annals of Statistics, 45(4), pp. 1403–1430. Available at: https://doi.org/10.1214/16-AOS1488.
Cai, T.T., Liang, T. and Rakhlin, A. (2017) ‘On detection and structural reconstruction of small-world random networks’, IEEE Transactions on Network Science and Engineering, 4(3), pp. 165–176. Available at: https://doi.org/10.1109/TNSE.2017.2703102.
Cai, T.T., Liang, T. and Rakhlin, A. (2016) ‘Geometric inference for general high-dimensional linear inverse problems’, The Annals of Statistics, 44(4), pp. 1536–1563. Available at: https://doi.org/10.1214/15-AOS1426.
Belloni, A., Liang, T., Narayanan, H. and Rakhlin, A. (2015) ‘Escaping the local minima via simulated annealing: Optimization of approximately convex functions’, in P. Grünwald, E. Hazan, and S. Kale (eds) Proceedings of the 28th conference on learning theory. COLT, Paris, France: PMLR (Proceedings of machine learning research), pp. 240–265. Available at: http://proceedings.mlr.press/v40/Belloni15.html.
Liang, T., Rakhlin, A. and Sridharan, K. (2015) ‘Learning with square loss: Localization through offset rademacher complexity’, in P. Grünwald, E. Hazan, and S. Kale (eds) Proceedings of the 28th conference on learning theory. COLT, Paris, France: PMLR (Proceedings of machine learning research), pp. 1260–1285. Available at: http://proceedings.mlr.press/v40/Liang15.html.
Cai, T.T., Liang, T. and Zhou, H.H. (2015) ‘Law of log determinant of sample covariance matrix and optimal estimation of differential entropy for high-dimensional Gaussian distributions’, Journal of Multivariate Analysis, 137, pp. 161–172. Available at: https://doi.org/10.1016/j.jmva.2015.02.003.