A Convexified Matching Approach to Imputation and Individualized Inference
We introduce a new convexified matching method for missing value imputation and individualized inference, inspired by computational optimal transport.
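As a rough illustration of optimal-transport-style matching for imputation (not the convexified matching algorithm itself), the sketch below couples rows with a missing column to complete rows via entropic OT with Sinkhorn iterations, then imputes the missing entries by a barycentric average over matched donors. The data, the regularization level `reg`, and the helper `sinkhorn` are all hypothetical.

```python
import numpy as np

def sinkhorn(C, reg=0.1, n_iters=200):
    """Entropic-OT coupling for cost matrix C via Sinkhorn iterations."""
    n, m = C.shape
    K = np.exp(-C / reg)                    # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    u, v = a.copy(), b.copy()
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # coupling matrix

# Toy data: rows of X_full are complete; rows of X_miss lack column j.
rng = np.random.default_rng(0)
X_full = rng.normal(size=(50, 3))
X_miss = rng.normal(size=(20, 3))
j, obs = 2, [0, 1]

# Match on the commonly observed coordinates.
C = ((X_miss[:, None, obs] - X_full[None, :, obs]) ** 2).sum(-1)
P = sinkhorn(C)

# Barycentric imputation: coupling-weighted average of donors' column j.
X_imputed = X_miss.copy()
X_imputed[:, j] = (P @ X_full[:, j]) / P.sum(axis=1)
```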
Confounding can obfuscate the definition of the best prediction model (concept shift) and shift covariates to domains yet unseen (covariate shift). Therefore, a model that maximizes prediction accuracy in the source environment can suffer a significant accuracy drop in the target environment. We propose a new domain adaptation method for observational data in the presence of confounding, and characterize the stability and predictability tradeoff leveraging a structural causal model.
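The paper's confounding-aware method is not reproduced here; as a standard point of reference for the covariate-shift part of the problem, the sketch below implements classical importance weighting, estimating the density ratio p_tgt(x)/p_src(x) with a source-vs-target logistic classifier and reweighting the source fit accordingly. All data and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Source and target covariates with a shift in distribution.
X_src = rng.normal(0.0, 1.0, size=(500, 2))
X_tgt = rng.normal(0.7, 1.0, size=(500, 2))
y_src = (X_src[:, 0] + rng.normal(size=500) > 0).astype(int)

# Density-ratio estimation: classify source vs. target, then convert
# the class probabilities into importance weights w(x) ~ p_tgt(x)/p_src(x).
domain_X = np.vstack([X_src, X_tgt])
domain_y = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
clf = LogisticRegression().fit(domain_X, domain_y)
p = clf.predict_proba(X_src)[:, 1]
w = p / (1 - p)

# Fit the prediction model on source data, reweighted toward the target.
model = LogisticRegression().fit(X_src, y_src, sample_weight=w)
```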
We develop a statistical theory for N-of-1 experiments, in which a single unit serves as its own control and treatment in different time windows.
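A minimal simulation of the setting, assuming an alternating-block ABAB design, a slow time trend, and a constant treatment effect; the block-paired estimator below is a generic choice for illustration, not the estimator developed in the theory.

```python
import numpy as np

rng = np.random.default_rng(2)
# One unit, 10 alternating blocks of 7 days: ABABAB... design.
n_blocks, block_len, effect = 10, 7, 1.5
treat = np.repeat(np.arange(n_blocks) % 2, block_len)   # 0 = control, 1 = treatment
drift = 0.05 * np.arange(n_blocks * block_len)          # slow time trend
y = drift + effect * treat + rng.normal(size=treat.size)

# Block-paired estimate: difference the means of adjacent (control, treatment)
# blocks so the slow trend largely cancels within each pair.
block_means = y.reshape(n_blocks, block_len).mean(axis=1)
pair_diffs = block_means[1::2] - block_means[0::2]      # treatment minus control
print(pair_diffs.mean(), pair_diffs.std(ddof=1) / np.sqrt(len(pair_diffs)))
```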
Detecting weak, systematic distribution shifts and quantitatively modeling individual, heterogeneous responses to policies or incentives have found increasing empirical applications in social and economic sciences. We propose a model for weak distribution shifts via displacement interpolation, drawing from the optimal transport theory.
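In one dimension, displacement interpolation has a closed form through quantile functions, which gives a quick way to see the kind of weak, systematic shift being modeled: the interpolant at time t has quantile function (1 - t) F0^{-1} + t F1^{-1}. The sketch below interpolates between two empirical samples; the sample sizes and distributions are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
x0 = rng.normal(0.0, 1.0, size=1000)   # reference sample
x1 = rng.normal(2.0, 0.5, size=1000)   # shifted sample

# With equal sample sizes, the empirical quantile functions are the sorted
# samples, so the displacement interpolant is their convex combination.
q0, q1 = np.sort(x0), np.sort(x1)

def interpolant(t):
    return (1 - t) * q0 + t * q1

x_half = interpolant(0.5)   # sample from the midpoint measure
```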
Motivated by the seminal work on distance and isomorphism between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers to perform simulation-based inference.
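Transform samplers push a simple reference distribution through a learned map to generate draws from the target. The RGM construction is far more general, but a minimal one-dimensional instance, assuming equal-size samples and a monotone map estimated by quantile matching, looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.0, scale=1.0, size=2000)   # target sample

# Estimate the monotone map pushing a standard Gaussian reference onto
# the data by matching empirical quantiles, then generate new draws by
# pushing fresh reference noise through the map.
ref = np.sort(rng.normal(size=2000))
tgt = np.sort(data)

def push_forward(z):
    # piecewise-linear interpolation of the empirical quantile map
    return np.interp(z, ref, tgt)

new_samples = push_forward(rng.normal(size=500))
```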
This course covers fundamental statistical concepts and basic computational tools for data analysis. The goal is to learn how to perform descriptive and predictive data analysis on real datasets. The course also serves as a quantitative foundation for Chicago Booth elective courses in marketing, finance, and economics, as well as for more advanced courses in data science.
Modern statistical inference tasks often require iterative optimization methods to compute the solution. Convergence analysis from an optimization viewpoint only informs us how well the solution is approximated numerically but overlooks the sampling nature of the data. We introduce moment-adjusted stochastic gradient descent, a new stochastic optimization method for statistical inference.
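As a loose sketch of the idea (the exact adjustment in the method may differ), the code below runs SGD for logistic regression while preconditioning each stochastic gradient with a running estimate of a Hessian-type moment matrix, so the iterates' fluctuations are calibrated for inference; the step size, batch size, and ridge term are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 2000, 3
X = rng.normal(size=(n, d))
theta_star = np.array([1.0, -0.5, 0.25])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ theta_star))).astype(float)

def grad(theta, idx):
    # stochastic gradient of the logistic loss on a minibatch
    p = 1 / (1 + np.exp(-X[idx] @ theta))
    return X[idx].T @ (p - y[idx]) / len(idx)

# Hypothetical moment adjustment: precondition each stochastic gradient by
# the inverse of a running estimate of the second-moment (Hessian) matrix.
theta = np.zeros(d)
H = np.eye(d)
for t in range(1, 2001):
    idx = rng.choice(n, size=32, replace=False)
    g = grad(theta, idx)
    p = 1 / (1 + np.exp(-X[idx] @ theta))
    w = p * (1 - p)
    H_batch = (X[idx] * w[:, None]).T @ X[idx] / len(idx)
    H = H + (H_batch - H) / t            # running average of batch Hessians
    theta -= 0.5 / np.sqrt(t) * np.linalg.solve(H + 1e-3 * np.eye(d), g)
```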