Tengyuan Liang, Alexander Rakhlin. Just Interpolate: Kernel "Ridgeless" Regression Can Generalize. The Annals of Statistics, to appear, 2019.
Tengyuan Liang, Weijie Su. Statistical Inference for the Population Landscape via Moment Adjusted Stochastic Gradients. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2019.
Xialiang Dou, Tengyuan Liang. Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits. arXiv:1901.07114, 2019.
Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, James Stokes. Fisher-Rao Metric, Geometry, and Complexity of Neural Networks. International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Tengyuan Liang. On How Well Generative Adversarial Networks Learn Densities: Nonparametric and Parametric Results. arXiv:1811.03179, 2018.
Max Farrell, Tengyuan Liang, Sanjog Misra. Deep Neural Networks for Estimation and Inference: Application to Causal Effects and Other Semiparametric Estimands. arXiv:1809.09953, 2018.
Belinda Tzen, Tengyuan Liang, Maxim Raginsky. Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. Conference on Learning Theory (COLT), 2018.
Satyen Kale, Zohar Karnin, Tengyuan Liang, Dávid Pál. Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP. International Conference on Machine Learning (ICML), 2017.
Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan. Learning with Square Loss: Localization through Offset Rademacher Complexity. Conference on Learning Theory (COLT), 2015.