Download:
- Paper (nominated for the best paper award at the Conference on Learning Theory (COLT) 2015)
- arXiv
- Online appendix
Abstract:
We consider regression with square loss and general classes of functions without the boundedness assumption. We introduce a notion of offset Rademacher complexity that provides a transparent way to study localization both in expectation and in high probability. For any (possibly non-convex) class, the excess loss of a two-step estimator is shown to be upper bounded by this offset complexity through a novel geometric inequality. In the convex case, the estimator reduces to an empirical risk minimizer. The method recovers the results of Rakhlin, Sridharan, and Tsybakov (2015) for the bounded case while also providing guarantees without the boundedness assumption.
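As a companion to the abstract, the following is a hedged sketch of the central quantity; the normalization and constants are illustrative assumptions rather than quotations from the paper. The empirical offset Rademacher complexity of a class \(\mathcal{F}\) with offset parameter \(c > 0\) augments the usual Rademacher average with a negative quadratic penalty, and it is this penalty that drives localization:

\[
  \widehat{\mathfrak{R}}^{\,\mathrm{offset}}_{n}(\mathcal{F}; c)
  \;=\;
  \mathbb{E}_{\epsilon}\, \sup_{f \in \mathcal{F}}\,
  \frac{1}{n} \sum_{i=1}^{n}
  \Bigl[\, \epsilon_i\, f(x_i) \;-\; c\, f(x_i)^{2} \,\Bigr],
  \qquad
  \epsilon_{1},\dots,\epsilon_{n} \ \text{i.i.d.\ Rademacher signs.}
\]

Functions taking large values incur a large quadratic penalty, so the supremum is effectively confined to a neighborhood of zero; this is how localized bounds arise both in expectation and in high probability without a boundedness assumption.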
Citation
Tengyuan Liang, Alexander Rakhlin, and Karthik Sridharan. 2015. “Learning with Square Loss: Localization through Offset Rademacher Complexity.” Proceedings of The 28th Conference on Learning Theory (COLT), PMLR 40: 1260–1285.
@InProceedings{Liang_2015,
title = {Learning with Square Loss: Localization through Offset Rademacher Complexity},
author = {Liang, Tengyuan and Rakhlin, Alexander and Sridharan, Karthik},
booktitle = {Proceedings of The 28th Conference on Learning Theory},
pages = {1260--1285},
year = {2015},
editor = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
volume = {40},
series = {Proceedings of Machine Learning Research},
address = {Paris, France},
month = {03--06 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v40/Liang15.pdf},
url = {https://proceedings.mlr.press/v40/Liang15.html}
}