Multi-task Output Space Regularization

Sergey Feldman, Bela A. Frigyik, Maya R. Gupta, Luca Cazzanti, Peter Sadowski
PDF | Go to the arXiv page.

Abstract - We investigate multi-task learning from an output space regularization perspective. Most multi-task approaches tie related tasks together by constraining them to share input spaces and function classes. In contrast, we propose a multi-task paradigm, which we call output space regularization, in which the only constraint is that the output spaces of the multiple tasks are related. We focus on a specific instance of output space regularization, multi-task averaging, that is both widely applicable and amenable to analysis. The multi-task averaging estimator improves on the single-task sample average under certain conditions, which we detail. Our analysis shows that for a simple case the optimal similarity depends on the ratio of the task variance to the task differences, but that for more complicated cases the optimal similarity behaves non-linearly. Further, we show that the estimates produced are a convex combination of the tasks' sample averages. We discuss the Bayesian viewpoint. Three applications of multi-task output space regularization are presented: multi-task kernel density estimation, multi-task-regularized empirical moment constraints in similarity discriminant analysis, and multi-task local linear regression. Experiments on real data sets show statistically significant gains.
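To make the multi-task averaging idea concrete, here is a minimal NumPy sketch of one natural penalized least-squares form of the objective: each task's sample average is pulled toward the averages of similar tasks through a pairwise similarity matrix. This is an illustrative assumption based on the abstract, not necessarily the exact estimator analyzed in the paper; the function and variable names (multi_task_average, similarity, gamma) are ours. The closed-form solution makes the convex-combination property stated above explicit: the weight matrix W has nonnegative entries and rows summing to one.

```python
import numpy as np

def multi_task_average(means, variances, counts, similarity, gamma=1.0):
    """Similarity-regularized multi-task averaging (illustrative sketch).

    Minimizes, over the vector of task estimates y,
        sum_t (N_t / sigma_t^2) * (y_t - ybar_t)^2
          + (gamma / 2) * sum_{r,s} A_{rs} * (y_r - y_s)^2,
    which has the closed form
        y* = (D + gamma * L)^{-1} D ybar,
    where D = diag(N_t / sigma_t^2) and L is the graph Laplacian of
    the similarity matrix A.  Since L has zero row sums and W is the
    inverse of an M-matrix times a nonnegative diagonal, W is
    row-stochastic, so y* is a convex combination of the ybar_t.
    """
    means = np.asarray(means, dtype=float)
    d = np.asarray(counts, dtype=float) / np.asarray(variances, dtype=float)
    D = np.diag(d)
    A = np.asarray(similarity, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian of A
    W = np.linalg.solve(D + gamma * L, D)   # row-stochastic weight matrix
    return W @ means, W

# Two noisy tasks with nearby sample averages: regularization
# shrinks the estimates toward each other.
est, W = multi_task_average(
    means=[1.0, 1.4],
    variances=[1.0, 1.0],
    counts=[10, 10],
    similarity=[[0.0, 1.0], [1.0, 0.0]],
    gamma=2.0,
)
print(est)            # estimates lie between the two sample averages
print(W.sum(axis=1))  # rows of W sum to 1 (convex combination)
```

With zero similarity the estimator reduces to the single-task sample averages; as gamma grows, the estimates for similar tasks are pooled, which is where the gains over single-task averaging described above can arise.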

BibTeX -
@ARTICLE{FeldmanFrigyikGuptaCazzantiSadowskiarXiv2011,
TITLE = {Multi-task Output Space Regularization},
AUTHOR = {S. Feldman and B. A. Frigyik and M. R. Gupta and L. Cazzanti and P. Sadowski},
JOURNAL = {{arXiv}},
YEAR = {2011},
EPRINT = {arXiv:1107.4390v3 [stat.ML]}}