Relevant subtask learning by constrained mixture models

Jaakko Peltonen, Yusuf Yaslan, Samuel Kaski*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

We introduce relevant subtask learning, a new learning problem which is a variant of multi-task learning. The goal is to build a classifier for a task-of-interest for which we have too few training samples. We additionally have "supplementary data" collected from other tasks, but it is uncertain which of these other samples are relevant, that is, which samples are classified in the same way as in the task-of-interest. The research problem is how to use the "supplementary data" from the other tasks to improve the classifier in the task-of-interest. We show how to solve the problem, and demonstrate the solution with logistic regression classifiers. The key idea is to model all tasks as mixtures of relevant and irrelevant samples, and model the irrelevant part with a sufficiently flexible model such that it does not distort the model of relevant data. We give two learning algorithms for the method: a simple maximum likelihood optimization algorithm and a more advanced variational Bayes inference algorithm; in both cases we show that the method works better than a comparable multi-task learning model and naive methods.
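To make the mixture idea concrete, the sketch below is a minimal, hypothetical EM-style maximum-likelihood illustration: each supplementary sample is softly assigned either to a "relevant" logistic regression model shared with the task-of-interest or to a flexible "irrelevant" logistic regression model that absorbs non-relevant samples. The function name fit_relevant_subtask, the learning-rate scheme, and the single irrelevant component are assumptions made for illustration; this is not the authors' exact formulation nor their variational Bayes variant.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_relevant_subtask(X_interest, y_interest, X_other, y_other,
                         n_iter=100, lr=0.1):
    """Illustrative EM-style maximum-likelihood sketch of relevant subtask
    learning: supplementary samples are explained either by the shared
    'relevant' classifier or by a flexible 'irrelevant' classifier."""
    d = X_interest.shape[1]
    w_rel = np.zeros(d)   # relevant-classifier weights (shared with task-of-interest)
    w_irr = np.zeros(d)   # irrelevant-part weights (absorbs non-relevant samples)
    pi = 0.5              # prior probability that a supplementary sample is relevant

    for _ in range(n_iter):
        # E-step: posterior responsibility that each supplementary sample is relevant
        p_rel = sigmoid(X_other @ w_rel)
        p_irr = sigmoid(X_other @ w_irr)
        lik_rel = np.where(y_other == 1, p_rel, 1 - p_rel)
        lik_irr = np.where(y_other == 1, p_irr, 1 - p_irr)
        r = pi * lik_rel / (pi * lik_rel + (1 - pi) * lik_irr + 1e-12)

        # M-step: gradient ascent on the weighted log-likelihoods;
        # task-of-interest samples always count fully as relevant
        grad_rel = X_interest.T @ (y_interest - sigmoid(X_interest @ w_rel)) \
                 + X_other.T @ (r * (y_other - p_rel))
        grad_irr = X_other.T @ ((1 - r) * (y_other - p_irr))
        w_rel += lr * grad_rel / (len(y_interest) + r.sum() + 1e-12)
        w_irr += lr * grad_irr / ((1 - r).sum() + 1e-12)
        pi = r.mean()

    return w_rel, w_irr, pi

# Example usage with synthetic data (shapes only; values are illustrative):
# X_int, y_int = np.random.randn(20, 5), np.random.randint(0, 2, 20)
# X_sup, y_sup = np.random.randn(200, 5), np.random.randint(0, 2, 200)
# w_rel, w_irr, pi = fit_relevant_subtask(X_int, y_int, X_sup, y_sup)
```

The design choice sketched here mirrors the abstract: the task-of-interest data always trains the relevant component, while the irrelevant component is free to model whatever supplementary samples do not fit, so they cannot distort the relevant model.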

Original language: English
Pages (from-to): 641-662
Number of pages: 22
Journal: Intelligent Data Analysis
Volume: 14
Issue number: 6
DOIs
Publication status: Published - 2010
