Online Learning of Conditionally I.I.D. Data
Daniil Ryabko - Computer Learning Research Centre, Royal Holloway, University of London
In this work we consider the task of relaxing the i.i.d. assumption in online pattern recognition (or classification), aiming to make existing learning algorithms applicable to a wider range of tasks. In online pattern recognition one predicts a sequence of labels, each prediction based on the object given for that label and on the examples (pairs of objects and labels) learned so far. Traditionally, this task is considered under the assumption that examples are independent and identically distributed. However, it turns out that many results of pattern recognition theory carry over under a much weaker assumption: namely, conditional independence and identical distribution of the objects only, while the only condition on the distribution of labels is that the rate of occurrence of each label should be above some positive threshold. We find a broad class of learning algorithms for which estimates of the probability of a classification error obtained under the classical i.i.d. assumption can be generalised to similar estimates for the case of conditionally i.i.d. examples.
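To make the setting concrete, the following sketch (illustrative only; the distributions, parameter names, and the Markov label process are assumptions, not taken from the paper) generates data that is conditionally i.i.d. but not i.i.d.: the labels follow a dependent process in which each label still occurs at a rate bounded away from zero, while each object, given its label, is drawn from a fixed class-conditional distribution independently of all other examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-conditional object distributions: one fixed
# distribution per label. Given its label, each object is drawn
# independently of everything else.
def draw_object(label):
    means = {0: -1.0, 1: +1.0}          # illustrative Gaussian means
    return rng.normal(means[label], 1.0)

def generate_conditionally_iid(n, p_stay=0.9):
    """Generate n examples (object, label).

    Labels follow a dependent process (here a two-state Markov chain),
    so the example sequence is NOT i.i.d.  The objects, however, are
    conditionally i.i.d.: given its label, each object comes from the
    same fixed class-conditional distribution, independently of all
    other examples.  With p_stay < 1 both labels occur at a rate
    bounded away from zero, matching the assumption on label rates.
    """
    label = int(rng.integers(0, 2))
    examples = []
    for _ in range(n):
        examples.append((draw_object(label), label))
        if rng.random() > p_stay:        # switch label with prob. 1 - p_stay
            label = 1 - label
    return examples

if __name__ == "__main__":
    for x, y in generate_conditionally_iid(10):
        print(f"label={y}  object={x:+.3f}")
```

In this sketch the dependence sits entirely in the label sequence; a predictor that is robust in the conditionally i.i.d. sense must tolerate such label dependence while still exploiting the fixed class-conditional distributions of the objects.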