Research

My research involves practical extensions to the on-line model of learning. In this model of inductive learning, the computer starts with no examples and must build its knowledge one example at a time. Each example lets the computer learn more about the problem and refine its classification rule. However, each time an example arrives, the algorithm must first predict its category; if the prediction is wrong, the algorithm makes a mistake. To minimize the number of mistakes, the computer must quickly learn a well-performing rule. For example, suppose we want to predict whether a stock-market fund will go up or down in price; each day brings a new example.
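To make the protocol concrete, here is a minimal sketch in Python of the predict-then-update loop described above, using the classic Perceptron as the learner. The stream of (instance, label) pairs is a hypothetical input; any data source would do.

    def perceptron_online(stream, dim):
        """Run the mistake-driven on-line protocol; return the mistake count."""
        w = [0.0] * dim                # linear-threshold weight vector
        mistakes = 0
        for x, y in stream:            # y in {-1, +1}
            # 1. Predict before the label is revealed.
            score = sum(wi * xi for wi, xi in zip(w, x))
            y_hat = 1 if score >= 0 else -1
            # 2. Receive the true label; a wrong prediction is a mistake.
            if y_hat != y:
                mistakes += 1
                # 3. Refine the rule only on mistakes (Perceptron update).
                w = [wi + y * xi for wi, xi in zip(w, x)]
        return mistakes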

Part of my research is on multi-class extensions to on-line linear-threshold algorithms. My work extends many popular learning algorithms such as Perceptron and Winnow. The new multi-class algorithms are direct generalizations of the existing binary prediction algorithms and maintain many of their desirable properties.
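To convey the flavor of such generalizations, here is a hedged sketch of one standard multi-class Perceptron: keep one weight vector per class, predict the highest-scoring class, and on a mistake promote the correct class and demote the predicted one. My algorithms follow this general shape, though the precise updates in my work differ.

    def multiclass_perceptron(stream, dim, num_classes):
        """One-weight-vector-per-class Perceptron; return the mistake count."""
        W = [[0.0] * dim for _ in range(num_classes)]
        mistakes = 0
        for x, y in stream:                       # y in {0, ..., num_classes - 1}
            scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
            y_hat = max(range(num_classes), key=lambda c: scores[c])
            if y_hat != y:
                mistakes += 1
                # Promote the correct class, demote the mistaken prediction.
                W[y] = [wi + xi for wi, xi in zip(W[y], x)]
                W[y_hat] = [wi - xi for wi, xi in zip(W[y_hat], x)]
        return mistakes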

I'm also interested in problems where examples are generated by a fixed distribution. While many on-line algorithms are robust enough to perform well even against examples generated by an adversary, they can be modified to do even better when instances are drawn from a distribution. This builds on Nick Littlestone's work on a general procedure for improving the noise tolerance of on-line learning algorithms.
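Littlestone's actual procedure is beyond a short sketch, but the genre can be illustrated with a simple pocket-style wrapper (in the spirit of Gallant's pocket algorithm, not Littlestone's construction): when examples are drawn i.i.d. from a fixed distribution, a hypothesis that survives a long run of examples without a mistake is probably a good one, so the wrapper keeps the longest-surviving hypothesis. The learner interface here (predict, update, snapshot) is hypothetical.

    def pocket_wrapper(learner, stream):
        """Return the hypothesis with the longest mistake-free run.
        learner is assumed to have .predict(x), .update(x, y), and
        .snapshot() (a copy of its current hypothesis)."""
        best_run, run = -1, 0
        pocket = learner.snapshot()
        for x, y in stream:
            if learner.predict(x) == y:
                run += 1
                if run > best_run:             # new longest survivor
                    best_run, pocket = run, learner.snapshot()
            else:
                learner.update(x, y)           # mistake: update and reset run
                run = 0
        return pocket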

Some of my recent work involves tracking changing concepts with algorithms such as Winnow. This is a common practical problem for on-line learning: in many applications, it is not possible to guarantee that the target concept will remain constant over time. I prove good bounds for Winnow showing that it performs well even when tracking arbitrary linear-threshold concepts that change over time.
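One common way to make a multiplicative algorithm track a moving target is to keep every weight above a small floor, so an attribute that becomes relevant after a concept shift can be promoted again quickly. The sketch below shows this modification on a basic Winnow; the constants (alpha, theta, floor) are illustrative, and this is not necessarily the exact variant my bounds cover.

    def tracking_winnow(stream, dim, alpha=2.0, theta=None, floor=None):
        """Winnow with a weight floor, for tracking shifting concepts."""
        theta = theta if theta is not None else dim / 2.0
        floor = floor if floor is not None else 1.0 / (4.0 * dim)
        w = [1.0] * dim
        mistakes = 0
        for x, y in stream:                 # x has 0/1 attributes, y in {0, 1}
            y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if y_hat != y:
                mistakes += 1
                if y == 1:                  # promotion: multiply active weights
                    w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
                else:                       # demotion, but never below the floor
                    w = [max(wi / alpha, floor) if xi else wi
                         for wi, xi in zip(w, x)]
        return mistakes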

Last, I'm looking at ways to extend the on-line model to handle problems with delayed labels. On-line learning assumes the algorithm has access to an instance's label soon after it makes a prediction; this is an unrealistic assumption for many practical learning problems. I extend the on-line model to allow arbitrary delays in the reporting of instance labels, give techniques to transform traditional on-line algorithms to work in this setting, and show the transformations are close to optimal. I'm also interested in the semi-supervised setting, where multi-view learning can be used to help generate labels; I have some preliminary work on using multi-view learning to supply labels for on-line learning.
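To illustrate the delayed-label setting, here is a hedged sketch of one natural transformation, simplified to a known fixed delay of d rounds: run d independent copies of the base algorithm in round-robin, so each copy receives its previous label just before its next prediction and therefore faces an ordinary on-line problem. The learner interface is hypothetical, and this fixed-delay version only conveys the idea; it is not necessarily the construction from my work, which handles arbitrary delays.

    def delayed_round_robin(make_learner, stream, d):
        """Round-robin over d copies of a base learner under a fixed
        delay of d rounds. make_learner is a factory for a fresh
        learner with .predict(x) and .update(x, y)."""
        copies = [make_learner() for _ in range(d)]
        pending = []                    # examples whose labels are still delayed
        mistakes = 0
        for t, (x, y) in enumerate(stream):
            learner = copies[t % d]
            if len(pending) == d:       # label from round t - d arrives now,
                x_old, y_old = pending.pop(0)
                learner.update(x_old, y_old)   # for the same copy's old example
            if learner.predict(x) != y:
                mistakes += 1
            pending.append((x, y))
        return mistakes

Each copy sees only every d-th example, so the mistake bound degrades roughly by a factor of d; quantifying how close such trade-offs come to optimal is the point of the analysis.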