UCSC-CRL-95-37: EFFICIENT LEARNING WITH VIRTUAL THRESHOLD GATES

July 1, 1995
Computer Science
We reduce learning simple geometric concept classes to learning disjunctions over exponentially many variables. We then apply an on-line algorithm called Winnow whose number of prediction mistakes grows only logarithmically with the number of variables. The hypotheses of Winnow are linear threshold functions with one weight per variable. We find ways to keep the exponentially many weights of Winnow implicitly, so that the time for the algorithm to compute a prediction and update its "virtual" weights is polynomial. Our method can be used to learn d-dimensional axis-parallel boxes when d is variable, and unions of d-dimensional axis-parallel boxes when d is constant. The worst-case number of mistakes of our algorithms for the above classes is optimal to within a constant factor, and our algorithms inherit the noise robustness of Winnow. We think that other on-line algorithms with multiplicative weight updates whose loss bounds grow logarithmically with the dimension are amenable to our methods.
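For readers unfamiliar with the base algorithm, the following is a minimal sketch of Littlestone's Winnow with an explicit weight per variable; the parameter names alpha and theta are illustrative defaults, not taken from the report. The report's contribution is precisely to avoid this explicit representation by maintaining the exponentially many weights implicitly, so prediction and updates remain polynomial-time.

```python
def winnow(examples, n, alpha=2.0):
    """On-line learning of a monotone disjunction over n Boolean variables.

    examples: iterable of (x, y) pairs, where x is a length-n 0/1 list
    and y is the 0/1 label. Returns the number of prediction mistakes;
    for a k-literal disjunction this grows only as O(k log n).
    """
    w = [1.0] * n          # one weight per variable, initialized to 1
    theta = float(n)       # threshold of the linear threshold hypothesis
    mistakes = 0
    for x, y in examples:
        # Predict 1 iff the weighted sum of active variables reaches theta.
        y_hat = 1 if sum(w[i] for i in range(n) if x[i]) >= theta else 0
        if y_hat != y:
            mistakes += 1
            for i in range(n):
                if x[i]:
                    if y == 1:
                        w[i] *= alpha   # promotion: too low, scale up
                    else:
                        w[i] /= alpha   # demotion: too high, scale down
    return mistakes
```

Because the mistake bound depends on the number of variables only logarithmically, reductions that blow the variable count up exponentially (as in the box-learning reductions above) still yield mistake bounds polynomial in the original problem size, provided the weights can be simulated implicitly.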