10/01/1995 09:00 AM

Computer Science

We give an adversary strategy that forces the Perceptron algorithm to make (N-k+1)/2 mistakes when learning k-literal disjunctions over N variables. Experimentally, we observe that even for simple random data, the number of mistakes made by the Perceptron algorithm grows almost linearly with N, even when the number k of relevant variables remains a small constant. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes on the same problem. Both algorithms use thresholded linear functions as their hypotheses, but Winnow performs multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm.
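The contrast described above can be illustrated with a minimal sketch of the two update rules, run on random data labeled by a k-literal monotone disjunction. This is not the paper's experimental setup; the data distribution, parameters, and update constants (promotion/demotion factor 2 for Winnow, unit-step additive updates for Perceptron) are illustrative assumptions.

```python
import random

def target(x, relevant):
    # k-literal monotone disjunction: label 1 iff any relevant variable is on
    return 1 if any(x[i] for i in relevant) else 0

def perceptron_mistakes(examples, N):
    # Perceptron: additive updates w <- w + y*x on each mistake,
    # with labels mapped to {-1, +1} and a bias term
    w = [0.0] * N
    b = 0.0
    mistakes = 0
    for x, y in examples:
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if yhat != y:
            mistakes += 1
            s = 1 if y == 1 else -1
            for i in range(N):
                w[i] += s * x[i]
            b += s
    return mistakes

def winnow_mistakes(examples, N):
    # Winnow: multiplicative updates; weights start at 1, threshold N;
    # double active weights on a false negative, halve them on a false positive
    w = [1.0] * N
    mistakes = 0
    for x, y in examples:
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= N else 0
        if yhat != y:
            mistakes += 1
            factor = 2.0 if y == 1 else 0.5
            for i in range(N):
                if x[i]:
                    w[i] *= factor
    return mistakes

random.seed(0)
N, k = 100, 3
relevant = list(range(k))  # which variables the disjunction depends on
examples = [([random.randint(0, 1) for _ in range(N)], 0) for _ in range(2000)]
examples = [(x, target(x, relevant)) for x, _ in examples]

p_m = perceptron_mistakes(examples, N)
w_m = winnow_mistakes(examples, N)
print("Perceptron mistakes:", p_m)
print("Winnow mistakes:", w_m)
```

Note that for a monotone disjunction the relevant weights are never demoted by Winnow (a false positive implies no relevant variable was active), which is the intuition behind its O(k log N) mistake bound.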