UCSC-CRL-93-37: DATA FILTERING AND DISTRIBUTION MODELING ALGORITHMS FOR MACHINE LEARNING

09/01/1993
Computer Science
Notes: Ph.D. Thesis
This thesis is concerned with the analysis of algorithms for machine learning, with a focus on the role of the distribution of the examples used for learning. Chapters 2 and 3 are concerned with algorithms for learning concepts from random examples. Briefly, the goal of the learner is to observe a set of labeled instances and generate a hypothesis that approximates the rule mapping the instances to their labels. Chapter 2 describes and analyzes an algorithm that improves the performance of a general concept learning algorithm by selecting the labeled instances that are most informative. This work improves on previous work by Schapire. The analysis provides upper bounds on the time, space, and number of examples required for concept learning.

Chapter 3 is concerned with situations in which the learner can select, out of a stream of random instances, those for which it wants to know the label. We analyze an algorithm of Seung et al. for selecting such instances and prove that it is effective for the perceptron concept class. Both Chapters 2 and 3 exhibit situations in which a carefully selected, exponentially small fraction of the random training examples suffices for learning.

Chapter 4 is concerned with learning distributions of binary vectors. Here we present a new distribution model that can represent combinations of correlation patterns. We describe two different algorithms for learning this distribution model from random examples and provide experimental evidence that they are effective. We conclude, in Chapter 5, with a brief discussion of the possible use of our algorithms in real-world problems and compare them with classical approaches from pattern recognition.
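As a rough illustration of the Chapter 3 setting, the sketch below implements the query-by-committee idea of Seung et al.: labels are requested only for stream instances on which two committee members disagree. This is a minimal sketch, not the thesis' algorithm; in particular, uniform sampling of the version space is approximated here by two perceptrons started from different random initializations, and the dimension, stream length, and query budget are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def perceptron_update(w, x, y):
    # standard perceptron step: change w only on a mistake
    if y * np.dot(w, x) <= 0:
        w = w + y * x
    return w

def query_by_committee(stream, dim, n_queries):
    """Filter an unlabeled stream, requesting a label only when the
    two committee members disagree; such instances are the most
    informative ones to label."""
    w1 = rng.normal(size=dim)  # two hypotheses stand in for uniform
    w2 = rng.normal(size=dim)  # samples from the version space
    used = 0
    for x, oracle in stream:
        if used == n_queries:
            break
        if np.sign(w1 @ x) != np.sign(w2 @ x):  # committee disagrees
            y = oracle()        # expensive labeling happens only here
            used += 1
            w1 = perceptron_update(w1, x, y)
            w2 = perceptron_update(w2, x, y)
    return (w1 + w2) / 2        # combine the committee

# toy usage: labels come from a hidden target direction u
dim = 20
u = rng.normal(size=dim)
stream = ((x, lambda x=x: float(np.sign(u @ x)))
          for x in (rng.normal(size=dim) for _ in range(100_000)))
w = query_by_committee(stream, dim, n_queries=50)
print("agreement with target:", u @ w / (np.linalg.norm(u) * np.linalg.norm(w)))

Even this crude committee tends to concentrate its small label budget near the current decision boundary, which is the effect the exponential savings of Chapter 3 rest on.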
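The Chapter 4 model itself is not specified in this abstract. As a loosely related illustration of learning a distribution over binary vectors from random examples, the sketch below fits a mixture of Bernoulli-product components with EM; each component captures one pattern of correlated bits, and the mixture combines them. The component count and iteration budget are assumptions for the example, not parameters from the thesis.

import numpy as np

def em_bernoulli_mixture(X, k, n_iter=50, eps=1e-9, seed=0):
    """EM for a mixture of k Bernoulli-product components over binary
    vectors X (shape n x d). Returns mixing weights pi (k,) and
    per-component bit probabilities mu (k x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    mu = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log space for stability
        log_p = (X @ np.log(mu + eps).T
                 + (1 - X) @ np.log(1 - mu + eps).T
                 + np.log(pi + eps))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reestimate mixing weights and bit probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / (nk[:, None] + eps)
    return pi, mu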
