UCSC-CRL-06-06: BiBoost for Asymmetric Learning

06/03/2005 09:00 AM
Computer Science
Although boosting has become an extremely important classification method, little attention has been paid to boosting with asymmetric losses. In this paper we take a gradient-descent view of boosting in order to motivate a new boosting variant, called BiBoost, which treats the two classes differently. This variant is likely to perform well when false positive and false negative predictions carry different costs. It is also appropriate when the data come from multiple sources with different reliabilities or noise levels. Experiments show that BiBoost effectively reduces the number of false positive mistakes, and a more general algorithm is discussed.
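The report itself does not spell out BiBoost's update rule here, but the general idea of boosting under asymmetric costs can be illustrated with a minimal sketch: AdaBoost with decision stumps, where the initial example weights are skewed so that mistakes on negatives (false positives) are penalized more heavily. Everything below, including the function names and the `c_fp`/`c_fn` weighting scheme, is an assumption for illustration and not the paper's algorithm.

```python
import numpy as np

def asymmetric_adaboost(X, y, c_fp=5.0, c_fn=1.0, n_rounds=20):
    """Cost-sensitive AdaBoost with decision stumps (illustrative sketch,
    not the BiBoost algorithm). Labels y must be in {-1, +1}.
    Mistakes on negative examples (false positives) are weighted c_fp,
    mistakes on positives (false negatives) are weighted c_fn."""
    n, d = X.shape
    # Asymmetric initialization: negatives start with more weight mass,
    # so the ensemble is pushed to avoid false positives.
    w = np.where(y == -1, c_fp, c_fn).astype(float)
    w /= w.sum()
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustive search over stumps: feature, threshold, polarity.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # standard AdaBoost step size
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # exponential-loss weight update
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    """Weighted vote of the learned stumps."""
    score = np.zeros(X.shape[0])
    for alpha, j, thr, sign in stumps:
        score += alpha * sign * np.where(X[:, j] > thr, 1, -1)
    return np.where(score > 0, 1, -1)
```

Raising `c_fp` shifts the decision boundary toward fewer false positives at the expense of more false negatives, which is the trade-off the abstract describes.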

This report is not available for download at this time.