AMS2005-15: Default Priors for Neural Network Classification

Herbert Lee
12/31/2005 09:00 AM
Applied Mathematics & Statistics
Feedforward neural networks are a popular tool for classification, offering fully flexible modeling. This paper examines the underlying probability model in order to understand, statistically, what is going on, and thereby to facilitate an intelligent choice of prior for a fully Bayesian analysis. The parameters turn out to be difficult or impossible to interpret, yet a coherent prior requires a quantification of this inherent uncertainty. Several approaches are discussed, including flat priors, Jeffreys priors, and reference priors.

Key Words: Bayesian neural network; nonparametric classification; noninformative prior
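
To make the setting concrete, the following is a minimal sketch (not the paper's method) of the probability model in question: a single-hidden-layer feedforward network for binary classification, where under a flat (improper uniform) prior on the weights the log posterior equals the log likelihood up to a constant, so a random-walk Metropolis sampler accepts on the likelihood ratio alone. The data, hidden-layer size, and tuning constants here are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 2 inputs, binary class labels.
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

H = 3  # number of hidden units (illustrative choice)

def unpack(theta):
    # theta packs input-to-hidden weights, hidden biases,
    # hidden-to-output weights, and an output bias.
    W = theta[:2 * H].reshape(2, H)
    b = theta[2 * H:3 * H]
    v = theta[3 * H:4 * H]
    c = theta[4 * H]
    return W, b, v, c

def log_likelihood(theta):
    W, b, v, c = unpack(theta)
    hidden = np.tanh(X @ W + b)                   # hidden-layer activations
    p = 1.0 / (1.0 + np.exp(-(hidden @ v + c)))   # class-1 probabilities
    p = np.clip(p, 1e-12, 1 - 1e-12)              # guard the logs
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Under a flat prior the log posterior is the log likelihood plus a
# constant, so the Metropolis acceptance ratio involves only the
# likelihood -- this is why the prior's (im)propriety matters.
theta = rng.normal(scale=0.1, size=4 * H + 1)
current = log_likelihood(theta)
for _ in range(2000):
    proposal = theta + rng.normal(scale=0.05, size=theta.size)
    cand = log_likelihood(proposal)
    if np.log(rng.uniform()) < cand - current:
        theta, current = proposal, cand

print(current)  # final log likelihood of the sampled weights
```

Note that the sampler above is only well-defined as a posterior sampler if the flat prior yields a proper posterior; the interpretability problems the abstract describes are one reason that is not automatic for neural network weights.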