UCSC-CRL-90-63: ON THE COMPUTATIONAL COMPLEXITY OF APPROXIMATING DISTRIBUTIONS BY PROBABILISTIC AUTOMATA

12/01/1990
Computer Science
We introduce a rigorous performance criterion for training algorithms for probabilistic automata (PAs) and hidden Markov models (HMMs), both used extensively in speech recognition, and analyze the complexity of training as a computational problem. The PA training problem is that of approximating an arbitrary, unknown source distribution by distributions generated by a PA. We investigate the following question about this important, well-studied problem: Does there exist an *efficient* training algorithm such that the trained PAs *provably converge* to a model close to an optimum one with high confidence, after seeing only a feasibly small set of training data? We model this problem in the framework of computational learning theory and analyze both the sample complexity and the computational complexity. We show that the number of examples required for training PAs is moderate: essentially linear in the number of transition probabilities to be trained, and a low-degree polynomial in the example length and in the parameters quantifying the accuracy and confidence. Computationally, however, training PAs is quite demanding: PAs with a fixed number of states are trainable in time polynomial in the accuracy and confidence parameters and in the example length, but *not* in the alphabet size, unless RP = NP. The latter result is shown via a strong non-approximability result for the single-string maximum-likelihood model problem for 2-state PAs, which is of independent interest.
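As a minimal illustration of the object being trained (a sketch for exposition only, not a construction from the report; the names PA and string_probability are hypothetical), the following Python code shows how a PA's transition probabilities determine the probability it assigns to each string, so that the automaton induces a distribution over strings of any fixed length -- these transition probabilities are exactly the quantities a training algorithm must estimate.

    # Illustrative sketch: a probabilistic automaton (PA) whose transition
    # probabilities define a distribution over strings of each fixed length.
    # All names here are assumptions for illustration, not from the report.
    from itertools import product

    class PA:
        def __init__(self, init, trans):
            # init[q]: probability of starting in state q
            # trans[q][a]: list of (next_state, prob) pairs; for each state q,
            # the probabilities over all (symbol, next_state) pairs sum to 1.
            self.init = init
            self.trans = trans

        def string_probability(self, s):
            # Forward computation: alpha[q] = probability of having emitted
            # the prefix read so far and currently being in state q.
            alpha = dict(self.init)
            for a in s:
                nxt = {}
                for q, p in alpha.items():
                    for q2, pt in self.trans[q].get(a, []):
                        nxt[q2] = nxt.get(q2, 0.0) + p * pt
                alpha = nxt
            return sum(alpha.values())

    # Tiny 2-state PA over the alphabet {0, 1}; the probabilities assigned to
    # all length-2 strings sum to 1, i.e. the PA induces a distribution over
    # strings of each fixed length.
    pa = PA(
        init={0: 1.0, 1: 0.0},
        trans={
            0: {'0': [(0, 0.5)], '1': [(1, 0.5)]},
            1: {'0': [(1, 0.3)], '1': [(0, 0.7)]},
        },
    )
    total = sum(pa.string_probability(''.join(w)) for w in product('01', repeat=2))
    assert abs(total - 1.0) < 1e-9

In this schematic view, the sample-complexity result concerns how many example strings drawn from the unknown source are needed to fix the entries of trans accurately, while the hardness result concerns the time needed to find good values for them.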
