Hidden Markov Models With Set-Valued Parameters

Denis Deratani Mauá, Alessandro Antonucci, Cassio Polpo de Campos

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)
226 Downloads (Pure)

Abstract

Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of probability distributions instead of single distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising computational efficiency.
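To make the set-valued quantification concrete, the sketch below is an illustrative example only, not the inference algorithm developed in the paper. It runs an interval-arithmetic forward pass over a hypothetical toy iHMM whose initial, transition, and emission probabilities are specified as intervals, and returns conservative outer bounds on the likelihood of an observation sequence. All numbers and variable names are assumptions made for illustration, and the bounding strategy is deliberately naive; the paper's dedicated algorithms compute tighter bounds under strong independence.

```python
import numpy as np

# Hypothetical toy iHMM: 2 hidden states, 2 observation symbols.
# Every parameter is an interval [low, high] instead of a point value.
# All numbers are illustrative assumptions, not values from the paper.
trans_lo = np.array([[0.6, 0.2], [0.3, 0.5]])   # lower transition probabilities
trans_hi = np.array([[0.8, 0.4], [0.5, 0.7]])   # upper transition probabilities
emit_lo  = np.array([[0.7, 0.1], [0.2, 0.6]])   # lower emission probabilities
emit_hi  = np.array([[0.9, 0.3], [0.4, 0.8]])   # upper emission probabilities
init_lo  = np.array([0.4, 0.4])                 # lower initial-state probabilities
init_hi  = np.array([0.6, 0.6])                 # upper initial-state probabilities

def interval_forward(obs):
    """Interval-arithmetic forward pass over the interval-valued HMM.

    Returns conservative (outer) lower and upper bounds on P(observations).
    Because lower endpoints are used everywhere for the lower bound and
    upper endpoints everywhere for the upper bound, the result brackets the
    exact likelihood bounds but may be looser than them.
    """
    alpha_lo = init_lo * emit_lo[:, obs[0]]
    alpha_hi = init_hi * emit_hi[:, obs[0]]
    for o in obs[1:]:
        alpha_lo = (alpha_lo @ trans_lo) * emit_lo[:, o]
        alpha_hi = (alpha_hi @ trans_hi) * emit_hi[:, o]
    return alpha_lo.sum(), alpha_hi.sum()

lo, hi = interval_forward([0, 1, 0])
print(f"likelihood bounds: [{lo:.4f}, {hi:.4f}]")
```

Since all quantities are non-negative, sums and products are monotone in each parameter, which is why picking all lower (respectively upper) endpoints yields a valid outer bound; the upper endpoints need not form proper distributions, so the interval is generally wider than the exact credal-set bounds targeted in the paper.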
Original language: English
Pages (from-to): 94-107
Number of pages: 14
Journal: Neurocomputing
Volume: 180
Early online date: 05 Nov 2015
DOIs
Publication status: Published - 05 Mar 2016
