
I'm experiencing unexpected behavior with the MultinomialHMM in hmmlearn. When using one-hot encoded observations (with n_trials=1), the Viterbi algorithm returns an incorrect state sequence.

In my minimal reproducible example, the decoded state sequence consists entirely of state 0, even though the parameters favor state 2.
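(By "one-hot encoded" I mean the count-vector format that, as far as I understand, MultinomialHMM now expects: each row counts how many of the n_trials draws fell on each symbol, so with n_trials=1 every row is a one-hot indicator of a single symbol. A tiny sketch of that encoding, using a made-up 3-observation sequence:)

import numpy as np

# With n_trials=1, each observation row is a count vector over the 4 symbols
# that sums to 1, i.e. a one-hot indicator of the emitted symbol.
symbols = np.array([2, 0, 3])   # example integer symbols from an alphabet of size 4
one_hot = np.eye(4)[symbols]    # shape (3, 4)
print(one_hot)
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 0. 1.]]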

Steps to Reproduce:

Use the following script as a minimal reproducible example:

import numpy as np
from hmmlearn import hmm

def main():
    # Define HMM parameters
    start_prob = np.array([0.3, 0.3, 0.4])
    trans_mat = np.array([
        [0.8, 0.1, 0.1],
        [0.1, 0.8, 0.1],
        [0.1, 0.4, 0.5]
    ])
    emission_mat = np.array([
        [0.3,  0.3,  0.3,  0.1],
        [0.25, 0.25, 0.25, 0.25],
        [0.25, 0.25, 0.25, 0.25]
    ])

    # Create an observation sequence:
    # Here, we create 20 observations, all of which are symbol 2.
    obs_int = np.array([2] * 20)
    # Convert to one-hot encoded observations (required by hmmlearn with n_trials=1)
    observations = np.eye(4)[obs_int]
    print(observations)

    # Initialize the HMM model.
    model = hmm.MultinomialHMM(n_components=3, n_trials=1, init_params="")
    model.startprob_ = start_prob
    model.transmat_ = trans_mat
    model.emissionprob_ = emission_mat

    # Decode the observation sequence using the Viterbi algorithm.
    logprob, state_seq = model.decode(observations, algorithm="viterbi")
    print("Log probability:", logprob)
    print("State sequence:", state_seq)

if __name__ == "__main__":
    main()

The output:

[[0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 0. 1. 0.]]
hmmlearn also prints this warning:

MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in ). See these issues for details:


Log probability: -29.52315636581463
State sequence: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
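
To double-check what the most likely state path should be for these parameters, here is a hand-rolled log-space Viterbi in plain NumPy (my own sketch, not hmmlearn code) that uses the same start, transition, and emission matrices and the same symbol sequence, so its result can be compared directly with decode() above:

import numpy as np

# Same parameters and observation symbols as in the script above.
start_prob = np.array([0.3, 0.3, 0.4])
trans_mat = np.array([[0.8, 0.1, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.1, 0.4, 0.5]])
emission_mat = np.array([[0.3,  0.3,  0.3,  0.1],
                         [0.25, 0.25, 0.25, 0.25],
                         [0.25, 0.25, 0.25, 0.25]])
obs = np.array([2] * 20)

# Log-space Viterbi: log_delta[i] is the best log-probability of any state
# path ending in state i at the current time step; backptr stores argmaxes.
log_delta = np.log(start_prob) + np.log(emission_mat[:, obs[0]])
backptr = np.zeros((len(obs), len(start_prob)), dtype=int)
for t in range(1, len(obs)):
    scores = log_delta[:, None] + np.log(trans_mat)  # rows: from-state, cols: to-state
    backptr[t] = scores.argmax(axis=0)
    log_delta = scores.max(axis=0) + np.log(emission_mat[:, obs[t]])

path = np.empty(len(obs), dtype=int)
path[-1] = log_delta.argmax()
for t in range(len(obs) - 1, 0, -1):
    path[t - 1] = backptr[t, path[t]]

print("Cross-check log probability:", log_delta.max())
print("Cross-check state sequence: ", path)

The warning above also points at CategoricalHMM as the class that now implements what the old MultinomialHMM did. Assuming a version of hmmlearn where hmm.CategoricalHMM is available (0.2.8 or later, I believe), the same sequence can be decoded from the plain integer symbols as a further comparison:

from hmmlearn import hmm

# Same parameters as in the cross-check above, but fed to CategoricalHMM,
# which takes a column of integer symbols instead of one-hot count vectors.
cat_model = hmm.CategoricalHMM(n_components=3, init_params="")
cat_model.startprob_ = start_prob
cat_model.transmat_ = trans_mat
cat_model.emissionprob_ = emission_mat
cat_model.n_features = emission_mat.shape[1]  # alphabet size; set explicitly in case it is not inferred
logprob, state_seq = cat_model.decode(obs.reshape(-1, 1), algorithm="viterbi")
print("CategoricalHMM log probability:", logprob)
print("CategoricalHMM state sequence:", state_seq)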
