nltk.tag.hmm module

Hidden Markov Models (HMMs) are largely used to assign the correct label sequence to sequential data or to assess the probability of a given label and data sequence. These models are finite state machines characterised by a number of states, transitions between these states, and output symbols emitted while in each state. The HMM is an extension of the Markov chain, where each state corresponds deterministically to a given event; in the HMM the observation is instead a probabilistic function of the state. HMMs share the Markov chain’s assumption that the probability of a transition from one state to another depends only on the current state - i.e. the series of states that led to the current state is not used. They are also time invariant.

The HMM is a directed graph with probability-weighted edges (representing the probability of a transition between the source and sink states), where each vertex emits an output symbol when entered. The symbol (or observation) is non-deterministically generated. For this reason, knowing that a sequence of output observations was generated by a given HMM does not mean that the corresponding sequence of states (and what the current state is) is known. This is the ‘hidden’ in the hidden Markov model.

Formally, a HMM can be characterised by:

  • the output observation alphabet. This is the set of symbols which may be observed as output of the system.

  • the set of states.

  • the transition probabilities a_{ij} = P(s_t = j | s_{t-1} = i). These represent the probability of transition to each state from a given state.

  • the output probability matrix b_i(k) = P(X_t = o_k | s_t = i). These represent the probability of observing each symbol in a given state.

  • the initial state distribution. This gives the probability of starting in each state.

To ground this discussion, take a common NLP application, part-of-speech (POS) tagging. An HMM is desirable for this task as the highest probability tag sequence can be calculated for a given sequence of word forms. This differs from other tagging techniques which often tag each word individually, seeking to optimise each individual tagging greedily without regard to the optimal combination of tags for a larger unit, such as a sentence. The HMM does this with the Viterbi algorithm, which efficiently computes the optimal path through the graph given the sequence of word forms.

In POS tagging the states usually have a 1:1 correspondence with the tag alphabet - i.e. each state represents a single tag. The output observation alphabet is the set of word forms (the lexicon), and the remaining three parameters are derived by a training regime. With this information the probability of a given sentence can be easily derived, by simply summing the probability of each distinct path through the model. Similarly, the highest probability tagging sequence can be derived with the Viterbi algorithm, yielding a state sequence which can be mapped into a tag sequence.

This discussion assumes that the HMM has been trained. This is probably the most difficult task with the model, and requires either supervised maximum likelihood estimation (MLE) of the parameters from labelled data or unsupervised learning using the Baum-Welch algorithm, a variant of EM.
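
As a concrete sketch (assuming the treebank corpus sample has been installed via nltk.download('treebank'); the slice size and example sentence are arbitrary), supervised training and Viterbi tagging look roughly like this:

    # a minimal sketch: supervised MLE training with the default Lidstone
    # smoothing (gamma = 0.1), using the Penn Treebank sample shipped with NLTK
    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    train_sents = treebank.tagged_sents()[:3000]   # lists of (word, tag) tuples

    tagger = HiddenMarkovModelTagger.train(train_sents)

    # tag() chooses the highest probability tag sequence for the whole
    # sentence using the Viterbi algorithm; the sentence here is arbitrary
    print(tagger.tag("Today is a good day .".split()))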

For more information, please consult the source code for this module, which includes extensive demonstration code.

class nltk.tag.hmm.HiddenMarkovModelTagger[source]

Bases: TaggerI

Hidden Markov model class, a generative model for labelling sequence data. These models define the joint probability of a sequence of symbols and their labels (state transitions) as the product of the starting state probability, the probability of each state transition, and the probability of each observation being generated from each state. This is described in more detail in the module documentation.

This implementation is based on the HMM description in Chapter 8, Huang, Acero and Hon, Spoken Language Processing and includes an extension for training shallow HMM parsers or specialized HMMs as in Molina et al., 2002. A specialized HMM modifies training data by applying a specialization function to create a new training set that is more appropriate for sequential tagging with an HMM. A typical use case is chunking.

Parameters:
  • symbols (seq of any) – the set of output symbols (alphabet)

  • states (seq of any) – a set of states representing state space

  • transitions (ConditionalProbDistI) – transition probabilities; Pr(s_i | s_j) is the probability of transitioning to state i given that the model is in state j

  • outputs (ConditionalProbDistI) – output probabilities; Pr(o_k | s_i) is the probability of emitting symbol k when entering state i

  • priors (ProbDistI) – initial state distribution; Pr(s_i) is the probability of starting in state i

  • transform (callable) – an optional function for transforming training instances, defaults to the identity function.
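
For illustration, a tagger can also be constructed directly from these components; the two-state weather model below, with its symbols and probabilities, is invented purely for the example:

    # a minimal sketch of direct construction from dictionary-backed
    # distributions; the states, symbols and probabilities are invented
    from nltk.probability import DictionaryProbDist, DictionaryConditionalProbDist
    from nltk.tag.hmm import HiddenMarkovModelTagger

    states = ["rainy", "sunny"]
    symbols = ["walk", "shop", "clean"]

    priors = DictionaryProbDist({"rainy": 0.6, "sunny": 0.4})
    transitions = DictionaryConditionalProbDist({
        "rainy": DictionaryProbDist({"rainy": 0.7, "sunny": 0.3}),
        "sunny": DictionaryProbDist({"rainy": 0.4, "sunny": 0.6}),
    })
    outputs = DictionaryConditionalProbDist({
        "rainy": DictionaryProbDist({"walk": 0.1, "shop": 0.4, "clean": 0.5}),
        "sunny": DictionaryProbDist({"walk": 0.6, "shop": 0.3, "clean": 0.1}),
    })

    model = HiddenMarkovModelTagger(symbols, states, transitions, outputs, priors)
    print(model.tag(["walk", "shop", "clean"]))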

__init__(symbols, states, transitions, outputs, priors, transform=<function _identity>)[source]
best_path(unlabeled_sequence)[source]

Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming.

Returns:

the state sequence

Return type:

sequence of any

Parameters:

unlabeled_sequence (list) – the sequence of unlabeled symbols
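
For example (a sketch with a treebank-trained tagger, assuming the treebank corpus sample is installed; the sentence is drawn from that corpus):

    # a minimal sketch: the Viterbi state sequence for an unlabeled sentence
    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagger = HiddenMarkovModelTagger.train(treebank.tagged_sents()[:3000])

    words = "Pierre Vinken will join the board .".split()
    states = tagger.best_path(words)      # one state (tag) per input symbol
    print(list(zip(words, states)))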

best_path_simple(unlabeled_sequence)[source]

Returns the state sequence of the optimal (most probable) path through the HMM. Uses the Viterbi algorithm to calculate this path by dynamic programming. This uses a simple, direct method, and is included for teaching purposes.

Returns:

the state sequence

Return type:

sequence of any

Parameters:

unlabeled_sequence (list) – the sequence of unlabeled symbols

entropy(unlabeled_sequence)[source]

Returns the entropy over labellings of the given sequence. This is given by:

H(O) = - sum_S Pr(S | O) log Pr(S | O)

where the summation ranges over all state sequences, S. Let Z = Pr(O) = sum_S Pr(S, O), where the summation ranges over all state sequences and O is the observation sequence. As such the entropy can be re-expressed as:

H = - sum_S Pr(S | O) log [ Pr(S, O) / Z ]
= log Z - sum_S Pr(S | O) log Pr(S, O)
= log Z - sum_S Pr(S | O) [ log Pr(S_0) + sum_t log Pr(S_t | S_{t-1}) + sum_t log Pr(O_t | S_t) ]

The order of summation for the log terms can be flipped, allowing dynamic programming to be used to calculate the entropy. Specifically, we use the forward and backward probabilities (alpha, beta) giving:

H = log Z - sum_s0 alpha_0(s0) beta_0(s0) / Z * log Pr(s0)
- sum_t,si,sj alpha_t(si) Pr(sj | si) Pr(O_{t+1} | sj) beta_{t+1}(sj) / Z * log Pr(sj | si)
- sum_t,st alpha_t(st) beta_t(st) / Z * log Pr(O_t | st)

This simply uses alpha and beta to find the probabilities of partial sequences, constrained to include the given state(s) at some point in time.
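
As a sketch of usage (following the module's demonstration code, where an unlabeled sequence is passed as (symbol, None) pairs; the treebank corpus sample is assumed to be installed):

    # a minimal sketch: entropy over taggings of an unlabeled sentence
    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagger = HiddenMarkovModelTagger.train(treebank.tagged_sents()[:3000])

    # per the module's demo code, unlabeled symbols are (word, None) pairs
    sequence = [(w, None) for w in "the board will join".split()]
    print(tagger.entropy(sequence))        # entropy over all labellings
    print(tagger.point_entropy(sequence))  # per-position entropies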

log_probability(sequence)[source]

Returns the log-probability of the given symbol sequence. If the sequence is labelled, then returns the joint log-probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the log-probability over all label sequences.

Returns:

the log-probability of the sequence

Return type:

float

Parameters:

sequence (Token) – the sequence of symbols which must contain the TEXT property, and optionally the TAG property

point_entropy(unlabeled_sequence)[source]

Returns the pointwise entropy over the possible states at each position in the chain, given the observation sequence.

probability(sequence)[source]

Returns the probability of the given symbol sequence. If the sequence is labelled, then returns the joint probability of the symbol, state sequence. Otherwise, uses the forward algorithm to find the probability over all label sequences.

Returns:

the probability of the sequence

Return type:

float

Parameters:

sequence (Token) – the sequence of symbols which must contain the TEXT property, and optionally the TAG property
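
For example (a sketch with a treebank-trained tagger, assuming the treebank corpus sample is installed; the tags in the labelled sequence are only an illustrative guess):

    # a minimal sketch: sequence probability with and without labels
    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagger = HiddenMarkovModelTagger.train(treebank.tagged_sents()[:3000])

    # probability over all taggings (forward algorithm); unlabeled symbols
    # are passed as (word, None) pairs
    p_all = tagger.probability([(w, None) for w in "the board will join".split()])

    # joint log-probability of one particular word/tag sequence; these tags
    # are only an illustrative guess
    lp_joint = tagger.log_probability(
        [("the", "DT"), ("board", "NN"), ("will", "MD"), ("join", "VB")])

    print(p_all, lp_joint)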

random_sample(rng, length)[source]

Randomly sample the HMM to generate a sentence of a given length. This samples the prior distribution then the observation distribution and transition distribution for each subsequent observation and state. This will mostly generate unintelligible garbage, but can provide some amusement.

Returns:

the randomly created state/observation sequence, generated according to the HMM’s probability distributions. The SUBTOKENS have TEXT and TAG properties containing the observation and state respectively.

Return type:

list

Parameters:
  • rng (Random (or any object with a random() method)) – random number generator

  • length (int) – desired output length
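
For example (a sketch with a treebank-trained tagger and a seeded generator; the treebank corpus sample is assumed to be installed):

    # a minimal sketch: sample a ten-token state/observation sequence
    import random

    from nltk.corpus import treebank
    from nltk.tag.hmm import HiddenMarkovModelTagger

    tagger = HiddenMarkovModelTagger.train(treebank.tagged_sents()[:3000])

    # ten (word, tag) pairs sampled from the model's distributions
    print(tagger.random_sample(random.Random(0), 10))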

reset_cache()[source]
tag(unlabeled_sequence)[source]

Tags the sequence with the highest probability state sequence. This uses the best_path method to find the Viterbi path.

Returns:

a labelled sequence of symbols

Return type:

list

Parameters:

unlabeled_sequence (list) – the sequence of unlabeled symbols

test(test_sequence, verbose=False, **kwargs)[source]

Tests the HiddenMarkovModelTagger instance.

Parameters:
  • test_sequence (list(list)) – a sequence of labeled test instances

  • verbose (bool) – boolean flag indicating whether testing should be verbose or include printed output

classmethod train(labeled_sequence, test_sequence=None, unlabeled_sequence=None, **kwargs)[source]

Train a new HiddenMarkovModelTagger using the given labeled and unlabeled training instances. Testing will be performed if test instances are provided.

Returns:

a hidden markov model tagger

Return type:

HiddenMarkovModelTagger

Parameters:
  • labeled_sequence (list(list)) – a sequence of labeled training instances, i.e. a list of sentences, each represented as a list of (word, tag) tuples

  • test_sequence (list(list)) – a sequence of labeled test instances

  • unlabeled_sequence (list(list)) – a sequence of unlabeled training instances, i.e. a list of sentences, each represented as a list of words

  • transform (function) – an optional function for transforming training instances, defaults to the identity function, see transform()

  • estimator (class or function) – an optional function or class that maps a condition’s frequency distribution to its probability distribution, defaults to a Lidstone distribution with gamma = 0.1

  • verbose (bool) – boolean flag indicating whether training should be verbose or include printed output

  • max_iterations (int) – number of Baum-Welch iterations to perform
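
For example (a sketch with an arbitrary train/test split and an arbitrary Lidstone gamma of 0.2; the treebank corpus sample is assumed to be installed):

    # a minimal sketch: training with held-out evaluation and a custom
    # smoothing estimator
    from nltk.corpus import treebank
    from nltk.probability import LidstoneProbDist
    from nltk.tag.hmm import HiddenMarkovModelTagger

    sents = treebank.tagged_sents()
    tagger = HiddenMarkovModelTagger.train(
        sents[:3000],                    # labelled training sentences
        test_sequence=sents[3000:3100],  # labelled test sentences
        estimator=lambda fd, bins: LidstoneProbDist(fd, 0.2, bins),
        verbose=True)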

class nltk.tag.hmm.HiddenMarkovModelTrainer[source]

Bases: object

Algorithms for learning HMM parameters from training data. These include both supervised learning (MLE) and unsupervised learning (Baum-Welch).

Creates an HMM trainer to induce an HMM with the given states and output symbol alphabet. Both supervised and unsupervised training methods may be used. If either the states or the symbols are not given, they may be derived from supervised training.

Parameters:
  • states (sequence of any) – the set of state labels

  • symbols (sequence of any) – the set of observation symbols

__init__(states=None, symbols=None)[source]
train(labeled_sequences=None, unlabeled_sequences=None, **kwargs)[source]

Trains the HMM using both (or either of) supervised and unsupervised techniques.

Returns:

the trained model

Return type:

HiddenMarkovModelTagger

Parameters:
  • labeled_sequences (list) – the supervised training data, a set of labelled sequences of observations, e.g. [(word_1, tag_1), …, (word_n, tag_n)]

  • unlabeled_sequences (list) – the unsupervised training data, a set of sequences of observations, e.g. [word_1, …, word_n]

  • kwargs – additional arguments to pass to the training methods

train_supervised(labelled_sequences, estimator=None)[source]

Supervised training maximising the joint probability of the symbol and state sequences. This is done by collecting frequencies of transitions between states, of symbol observations while within each state, and of which states start a sentence. These frequency distributions are then normalised into probability estimates, which can be smoothed if desired.

Returns:

the trained model

Return type:

HiddenMarkovModelTagger

Parameters:
  • labelled_sequences (list) – the training data, a set of labelled sequences of observations

  • estimator – a function taking a FreqDist and a number of bins and returning a ProbDistI; otherwise an MLE estimate is used
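
For example (a sketch deriving the state and symbol sets from the training data and smoothing with an arbitrary Lidstone gamma of 0.1; the treebank corpus sample is assumed to be installed):

    # a minimal sketch of explicit supervised training via the trainer class
    from nltk.corpus import treebank
    from nltk.probability import LidstoneProbDist
    from nltk.tag.hmm import HiddenMarkovModelTrainer

    train_sents = treebank.tagged_sents()[:3000]
    tag_set = list({tag for sent in train_sents for (word, tag) in sent})
    symbols = list({word for sent in train_sents for (word, tag) in sent})

    trainer = HiddenMarkovModelTrainer(tag_set, symbols)
    model = trainer.train_supervised(
        train_sents,
        estimator=lambda fd, bins: LidstoneProbDist(fd, 0.1, bins))
    print(model.tag("the board will join".split()))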

train_unsupervised(unlabeled_sequences, update_outputs=True, **kwargs)[source]

Trains the HMM using the Baum-Welch algorithm to maximise the probability of the data sequence. This is a variant of the EM algorithm, and is unsupervised in that it doesn’t need the state sequences for the symbols. The code is based on ‘A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition’, Lawrence Rabiner, IEEE, 1989.

Returns:

the trained model

Return type:

HiddenMarkovModelTagger

Parameters:

unlabeled_sequences (list) – the training data, a set of sequences of observations

kwargs may include the following parameters:

Parameters:
  • model – a HiddenMarkovModelTagger instance used to begin the Baum-Welch algorithm

  • max_iterations – the maximum number of EM iterations

  • convergence_logprob – the maximum change in log probability to allow convergence
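
For example (a sketch on toy data; the states, symbols and observation sequences are invented, and, following the module's demonstration code, each observation is passed as a (symbol, None) pair):

    # a minimal sketch of Baum-Welch training on invented toy data
    from nltk.tag.hmm import HiddenMarkovModelTrainer

    states = ["rainy", "sunny"]
    symbols = ["walk", "shop", "clean"]

    sequences = [
        [("walk", None), ("shop", None), ("clean", None)],
        [("clean", None), ("clean", None), ("shop", None)],
        [("walk", None), ("walk", None), ("shop", None)],
    ]

    trainer = HiddenMarkovModelTrainer(states, symbols)
    model = trainer.train_unsupervised(sequences, max_iterations=10)
    print(model.tag(["walk", "clean", "shop"]))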

nltk.tag.hmm.demo()[source]
nltk.tag.hmm.demo_bw()[source]
nltk.tag.hmm.demo_pos()[source]
nltk.tag.hmm.demo_pos_bw(test=10, supervised=20, unsupervised=10, verbose=True, max_iterations=5)[source]
nltk.tag.hmm.load_pos(num_sents)[source]
nltk.tag.hmm.logsumexp2(arr)[source]