nltk.translate.ibm5 module

Translation model that keeps track of vacant positions in the target sentence to decide where to place translated words.

Translation can be viewed as a process where each word in the source sentence is stepped through sequentially, generating translated words for each source word. The target sentence can be viewed as being made up of m empty slots initially, which gradually fill up as generated words are placed in them.

Models 3 and 4 use distortion probabilities to decide how to place translated words. For simplicity, these models ignore the history of which slots have already been occupied with translated words. Consider the placement of the last translated word: there is only one empty slot left in the target sentence, so the distortion probability should be 1.0 for that position and 0.0 everywhere else. However, the distortion probabilities for Models 3 and 4 are set up such that all positions are under consideration.

IBM Model 5 fixes this deficiency by accounting for occupied slots during translation. It introduces the vacancy function v(j), the number of vacancies up to, and including, position j in the target sentence.
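As a concrete illustration, the vacancy function can be sketched over a list of occupied/vacant flags (a simplified model for exposition, not the actual NLTK implementation):

```python
# A minimal sketch of the vacancy function v(j), assuming a 1-indexed
# list of booleans where True means the slot is already occupied.

def v(occupied, j):
    """Number of vacant slots up to, and including, position j."""
    return sum(1 for pos in range(1, j + 1) if not occupied[pos])

# Target sentence of length 5; slots 2 and 4 already hold words.
occupied = [None, False, True, False, True, False]  # index 0 is unused
print(v(occupied, 3))  # -> 2 (slots 1 and 3 are vacant)
print(v(occupied, 5))  # -> 3 (slots 1, 3, and 5 are vacant)
```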

Terminology

Maximum vacancy

The number of valid slots that a word can be placed in. This is not necessarily the same as the number of vacant slots. For example, if a tablet contains more than one word, the head word cannot be placed at the last vacant slot because there will be no space for the other words in the tablet. The number of valid slots has to take into account the length of the tablet. Non-head words cannot be placed before the head word, so vacancies to the left of the head word are ignored.

Vacancy difference

For a head word: (v(j) - v(center of previous cept)). This can be positive or negative. For a non-head word: (v(j) - v(position of previously placed word)). This is always positive, because successive words in a tablet are assumed to appear to the right of the previous word.
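The two vacancy differences can be worked through with the same simplified vacancy function (positions and the occupancy pattern below are illustrative, not taken from the NLTK source):

```python
def v(occupied, j):
    # Number of vacant slots up to, and including, position j (1-indexed)
    return sum(1 for pos in range(1, j + 1) if not occupied[pos])

occupied = [None, False, True, False, True, False]  # slots 2 and 4 filled

# Head word about to be placed at j = 5; center of the previous cept at 2.
dv_head = v(occupied, 5) - v(occupied, 2)  # 3 - 1 = 2; may be negative

# Non-head word at j = 5; previous word of the same tablet placed at 3.
dv_non_head = v(occupied, 5) - v(occupied, 3)  # 3 - 2 = 1; always positive
print(dv_head, dv_non_head)
```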

Positioning of target words falls under three cases:

  1. Words generated by NULL are distributed uniformly

  2. For a head word t, its position is modeled by the probability v_head(dv | max_v,word_class_t(t))

  3. For a non-head word t, its position is modeled by the probability v_non_head(dv | max_v,word_class_t(t))

dv and max_v are defined differently for head and non-head words.

The EM algorithm used in Model 5 is:

E step

In the training data, collect counts, weighted by prior probabilities.

    1. count how many times a source language word is translated into a target language word

    2. for a particular word class and maximum vacancy, count how many times a head word and the previous cept’s center have a particular difference in number of vacancies

    3. for a particular word class and maximum vacancy, count how many times a non-head word and the previous target word have a particular difference in number of vacancies

    4. count how many times a source word is aligned to phi number of target words

    5. count how many times NULL is aligned to a target word

M step

Estimate new probabilities based on the counts from the E step
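The M step is a straightforward renormalization of the E-step counts. A minimal sketch for the lexical translation table, assuming counts are stored as nested dicts (the data and variable names here are hypothetical, not NLTK's internal structures):

```python
from collections import defaultdict

# Hypothetical E-step counts: t_counts[t][s] is the expected number of
# times source word s translates to target word t, weighted by the
# prior probability of each sampled alignment.
t_counts = {'haus': {'house': 1.8, 'the': 0.2},
            'das':  {'the': 1.5, 'house': 0.5}}

# M step: renormalize so that, for each source word s, translation
# probabilities over all target words t sum to 1.
totals = defaultdict(float)
for t, row in t_counts.items():
    for s, c in row.items():
        totals[s] += c

translation_table = {t: {s: c / totals[s] for s, c in row.items()}
                     for t, row in t_counts.items()}
print(round(translation_table['haus']['house'], 3))  # 1.8 / 2.3 -> 0.783
```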

As in Model 4, there are too many possible alignments to consider exhaustively. Thus, a hill climbing approach is used to sample good candidates. In addition, pruning is used to weed out unlikely alignments based on Model 4 scores.

Notations

i

Position in the source sentence. Valid values are 0 (for NULL), 1, 2, …, length of source sentence

j

Position in the target sentence. Valid values are 1, 2, …, length of target sentence

l

Number of words in the source sentence, excluding NULL

m

Number of words in the target sentence

s

A word in the source language

t

A word in the target language

phi

Fertility, the number of target words produced by a source word

p1

Probability that a target word produced by a source word is accompanied by another target word that is aligned to NULL

p0

1 - p1

max_v

Maximum vacancy

dv

Vacancy difference, Δv

The definition of v_head here differs from GIZA++, section 4.7 of [Brown et al., 1993], and [Koehn, 2010]. In the latter cases, v_head is v_head(v(j) | v(center of previous cept),max_v,word_class(t)).

Here, we follow appendix B of [Brown et al., 1993] and combine v(j) with v(center of previous cept) to obtain dv: v_head(v(j) - v(center of previous cept) | max_v,word_class(t)).

References

Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York.

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2), 263-311.

class nltk.translate.ibm5.IBMModel5[source]

Bases: nltk.translate.ibm_model.IBMModel

Translation model that keeps track of vacant positions in the target sentence to decide where to place translated words.

>>> from nltk.translate import AlignedSent, IBMModel5
>>> bitext = []
>>> bitext.append(AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']))
>>> bitext.append(AlignedSent(['das', 'haus', 'war', 'ja', 'groß'], ['the', 'house', 'was', 'big']))
>>> bitext.append(AlignedSent(['das', 'buch', 'ist', 'ja', 'klein'], ['the', 'book', 'is', 'small']))
>>> bitext.append(AlignedSent(['ein', 'haus', 'ist', 'klein'], ['a', 'house', 'is', 'small']))
>>> bitext.append(AlignedSent(['das', 'haus'], ['the', 'house']))
>>> bitext.append(AlignedSent(['das', 'buch'], ['the', 'book']))
>>> bitext.append(AlignedSent(['ein', 'buch'], ['a', 'book']))
>>> bitext.append(AlignedSent(['ich', 'fasse', 'das', 'buch', 'zusammen'], ['i', 'summarize', 'the', 'book']))
>>> bitext.append(AlignedSent(['fasse', 'zusammen'], ['summarize']))
>>> src_classes = {'the': 0, 'a': 0, 'small': 1, 'big': 1, 'house': 2, 'book': 2, 'is': 3, 'was': 3, 'i': 4, 'summarize': 5 }
>>> trg_classes = {'das': 0, 'ein': 0, 'haus': 1, 'buch': 1, 'klein': 2, 'groß': 2, 'ist': 3, 'war': 3, 'ja': 4, 'ich': 5, 'fasse': 6, 'zusammen': 6 }
>>> ibm5 = IBMModel5(bitext, 5, src_classes, trg_classes)
>>> print(round(ibm5.head_vacancy_table[1][1][1], 3))
1.0
>>> print(round(ibm5.head_vacancy_table[2][1][1], 3))
0.0
>>> print(round(ibm5.non_head_vacancy_table[3][3][6], 3))
1.0
>>> print(round(ibm5.fertility_table[2]['summarize'], 3))
1.0
>>> print(round(ibm5.fertility_table[1]['book'], 3))
1.0
>>> print(round(ibm5.p1, 3))
0.033
>>> test_sentence = bitext[2]
>>> test_sentence.words
['das', 'buch', 'ist', 'ja', 'klein']
>>> test_sentence.mots
['the', 'book', 'is', 'small']
>>> test_sentence.alignment
Alignment([(0, 0), (1, 1), (2, 2), (3, None), (4, 3)])
MIN_SCORE_FACTOR = 0.2

Alignments with scores below this factor are pruned during sampling
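The pruning rule can be sketched as follows (alignment names and scores are made up for illustration; the exact comparison in NLTK may differ):

```python
MIN_SCORE_FACTOR = 0.2

# Hypothetical (alignment, Model 4 score) pairs produced by sampling
scored = [('a1', 0.50), ('a2', 0.15), ('a3', 0.09), ('a4', 0.30)]

best_score = max(score for _, score in scored)
threshold = MIN_SCORE_FACTOR * best_score  # 0.2 * 0.50 = 0.10

# Keep only alignments whose score clears the threshold
kept = {a for a, score in scored if score >= threshold}
print(sorted(kept))  # 'a3' falls below 0.10 and is pruned
```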

__init__(sentence_aligned_corpus, iterations, source_word_classes, target_word_classes, probability_tables=None)[source]

Train on sentence_aligned_corpus and create a lexical translation model, vacancy models, a fertility model, and a model for generating NULL-aligned words.

Translation direction is from AlignedSent.mots to AlignedSent.words.

Parameters
  • sentence_aligned_corpus (list(AlignedSent)) – Sentence-aligned parallel corpus

  • iterations (int) – Number of iterations to run training algorithm

  • source_word_classes (dict[str]: int) – Lookup table that maps a source word to its word class, the latter represented by an integer id

  • target_word_classes (dict[str]: int) – Lookup table that maps a target word to its word class, the latter represented by an integer id

  • probability_tables (dict[str]: object) – Optional. Use this to pass in custom probability values. If not specified, probabilities will be set to a uniform distribution, or some other sensible value. If specified, all the following entries must be present: translation_table, alignment_table, fertility_table, p1, head_distortion_table, non_head_distortion_table, head_vacancy_table, non_head_vacancy_table. See IBMModel, IBMModel4, and IBMModel5 for the type and purpose of these tables.

reset_probabilities()[source]
set_uniform_probabilities(sentence_aligned_corpus)[source]

Set vacancy probabilities uniformly to 1 / cardinality of vacancy difference values

train(parallel_corpus)[source]
sample(sentence_pair)[source]

Sample the most probable alignments from the entire alignment space according to Model 4

Note that Model 4 scoring is used instead of Model 5 because the latter is too expensive to compute.

First, determine the best alignment according to IBM Model 2. With this initial alignment, use hill climbing to determine the best alignment according to IBM Model 4. Add this alignment and its neighbors to the sample set. Repeat this process with other initial alignments obtained by pegging an alignment point. Finally, prune alignments that have substantially lower Model 4 scores than the best alignment.

Parameters

sentence_pair (AlignedSent) – Source and target language sentence pair to generate a sample of alignments from

Returns

A set of best alignments represented by their AlignmentInfo and the best alignment of the set for convenience

Return type

set(AlignmentInfo), AlignmentInfo

prune(alignment_infos)[source]

Removes alignments from alignment_infos that have substantially lower Model 4 scores than the best alignment

Returns

Pruned alignments

Return type

set(AlignmentInfo)

hillclimb(alignment_info, j_pegged=None)[source]

Starting from the alignment in alignment_info, look at neighboring alignments iteratively for the best one, according to Model 4

Note that Model 4 scoring is used instead of Model 5 because the latter is too expensive to compute.

There is no guarantee that the best alignment in the alignment space will be found, because the algorithm might be stuck in a local maximum.

Parameters

j_pegged (int) – If specified, the search will be constrained to alignments where j_pegged remains unchanged

Returns

The best alignment found from hill climbing

Return type

AlignmentInfo

prob_t_a_given_s(alignment_info)[source]

Probability of target sentence and an alignment given the source sentence

maximize_vacancy_probabilities(counts)[source]
class nltk.translate.ibm5.Model5Counts[source]

Bases: nltk.translate.ibm_model.Counts

Data object to store counts of various parameters during training. Includes counts for vacancies.

__init__()[source]
update_vacancy(count, alignment_info, i, trg_classes, slots)[source]
Parameters
  • count – Value to add to the vacancy counts

  • alignment_info – Alignment under consideration

  • i – Source word position under consideration

  • trg_classes – Target word classes

  • slots – Vacancy states of the slots in the target sentence. Output parameter that will be modified as new words are placed in the target sentence.

class nltk.translate.ibm5.Slots[source]

Bases: object

Represents positions in a target sentence. Used to keep track of which slot (position) is occupied.

__init__(target_sentence_length)[source]
occupy(position)[source]
Mark slot at position as occupied

vacancies_at(position)[source]
Returns

Number of vacant slots up to, and including, position
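The Slots interface documented above can be approximated as follows (a simplified sketch under the assumption that positions are 1-indexed, not the actual NLTK implementation):

```python
class Slots:
    """Simplified sketch of the Slots interface."""

    def __init__(self, target_sentence_length):
        # One boolean per target position, 1-indexed; True = occupied
        self._slots = [False] * (target_sentence_length + 1)

    def occupy(self, position):
        # Mark the slot at `position` as occupied
        self._slots[position] = True

    def vacancies_at(self, position):
        # Number of vacant slots up to, and including, `position`
        return sum(1 for occupied in self._slots[1:position + 1]
                   if not occupied)

slots = Slots(5)
slots.occupy(2)
slots.occupy(4)
print(slots.vacancies_at(5))  # -> 3 (positions 1, 3, and 5 are vacant)
```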