nltk.parse.projectivedependencyparser module¶
- class nltk.parse.projectivedependencyparser.ChartCell[source]¶
Bases:
object
A cell from the parse chart formed when performing the CYK algorithm. Each cell keeps track of its x and y coordinates (though this will probably be discarded), and a list of spans serving as the cell’s entries.
- __init__(x, y)[source]¶
- Parameters:
x (int) – This cell’s x coordinate.
y (int) – This cell’s y coordinate.
- add(span)[source]¶
Appends the given span to the list of spans representing the chart cell’s entries.
- Parameters:
span (DependencySpan) – The span to add.
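As an illustration of how ChartCell objects might be laid out as a parse chart, here is a minimal sketch; the grid shape and the choice of n + 1 rows and columns are assumptions about how the parsers in this module use the class, not part of its documented API:
>>> from nltk.parse.projectivedependencyparser import ChartCell
>>> n = 3  # number of input tokens (hypothetical)
>>> chart = [
...     [ChartCell(i, j) for j in range(n + 1)]  # one cell per (i, j) coordinate
...     for i in range(n + 1)
... ]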
- class nltk.parse.projectivedependencyparser.DependencySpan[source]¶
Bases:
object
A contiguous span over some part of the input string representing dependency (head -> modifier) relationships amongst words. An atomic span corresponds to only one word, so it is not a ‘span’ in the conventional sense; for concatenation purposes its _start_index, _end_index, and _head_index are all equal. All other spans are assumed to have arcs between all nodes within the start and end indexes of the span, and one head index corresponding to the head word for the entire span. This head word corresponds to the root node if the dependency structure were depicted as a graph.
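As an illustration, a minimal sketch of an atomic span and of storing it in a chart cell via add(); the constructor arguments shown here (start index, end index, head index, list of arcs, list of tags) and the use of -1 for a word that has no head yet are assumptions based on the description above, not documented parameters:
>>> from nltk.parse.projectivedependencyparser import ChartCell, DependencySpan
>>> span = DependencySpan(0, 0, 0, [-1], ['null'])  # atomic: start = end = head = 0
>>> cell = ChartCell(0, 1)
>>> cell.add(span)  # record the span as one of the cell's entries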
- class nltk.parse.projectivedependencyparser.ProbabilisticProjectiveDependencyParser[source]¶
Bases:
object
A probabilistic, projective dependency parser.
This parser returns the most probable projective parse derived from the probabilistic dependency grammar induced by the train() method. The probabilistic model is an implementation of Eisner’s (1996) Model C, which conditions on head-word, head-tag, child-word, and child-tag. The decoding uses a bottom-up chart-based span concatenation algorithm that is identical to the one used by the rule-based projective parser.
Usage example
>>> from nltk.parse.dependencygraph import DependencyGraph, conll_data2
>>> graphs = [
...     DependencyGraph(entry) for entry in conll_data2.split('\n\n') if entry
... ]
>>> ppdp = ProbabilisticProjectiveDependencyParser()
>>> ppdp.train(graphs)
>>> sent = ['Cathy', 'zag', 'hen', 'wild', 'zwaaien', '.']
>>> list(ppdp.parse(sent))
[Tree('zag', ['Cathy', 'hen', Tree('zwaaien', ['wild', '.'])])]
- __init__()[source]¶
Create a new probabilistic dependency parser. No additional operations are necessary.
- compute_prob(dg)[source]¶
Computes the probability of a dependency graph based on the parser’s probability model (defined by the parser’s statistical dependency grammar).
- Parameters:
dg (DependencyGraph) – A dependency graph to score.
- Returns:
The probability of the dependency graph.
- Return type:
float
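A hedged usage sketch, reusing the training setup from the class-level example above; scoring one of the training graphs here is purely for illustration:
>>> from nltk.parse.dependencygraph import DependencyGraph, conll_data2
>>> from nltk.parse.projectivedependencyparser import ProbabilisticProjectiveDependencyParser
>>> graphs = [DependencyGraph(entry) for entry in conll_data2.split('\n\n') if entry]
>>> ppdp = ProbabilisticProjectiveDependencyParser()
>>> ppdp.train(graphs)
>>> score = ppdp.compute_prob(graphs[0])  # probability of the first graph under the trained model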
- concatenate(span1, span2)[source]¶
Concatenates the two spans in whichever way possible. This includes rightward concatenation (from the leftmost word of the leftmost span to the rightmost word of the rightmost span) and leftward concatenation (vice-versa) between adjacent spans. Unlike Eisner’s presentation of span concatenation, these spans do not share or pivot on a particular word/word-index.
- Returns:
A list of new spans formed through concatenation.
- Return type:
list(DependencySpan)
- parse(tokens)[source]¶
Parses the list of tokens subject to the projectivity constraint and the productions in the parser’s grammar. This uses a method similar to the span-concatenation algorithm defined in Eisner (1996). It returns the most probable parse derived from the parser’s probabilistic dependency grammar.
- train(graphs)[source]¶
Trains a ProbabilisticDependencyGrammar based on the list of input DependencyGraphs. This model is an implementation of Eisner’s (1996) Model C, which derives its statistics from head-word, head-tag, child-word, and child-tag relationships.
- Parameters:
graphs (list(DependencyGraph)) – A list of dependency graphs to train from.
- class nltk.parse.projectivedependencyparser.ProjectiveDependencyParser[source]¶
Bases:
object
A projective, rule-based, dependency parser. A ProjectiveDependencyParser is created with a DependencyGrammar, a set of productions specifying word-to-word dependency relations. The parse() method will then return the set of all parses, in tree representation, for a given input sequence of tokens. Each parse must meet the requirements of both the grammar and the projectivity constraint, which specifies that the branches of the dependency tree are not allowed to cross. Alternatively, this can be understood as stating that each parent node and its children in the parse tree form a contiguous substring of the input sequence.
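Usage example (a hedged sketch; the grammar productions below are illustrative, and DependencyGrammar is defined in nltk.grammar, not in this module):
>>> from nltk.grammar import DependencyGrammar
>>> from nltk.parse.projectivedependencyparser import ProjectiveDependencyParser
>>> grammar = DependencyGrammar.fromstring("""
... 'scratch' -> 'cats' | 'walls'
... 'walls' -> 'the'
... 'cats' -> 'the'
... """)
>>> pdp = ProjectiveDependencyParser(grammar)
>>> trees = list(pdp.parse(['the', 'cats', 'scratch', 'the', 'walls']))  # all projective parses licensed by the grammar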
- __init__(dependency_grammar)[source]¶
Create a new ProjectiveDependencyParser from a word-to-word dependency grammar (DependencyGrammar).
- Parameters:
dependency_grammar (DependencyGrammar) – A word-to-word relation dependency grammar.
- concatenate(span1, span2)[source]¶
Concatenates the two spans in whichever way possible. This includes rightward concatenation (from the leftmost word of the leftmost span to the rightmost word of the rightmost span) and leftward concatenation (vice-versa) between adjacent spans. Unlike Eisner’s presentation of span concatenation, these spans do not share or pivot on a particular word/word-index.
- Returns:
A list of new spans formed through concatenation.
- Return type:
list(DependencySpan)
- nltk.parse.projectivedependencyparser.arity_parse_demo()[source]¶
A demonstration showing the creation of a DependencyGrammar in which a specific number of modifiers is listed for a given head. This can further constrain the number of possible parses created by a ProjectiveDependencyParser.
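The demo can be invoked directly; a minimal sketch, with its printed grammar and parses omitted here:
>>> from nltk.parse.projectivedependencyparser import arity_parse_demo
>>> arity_parse_demo()  # doctest: +SKIP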