nltk.parse.nonprojectivedependencyparser module

class nltk.parse.nonprojectivedependencyparser.DependencyScorerI[source]

Bases: object

A scorer for calculating the weights on the edges of a weighted dependency graph. This is used by a ProbabilisticNonprojectiveParser to initialize the edge weights of a DependencyGraph. While typically this would be done by training a binary classifier, any class that can return a multidimensional list representation of the edge weights can implement this interface. As such, it has no required fields.

__init__()[source]
train(graphs)[source]
Parameters

graphs (list(DependencyGraph)) – A list of dependency graphs to train the scorer. Typically the edges present in the graphs can be used as positive training examples, and the edges not present as negative examples.

score(graph)[source]
Parameters

graph (DependencyGraph) – A dependency graph whose set of edges need to be scored.

Return type

A three-dimensional list of numbers.

Returns

The score is returned as a three-dimensional list, such that the outer dimension refers to the head and the inner dimension refers to the dependent. For instance, scores[0][1] references the list of scores corresponding to arcs from node 0 to node 1. A node’s ‘address’ field gives its numeric identifier.

For further illustration, a score list corresponding to Fig.2 of Keith Hall’s ‘K-best Spanning Tree Parsing’ paper:

scores = [[[], [5],  [1],  [1]],
         [[], [],   [11], [4]],
         [[], [10], [],   [5]],
         [[], [8],  [8],  []]]

When used in conjunction with a MaxEntClassifier, each score would correspond to the classifier’s confidence that the edge belongs to the positive class.
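The greedy first step of MST parsing, choosing the highest-scoring incoming arc for each node, can be sketched directly against this matrix shape. The snippet below is illustrative plain Python (the helper name best_incoming is hypothetical, not part of the module) applied to the Keith Hall example above:

```python
# scores[head][dep] holds the list of scores for arcs head -> dep;
# an empty list means no such arc exists.
scores = [[[], [5],  [1],  [1]],
          [[], [],   [11], [4]],
          [[], [10], [],   [5]],
          [[], [8],  [8],  []]]

def best_incoming(scores, dep):
    """Return (head, score) of the highest-scoring arc into node `dep`."""
    candidates = [(head, row[dep][0])
                  for head, row in enumerate(scores)
                  if row[dep]]
    return max(candidates, key=lambda pair: pair[1])

for dep in range(1, len(scores)):
    head, score = best_incoming(scores, dep)
    print(f"best head for node {dep}: {head} (score {score})")
# best head for node 1: 2 (score 10)
# best head for node 2: 1 (score 11)
# best head for node 3: 2 (score 5)
```

Note that nodes 1 and 2 choose each other as heads here, which is exactly the cycle that the parser's collapse machinery (below) is designed to resolve.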

class nltk.parse.nonprojectivedependencyparser.NaiveBayesDependencyScorer[source]

Bases: nltk.parse.nonprojectivedependencyparser.DependencyScorerI

A dependency scorer built around a classifier; in this particular class that classifier is a NaiveBayesClassifier. It uses head-word, head-tag, child-word, and child-tag features for classification.

>>> from nltk.parse.dependencygraph import DependencyGraph, conll_data2
>>> graphs = [DependencyGraph(entry) for entry in conll_data2.split('\n\n') if entry]
>>> npp = ProbabilisticNonprojectiveParser()
>>> npp.train(graphs, NaiveBayesDependencyScorer())
>>> parses = npp.parse(['Cathy', 'zag', 'hen', 'wild', 'zwaaien', '.'], ['N', 'V', 'Pron', 'Adj', 'N', 'Punc'])
>>> len(list(parses))
1
__init__()[source]
train(graphs)[source]

Trains a NaiveBayesClassifier using the edges present in the graphs list as positive examples and the edges not present as negative examples. Uses a feature vector of head-word, head-tag, child-word, and child-tag.

Parameters

graphs (list(DependencyGraph)) – A list of dependency graphs to train the scorer.

score(graph)[source]

Converts the graph into a feature-based representation of each edge, and then assigns a score to each based on the confidence of the classifier in assigning it to the positive label. Scores are returned in a multidimensional list.

Parameters

graph (DependencyGraph) – A dependency graph to score.

Return type

A three-dimensional list of numbers.

Returns

Edge scores for the graph parameter.
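The feature vector described above can be pictured as a plain mapping. The helper below is hypothetical (the scorer builds an equivalent representation internally before handing it to its NaiveBayesClassifier); it only illustrates which features describe an edge:

```python
def edge_features(head_word, head_tag, child_word, child_tag):
    # Hypothetical illustration of the head-word / head-tag /
    # child-word / child-tag features used to classify one edge.
    return {
        'head_word': head_word,
        'head_tag': head_tag,
        'child_word': child_word,
        'child_tag': child_tag,
    }

# e.g. the edge 'zag' -> 'Cathy' from the doctest sentence above
features = edge_features('zag', 'V', 'Cathy', 'N')
print(features)
```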

class nltk.parse.nonprojectivedependencyparser.DemoScorer[source]

Bases: nltk.parse.nonprojectivedependencyparser.DependencyScorerI

train(graphs)[source]
Parameters

graphs (list(DependencyGraph)) – A list of dependency graphs to train the scorer. Typically the edges present in the graphs can be used as positive training examples, and the edges not present as negative examples.

score(graph)[source]
Parameters

graph (DependencyGraph) – A dependency graph whose set of edges need to be scored.

Return type

A three-dimensional list of numbers.

Returns

The score is returned as a three-dimensional list, such that the outer dimension refers to the head and the inner dimension refers to the dependent. For instance, scores[0][1] references the list of scores corresponding to arcs from node 0 to node 1. A node’s ‘address’ field gives its numeric identifier.

For further illustration, a score list corresponding to Fig.2 of Keith Hall’s ‘K-best Spanning Tree Parsing’ paper:

scores = [[[], [5],  [1],  [1]],
         [[], [],   [11], [4]],
         [[], [10], [],   [5]],
         [[], [8],  [8],  []]]

When used in conjunction with a MaxEntClassifier, each score would correspond to the classifier’s confidence that the edge belongs to the positive class.

class nltk.parse.nonprojectivedependencyparser.ProbabilisticNonprojectiveParser[source]

Bases: object

A probabilistic non-projective dependency parser.

Nonprojective dependencies allow for “crossing branches” in the parse tree, which is necessary for representing particular linguistic phenomena, or even typical parses in some languages. This parser follows the MST parsing algorithm outlined in McDonald (2005), which likens the search for the best non-projective parse to finding the maximum spanning tree in a weighted directed graph.

>>> class Scorer(DependencyScorerI):
...     def train(self, graphs):
...         pass
...
...     def score(self, graph):
...         return [
...             [[], [5],  [1],  [1]],
...             [[], [],   [11], [4]],
...             [[], [10], [],   [5]],
...             [[], [8],  [8],  []],
...         ]
>>> npp = ProbabilisticNonprojectiveParser()
>>> npp.train([], Scorer())
>>> parses = npp.parse(['v1', 'v2', 'v3'], [None, None, None])
>>> len(list(parses))
1

Rule based example

>>> from nltk.grammar import DependencyGrammar
>>> grammar = DependencyGrammar.fromstring('''
... 'taught' -> 'play' | 'man'
... 'man' -> 'the' | 'in'
... 'in' -> 'corner'
... 'corner' -> 'the'
... 'play' -> 'golf' | 'dachshund' | 'to'
... 'dachshund' -> 'his'
... ''')
>>> ndp = NonprojectiveDependencyParser(grammar)
>>> parses = ndp.parse(['the', 'man', 'in', 'the', 'corner', 'taught', 'his', 'dachshund', 'to', 'play', 'golf'])
>>> len(list(parses))
4
__init__()[source]

Creates a new non-projective parser.

train(graphs, dependency_scorer)[source]

Trains a DependencyScorerI from a set of DependencyGraph objects, and establishes this as the parser’s scorer. This is used to initialize the scores on a DependencyGraph during the parsing procedure.

Parameters
  • graphs (list(DependencyGraph)) – A list of dependency graphs to train the scorer.

  • dependency_scorer (DependencyScorerI) – A scorer which implements the DependencyScorerI interface.

initialize_edge_scores(graph)[source]

Assigns a score to every edge in the DependencyGraph graph. These scores are generated via the parser’s scorer which was assigned during the training process.

Parameters

graph (DependencyGraph) – A dependency graph to assign scores to.

collapse_nodes(new_node, cycle_path, g_graph, b_graph, c_graph)[source]

Takes a list of nodes that have been identified as belonging to a cycle, and collapses them into one larger node. The arcs of all nodes in the graph must be updated to account for this.

Parameters
  • new_node (Node (dict)) – A node to collapse the cycle nodes into.

  • cycle_path (list(int)) – A list of node addresses, each of which is in the cycle.

  • g_graph, b_graph, c_graph (DependencyGraph) – Graphs which need to be updated.

update_edge_scores(new_node, cycle_path)[source]

Updates the edge scores to reflect a collapse operation into new_node.

Parameters
  • new_node (Node) – The node which the cycle nodes are collapsed into.

  • cycle_path (list(int)) – A list of node addresses that belong to the cycle.

compute_original_indexes(new_indexes)[source]

As nodes are collapsed into others, they are replaced by the new node in the graph, but it’s still necessary to keep track of what these original nodes were. This takes a list of node addresses and replaces any collapsed node addresses with their original addresses.

Parameters

new_indexes (list(int)) – A list of node addresses to check for subsumed nodes.

compute_max_subtract_score(column_index, cycle_indexes)[source]

When updating scores the score of the highest-weighted incoming arc is subtracted upon collapse. This returns the correct amount to subtract from that edge.

Parameters
  • column_index (int) – An index representing the column of incoming arcs to the particular node being updated.

  • cycle_indexes (list(int)) – Only arcs from cycle nodes are considered; this is a list of such nodes’ addresses.
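The weight adjustment these methods perform can be sketched in miniature. In the Chu-Liu/Edmonds step, an arc entering a cycle node from outside is re-weighted by subtracting the score of that node's best incoming arc from within the cycle. The snippet below is illustrative (not the module's code), reusing the Keith Hall matrix, where nodes 1 and 2 form a cycle:

```python
# scores[head][dep]: list of scores for the arc head -> dep
scores = [[[], [5],  [1],  [1]],
          [[], [],   [11], [4]],
          [[], [10], [],   [5]],
          [[], [8],  [8],  []]]
cycle = [1, 2]   # nodes 1 and 2 choose each other as heads

def max_cycle_in(scores, dep, cycle):
    # best score of an arc into `dep` originating inside the cycle
    return max(scores[h][dep][0] for h in cycle if scores[h][dep])

# re-weight every arc that enters the cycle from outside it
adjusted = {}
for dep in cycle:
    m = max_cycle_in(scores, dep, cycle)
    for head in range(len(scores)):
        if head not in cycle and scores[head][dep]:
            adjusted[(head, dep)] = scores[head][dep][0] - m

print(adjusted)
# {(0, 1): -5, (3, 1): -2, (0, 2): -10, (3, 2): -3}
```

The least-negative adjusted arc, (3, 1), is the cheapest way to break the cycle, which matches the intuition behind compute_max_subtract_score.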

best_incoming_arc(node_index)[source]

Returns the source of the best incoming arc to the node with address node_index.

Parameters

node_index (int) – The address of the ‘destination’ node, the node that is arced to.

original_best_arc(node_index)[source]
parse(tokens, tags)[source]

Parses a list of tokens in accordance with the MST parsing algorithm for non-projective dependency parses. Assumes that the tokens have already been tagged and that those tags are provided. Various scoring methods can be used by implementing the DependencyScorerI interface and passing the scorer to the training algorithm.

Parameters
  • tokens (list(str)) – A list of words or punctuation to be parsed.

  • tags (list(str)) – A list of tags corresponding by index to the words in the tokens list.

Returns

An iterator of non-projective parses.

Return type

iter(DependencyGraph)

class nltk.parse.nonprojectivedependencyparser.NonprojectiveDependencyParser[source]

Bases: object

A non-projective, rule-based, dependency parser. This parser will return the set of all possible non-projective parses based on the word-to-word relations defined in the parser’s dependency grammar, and will allow the branches of the parse tree to cross in order to capture a variety of linguistic phenomena that a projective parser cannot.

__init__(dependency_grammar)[source]

Creates a new NonprojectiveDependencyParser.

Parameters

dependency_grammar (DependencyGrammar) – a grammar of word-to-word relations.

parse(tokens)[source]

Parses the input tokens with respect to the parser’s grammar. Parsing is accomplished by representing the search-space of possible parses as a fully-connected directed graph. Arcs that would lead to ungrammatical parses are removed and a lattice is constructed of length n, where n is the number of input tokens, to represent all possible grammatical traversals. All possible paths through the lattice are then enumerated to produce the set of non-projective parses.

Parameters

tokens (list(str)) – A list of tokens to parse.

Returns

An iterator of non-projective parses.

Return type

iter(DependencyGraph)
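The lattice enumeration described above can be illustrated in miniature. The snippet below is a toy sketch (plain Python, not the module's implementation, and the grammar table is hypothetical): each non-root token chooses a head licensed by a word-to-word relation table, and every complete head assignment is one candidate parse.

```python
from itertools import product

# hypothetical miniature grammar: head -> words it may govern
relations = {'taught': {'man', 'play'}, 'man': {'the'}}
tokens = ['the', 'man', 'taught']

def candidate_parses(tokens, relations):
    # possible heads for each token: any *other* token whose
    # relation set licenses it; None marks the root
    options = []
    for dep in tokens:
        heads = [h for h in tokens
                 if h != dep and dep in relations.get(h, ())]
        options.append(heads or [None])
    # enumerate every head assignment (the "lattice traversal")
    for assignment in product(*options):
        yield dict(zip(tokens, assignment))

for parse in candidate_parses(tokens, relations):
    print(parse)
# {'the': 'man', 'man': 'taught', 'taught': None}
```

A real grammar with ambiguous attachments yields several assignments, which is why the rule-based doctest above finds 4 parses for its sentence.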

nltk.parse.nonprojectivedependencyparser.demo()[source]
nltk.parse.nonprojectivedependencyparser.hall_demo()[source]
nltk.parse.nonprojectivedependencyparser.nonprojective_conll_parse_demo()[source]
nltk.parse.nonprojectivedependencyparser.rule_based_demo()[source]