NLTK: Help me please! NLTK and Stanford NLP integration

Created on 6 Jul 2018  ·  3 Comments  ·  Source: nltk/nltk

I ran into a confusing problem while integrating NLTK with the Stanford NLP tools.
My development environment:

  1. nltk 3.3
  2. Stanford NLP stanford-segmenter 3.6.0 / 3.9.1

I try to create a StanfordSegmenter object like this:

    standfordNlpPath = self.projectPath + "\standford-nlp\stanford-segmenter-2015-12-09"
    stanfordSegmenter = StanfordSegmenter(
        path_to_jar=standfordNlpPath + "\stanford-segmenter-3.6.0.jar",
        path_to_slf4j=standfordNlpPath + "\slf4j-api.jar",
        path_to_sihan_corpora_dict=standfordNlpPath + "\data-2015",
        path_to_model=standfordNlpPath + "\data-2015\pku.gz",
        path_to_dict=standfordNlpPath + "\data-2015\dict-chris6.ser.gz")
The failure output looks like this:

    ===========================================================================
    NLTK was unable to find stanford-segmenter.jar! Set the CLASSPATH
    environment variable.
    For more information, on stanford-segmenter.jar, see:
    https://nlp.stanford.edu/software

I'm quite sure all of the jars exist at those locations. Is there something wrong with my paths or with the parameters I pass to StanfordSegmenter? The example I found in the NLTK 3.3 documentation is much simpler; it passes only the single parameter path_to_slf4j.
So, somebody, help me :-( !
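One thing worth ruling out first: in an ordinary (non-raw) Python string, a backslash starts an escape sequence, so Windows-style paths concatenated as plain literals can be silently corrupted before NLTK ever sees them. A minimal self-contained sketch of the pitfall (the example path is hypothetical, not one from the snippet above):

```python
import os

# In a normal string literal, "\t" and "\n" are escape sequences: they
# become a single tab / newline character, silently corrupting the path.
plain = "C:\temp\new"   # actually contains a tab and a newline
raw = r"C:\temp\new"    # a raw string keeps the backslashes intact

print(len(plain))       # 9  -- two characters were swallowed
print(len(raw))         # 11
print("\t" in plain)    # True: there is a real tab in the "path"

# Safer still: let the standard library assemble the separators.
print(os.path.join("standford-nlp", "stanford-segmenter-2015-12-09"))
```

Using raw strings (r"...") or os.path.join for every path passed to StanfordSegmenter sidesteps this entire class of error.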

Labels: resolved, stanford api


All 3 comments

@libingnan54321 why are you not using the latest 3.9.1 version?

Can you please try this one first and provide the output?

import os

segmenter_jar_file = os.path.join(standfordNlpPath, 'stanford-segmenter-2018-02-27/stanford-segmenter-3.9.1.jar')
assert os.path.isfile(segmenter_jar_file)
stanfordSegmenter = StanfordSegmenter(
    path_to_jar=segmenter_jar_file,
)
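If explicit paths keep failing, the error message itself suggests another route: NLTK's Stanford wrappers also search the CLASSPATH environment variable for the jars. A minimal sketch, assuming a hypothetical local directory that holds the segmenter jars; it must be set before the wrapper object is constructed:

```python
import os

# Hypothetical directory containing stanford-segmenter-3.9.1.jar and the
# other Stanford jars; adjust to your local layout.
stanford_dir = os.path.join("standford-nlp", "stanford-segmenter-2018-02-27")

# NLTK falls back to CLASSPATH when it cannot locate a jar from the
# arguments alone, so this must happen before StanfordSegmenter(...) runs.
os.environ["CLASSPATH"] = stanford_dir

print(os.environ["CLASSPATH"])
```

This is only a workaround for the legacy wrapper; the CoreNLP server approach below avoids jar discovery entirely.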

Please use the new CoreNLPParser interface.

First update your NLTK:

pip3 install -U nltk

Then, still in the terminal:

# Get the CoreNLP package
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-02-27.zip
unzip stanford-corenlp-full-2018-02-27.zip
cd stanford-corenlp-full-2018-02-27/

# Download the models jar and properties file for Chinese
wget http://nlp.stanford.edu/software/stanford-chinese-corenlp-2018-02-27-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-chinese.properties

# Download the models jar and properties file for Arabic
wget http://nlp.stanford.edu/software/stanford-arabic-corenlp-2018-02-27-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-arabic.properties


For Chinese:

# Start the server.
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-chinese.properties \
-preload tokenize,ssplit,pos,lemma,ner,parse \
-status_port 9001  -port 9001 -timeout 15000 & 

Then in Python3:

>>> from nltk.parse import CoreNLPParser
>>> parser = CoreNLPParser('http://localhost:9001')
>>> list(parser.tokenize(u'我家没有电脑。'))
['我家', '没有', '电脑', '。']

For Arabic:

# Start the server.
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-arabic.properties \
-preload tokenize,ssplit,pos,parse \
-status_port 9005  -port 9005 -timeout 15000 &

Finally, start Python:

>>> from nltk.parse import CoreNLPParser
>>> parser = CoreNLPParser(url='http://localhost:9005')
>>> text = u'انا حامل'
>>> parser.tokenize(text)
<generator object GenericCoreNLPParser.tokenize at 0x7f4a26181bf8>
>>> list(parser.tokenize(text))
['انا', 'حامل']

Closing the issue as resolved for now =)
Please reopen if there are further issues.
