Nltk: Dropping Python 2.7 Support

Created on 10 May 2019  ·  4 comments  ·  Source: nltk/nltk

From https://travis-ci.org/nltk/nltk/jobs/530566954, some of the dependencies NLTK relies on no longer support Python 2.7. I think this is good timing to drop Python 2.7 support as well, so that our CI keeps working and the library can move forward.

Labels: admin, python2.7, python3, pythonic

All 4 comments

I see, ok, it's time to move on!

Starting from these __future__ imports =)

~/git-stuff/my-contrib/nltk$ grep "__future__" nltk/**/*
nltk/app/chartparser_app.py:from __future__ import division
nltk/app/chunkparser_app.py:from __future__ import division
nltk/app/collocations_app.py:from __future__ import division
nltk/app/rdparser_app.py:from __future__ import division
nltk/app/wordnet_app.py:from __future__ import print_function
nltk/ccg/api.py:from __future__ import unicode_literals
nltk/ccg/chart.py:from __future__ import print_function, division, unicode_literals
nltk/ccg/combinator.py:from __future__ import unicode_literals
nltk/ccg/lexicon.py:from __future__ import unicode_literals
nltk/chat/__init__.py:from __future__ import print_function
nltk/chat/eliza.py:from __future__ import print_function
nltk/chat/iesha.py:from __future__ import print_function
nltk/chat/rude.py:from __future__ import print_function
nltk/chat/suntsu.py:from __future__ import print_function
nltk/chat/util.py:from __future__ import print_function
nltk/chat/zen.py:from __future__ import print_function
nltk/chunk/named_entity.py:from __future__ import print_function
nltk/chunk/named_entity.py:from __future__ import unicode_literals
nltk/chunk/regexp.py:from __future__ import print_function, unicode_literals
nltk/chunk/regexp.py:from __future__ import division
nltk/chunk/util.py:from __future__ import print_function, unicode_literals, division
nltk/classify/decisiontree.py:from __future__ import print_function, unicode_literals, division
nltk/classify/maxent.py:from __future__ import print_function, unicode_literals
nltk/classify/megam.py:from __future__ import print_function
nltk/classify/naivebayes.py:from __future__ import print_function, unicode_literals
nltk/classify/rte_classify.py:from __future__ import print_function
nltk/classify/scikitlearn.py:from __future__ import print_function, unicode_literals
nltk/classify/senna.py:    >>> from __future__ import unicode_literals
nltk/classify/senna.py:from __future__ import unicode_literals
nltk/classify/tadm.py:from __future__ import print_function, unicode_literals
nltk/classify/textcat.py:from __future__ import print_function, unicode_literals
nltk/classify/util.py:from __future__ import print_function, division
nltk/classify/weka.py:from __future__ import print_function
nltk/cluster/em.py:from __future__ import print_function, unicode_literals
nltk/cluster/gaac.py:from __future__ import print_function, unicode_literals, division
nltk/cluster/kmeans.py:from __future__ import print_function, unicode_literals, division
nltk/cluster/util.py:from __future__ import print_function, unicode_literals, division
grep: nltk/corpus/reader: Is a directory
nltk/corpus/util.py:from __future__ import unicode_literals
nltk/draw/table.py:from __future__ import division
nltk/inference/api.py:from __future__ import print_function
nltk/inference/discourse.py:from __future__ import print_function
nltk/inference/mace.py:from __future__ import print_function
nltk/inference/nonmonotonic.py:from __future__ import print_function, unicode_literals
nltk/inference/prover9.py:from __future__ import print_function
nltk/inference/resolution.py:from __future__ import print_function, unicode_literals
nltk/inference/tableau.py:from __future__ import print_function, unicode_literals
nltk/lm/api.py:from __future__ import division, unicode_literals
nltk/lm/counter.py:from __future__ import unicode_literals
nltk/lm/models.py:from __future__ import division, unicode_literals
nltk/lm/vocabulary.py:from __future__ import unicode_literals
nltk/metrics/agreement.py:from __future__ import print_function, unicode_literals, division
nltk/metrics/aline.py:from __future__ import unicode_literals
nltk/metrics/association.py:from __future__ import division
nltk/metrics/confusionmatrix.py:from __future__ import print_function, unicode_literals
nltk/metrics/distance.py:from __future__ import print_function
nltk/metrics/distance.py:from __future__ import division
nltk/metrics/scores.py:from __future__ import print_function, division
nltk/metrics/spearman.py:from __future__ import division
nltk/misc/babelfish.py:from __future__ import print_function
nltk/misc/chomsky.py:from __future__ import print_function
nltk/misc/sort.py:from __future__ import print_function, division
nltk/misc/wordfinder.py:from __future__ import print_function
nltk/parse/bllip.py:from __future__ import print_function
nltk/parse/chart.py:from __future__ import print_function, division, unicode_literals
nltk/parse/corenlp.py:from __future__ import unicode_literals
nltk/parse/dependencygraph.py:from __future__ import print_function, unicode_literals
nltk/parse/earleychart.py:from __future__ import print_function, division
nltk/parse/evaluate.py:from __future__ import division
nltk/parse/featurechart.py:from __future__ import print_function, unicode_literals
nltk/parse/generate.py:from __future__ import print_function
nltk/parse/malt.py:from __future__ import print_function, unicode_literals
nltk/parse/nonprojectivedependencyparser.py:from __future__ import print_function
nltk/parse/pchart.py:from __future__ import print_function, unicode_literals
nltk/parse/projectivedependencyparser.py:from __future__ import print_function, unicode_literals
nltk/parse/recursivedescent.py:from __future__ import print_function, unicode_literals
nltk/parse/shiftreduce.py:from __future__ import print_function, unicode_literals
nltk/parse/stanford.py:from __future__ import unicode_literals
nltk/parse/transitionparser.py:from __future__ import absolute_import
nltk/parse/transitionparser.py:from __future__ import division
nltk/parse/transitionparser.py:from __future__ import print_function
nltk/parse/util.py:from __future__ import print_function
nltk/parse/viterbi.py:from __future__ import print_function, unicode_literals
nltk/sem/boxer.py:from __future__ import print_function, unicode_literals
nltk/sem/chat80.py:from __future__ import print_function, unicode_literals
nltk/sem/cooper_storage.py:from __future__ import print_function
nltk/sem/drt.py:from __future__ import print_function, unicode_literals
nltk/sem/evaluate.py:from __future__ import print_function, unicode_literals
nltk/sem/glue.py:from __future__ import print_function, division, unicode_literals
nltk/sem/hole.py:from __future__ import print_function, unicode_literals
nltk/sem/lfg.py:from __future__ import print_function, division, unicode_literals
nltk/sem/linearlogic.py:from __future__ import print_function, unicode_literals
nltk/sem/logic.py:from __future__ import print_function, unicode_literals
nltk/sem/relextract.py:from __future__ import print_function
nltk/sem/util.py:from __future__ import print_function, unicode_literals
nltk/sentiment/sentiment_analyzer.py:from __future__ import print_function
nltk/sentiment/util.py:from __future__ import division
nltk/stem/arlstem.py:from __future__ import unicode_literals
nltk/stem/cistem.py:from __future__ import unicode_literals
nltk/stem/isri.py:from __future__ import unicode_literals
nltk/stem/lancaster.py:from __future__ import unicode_literals
nltk/stem/porter.py:from __future__ import print_function, unicode_literals
nltk/stem/regexp.py:from __future__ import unicode_literals
nltk/stem/rslp.py:from __future__ import print_function, unicode_literals
nltk/stem/snowball.py:from __future__ import unicode_literals, print_function
nltk/stem/wordnet.py:from __future__ import unicode_literals
nltk/tag/__init__.py:from __future__ import print_function
nltk/tag/brill.py:from __future__ import print_function, division
nltk/tag/brill_trainer.py:from __future__ import print_function, division
nltk/tag/crf.py:from __future__ import absolute_import
nltk/tag/crf.py:from __future__ import unicode_literals
nltk/tag/hmm.py:from __future__ import print_function, unicode_literals, division
nltk/tag/mapping.py:from __future__ import print_function, unicode_literals, division
nltk/tag/perceptron.py:from __future__ import absolute_import
nltk/tag/perceptron.py:from __future__ import print_function, division
nltk/tag/sequential.py:from __future__ import print_function, unicode_literals
nltk/tag/tnt.py:from __future__ import print_function, division
nltk/tbl/demo.py:from __future__ import print_function, absolute_import, division
nltk/tbl/erroranalysis.py:from __future__ import print_function
nltk/tbl/feature.py:from __future__ import division, print_function, unicode_literals
nltk/tbl/rule.py:from __future__ import print_function
nltk/tbl/template.py:from __future__ import print_function
nltk/test/childes_fixt.py:from __future__ import absolute_import
nltk/test/classify_fixt.py:from __future__ import absolute_import
nltk/test/compat_fixt.py:from __future__ import absolute_import
nltk/test/corpus_fixt.py:from __future__ import absolute_import
nltk/test/data.doctest:    >>> from __future__ import print_function
nltk/test/discourse_fixt.py:from __future__ import absolute_import
nltk/test/doctest_nose_plugin.py:from __future__ import print_function
nltk/test/featgram.doctest:    >>> from __future__ import print_function
nltk/test/featstruct.doctest:    >>> from __future__ import print_function
nltk/test/gensim_fixt.py:from __future__ import absolute_import
nltk/test/gluesemantics_malt_fixt.py:from __future__ import absolute_import
grep: nltk/test/images: Is a directory
nltk/test/inference_fixt.py:from __future__ import absolute_import
nltk/test/metrics.doctest:   >>> from __future__ import print_function
nltk/test/nonmonotonic_fixt.py:from __future__ import absolute_import
nltk/test/portuguese_en_fixt.py:from __future__ import absolute_import
nltk/test/probability_fixt.py:from __future__ import absolute_import
nltk/test/runtests.py:from __future__ import absolute_import, print_function
nltk/test/segmentation_fixt.py:from __future__ import absolute_import
nltk/test/semantics_fixt.py:from __future__ import absolute_import
nltk/test/simple.doctest:    >>> from __future__ import print_function
nltk/test/stem.doctest:    >>> from __future__ import print_function
nltk/test/tokenize.doctest:    >>> from __future__ import print_function
nltk/test/translate_fixt.py:from __future__ import absolute_import
grep: nltk/test/unit: Is a directory
nltk/test/util.doctest:    >>> from __future__ import print_function
nltk/test/wordnet.doctest:    >>> from __future__ import print_function, unicode_literals
nltk/test/wordnet_fixt.py:from __future__ import absolute_import
nltk/tokenize/casual.py:from __future__ import unicode_literals
nltk/tokenize/nist.py:from __future__ import unicode_literals
nltk/tokenize/punkt.py:from __future__ import print_function, unicode_literals, division
nltk/tokenize/regexp.py:from __future__ import unicode_literals
nltk/tokenize/repp.py:from __future__ import unicode_literals, print_function
nltk/tokenize/simple.py:from __future__ import unicode_literals
nltk/tokenize/sonority_sequencing.py:from __future__ import unicode_literals
nltk/tokenize/stanford.py:from __future__ import unicode_literals, print_function
nltk/tokenize/stanford_segmenter.py:from __future__ import unicode_literals, print_function
nltk/translate/api.py:from __future__ import print_function, unicode_literals
nltk/translate/bleu_score.py:from __future__ import division
nltk/translate/chrf_score.py:from __future__ import division
nltk/translate/gale_church.py:from __future__ import division
nltk/translate/gleu_score.py:from __future__ import division
nltk/translate/ibm1.py:from __future__ import division
nltk/translate/ibm2.py:from __future__ import division
nltk/translate/ibm3.py:from __future__ import division
nltk/translate/ibm4.py:from __future__ import division
nltk/translate/ibm5.py:from __future__ import division
nltk/translate/ibm_model.py:from __future__ import division
nltk/translate/metrics.py:from __future__ import division
nltk/translate/nist_score.py:from __future__ import division
nltk/translate/ribes_score.py:from __future__ import division
nltk/twitter/common.py:from __future__ import print_function
nltk/twitter/twitter_demo.py:from __future__ import print_function
nltk/twitter/util.py:from __future__ import print_function
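Since every one of these is a no-op on Python 3 (print_function, division, unicode_literals and absolute_import are all defaults there), the imports can be stripped mechanically. A minimal sketch of such a cleanup pass, assuming it is run from the repository root (the script name and regex are illustrative, not something that exists in NLTK):

# strip_future.py: hypothetical helper, not part of the NLTK codebase.
# Deletes "from __future__ import ..." lines, which do nothing on Python 3.
import pathlib
import re

FUTURE_RE = re.compile(r"^\s*from __future__ import .*\n", re.MULTILINE)

for path in pathlib.Path("nltk").rglob("*.py"):
    source = path.read_text(encoding="utf-8")
    cleaned = FUTURE_RE.sub("", source)
    if cleaned != source:
        path.write_text(cleaned, encoding="utf-8")
        print("stripped __future__ imports from", path)

The >>> lines in the .doctest files above would still need to be edited by hand, since they are doctest examples rather than module-level imports.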

Then the compat code:

~/git-stuff/my-contrib/nltk$ grep "compat" nltk/**/*
nltk/app/chartparser_app.py:        if not self._checkcompat():
nltk/app/chartparser_app.py:        if not self._checkcompat():
nltk/app/chartparser_app.py:        if not self._checkcompat():
nltk/app/chartparser_app.py:    def _checkcompat(self):
nltk/app/concordance_app.py:import nltk.compat
nltk/app/wordnet_app.py:from nltk import compat
nltk/app/wordnet_app.py:if compat.PY3:
nltk/ccg/api.py:from nltk.compat import python_2_unicode_compatible, unicode_repr
nltk/ccg/api.py:@python_2_unicode_compatible
nltk/ccg/api.py:@python_2_unicode_compatible
nltk/ccg/api.py:@python_2_unicode_compatible
nltk/ccg/api.py:@python_2_unicode_compatible
nltk/ccg/chart.py:from nltk.compat import python_2_unicode_compatible
nltk/ccg/chart.py:@python_2_unicode_compatible
nltk/ccg/chart.py:@python_2_unicode_compatible
nltk/ccg/chart.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:from nltk.compat import python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/combinator.py:@python_2_unicode_compatible
nltk/ccg/lexicon.py:from nltk.compat import python_2_unicode_compatible
nltk/ccg/lexicon.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:from nltk.compat import python_2_unicode_compatible, unicode_repr
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/regexp.py:@python_2_unicode_compatible
nltk/chunk/util.py:from nltk.compat import python_2_unicode_compatible
nltk/classify/decisiontree.py:from nltk.compat import python_2_unicode_compatible
nltk/classify/decisiontree.py:@python_2_unicode_compatible
nltk/classify/maxent.py:from nltk import compat
nltk/classify/maxent.py:@compat.python_2_unicode_compatible
nltk/classify/megam.py:from nltk import compat
nltk/classify/scikitlearn.py:from nltk import compat
nltk/classify/scikitlearn.py:@compat.python_2_unicode_compatible
nltk/classify/senna.py:from nltk.compat import python_2_unicode_compatible
nltk/classify/senna.py:@python_2_unicode_compatible
nltk/classify/textcat.py:from nltk.compat import PY3
nltk/cluster/em.py:from nltk.compat import python_2_unicode_compatible
nltk/cluster/em.py:@python_2_unicode_compatible
nltk/cluster/gaac.py:from nltk.compat import python_2_unicode_compatible
nltk/cluster/gaac.py:@python_2_unicode_compatible
nltk/cluster/kmeans.py:from nltk.compat import python_2_unicode_compatible
nltk/cluster/kmeans.py:@python_2_unicode_compatible
nltk/cluster/util.py:from nltk.compat import python_2_unicode_compatible
nltk/cluster/util.py:@python_2_unicode_compatible
grep: nltk/corpus/reader: Is a directory
nltk/corpus/util.py:from nltk.compat import python_2_unicode_compatible
nltk/corpus/util.py:@python_2_unicode_compatible
nltk/draw/dispersion.py:    import nltk.compat
nltk/draw/table.py:    a way that's incompatible with the fact that ``Table`` behaves as a
nltk/inference/nonmonotonic.py:from nltk.compat import python_2_unicode_compatible
nltk/inference/nonmonotonic.py:@python_2_unicode_compatible
nltk/inference/resolution.py:from nltk.compat import python_2_unicode_compatible
nltk/inference/resolution.py:@python_2_unicode_compatible
nltk/inference/resolution.py:@python_2_unicode_compatible
nltk/lm/counter.py:from nltk import compat
nltk/lm/counter.py:@compat.python_2_unicode_compatible
nltk/lm/models.py:from nltk import compat
nltk/lm/models.py:@compat.python_2_unicode_compatible
nltk/lm/models.py:@compat.python_2_unicode_compatible
nltk/lm/models.py:@compat.python_2_unicode_compatible
nltk/lm/vocabulary.py:from nltk import compat
nltk/lm/vocabulary.py:@compat.python_2_unicode_compatible
nltk/metrics/agreement.py:from nltk.compat import python_2_unicode_compatible
nltk/metrics/agreement.py:@python_2_unicode_compatible
nltk/metrics/confusionmatrix.py:from nltk.compat import python_2_unicode_compatible
nltk/metrics/confusionmatrix.py:@python_2_unicode_compatible
nltk/parse/chart.py:from nltk.compat import python_2_unicode_compatible, unicode_repr
nltk/parse/chart.py:@python_2_unicode_compatible
nltk/parse/chart.py:@python_2_unicode_compatible
nltk/parse/chart.py:@python_2_unicode_compatible
nltk/parse/corenlp.py:                response = requests.get(requests.compat.urljoin(self.url, 'live'))
nltk/parse/corenlp.py:                response = requests.get(requests.compat.urljoin(self.url, 'ready'))
nltk/parse/dependencygraph.py:from nltk.compat import python_2_unicode_compatible
nltk/parse/dependencygraph.py:@python_2_unicode_compatible
nltk/parse/featurechart.py:from nltk.compat import python_2_unicode_compatible
nltk/parse/featurechart.py:@python_2_unicode_compatible
nltk/parse/pchart.py:from nltk.compat import python_2_unicode_compatible
nltk/parse/pchart.py:@python_2_unicode_compatible
nltk/parse/projectivedependencyparser.py:from nltk.compat import python_2_unicode_compatible
nltk/parse/projectivedependencyparser.py:@python_2_unicode_compatible
nltk/parse/projectivedependencyparser.py:@python_2_unicode_compatible
nltk/parse/recursivedescent.py:from nltk.compat import unicode_repr
nltk/parse/shiftreduce.py:from nltk.compat import unicode_repr
nltk/parse/stanford.py:        # Windows is incompatible with NamedTemporaryFile() without passing in delete=False.
nltk/parse/viterbi.py:from nltk.compat import python_2_unicode_compatible
nltk/parse/viterbi.py:@python_2_unicode_compatible
nltk/sem/boxer.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/boxer.py:@python_2_unicode_compatible
nltk/sem/boxer.py:@python_2_unicode_compatible
nltk/sem/boxer.py:@python_2_unicode_compatible
nltk/sem/chat80.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/chat80.py:@python_2_unicode_compatible
nltk/sem/drt.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/drt.py:@python_2_unicode_compatible
nltk/sem/drt.py:@python_2_unicode_compatible
nltk/sem/drt.py:@python_2_unicode_compatible
nltk/sem/drt.py:@python_2_unicode_compatible
nltk/sem/evaluate.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/evaluate.py:@python_2_unicode_compatible
nltk/sem/evaluate.py:@python_2_unicode_compatible
nltk/sem/evaluate.py:@python_2_unicode_compatible
nltk/sem/glue.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/glue.py:@python_2_unicode_compatible
nltk/sem/glue.py:@python_2_unicode_compatible
nltk/sem/hole.py:from nltk import compat
nltk/sem/hole.py:@compat.python_2_unicode_compatible
nltk/sem/lfg.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/lfg.py:@python_2_unicode_compatible
nltk/sem/linearlogic.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/linearlogic.py:@python_2_unicode_compatible
nltk/sem/linearlogic.py:@python_2_unicode_compatible
nltk/sem/linearlogic.py:@python_2_unicode_compatible
nltk/sem/linearlogic.py:@python_2_unicode_compatible
nltk/sem/linearlogic.py:@python_2_unicode_compatible
nltk/sem/logic.py:from nltk.compat import python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/logic.py:@python_2_unicode_compatible
nltk/sem/util.py:    Check that interpret_sents() is compatible with legacy grammars that use
nltk/sentiment/util.py:        # The protocol=2 parameter is for python2 compatibility
nltk/sentiment/util.py:        (writer, outf) = outf_writer_compat(outfile, encoding, errors, gzip_compress)
nltk/sentiment/util.py:    from nltk.twitter.common import outf_writer_compat, extract_fields
nltk/sentiment/vader.py:# ensure Python 3 compatibility, and refactoring to achieve greater modularity.
nltk/stem/cistem.py:from nltk.compat import python_2_unicode_compatible
nltk/stem/cistem.py:@python_2_unicode_compatible
nltk/stem/lancaster.py:from nltk.compat import python_2_unicode_compatible
nltk/stem/lancaster.py:@python_2_unicode_compatible
nltk/stem/porter.py:from nltk.compat import python_2_unicode_compatible
nltk/stem/porter.py:@python_2_unicode_compatible
nltk/stem/regexp.py:from nltk.compat import python_2_unicode_compatible
nltk/stem/regexp.py:@python_2_unicode_compatible
nltk/stem/snowball.py:from nltk import compat
nltk/stem/snowball.py:@compat.python_2_unicode_compatible
nltk/stem/wordnet.py:from nltk.compat import python_2_unicode_compatible
nltk/stem/wordnet.py:@python_2_unicode_compatible
nltk/tag/hmm.py:from nltk.compat import python_2_unicode_compatible
nltk/tag/hmm.py:@python_2_unicode_compatible
nltk/tag/perceptron.py:from nltk.compat import python_2_unicode_compatible
nltk/tag/perceptron.py:@python_2_unicode_compatible
nltk/tag/perceptron.py:                # changed protocol from -1 to 2 to make pickling Python 2 compatible
nltk/tag/senna.py:from nltk.compat import python_2_unicode_compatible
nltk/tag/senna.py:@python_2_unicode_compatible
nltk/tag/senna.py:@python_2_unicode_compatible
nltk/tag/senna.py:@python_2_unicode_compatible
nltk/tag/sequential.py:from nltk.compat import python_2_unicode_compatible
nltk/tag/sequential.py:@python_2_unicode_compatible
nltk/tag/sequential.py:@python_2_unicode_compatible
nltk/tag/sequential.py:@python_2_unicode_compatible
nltk/tag/sequential.py:@python_2_unicode_compatible
nltk/tbl/rule.py:from nltk.compat import python_2_unicode_compatible, unicode_repr
nltk/tbl/rule.py:@python_2_unicode_compatible
nltk/tbl/rule.py:                # for complete compatibility with the wordy format of nltk2
nltk/tbl/template.py:        An alternative calling convention (kept for backwards compatibility,
nltk/test/compat.doctest:NLTK comes with a Python 2.x/3.x compatibility layer, nltk.compat
nltk/test/compat.doctest:    >>> from nltk import compat
nltk/test/compat.doctest:    >>> compat.PY3
nltk/test/compat.doctest:@python_2_unicode_compatible
nltk/test/compat.doctest:``@python_2_unicode_compatible`` decorator allows writing these methods
nltk/test/compat.doctest:in a way compatible with Python 3.x:
nltk/test/compat.doctest:    >>> from nltk.compat import python_2_unicode_compatible
nltk/test/compat.doctest:    >>> @python_2_unicode_compatible
nltk/test/compat.doctest:There is no need to wrap a subclass with ``@python_2_unicode_compatible``
nltk/test/compat.doctest:    >>> @python_2_unicode_compatible
nltk/test/compat.doctest:Applying ``@python_2_unicode_compatible`` to a subclass
nltk/test/compat.doctest:``nltk.compat.unicode_repr`` function may be used instead of ``repr`` and
nltk/test/compat.doctest:    >>> from nltk.compat import unicode_repr
nltk/test/compat.doctest:of objects which class was fixed by ``@python_2_unicode_compatible``
nltk/test/compat.doctest:    >>> @python_2_unicode_compatible
nltk/test/compat.doctest:methods of classes fixed by ``@python_2_unicode_compatible``
nltk/test/compat_fixt.py:from nltk.compat import PY3
nltk/test/compat_fixt.py:        raise SkipTest("compat.doctest is for Python 2.x")
nltk/test/corpus.doctest:    >>> # We use float for backward compatibility with division in Python2.7
nltk/test/data.doctest:    >>> from nltk.compat import StringIO
nltk/test/floresta.txt:O norte-americano Pete Sampras foi afastado pelo seu compatriota Jim Courier (24? ATP) pelos parciais de 7-6 (7-5), 6-4, o que significa que o n?mero um do mundo vai chegar ? ?catedral da terra batida?, Roland Garros, com duas derrotas em outros tantos encontros disputados sobre o p? de tijolo.
grep: nltk/test/images: Is a directory
nltk/test/index.doctest:.. _compat howto: compat.html
nltk/test/index.doctest:* `compat HOWTO`_
nltk/test/portuguese_en_fixt.py:from nltk.compat import PY3
nltk/test/semantics.doctest:In order to provide backwards compatibility with 'legacy' grammars where the semantics value
nltk/test/twitter.ipynb:    "from nltk.compat import StringIO\n",
grep: nltk/test/unit: Is a directory
nltk/tokenize/punkt.py:from nltk.compat import unicode_repr, python_2_unicode_compatible
nltk/tokenize/punkt.py:@python_2_unicode_compatible
nltk/tokenize/regexp.py:from nltk.compat import python_2_unicode_compatible
nltk/tokenize/regexp.py:@python_2_unicode_compatible
nltk/tokenize/stanford.py:        # Windows is incompatible with NamedTemporaryFile() without passing in delete=False.
nltk/tokenize/stanford_segmenter.py:from nltk import compat
nltk/translate/api.py:from nltk.compat import python_2_unicode_compatible
nltk/translate/api.py:@python_2_unicode_compatible
nltk/translate/api.py:@python_2_unicode_compatible
nltk/translate/bleu_score.py:    from nltk.compat import Fraction
nltk/twitter/api.py:from nltk.compat import UTC
nltk/twitter/common.py:from nltk import compat
nltk/twitter/common.py:    (writer, outf) = outf_writer_compat(outfile, encoding, errors, gzip_compress)
nltk/twitter/common.py:def outf_writer_compat(outfile, encoding, errors, gzip_compress=False):
nltk/twitter/common.py:    if compat.PY3:
nltk/twitter/common.py:        writer = compat.UnicodeWriter(outf, encoding=encoding, errors=errors)
nltk/twitter/common.py:    (writer, outf) = outf_writer_compat(outfile, encoding, errors, gzip_compress)
nltk/twitter/twitter_demo.py:from nltk.compat import StringIO
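Most of these hits are the @python_2_unicode_compatible decorator, which (like the six/Django helper of the same name) rewrites __str__ to return bytes and adds __unicode__ on Python 2, and is a no-op on Python 3. Once 2.7 is dropped, removing it is mostly a matter of deleting the import and the decorator line. A before/after sketch with a made-up class, just to show the shape of the change:

# Hypothetical example, not an actual NLTK class.
#
# Python 2/3 version:
#     from nltk.compat import python_2_unicode_compatible
#
#     @python_2_unicode_compatible
#     class Rule:
#         def __str__(self):
#             return "Rule(%s)" % self.name
#
# Python 3-only version: the import and decorator simply disappear.
class Rule:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        # On Python 3, __str__ already returns text, so no shim is needed.
        return "Rule(%s)" % self.name

unicode_repr and the PY3 flag go the same way: repr() is already Unicode-safe on Python 3, and PY3 branches collapse to their true arm.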

And some low-hanging fruit:

~/git-stuff/my-contrib/nltk$ grep "sys.version" nltk/**/*
grep: nltk/corpus/reader: Is a directory
nltk/lm/vocabulary.py:    if sys.version_info[0] == 2:
nltk/sem/evaluate.py:    if sys.version_info[0] >= 3:
nltk/sentiment/util.py:    if sys.version_info[0] == 3:
nltk/sentiment/util.py:    elif sys.version_info[0] < 3:
grep: nltk/test/images: Is a directory
grep: nltk/test/unit: Is a directory
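Each of these version checks collapses to its Python 3 branch. The pickle-protocol workaround mentioned in the sentiment/util.py and tag/perceptron.py comments above is a typical case; a hedged sketch of the shape of that simplification (not the actual NLTK code):

import pickle

# Before dropping 2.7 (roughly): pin protocol 2 so Python 2 can still read the file.
#     if sys.version_info[0] >= 3:
#         pickle.dump(model, fileobj, protocol=2)
#     else:
#         pickle.dump(model, fileobj)

# After dropping 2.7: no version check, and the default protocol is fine.
def save_model(model, fileobj):
    pickle.dump(model, fileobj)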

With Python 3.8 support from #2432 , maybe it's time we officially drop Python 2.7? It's 2020 =)

P/S: It's already out of our CI workflow for quite a while.

If #2504 is merged, there will still be a few Python 2 leftovers around:

  • [x] The test plugin in nltk/test/doctest_nose_plugin.py, which ignores the leading u prefix on Unicode string literals so that equality comparisons still succeed. Removing this plugin requires first removing all u'...' literals from the tests (see the sketch after this list).
  • [ ] web/dev/local_testing.rst still mentions Python 2 and should be updated.
  • [ ] Mentions of Python 2.7 and Python 2.6 in the code; check whether they are still relevant.
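For the first item, the u-prefix handling that the plugin does is essentially the classic doctest OutputChecker trick: strip the u prefix from both the expected and actual output before comparing. A minimal sketch of that idea (not the plugin's actual code):

import doctest
import re

U_PREFIX = re.compile(r"\bu(['\"])")

class NoUPrefixChecker(doctest.OutputChecker):
    """Treat u'...' and '...' as equal when comparing doctest output."""

    def check_output(self, want, got, optionflags):
        want = U_PREFIX.sub(r"\1", want)
        got = U_PREFIX.sub(r"\1", got)
        return super().check_output(want, got, optionflags)

Once every u'...' literal is removed from the doctests themselves, the checker (and the plugin) can simply be deleted.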