Text Preprocessing With NLTK


Intro

Almost every Natural Language Processing (NLP) task requires text to be preprocessed before training a model. Deep learning models cannot use raw text directly, so it is up to us researchers to clean the text ourselves. Depending on the nature of the task, the preprocessing methods can be different. This tutorial will teach the most common preprocessing approach that can fit in with various NLP tasks using NLTK (Natural Language Toolkit) .

Why NLTK?

  • Popularity : NLTK is one of the leading platforms for working with language data.
  • Simplicity : It provides easy-to-use APIs for a wide variety of text preprocessing methods.
  • Community : It has a large, active community that supports and improves the library.
  • Open Source : It is free and open source, available for Windows, macOS, and Linux.

Now that you know the benefits of NLTK, let’s get started!

Tutorial Overview

  1. Lowercase
  2. Removing Punctuation
  3. Tokenization
  4. Stopword Filtering
  5. Stemming
  6. Part-of-Speech Tagger

All code displayed in this tutorial can be accessed in my GitHub repo.

Import NLTK

Before preprocessing, we first need to install the NLTK library.

pip install nltk

Then, we can import the library in our Python notebook and download the resources we need.
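
A minimal sketch of this setup is shown below; which resources you download depends on the steps you plan to use (punkt for tokenization, stopwords for filtering, averaged_perceptron_tagger for POS tagging):

import nltk

# Download the resources used later in this tutorial.
nltk.download('punkt')                        # tokenizer models
nltk.download('stopwords')                    # English stopword list
nltk.download('averaged_perceptron_tagger')   # POS tagger model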

Lowercase

As an example, we grab the first sentence of Pride and Prejudice as our text and convert it to lowercase via text.lower().
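
A minimal sketch of this step, using the opening line of the book as the text:

# Opening line of Pride and Prejudice.
text = ("It is a truth universally acknowledged, that a single man in "
        "possession of a good fortune, must be in want of a wife.")

# Convert the whole string to lowercase.
lowered = text.lower()
print(lowered)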

Removing Punctuation

To remove punctuation, we keep only the characters that are not in string.punctuation.
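
One way to sketch this, building on the lowered string from the previous step:

import string

# Keep only the characters that are not punctuation.
no_punct = "".join(ch for ch in lowered if ch not in string.punctuation)
print(no_punct)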

Tokenization

A string can be split into individual tokens via nltk.word_tokenize.
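
A short sketch, continuing from the punctuation-free string above:

import nltk

# Split the cleaned string into individual word tokens.
tokens = nltk.word_tokenize(no_punct)
print(tokens)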

Stopword Filtering

We can use nltk.corpus.stopwords.words('english') to fetch the list of English stopwords. Then, we remove the tokens that are stopwords.
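
A sketch of this step, reusing the tokens from the previous section:

from nltk.corpus import stopwords

# Fetch the English stopword list and drop tokens that appear in it.
stop_words = set(stopwords.words('english'))
filtered_words = [token for token in tokens if token not in stop_words]
print(filtered_words)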

Stemming

We stem the filtered tokens using nltk.stem.porter.PorterStemmer.
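
A sketch of the stemming step on the filtered tokens:

from nltk.stem.porter import PorterStemmer

# Reduce each filtered token to its stem (e.g. "possession" -> "possess").
stemmer = PorterStemmer()
stemmed_words = [stemmer.stem(word) for word in filtered_words]
print(stemmed_words)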

POS Tagger

Lastly, we can use nltk.pos_tag to retrieve the part of speech of each token in a list.
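
A sketch of the tagging step on the token list:

import nltk

# Tag each token with its part of speech, e.g. ('truth', 'NN').
pos = nltk.pos_tag(tokens)
print(pos)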

The full notebook can be seen here.

Combining It All Together

We can combine all of the preprocessing methods above into a preprocess function that takes in a .txt file and handles all of the preprocessing. We print out the tokens, the filtered words (after stopword filtering), the stemmed words, and the POS tags; one of these outputs is usually passed on to the model or used for further processing. We use the Pride and Prejudice book (accessible here) and preprocess it. A minimal sketch of such a function is shown below.
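
The filename pride_and_prejudice.txt below is a placeholder for wherever you save the book:

import string
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

def preprocess(filepath):
    # Read the raw text and normalize it.
    with open(filepath, encoding='utf-8') as f:
        text = f.read().lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)

    # Tokenize, filter stopwords, stem, and POS-tag.
    tokens = nltk.word_tokenize(text)
    stop_words = set(stopwords.words('english'))
    filtered_words = [t for t in tokens if t not in stop_words]
    stemmer = PorterStemmer()
    stemmed_words = [stemmer.stem(w) for w in filtered_words]
    pos = nltk.pos_tag(tokens)
    return tokens, filtered_words, stemmed_words, pos

# Placeholder path for the downloaded book.
tokens, filtered_words, stemmed_words, pos = preprocess('pride_and_prejudice.txt')
print(tokens[:10], filtered_words[:10], stemmed_words[:10], pos[:10])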

This notebook can be accessed here.

Conclusion

Text preprocessing is an important first step for any NLP application. In this tutorial, we discussed several popular preprocessing approaches using NLTK: lowercasing, punctuation removal, tokenization, stopword filtering, stemming, and part-of-speech tagging.

