python – How to use word_tokenize in a data frame


Translated from: https://stackoverflow.com/questions/33098040/how-to-use-word-tokenize-in-data-frame

I recently started using the nltk module for text analysis, and I'm stuck on one step. I want to use word_tokenize on a DataFrame so that I can get all the words used in a particular row of the DataFrame.

data example:
       text
1.   This is a very good site. I will recommend it to others.
2.   Can you please give me a call at 9983938428. have issues with the listings.
3.   good work! keep it up
4.   not a very helpful site in finding home decor. 

expected output:

1.   'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2.   'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3.   'good','work','!','keep','it','up'
4.   'not','a','very','helpful','site','in','finding','home','decor'

Basically, I want to separate out all the words and find the length of each text in the DataFrame.

I know word_tokenize works on a string, but how do I apply it to the entire DataFrame?

Please help!

Thanks in advance…

You can use the apply method of the DataFrame API:

import pandas as pd
import nltk  # word_tokenize needs the Punkt models: run nltk.download('punkt') once

df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})
# axis=1 hands each row to the lambda, which tokenizes that row's 'sentences' value
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)

Output:

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  
0  [This, is, a, very, good, site, ., I, will, re...  
1  [Can, you, please, give, me, a, call, at, 9983...  
2                      [good, work, !, keep, it, up]
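
Since word_tokenize only needs the text column, an equivalent and slightly shorter form (a minimal sketch reusing the df built above) is to apply it to the Series directly:

# Same result as the row-wise apply: map word_tokenize over just the one column
df['tokenized_sents'] = df['sentences'].apply(nltk.word_tokenize)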

To find the length of each text, try using apply with a lambda function again:

# Count the tokens produced for each row
df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  sents_length  
0  [This, is, a, very, good, site, ., I, will, re...            14  
1  [Can, you, please, give, me, a, call, at, 9983...            15  
2                      [good, work, !, keep, it, up]             6
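
Because each entry of tokenized_sents is already a list of tokens, the count can also be taken straight from that column (a sketch under the same assumptions as above):

# len() of each token list is the word count for that text
df['sents_length'] = df['tokenized_sents'].apply(len)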
