
Sentence Segmentation and Word Tokenization with NLTK

Created: 2017-03-05
1. Split a paragraph into sentences (Punkt sentence tokenizer)
import nltk.data

def splitSentence(paragraph):
    # Load the pre-trained Punkt sentence tokenizer for English
    tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
    sentences = tokenizer.tokenize(paragraph)
    return sentences

if __name__ == "__main__":
    print(splitSentence("My name is Tom. I am a boy. I like soccer!"))

Output: ["My name is Tom.", "I am a boy.", "I like soccer!"]
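
If loading the pickle fails, the Punkt model most likely has not been downloaded yet. As a minimal sketch (nltk.download and the sent_tokenize shortcut are standard NLTK APIs, not part of the original post), the same result can also be obtained like this:

import nltk
from nltk.tokenize import sent_tokenize

# One-time download of the Punkt model (stored under ~/nltk_data by default)
nltk.download("punkt")

# sent_tokenize loads the English Punkt model internally
print(sent_tokenize("My name is Tom. I am a boy. I like soccer!"))
# ['My name is Tom.', 'I am a boy.', 'I like soccer!']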


2. Split a sentence into words

from nltk.tokenize import WordPunctTokenizer

def wordtokenizer(sentence):
    # Tokenize the sentence into words and punctuation marks
    words = WordPunctTokenizer().tokenize(sentence)
    return words

if __name__ == "__main__":
    print(wordtokenizer("My name is Tom."))

Output: ["My", "name", "is", "Tom", "."]
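
Note that WordPunctTokenizer splits on every run of punctuation characters, which matters for contractions. Below is a small comparison sketch (word_tokenize is NLTK's standard Treebank-style tokenizer and assumes the Punkt model from step 1 is installed; the sample sentence is illustrative):

from nltk.tokenize import WordPunctTokenizer, word_tokenize

sentence = "Don't hesitate to ask."

# WordPunctTokenizer splits strictly at punctuation boundaries
print(WordPunctTokenizer().tokenize(sentence))
# ['Don', "'", 't', 'hesitate', 'to', 'ask', '.']

# word_tokenize keeps contractions as Treebank-style tokens
print(word_tokenize(sentence))
# ['Do', "n't", 'hesitate', 'to', 'ask', '.']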



