Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves punctuation splitting and separation of some affixes like possessives. Other languages require more extensive token pre-processing, which is usually called segmentation.
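As a minimal sketch of the English case described above (punctuation splitting plus separation of possessive `'s`), a naive regex-based tokenizer might look like the following. This is purely illustrative and is not the API of any particular segmenter package; real tokenizers handle many more cases (contractions, abbreviations, hyphenation, URLs).

```python
import re

def tokenize(text):
    # Separate possessive 's from its host word: dog's -> dog 's
    text = re.sub(r"(\w)('s)\b", r"\1 \2", text)
    # Split common punctuation into standalone tokens.
    text = re.sub(r"([.,!?;:\"()])", r" \1 ", text)
    # Whitespace split yields the final token list.
    return text.split()

print(tokenize("The dog's bone, please!"))
# -> ['The', 'dog', "'s", 'bone', ',', 'please', '!']
```

Languages without whitespace-delimited words (e.g. Chinese or Japanese) cannot be handled this way, which is why they need the more extensive segmentation step the paragraph mentions.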