Top 20 NuGet Tokenization Packages
Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves splitting on punctuation and separating affixes such as possessives. Other languages require more extensive token pre-processing, usually called segmentation.
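The English case described above can be sketched in a few lines. This is a minimal illustration, not the implementation of any package listed here; the regex and the `tokenize` name are assumptions made for the example.

```python
import re

# Order matters: try the possessive suffix first, then whole word runs,
# then any single non-space punctuation character.
TOKEN_RE = re.compile(r"'s\b|\w+|[^\w\s]")

def tokenize(text):
    """Return word, possessive, and punctuation tokens from English text."""
    return TOKEN_RE.findall(text)

print(tokenize("The dog's bone is here."))
# → ['The', 'dog', "'s", 'bone', 'is', 'here', '.']
```

Real tokenizers handle many more cases (contractions, abbreviations, URLs), which is exactly what the packages below provide.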
TextMatch is a library for searching inside texts using Lucene query expressions. It supports all types of Lucene query expressions (boolean, wildcard, fuzzy) and offers options for tweaking tokenization, such as case sensitivity and word stemming.
Extract tokens from a string of text for use with NLP tools or statistical analysis.
Text tokenization based on Unicode grapheme clustering and the XID_Start and XID_Continue binary properties.
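The XID_Start/XID_Continue part of this approach can be sketched in Python, whose `str.isidentifier()` implements exactly those Unicode properties (PEP 3131). This is an illustrative stand-in for the package above, not its API; the `extract_identifiers` name is an assumption, and grapheme clustering is not covered here.

```python
# Scan text for maximal runs of characters forming valid identifiers
# under the Unicode XID_Start / XID_Continue rules, as exposed by
# Python's str.isidentifier().

def extract_identifiers(text):
    """Return maximal identifier-like runs found in the text."""
    tokens, current = [], ""
    for ch in text:
        # Extend the current run if the result is still a valid identifier;
        # this enforces XID_Start for the first char, XID_Continue after.
        if (current + ch).isidentifier():
            current += ch
        else:
            if current:
                tokens.append(current)
            # ch may start a new run only if it alone is a valid identifier.
            current = ch if ch.isidentifier() else ""
    if current:
        tokens.append(current)
    return tokens

print(extract_identifiers("x1 = αβ + 2y"))
# → ['x1', 'αβ', 'y']  (digits cannot start an identifier, so '2' is dropped)
```

Note that non-ASCII letters like `α` are accepted because XID_Start covers all Unicode letters, not just A–Z.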