pragmatic_tokenizer

A multilingual tokenizer to split strings into tokens, handling various language and formatting nuances.

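To illustrate the kind of rule-based splitting such a tokenizer performs, here is a minimal sketch in plain Ruby. It is an illustration of the general technique only, not pragmatic_tokenizer's actual implementation, and the `tokenize` method name is ours:

```ruby
# Minimal rule-based tokenization sketch: keep alphanumeric runs
# (allowing internal apostrophes, as in "It's") as single tokens,
# and emit each punctuation character as its own token.
def tokenize(text)
  text.scan(/[[:alnum:]]+(?:'[[:alnum:]]+)*|\p{P}/)
end

tokenize("Hello, world! It's 2024.")
# => ["Hello", ",", "world", "!", "It's", "2024", "."]
```

A real multilingual tokenizer layers many more rules on top of this (abbreviations, URLs, numbers with separators, script-specific behavior), which is what the gem's language and formatting handling refers to.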

GitHub

90 stars
6 watching
11 forks
Language: Ruby
Last commit: 3 months ago
Linked from 1 awesome list

Related projects:

Repository (stars): Description

bzick/tokenizer (98 stars): A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes.
arbox/tokenizer (46 stars): A Ruby-based library for splitting written text into tokens for natural language processing tasks.
zencephalon/tactful_tokenizer (80 stars): A Ruby library that tokenizes text into sentences using a Bayesian statistical model.
jonsafari/tok-tok (28 stars): A fast and simple tokenizer for multiple languages.
shonfeder/tokenize (11 stars): A Prolog-based tokenization library for lexing text into common tokens.
juliatext/wordtokenizers.jl (96 stars): A set of high-performance tokenizers for natural language processing tasks.
thisiscetin/textoken (31 stars): A gem for extracting words from text with customizable tokenization rules.
abitdodgy/words_counted (159 stars): A Ruby library that tokenizes input and provides various statistical measures about the tokens.
denosaurs/tokenizer (17 stars): A simple tokenizer library for parsing and analyzing text input in various formats.
amir-zeldes/rftokenizer (27 stars): A tokenizer for segmenting words into morphological components.
diasks2/pragmatic_segmenter (551 stars): A rule-based sentence boundary detection gem that works across many languages.
jirkamarsik/trainable-tokenizer (22 stars): A tool for creating customizable tokenization rules for natural languages.
zseder/huntoken (3 stars): A tool for tokenizing raw text into words and sentences in multiple languages.
6/tiny_segmenter (21 stars): A Ruby port of a Japanese text tokenization algorithm.
mathewsanders/mustard (689 stars): A Swift library for tokenizing strings with customizable matching behavior.