trainable-tokenizer

Tokenizer builder

A fast, trainable tokenizer for natural languages with customizable tokenization rules, based on maximum-entropy methods.
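To illustrate the maximum-entropy idea behind trainable tokenizers like this one: each candidate split point in the text is classified as break/no-break from local context features, and the classifier's weights are learned from examples. The sketch below is purely illustrative (the feature names, training loop, and toy data are assumptions, not this tool's actual API); a binary maximum-entropy model is implemented here as plain logistic regression over indicator features.

```python
import math

def features(text, i):
    # Local context around a candidate boundary between text[i-1] and text[i].
    # These features are illustrative; a real tokenizer uses many more.
    return {
        "prev_is_alpha": text[i - 1].isalpha(),
        "next_is_alpha": text[i].isalpha(),
        "prev_is_space": text[i - 1].isspace(),
    }

def train(samples, epochs=500, lr=0.5):
    # A binary maximum-entropy model is equivalent to logistic regression;
    # train it with simple per-sample gradient updates on a weight dict.
    w = {}
    for _ in range(epochs):
        for feats, label in samples:
            z = sum(w.get(k, 0.0) for k, v in feats.items() if v)
            p = 1.0 / (1.0 + math.exp(-z))
            for k, v in feats.items():
                if v:
                    w[k] = w.get(k, 0.0) + lr * (label - p)
    return w

def is_boundary(w, feats):
    # Predict a break when the model's score exceeds probability 0.5.
    z = sum(w.get(k, 0.0) for k, v in feats.items() if v)
    return z > 0.0

# Toy training data: in "hello world", the only breaks are around the space.
text = "hello world"
samples = [(features(text, i),
            int(text[i - 1].isspace() or text[i].isspace()))
           for i in range(1, len(text))]
w = train(samples)

# Apply the learned model to segment the same string into tokens.
tokens, start = [], 0
for i in range(1, len(text)):
    if is_boundary(w, features(text, i)):
        tokens.append(text[start:i])
        start = i
tokens.append(text[start:])
tokens = [t for t in tokens if not t.isspace()]
print(tokens)
```

In practice such a model is trained once on annotated text and then applied to unseen input, which is what makes the tokenizer "trainable" rather than purely rule-based.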

GitHub · C++ · 22 stars · 4 watching · 3 forks · last commit over 7 years ago

Related projects:

| Repository | Description | Stars |
|---|---|---|
| jonsafari/tok-tok | A fast and simple tokenizer for multiple languages | 28 |
| diasks2/pragmatic_tokenizer | A multilingual tokenizer to split strings into tokens, handling various language and formatting nuances | 90 |
| arbox/tokenizer | A Ruby-based library for splitting written text into tokens for natural language processing tasks | 46 |
| zurawiki/tiktoken-rs | Provides a Rust library for tokenizing text with OpenAI models using tiktoken | 256 |
| juliatext/wordtokenizers.jl | A set of high-performance tokenizers for natural language processing tasks | 96 |
| bzick/tokenizer | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes | 100 |
| zencephalon/tactful_tokenizer | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| thisiscetin/textoken | A gem for extracting words from text with customizable tokenization rules | 31 |
| abitdodgy/words_counted | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| nytud/quntoken | A C++ tokenizer for Hungarian text | 14 |
| shonfeder/tokenize | A Prolog-based tokenization library for lexing text into common tokens | 11 |
| lukemathwalker/build-your-own-jira-with-rust | A workshop for learning Rust by building a JIRA clone | 944 |
| mathewsanders/mustard | A Swift library for tokenizing strings with customizable matching behavior | 689 |
| xujiajun/gotokenizer | A tokenizer based on dictionary and Bigram language models for Chinese text segmentation | 21 |
| languagemachines/ucto | A tokeniser for natural language text that separates words from punctuation and supports basic preprocessing steps such as case changing | 65 |