tok-tok

Tokenizer

A fast, simple, multilingual tokenizer.

GitHub

28 stars · 5 watching · 3 forks
Language: Python
Last commit: over 7 years ago
Linked from 1 awesome list

Tags: multilingual, nlp, tokeniser, tokenizer
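tok-tok belongs to the family of lightweight, rule-based tokenizers that apply a series of regular-expression substitutions to pad punctuation with spaces and then split on whitespace. The sketch below illustrates that general approach in Python; the rules shown are illustrative assumptions, not the project's actual rule set.

```python
import re

# Illustrative regex rules in the spirit of a rule-based tokenizer:
# pad common punctuation with spaces, then split on whitespace.
# These patterns are a simplified sketch, not tok-tok's actual rules.
PATTERNS = [
    (re.compile(r'([,;:!?()\[\]"])'), r' \1 '),  # pad mid-sentence punctuation
    (re.compile(r'(\.)\s*$'), r' \1'),           # detach a sentence-final period
]

def tokenize(text: str) -> list[str]:
    """Apply each substitution in order, then split on whitespace."""
    for pattern, repl in PATTERNS:
        text = pattern.sub(repl, text)
    return text.split()

print(tokenize("Hello, world!"))  # → ['Hello', ',', 'world', '!']
```

Because the whole pipeline is just ordered substitutions, adding support for a new language or punctuation convention usually means appending another pattern rather than changing any code.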

Related projects:

| Repository | Description | Stars |
|---|---|---|
| diasks2/pragmatic_tokenizer | A multilingual tokenizer that splits strings into tokens, handling various language and formatting nuances | 90 |
| shonfeder/tokenize | A Prolog-based tokenization library for lexing text into common tokens | 11 |
| jirkamarsik/trainable-tokenizer | A tool for creating customizable tokenization rules for natural languages | 22 |
| bzick/tokenizer | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes | 98 |
| c4n/pythonlexto | A Python wrapper around the Thai word segmenter LexTo, allowing developers to integrate it easily into their applications | 1 |
| arbox/tokenizer | A Ruby library for splitting written text into tokens for natural language processing tasks | 46 |
| xujiajun/gotokenizer | A tokenizer based on dictionary and bigram language models for Chinese text segmentation | 21 |
| zencephalon/tactful_tokenizer | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| juliatext/wordtokenizers.jl | A set of high-performance tokenizers for natural language processing tasks | 96 |
| denosaurs/tokenizer | A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
| thisiscetin/textoken | A gem for extracting words from text with customizable tokenization rules | 31 |
| proycon/python-ucto | A Python binding to an advanced, extensible tokeniser written in C++ | 29 |
| abitdodgy/words_counted | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| nytud/quntoken | A C++ tokenizer for Hungarian text | 14 |
| zurawiki/tiktoken-rs | A Rust library for tokenizing text with OpenAI models using tiktoken | 256 |