# trainable-tokenizer

**Tokenizer builder.** A fast and trainable tokenizer for natural languages, relying on maximum entropy methods, with customizable tokenization rules.
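To illustrate the maximum-entropy approach named above: a trained model scores each candidate split point from binary context features and splits where the predicted probability exceeds 0.5. The sketch below is a minimal illustration of that idea, not this project's actual code; the `MaxEntSplitter` class, its feature set, and its weights are all hypothetical.

```cpp
#include <cctype>
#include <cmath>
#include <string>
#include <vector>

// Minimal maximum-entropy (logistic) split-point scorer.
// Feature names and weights are illustrative; in a real trainable
// tokenizer they would be learned from annotated text.
struct MaxEntSplitter {
    std::vector<double> weights;  // one weight per feature, learned in training
    double bias;

    // Binary context features for the boundary before text[i].
    std::vector<double> features(const std::string& text, size_t i) const {
        bool prevPunct = i > 0 &&
            (text[i - 1] == '.' || text[i - 1] == '!' || text[i - 1] == '?');
        bool nextUpper = i < text.size() &&
            std::isupper(static_cast<unsigned char>(text[i]));
        bool nextSpace = i < text.size() && text[i] == ' ';
        return { prevPunct ? 1.0 : 0.0,
                 nextUpper ? 1.0 : 0.0,
                 nextSpace ? 1.0 : 0.0 };
    }

    // P(split at i) under the logistic link of the maxent model.
    double probSplit(const std::string& text, size_t i) const {
        std::vector<double> f = features(text, i);
        double z = bias;
        for (size_t k = 0; k < f.size(); ++k) z += weights[k] * f[k];
        return 1.0 / (1.0 + std::exp(-z));
    }
};
```

With hand-picked weights such as `{2.0, 1.5, 1.0}` and bias `-2.5`, the model favors splitting after sentence-final punctuation and rejects splits inside a word; training replaces such guesses with weights fitted to a corpus.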
22 stars · 4 watching · 3 forks
Language: C++
Last commit: over 8 years ago

## Related projects
| Repository | Description | Stars |
|---|---|---|
| | A fast and simple tokenizer for multiple languages | 28 |
| | A multilingual tokenizer to split strings into tokens, handling various language and formatting nuances | 90 |
| | A Ruby-based library for splitting written text into tokens for natural language processing tasks. | 46 |
| | A Rust library for tokenizing text with OpenAI models using tiktoken | 266 |
| | A set of high-performance tokenizers for natural language processing tasks | 96 |
| | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes | 103 |
| | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| | A gem for extracting words from text with customizable tokenization rules | 31 |
| | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| | A C++ tokenizer that tokenizes Hungarian text | 14 |
| | A Prolog-based tokenization library for lexing text into common tokens | 11 |
| | A workshop for learning Rust by building a JIRA clone | 946 |
| | A Swift library for tokenizing strings with customizable matching behavior | 689 |
| | A tokenizer based on dictionary and Bigram language models for text segmentation in Chinese | 21 |
| | A tokeniser for natural language text that separates words from punctuation and supports basic preprocessing steps such as case changing | 66 |