words_counted
Tokenizer
A Ruby natural language processing library that tokenizes input text and reports various statistical measures about the resulting tokens.
159 stars
12 watching
29 forks
Language: Ruby
Last commit: about 4 years ago
Linked from 2 awesome lists
Tags: natural-language-processing, nlp, ruby, rubynlp, word-counter, wordcount, wordscounter
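To illustrate the kind of tokenize-and-count pipeline the description refers to, here is a minimal, self-contained Ruby sketch. The method names (`tokenize`, `token_frequency`) and the token pattern are illustrative assumptions for this example, not the gem's actual API.

```ruby
# Split text into lowercase word tokens. The pattern is a simple
# illustrative default: runs of letters, hyphens, and apostrophes.
def tokenize(text, pattern: /[\p{Alpha}\-']+/)
  text.scan(pattern).map(&:downcase)
end

# Frequency table sorted by descending count (a basic statistical
# measure a tokenizer library might report).
def token_frequency(tokens)
  tokens.tally.sort_by { |_, count| -count }
end

tokens = tokenize("We are all in the gutter, but some of us are looking at the stars.")
tokens.length        # => 15 (total token count)
tokens.uniq.length   # => 13 (unique token count)
tokens.tally["are"]  # => 2
```

A real library would add configurable token patterns, stop-word filtering, and richer statistics (token lengths, densities), but the core loop is this: scan, normalize, tally.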
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A Ruby-based library for splitting written text into tokens for natural language processing tasks | 46 |
| | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| | A gem for extracting words from text with customizable tokenization rules | 31 |
| | A multilingual tokenizer that splits strings into tokens, handling various language and formatting nuances | 90 |
| | A set of high-performance tokenizers for natural language processing tasks | 96 |
| | A Prolog-based tokenization library for lexing text into common tokens | 11 |
| | A Ruby port of a Japanese text tokenization algorithm | 21 |
| | A Ruby wrapper for a statistical language modeling tool for part-of-speech tagging and chunking | 16 |
| | A Ruby port of the NLTK algorithm to detect sentence boundaries in unstructured text | 92 |
| | A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
| | A fast and simple tokenizer for multiple languages | 28 |
| | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes | 103 |
| | A Python wrapper around the Thai word segmenter LexTo, allowing developers to easily integrate it into their applications | 1 |
| | Breaks text into contiguous sequences of words or phrases | 12 |
| | A Swift library for tokenizing strings with customizable matching behavior | 689 |