tiny_segmenter

A Ruby port of TinySegmenter.js for tokenizing Japanese text.
21 stars
3 watching
1 fork
Language: Ruby
last commit: over 8 years ago
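TinySegmenter is a compact, dictionary-free Japanese tokenizer that splits text using a small pre-trained character-class model. The sketch below shows how this port might be called; the `require` path and the `segment` method are assumptions based on the original TinySegmenter.js interface, so check this gem's README for the exact names.

```ruby
# Minimal usage sketch. The class and method names are assumptions that mirror
# the TinySegmenter.js API (a segment() call returning an array of tokens);
# the actual interface of this port may differ.
require "tiny_segmenter"

segmenter = TinySegmenter.new
tokens = segmenter.segment("私の名前は中野です")  # assumed method name
puts tokens.inspect
# The original TinySegmenter.js splits this sample sentence into:
# ["私", "の", "名前", "は", "中野", "です"]
```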
Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| | A Ruby port of the NLTK algorithm to detect sentence boundaries in unstructured text | 92 |
| | A Ruby-based library for splitting written text into tokens for natural language processing tasks | 46 |
| | A gem for extracting words from text with customizable tokenization rules | 31 |
| | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| | A Ruby port of the popular Chinese language processing library Jieba | 8 |
| | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| | Breaks text into contiguous sequences of words or phrases | 12 |
| | A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
| | A multilingual tokenizer that splits strings into tokens, handling various language and formatting nuances | 90 |
| | A Python wrapper around the Thai word segmenter LexTo, allowing developers to easily integrate it into their applications | 1 |
| | A fast and simple tokenizer for multiple languages | 28 |
| | A Ruby wrapper for a statistical language modeling tool for part-of-speech tagging and chunking | 16 |
| | A tokenizer based on dictionary and bigram language models for Chinese text segmentation | 21 |
| | A tool for tokenizing raw text into words and sentences in multiple languages, including Hungarian | 4 |
| | Provides a standardized way to serialize API data in Ruby apps | 278 |