tiny_segmenter
Text tokenizer
A Ruby port of TinySegmenter.js for tokenizing Japanese text.
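A minimal usage sketch is shown below. The `TinySegmenter.new` / `#segment` calls are assumptions based on the original TinySegmenter.js interface (`new TinySegmenter().segment(text)`); the Ruby port's actual method names may differ, so check the gem's README.

```ruby
# Hedged sketch: the segment method and its return shape are assumed
# from the TinySegmenter.js API, not confirmed against this gem.
require "tiny_segmenter"

segmenter = TinySegmenter.new
tokens = segmenter.segment("私の名前は中野です") # assumed signature
p tokens
# Expected, per the canonical TinySegmenter example:
# ["私", "の", "名前", "は", "中野", "です"]
```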
21 stars
3 watching
1 fork
Language: Ruby
Last commit: over 8 years ago

Related projects:
Repository | Description | Stars |
---|---|---|
lfcipriani/punkt-segmenter | Port of the NLTK Punkt sentence segmentation algorithm in Ruby | 92 |
arbox/tokenizer | A Ruby-based library for splitting written text into tokens for natural language processing tasks. | 46 |
thisiscetin/textoken | A gem for extracting words from text with customizable tokenization rules | 31 |
abitdodgy/words_counted | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
mimosa/jieba-jruby | A JRuby port of the popular Chinese text segmentation library Jieba | 8 |
zencephalon/tactful_tokenizer | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
tkellen/ruby-ngram | Breaks text into contiguous sequences of words or phrases | 12 |
denosaurs/tokenizer | A simple tokenizer library for parsing and analyzing text input in various formats. | 17 |
diasks2/pragmatic_tokenizer | A multilingual tokenizer to split strings into tokens, handling various language and formatting nuances. | 90 |
c4n/pythonlexto | A Python wrapper around the Thai word segmenter LexTo, allowing developers to easily integrate it into their applications. | 1 |
jonsafari/tok-tok | A fast and simple tokenizer for multiple languages | 28 |
arbox/treetagger-ruby | A Ruby wrapper for TreeTagger, a statistical tool for part-of-speech tagging and chunking | 16 |
xujiajun/gotokenizer | A tokenizer for Chinese text segmentation based on a dictionary and bigram language models | 21 |
zseder/huntoken | A tool for tokenizing raw text into words and sentences in multiple languages. | 3 |
ismasan/oat | Provides a standardized way to serialize API data in Ruby apps. | 278 |