PythonLexTo
Word tokenizer
A Python wrapper around the Thai word segmenter LexTo, allowing developers to easily integrate it into their applications.
LexTo (Thai word segmenter) with Python wrapper (Python 3)
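Typical usage might look like the sketch below. The module path, the `LexTo` class name, and the `tokenize` method returning parallel word/type lists are assumptions drawn from the project description rather than a verified API, so consult the repository's README for the actual interface.

```python
# Hypothetical usage sketch -- the module path, class name (LexTo), and
# tokenize() return shape are assumptions based on the project
# description, not the verified PythonLexTo API; check the repo's README.
from LexTo import LexTo

lexto = LexTo()  # wraps the Java LexTo segmenter, so a JVM and the
                 # bundled LexTo jar/dictionary must be available

text = u'ตัดคำภาษาไทย'  # "Thai word segmentation"
words, types = lexto.tokenize(text)  # assumed: parallel lists of tokens
                                     # and their lexeme-type tags

for word, tag in zip(words, types):
    print(word, tag)
```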
1 star
0 watching
1 fork
Language: Java
Last commit: about 8 years ago

Related projects:
Repository | Description | Stars |
---|---|---|
remixman/pythonlexto | A Python wrapper around a Java library for segmenting Thai text into individual words | 3 |
languagemachines/ucto | A tokeniser for natural language text that separates words from punctuation and supports basic preprocessing steps such as case conversion | 66 |
rkcosmos/deepcut | A Thai word tokenization library using a deep neural network | 421 |
proycon/python-ucto | A Python binding to an advanced, extensible tokeniser written in C++ | 29 |
jonsafari/tok-tok | A fast and simple tokenizer for multiple languages | 28 |
juliatext/wordtokenizers.jl | A set of high-performance tokenizers for natural language processing tasks | 96 |
thisiscetin/textoken | A gem for extracting words from text with customizable tokenization rules | 31 |
abitdodgy/words_counted | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
denosaurs/tokenizer | A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
arbox/tokenizer | A Ruby-based library for splitting written text into tokens for natural language processing tasks | 46 |
fangpenlin/loso | A Python library for Chinese text segmentation using a hidden Markov model | 83 |
6/tiny_segmenter | A Ruby port of a Japanese text tokenization algorithm | 21 |
lex4all/lex4all | A software tool that generates pronunciation lexicons for low-resource languages using speech recognition and machine learning | 21 |
proycon/python-frog | A Python binding to a C++ NLP tool for Dutch language processing tasks | 47 |
lfcipriani/punkt-segmenter | A Ruby port of the NLTK algorithm to detect sentence boundaries in unstructured text | 92 |