tokenize

A Prolog-based tokenization library for lexing text into common tokens.

A tokenizer written in (SWI-)Prolog. It has some useful features and some flexibility, and it might improve.
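As a quick illustration of how the library is meant to be used, here is a minimal sketch of calling it from SWI-Prolog. It assumes the pack installs under the name `tokenize` and exposes a `tokenize/2` predicate as its README describes; the token terms shown (`word/1`, `spc/1`, `punct/1`) are illustrative and may differ between versions.

```prolog
% A minimal usage sketch, assuming the pack is installed with
% ?- pack_install(tokenize). and provides tokenize/2 as documented.
% The token terms below (word/1, spc/1, punct/1) are illustrative.

:- use_module(library(tokenize)).

demo :-
    % Backquoted text is a code list in SWI-Prolog, one of the text
    % representations the tokenizer accepts.
    tokenize(`Tokenize this text!`, Tokens),
    print(Tokens), nl.

% Expected shape of the result (illustrative):
% ?- demo.
% [word(tokenize),spc(' '),word(this),spc(' '),word(text),punct(!)]
```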

GitHub

11 stars
4 watching
5 forks
Language: Prolog
Last commit: over 5 years ago
Linked from 1 awesome list


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| diasks2/pragmatic_tokenizer | A multilingual tokenizer that splits strings into tokens, handling various language and formatting nuances | 90 |
| juliatext/wordtokenizers.jl | A set of high-performance tokenizers for natural language processing tasks | 96 |
| jonsafari/tok-tok | A fast and simple tokenizer for multiple languages | 28 |
| abitdodgy/words_counted | A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| bzick/tokenizer | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes | 103 |
| arbox/tokenizer | A Ruby-based library for splitting written text into tokens for natural language processing tasks | 46 |
| thisiscetin/textoken | A gem for extracting words from text with customizable tokenization rules | 31 |
| denosaurs/tokenizer | A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
| mathewsanders/mustard | A Swift library for tokenizing strings with customizable matching behavior | 689 |
| zseder/huntoken | Tokenizes raw text into individual words and sentences | 4 |
| zencephalon/tactful_tokenizer | A Ruby library that tokenizes text into sentences using a Bayesian statistical model | 80 |
| amir-zeldes/rftokenizer | A tokenizer for segmenting words into morphological components | 27 |
| usemuffin/tokenize | A plugin that generates and manages security tokens for password protection in web applications | 13 |
| c4n/pythonlexto | A Python wrapper around the Thai word segmenter LexTo, allowing developers to easily integrate it into their applications | 1 |
| xujiajun/gotokenizer | A tokenizer based on dictionary and Bigram language models for Chinese text segmentation | 21 |