Tokenizer
A high-performance tokenization (lexer) library for Go, capable of parsing various data formats and syntaxes.
103 stars
2 watching
7 forks
Language: Go
Last commit: 11 months ago
Linked from 2 awesome lists
Tags: golang, lexer, parse, parser, tokenizer, tokenizing
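To illustrate the kind of stream-based lexing a library like this performs, here is a minimal, self-contained sketch in Go. It is a generic illustration under stated assumptions, not this project's actual API; the `Kind` categories and the `scan` helper are hypothetical names chosen for the example.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// Kind labels the category a token belongs to.
// These categories are hypothetical, chosen only for this sketch.
type Kind int

const (
	KindWord     Kind = iota // identifiers and keywords
	KindNumber               // integer or decimal literals
	KindOperator             // symbols such as >=, =, +
)

// Token pairs a category with the matched source text.
type Token struct {
	Kind  Kind
	Value string
}

// scan walks the input once, left to right, emitting one token per lexeme.
func scan(input string) []Token {
	var tokens []Token
	runes := []rune(input)
	i := 0
	for i < len(runes) {
		r := runes[i]
		switch {
		case unicode.IsSpace(r):
			i++ // skip whitespace between tokens
		case unicode.IsLetter(r) || r == '_':
			start := i
			for i < len(runes) && (unicode.IsLetter(runes[i]) || unicode.IsDigit(runes[i]) || runes[i] == '_') {
				i++
			}
			tokens = append(tokens, Token{KindWord, string(runes[start:i])})
		case unicode.IsDigit(r):
			start := i
			for i < len(runes) && (unicode.IsDigit(runes[i]) || runes[i] == '.') {
				i++
			}
			tokens = append(tokens, Token{KindNumber, string(runes[start:i])})
		default:
			// Greedily consume runs of operator runes so that
			// multi-rune operators like ">=" stay one token.
			start := i
			for i < len(runes) && strings.ContainsRune("<>=!+-*/%", runes[i]) {
				i++
			}
			if i == start {
				i++ // unknown rune: consume it so the scanner always advances
			}
			tokens = append(tokens, Token{KindOperator, string(runes[start:i])})
		}
	}
	return tokens
}

func main() {
	for _, tok := range scan(`amount >= 122.34`) {
		fmt.Printf("%d %q\n", tok.Kind, tok.Value)
	}
}
```

A production library typically improves on this by letting callers define their own token sets (operators, quoted strings, keywords) and by exposing the result as a lazy stream to iterate over, rather than materializing the whole token slice up front.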
Related projects:
| Description | Stars |
|---|---|
| A multilingual tokenizer to split strings into tokens, handling various language and formatting nuances | 90 |
| A tokenizer based on dictionary and bigram language models for Chinese text segmentation | 21 |
| A Prolog-based tokenization library for lexing text into common tokens | 11 |
| A simple tokenizer library for parsing and analyzing text input in various formats | 17 |
| A fast and simple tokenizer for multiple languages | 28 |
| A set of high-performance tokenizers for natural language processing tasks | 96 |
| A Ruby library for splitting written text into tokens for natural language processing tasks | 46 |
| A library that tokenizes XML data into smaller units for easier processing | 25 |
| A tool for tokenizing raw text into words and sentences in multiple languages, including Hungarian | 4 |
| A Ruby library that tokenizes input and provides various statistical measures about the tokens | 159 |
| A gem for extracting words from text with customizable tokenization rules | 31 |
| A command-line tool for splitting text into individual sentences | 441 |
| A utility for parsing and breaking down CSS3 code into smaller components | 87 |
| A tool for generating lexers and parsers from a BNF file with semantic actions | 621 |
| A tool for creating customizable tokenization rules for natural languages | 22 |