gotokenizer
Chinese Tokenizer Library
A tokenizer for Go based on a dictionary and Bigram language models. (Currently only Chinese segmentation is supported.)
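The library combines a word dictionary with a Bigram language model to pick a segmentation. As a rough illustration of the dictionary side of that approach, here is a minimal forward-maximum-matching segmenter in Go. It is a sketch of the general technique, not gotokenizer's actual API; the function name and the tiny dictionary are made up for the example, and the Bigram scoring step is omitted.

```go
package main

import (
	"fmt"
	"strings"
)

// maxMatch is an illustrative forward-maximum-matching segmenter:
// at each position it takes the longest dictionary word that matches,
// falling back to a single rune when nothing matches.
func maxMatch(text string, dict map[string]bool, maxLen int) []string {
	runes := []rune(text)
	var tokens []string
	for i := 0; i < len(runes); {
		end := i + maxLen
		if end > len(runes) {
			end = len(runes)
		}
		matched := false
		for j := end; j > i; j-- {
			word := string(runes[i:j])
			if dict[word] {
				tokens = append(tokens, word)
				i = j
				matched = true
				break
			}
		}
		if !matched {
			// Unknown character: emit it as a single-rune token.
			tokens = append(tokens, string(runes[i]))
			i++
		}
	}
	return tokens
}

func main() {
	// Hypothetical dictionary, just for the demo.
	dict := map[string]bool{"自然": true, "语言": true, "自然语言": true, "处理": true}
	fmt.Println(strings.Join(maxMatch("自然语言处理", dict, 4), " / "))
	// Output: 自然语言 / 处理
}
```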
21 stars
3 watching
7 forks
Language: Go
last commit: almost 6 years ago
Linked from 2 awesome lists
Tags: golang, segmentation, tokenizer
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A high-performance tokenization library for Go, capable of parsing various data formats and syntaxes. | 103 |
| | A Python library for Chinese text segmentation using a Hidden Markov Model algorithm. | 83 |
| | A fast and simple tokenizer for multiple languages. | 28 |
| | A gem for extracting words from text with customizable tokenization rules. | 31 |
| | A fast and feature-rich HTTP router for Go that supports regular expressions. | 532 |
| | A PHP module for Chinese text segmentation and word breaking. | 1,331 |
| | A Ruby port of the popular Chinese language processing library Jieba. | 8 |
| | A Ruby library that tokenizes text into sentences using a Bayesian statistical model. | 80 |
| | A comprehensive guide to design patterns in the Go programming language. | 268 |
| | A multilingual tokenizer that splits strings into tokens, handling various language and formatting nuances. | 90 |
| | A Ruby port of a Japanese text tokenization algorithm. | 21 |
| | A Ruby library that tokenizes input and provides various statistical measures about the tokens. | 159 |
| | A tool for tokenizing raw text into words and sentences in multiple languages, including Hungarian. | 4 |
| | A Ruby-based library for splitting written text into tokens for natural language processing tasks. | 46 |
| | A Go implementation of Shen, a portable functional programming language with features like pattern matching and macro support. | 56 |