Fast_Sentence_Embeddings
A Python library for efficiently computing sentence embeddings from large datasets. Tagline: "Compute Sentence Embeddings Fast!"
618 stars | 12 watching | 83 forks | Language: Jupyter Notebook | Last commit: over 2 years ago
Linked from 1 awesome list
Tags: cython, document-similarity, embeddings, fasttext, fse, gensim, gensim-model, maxpooling, sentence-embeddings, sentence-representation, sentence-similarity, sif, swem, usif, word2vec-model, wordembedding
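The repository's tags mention SIF (Smooth Inverse Frequency), the weighting scheme this library implements for turning word vectors into sentence vectors. A minimal pure-Python sketch of the idea, assuming a word-to-vector mapping and corpus word counts are available (the function name and data layout are illustrative, not fse's actual API):

```python
def sif_embedding(sentence, vectors, word_counts, total_count, a=1e-3):
    """Sketch of SIF: average word vectors, each weighted by a / (a + p(w)),
    where p(w) is the word's corpus frequency. Rare words get higher weight.

    sentence    -- list of tokens
    vectors     -- dict mapping token -> list[float] (all same dimension)
    word_counts -- dict mapping token -> corpus count
    total_count -- total number of tokens in the corpus
    """
    dim = len(next(iter(vectors.values())))
    emb = [0.0] * dim
    n = 0
    for w in sentence:
        if w not in vectors:
            continue  # skip out-of-vocabulary tokens
        p = word_counts.get(w, 0) / total_count
        weight = a / (a + p)
        for i, x in enumerate(vectors[w]):
            emb[i] += weight * x
        n += 1
    if n:
        emb = [x / n for x in emb]
    return emb
```

The full SIF algorithm of Arora et al. additionally subtracts each sentence vector's projection onto the first principal component of the corpus embeddings; that denoising step is omitted here for brevity.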
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A method to convert word embeddings into sentence representations by applying entropy weights calculated from a TF-IDF transform. | 9 |
| | A Python implementation of a sentence embedding algorithm using the Smooth Inverse Frequency weighting scheme. | 1,084 |
| | A method to generate sentence embeddings from pre-trained language models. | 178 |
| | An implementation of a non-parameterized approach for building sentence representations. | 19 |
| | Develops unified sentence embedding models for NLP tasks. | 840 |
| | A tool for creating deep sentence embeddings using sequence-to-sequence learning. | 22 |
| | An implementation of a neural network model for learning efficient sentence representations from text data. | 205 |
| | Code for training universal paraphrastic sentence embeddings and models on semantic similarity tasks. | 193 |
| | Provides training and testing code for a CNN-based sentence embedding model. | 2 |
| | An unsupervised technique to generate numerical representations of sentences and words for use in machine learning tasks. | 1,194 |
| | Provides fast and efficient word embeddings for natural language processing. | 223 |
| | Generates Spanish word embeddings using fastText on large corpora. | 9 |
| | A codebase for training and using models of sentence embeddings. | 33 |
| | Enables efficient reconstruction of word embeddings by leveraging subword representations. | 9 |
| | Represents words as multivariate Gaussian distributions, allowing scalable word embeddings. | 190 |