magnitude
Vector embedding utility
A fast, efficient, universal vector embedding utility package for using vector embeddings in machine learning and natural language processing models.
2k stars
38 watching
120 forks
Language: Python
Last commit: about 2 years ago
Linked from 1 awesome list
Tags: embeddings, fast, fasttext, gensim, glove, machine-learning, machine-learning-library, memory-efficient, natural-language-processing, nlp, python, vectors, word-embeddings, word2vec
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Unsupervised word embeddings capture latent knowledge from materials science literature | 624 |
| | Provides fast and efficient word embeddings for natural language processing | 223 |
| | A project implementing a method to incorporate morphological information into word embeddings using a neural network model | 52 |
| | A utility class for generating and evaluating document representations using word embeddings | 54 |
| | An implementation of a non-parameterized approach for building sentence representations | 19 |
| | An implementation of a deep learning-based image representation learning approach using a modified fully connected layer and transfer learning from VGG16 | 34 |
| | Tools and code for inducing custom semantic vector representations from text data | 104 |
| | Code for training universal paraphrastic sentence embeddings and models on semantic similarity tasks | 193 |
| | Multi-sense word embeddings learned from visual cooccurrences | 25 |
| | A Ruby wrapper around a vector search database API for efficient similarity searches in high-dimensional space | 25 |
| | Provides pre-trained ELMo representations for multiple languages to improve NLP tasks | 1,462 |
| | Transforms existing word embeddings into more interpretable ones by applying a novel extension of k-sparse autoencoder with stricter sparsity constraints | 52 |
| | A toolkit for learning high-quality word and text representations from ngram co-occurrence statistics | 848 |
| | A deep learning model that generates word embeddings by predicting words based on their dependency context | 291 |
| | A framework for learning sentence embeddings from matrices | 21 |