FasterTransformer
Transformer component
A high-performance library of transformer-related optimizations for GPU inference, covering models such as BERT and GPT, with integrations for frameworks like TensorFlow and PyTorch.
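Much of the speedup in libraries like FasterTransformer comes from CUDA kernel fusion: pointwise steps that follow a GEMM, such as bias addition and activation, are merged into a single kernel to avoid extra round trips to GPU memory. Below is a minimal illustrative sketch of a fused bias+GELU kernel in that spirit; the function names and launch parameters are hypothetical, not FasterTransformer's actual API.

```cpp
// Illustrative sketch only -- not FasterTransformer's actual source.
// Fuses bias addition and GELU into one kernel instead of running two
// separate memory-bound passes over the GEMM output.
#include <cuda_runtime.h>
#include <math.h>

__global__ void add_bias_gelu(float* out, const float* bias, int n, int hidden) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        // Every row of the GEMM output shares the same bias vector.
        float x = out[idx] + bias[idx % hidden];
        // tanh approximation of GELU, as commonly used in BERT/GPT.
        float cdf = 0.5f * (1.0f + tanhf(0.7978845608f * (x + 0.044715f * x * x * x)));
        out[idx] = x * cdf;
    }
}

// Hypothetical launch helper: one kernel launch where an unfused
// implementation would need two (add-bias, then activation).
void launch_add_bias_gelu(float* out, const float* bias, int rows, int hidden,
                          cudaStream_t stream) {
    int n = rows * hidden;
    int block = 256;
    int grid = (n + block - 1) / block;
    add_bias_gelu<<<grid, block, 0, stream>>>(out, bias, n, hidden);
}
```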
Stars: 6k
Watching: 62
Forks: 895
Language: C++
Last commit: 11 months ago
Linked from 1 awesome list
Tags: bert, gpt, pytorch, transformer
Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
|  | A high-performance inference engine for transformer models | 3,467 |
|  | Tools and libraries for optimizing deep learning inference on NVIDIA GPUs | 10,926 |
|  | Tools and libraries for training and fine-tuning large language models with transformer architectures | 6,215 |
|  | Implementations of a neural network architecture for language modeling | 3,619 |
|  | A framework for training large language models using scalable, optimized GPU techniques | 10,804 |
|  | An implementation of Google's 2018 BERT model in PyTorch, supporting pre-training and fine-tuning for natural language processing tasks | 6,251 |
|  | An auto-differentiation library for sparse tensors used in computer vision and deep learning | 2,513 |
|  | A codebase for working with Open Pre-trained Transformers, enabling deployment and fine-tuning of transformer models on various platforms | 6,519 |
|  | A framework for training large-scale language models on GPUs with advanced features and optimizations | 6,997 |
|  | Real-time speech synthesis using state-of-the-art architectures | 3,855 |
|  | Pre-trained models and code for natural language processing tasks using TensorFlow | 38,374 |
|  | Pre-trained models and code for training vision transformers and mixers using JAX/Flax | 10,620 |
|  | An optimizer that combines the benefits of the Adam and SGD algorithms | 2,908 |
|  | A high-performance neural network training interface for TensorFlow, focused on speed and flexibility | 6,303 |
|  | Implements RoBERTa for Chinese pre-training using TensorFlow, with PyTorch versions for loading and training | 2,638 |