pytorch-maml

A PyTorch implementation of Model-Agnostic Meta-Learning (MAML), a meta-learning framework. Paper: https://arxiv.org/abs/1703.03400
- Stars: 555
- Watching: 10
- Forks: 128
- Language: Jupyter Notebook
- Last commit: over 6 years ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Replication of Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks in PyTorch for reinforcement learning tasks | 830 |
| | A PyTorch implementation of meta-learning using gradient descent to adapt to new tasks | 312 |
| | Provides tools and datasets for meta-learning and few-shot learning in deep learning | 1,996 |
| | A PyTorch toolbox supporting research and development of domain adaptation, generalization, and semi-supervised learning methods in computer vision | 1,236 |
| | A toolkit providing easy-to-use machine learning modules and functionalities for natural language processing and text generation tasks | 745 |
| | An implementation of Mamba-based traversal of rationale to improve performance of numerous vision language models | 102 |
| | Re-implementation of Coursera's Deep Learning specialization assignments in PyTorch | 149 |
| | Provides a flexible and configurable framework for training deep learning models with PyTorch | 1,196 |
| | A centralized repository of pre-trained machine learning models for deep learning frameworks like PyTorch | 1,402 |
| | A PyTorch-based framework for training and validating models that produce high-quality embeddings for computer vision and other tasks | 897 |
| | An implementation of an online federated learning algorithm with multiple kernels for personalized machine learning | 0 |
| | A deep learning framework for handling heterogeneous tabular data with diverse column types | 582 |
| | A Python framework for building deep learning models with optimized encoding layers and batch normalization | 2,044 |
| | Enables training of reinforcement learning models with PyTorch on Godot Engine using shared memory | 214 |
| | An implementation of an optimization algorithm inspired by a 2016 research paper | 33 |
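For context, the MAML procedure that the main repository implements can be sketched on a toy problem. This is a minimal illustration, not the repository's actual API: each "task" is a scalar quadratic loss with an analytically known gradient, so the second-order meta-gradient (which differentiates through the inner adaptation step) can be written in closed form. All function names and hyperparameters below are illustrative assumptions.

```python
# Toy second-order MAML on scalar quadratic tasks.
# Task t has loss L_t(theta) = (theta - c_t)**2, so all gradients are
# computed analytically. Names/hyperparameters are illustrative, not the
# repository's API.

def maml_step(theta, task_centers, inner_lr=0.1, meta_lr=0.05):
    """One meta-update over a batch of tasks."""
    meta_grad = 0.0
    for c in task_centers:
        # Inner adaptation: one gradient step on the task loss.
        inner_grad = 2.0 * (theta - c)               # dL_t/dtheta
        theta_prime = theta - inner_lr * inner_grad  # adapted parameter
        # Meta-gradient: differentiate the post-adaptation loss w.r.t. the
        # initialization theta, THROUGH the inner update
        # (d theta_prime / d theta = 1 - 2 * inner_lr).
        meta_grad += 2.0 * (theta_prime - c) * (1.0 - 2.0 * inner_lr)
    return theta - meta_lr * meta_grad / len(task_centers)

# Meta-train on three tasks; theta converges toward an initialization that
# adapts well to all of them (here, the mean of the task optima, 0.5).
theta = 5.0
tasks = [-1.0, 0.5, 2.0]
for _ in range(200):
    theta = maml_step(theta, tasks)
print(round(theta, 3))  # -> 0.5
```

The key point the sketch shows is MAML's two-level structure: the inner loop adapts a copy of the parameters to one task, and the outer loop updates the shared initialization using gradients taken through that adaptation step.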