Cortexsys

Deep learning toolkit

A GPU-accelerated toolbox for building and training deep neural networks in MATLAB.

70 stars · 11 watching · 28 forks
Language: MATLAB
Last commit: over 8 years ago
Linked from 1 awesome list

Related projects:

Repository | Description | Stars
jppbsi/libdeep | An open-source C library providing tools and components for developing artificial intelligence-based techniques using deep learning. | 23
henriqueslab/zerocostdl4mic | A toolkit for using deep learning in microscopy without requiring extensive coding expertise. | 568
fielddb/deeplearntoolbox | A MATLAB toolbox for building and training deep neural networks. | 0
vict0rsch/deep_learning | A collection of tutorials and resources on implementing deep learning models using Python libraries such as Keras and Lasagne. | 426
ardanlabs/training-ai | Provides training materials and tools for building machine learning applications. | 72
zhanghang1989/pytorch-encoding | A Python framework for building deep learning models with optimized encoding layers and batch normalization. | 2,044
yechengxi/lightnet | A MATLAB-based framework for building and training deep learning models. | 271
quantumliu/matdl | A lightweight MATLAB deep learning toolbox for efficient neural network training and prediction. | 54
coreylowman/dfdx | A deep learning library for Rust with GPU acceleration and an ergonomic API. | 1,754
zygmuntz/kaggle-blackbox | A toolkit for building and training machine learning models using a simple, easy-to-use interface. | 115
dustinstansbury/medal | A MATLAB environment for training and using various deep learning architectures. | 109
dmarnerides/pydlt | A PyTorch-based toolbox for building and training deep learning models with ease. | 204
kexinhuang12345/deeppurpose | A toolkit for molecular modeling and prediction tasks using deep learning. | 988
marcbs/keras | An extension of Keras with new functionality, including conversion from Caffe models and support for multimodal data. | 225
yandex/faster-rnnlm | A toolkit for training efficient neural network language models on large datasets with hierarchical softmax and noise contrastive estimation. | 560