seqgan-music

Music generator

Generates polyphonic music sequences using deep learning models and adversarial training

A TensorFlow implementation of the paper "Polyphonic Music Generation with Sequence Generative Adversarial Networks"

28 stars
4 watching
8 forks
Language: Python
Last commit: about 6 years ago
Topics: deep-learning, generative-adversarial-network, music-generation, polyphonic, seqgan, tensorflow
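
For orientation, the sketch below shows what a SeqGAN-style adversarial training step for symbolic music can look like in TensorFlow 2 / Keras: a recurrent generator samples discrete note/event tokens, a recurrent discriminator scores real versus generated sequences, and the generator is updated with a REINFORCE-style policy gradient that uses the discriminator's score as the reward (sampling discrete tokens blocks ordinary backpropagation). This is an illustrative sketch under assumptions, not this repository's code; the vocabulary size, sequence length, start token, and all function names are invented for the example.

```python
# Hedged sketch of a SeqGAN-style step for symbolic music (TensorFlow 2 / Keras).
# Sizes, vocabulary, start token, and names are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE = 128   # assumed pitch/event vocabulary
SEQ_LEN = 32       # assumed number of tokens per training sequence
EMBED_DIM = 64
HIDDEN_DIM = 128

# Generator: autoregressive token model (embedding -> LSTM -> per-step logits).
generator = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(HIDDEN_DIM, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE),
])

# Discriminator: classifies whole sequences as real (dataset) or generated.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(HIDDEN_DIM),
    tf.keras.layers.Dense(1),
])

g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def sample_sequences(batch_size):
    """Sample token sequences from the generator, one step at a time."""
    tokens = tf.zeros([batch_size, 1], dtype=tf.int32)        # assumed start token 0
    for _ in range(SEQ_LEN):
        logits = generator(tokens)[:, -1, :]                  # logits for the next token
        next_tok = tf.random.categorical(logits, 1, dtype=tf.int32)
        tokens = tf.concat([tokens, next_tok], axis=1)
    return tokens[:, 1:]                                      # drop the start token

def train_step(real_batch):
    """One adversarial step on a [batch, SEQ_LEN] int32 batch of real token sequences."""
    batch_size = tf.shape(real_batch)[0]
    fake = sample_sequences(batch_size)

    # Discriminator update: push real sequences toward 1, generated toward 0.
    with tf.GradientTape() as d_tape:
        d_loss = (bce(tf.ones([batch_size, 1]), discriminator(real_batch))
                  + bce(tf.zeros([batch_size, 1]), discriminator(fake)))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))

    # Generator update: REINFORCE, with the discriminator's probability of "real"
    # used as a whole-sequence reward (discrete sampling blocks backprop).
    reward = tf.sigmoid(discriminator(fake))[:, 0]            # [batch]
    with tf.GradientTape() as g_tape:
        shifted = tf.concat([tf.zeros([batch_size, 1], tf.int32), fake[:, :-1]], axis=1)
        logits = generator(shifted)                           # [batch, SEQ_LEN, VOCAB_SIZE]
        log_probs = -tf.keras.losses.sparse_categorical_crossentropy(
            fake, logits, from_logits=True)                   # log-prob of each sampled token
        g_loss = -tf.reduce_mean(tf.reduce_sum(log_probs, axis=1) * reward)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```

In the full SeqGAN formulation the generator is also pre-trained with maximum likelihood and per-timestep rewards are estimated with Monte Carlo rollouts; the single whole-sequence reward above is a deliberate simplification.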

Related projects:

Repository | Description | Stars
salu133445/musegan | Generates polyphonic music from scratch or based on user input using a neural network architecture | 1,846
brunnergino/jambot | A music generation tool that uses chord embeddings and neural networks to create polyphonic music with harmonic structure | 64
calclavia/deepj | A deep learning model that generates music with tunable, style-specific properties | 728
fephsun/neuralnetmusic | A Python-based project for composing music using neural networks | 126
mtg/deepconvsep | A framework for training deep neural networks to separate music sources from audio files | 472
chrisdonahue/wavegan | An open-source machine learning model for generating raw audio waveforms | 1,330
soroushmehr/samplernn_iclr2017 | An unconditional end-to-end neural audio generation model built on a recurrent architecture | 537
marcoppasini/musika | Generates music samples using a neural network-based algorithm | 662
mlachmish/musicgenreclassification | Classifies music genre from a 10-second sound stream using a neural network | 562
orhun/linuxwave | Generates music from random data | 537
umbrellabeach/music-generation-with-dl | An open-source collection of research papers and code on music generation with deep learning | 718
deepsound-project/samplernn-pytorch | A PyTorch implementation of an audio generation model | 288
peihaochen/regnet | An implementation of a neural network for generating sound from video sequences | 52
despoisj/deepaudioclassification | Automatically categorizes music genres using deep learning | 1,097
v-iashin/specvqgan | Trains a visually guided audio generation model that produces sound from a small codebook of representative vectors | 347