Generating beats and melodies with LSTMs using Python and TensorFlow
Kumar Abhijeet (~kumar80)
Music is primarily an artistic act of inspired creation, unlike a traditional math problem. Still, when we listen to music we can observe sequences of specific chords and notes. With recent advances in AI, sequence models are used in countless fields; one such model, the LSTM (Long Short-Term Memory network), can be used to generate melodies and beats.
This talk is about how deep learning models, specifically LSTMs, can be used to produce music, with a particular focus on the Electronic Dance Music (EDM) industry.
CONTENTS AND ORDER OF THE TALK
- Learning how LSTMs help in generating music, and the concepts behind them.
- Preprocessing the MIDI data for melodies and beats using MIDI packages created by the Python community.
- Building the LSTM network using Keras with TensorFlow as the backend, and understanding it.
- Training the network on melodic data to create an LSTM model for melodies, and doing the same for beats.
- Generating melodies and beats (using a pretrained model) and combining the two to create different genres of music.
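The preprocessing step above can be sketched independently of any MIDI library: once notes have been extracted from MIDI files (for example with a community package such as music21 or mido), they are mapped to integers and sliced into fixed-length windows for the LSTM. The note list, window length, and variable names here are hypothetical illustrations, not the talk's actual data.

```python
# Sketch: encode extracted note names as integers and build
# (input window, next-note target) pairs for LSTM training.
# The note list below is a made-up example; real notes would
# come from parsed MIDI files.
notes = ["C4", "E4", "G4", "C4", "E4", "A4", "C4"]

# Build a vocabulary of unique pitches and a lookup table.
pitches = sorted(set(notes))
note_to_int = {note: i for i, note in enumerate(pitches)}

# Slide a fixed-length window over the stream; the note that
# follows each window is the prediction target.
seq_len = 3
inputs, targets = [], []
for i in range(len(notes) - seq_len):
    inputs.append([note_to_int[n] for n in notes[i:i + seq_len]])
    targets.append(note_to_int[notes[i + seq_len]])

print(inputs[0], targets[0])
```

The same encoding works for drum hits when generating beats; only the vocabulary changes.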
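The network-building step can be sketched as a small stacked-LSTM classifier in Keras that predicts the next note index from a window of previous ones. The window length, vocabulary size, and layer sizes below are assumptions for illustration, not the exact architecture used in the talk.

```python
# Sketch of a melody LSTM in Keras (TensorFlow backend).
# SEQ_LEN and VOCAB_SIZE are hypothetical values.
from tensorflow.keras import Input
from tensorflow.keras.layers import LSTM, Dense, Dropout, Embedding
from tensorflow.keras.models import Sequential

SEQ_LEN = 100      # notes per training window (assumption)
VOCAB_SIZE = 130   # distinct pitches/chords in the corpus (assumption)

model = Sequential([
    Input(shape=(SEQ_LEN,)),
    Embedding(VOCAB_SIZE, 64),          # integer notes -> dense vectors
    LSTM(256, return_sequences=True),   # first LSTM keeps the full sequence
    Dropout(0.3),                       # regularisation between LSTM layers
    LSTM(256),                          # second LSTM returns the last state
    Dense(VOCAB_SIZE, activation="softmax"),  # distribution over next note
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
print(model.output_shape)
```

Training (`model.fit`) would then run on the integer-encoded windows and targets; a second model of the same shape can be trained on drum data for beats.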
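The generation step can also be sketched without a trained network: a seed window is fed to the model, the next note is sampled from the predicted distribution, and the window slides forward. Here `predict_next` is a hypothetical stand-in for the trained model's predict call, with a toy four-note vocabulary.

```python
import random

def predict_next(window):
    # Stand-in for model.predict: returns a probability distribution
    # over a toy 4-note vocabulary. A real model would compute this
    # from the window's contents.
    return [0.1, 0.2, 0.3, 0.4]

def generate(seed, length):
    window = list(seed)
    out = []
    for _ in range(length):
        probs = predict_next(window)
        # Sample from the distribution rather than always taking the
        # argmax, so generated melodies stay varied.
        nxt = random.choices(range(len(probs)), weights=probs, k=1)[0]
        out.append(nxt)
        window = window[1:] + [nxt]   # slide the input window forward
    return out

random.seed(0)
melody = generate([1, 2, 3], 8)
print(melody)
```

Decoding the sampled integers back to note names (inverting the vocabulary map) and writing them out as MIDI yields the final melody; the beat model is sampled the same way.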
I will include a piece of music generated by an MIT alumnus, but I will explain the steps from scratch.
TensorFlow, Keras, recurrent networks, and a good taste in music ;)
I am Kumar Abhijeet, a sophomore at RV College of Engineering, Bengaluru, and an AI enthusiast. I am a budding EDM producer and a Python programmer as well (no doubt about that). I have worked with small AI startups on building their frameworks.
I am an open-source contributor and a GSoC aspirant. I have always loved the idea of mixing technology with everyday phenomena, which is what I am doing here with music. I love going to meetups and meeting different kinds of communities to learn from them.