Clara: A Neural Net Music Generator

Take the AI vs Human Quiz.

Train an AWD-LSTM to generate piano or violin/piano music
Project overview is here.
Detailed paper is here.

Requirements:

Note: From inside the musical-neural-net home directory, run:
ln -s ./replace/this/with/your/path/to/fastai/library fastai 

to create a symbolic link to the fastai library. Alternatively, this blog has a clear description of how to get an AWS machine up and running with fastai ready to go.
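For example, if you cloned the fastai repository directly into your home directory (the path below is only an illustration; point the link at wherever your copy of the library actually lives):

ln -s ~/fastai/fastai fastai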

You will also likely need to use sudo apt install to get fluidsynth, mpg321, and twolame.
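On Ubuntu/Debian, for example, that would be something like:

sudo apt install fluidsynth mpg321 twolame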

Basic:

Run the Jupyter Notebook BasicIntro.ipynb or follow the individual instructions here. To create generations with a pretrained notewise model, using only the default settings, run:
python make_test_train.py --example
python generate.py -model notewise_generator -output notewise_generation_samples

The output samples will be written to data/output/notewise_generation_samples; open Playlist.ipynb to listen to them. I recommend the free program MuseScore for turning the MIDI files into sheet music.

Note: make sure the requirements above are installed first.

Data:

If you use your own MIDI files, they should go in data/composers/midi/piano_solo or data/composers/midi/chamber (the project expects a folder of MIDI files for each composer, e.g. data/composers/midi/piano_solo/bach/example_piece.mid).
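For example, a layout with two piano-solo composers and one chamber composer would look roughly like this (the composer folders and file names are just illustrations):

data/composers/midi/
    piano_solo/
        bach/
            example_piece.mid
        beethoven/
            another_piece.mid
    chamber/
        brahms/
            violin_sonata.mid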

Run:

python midi-to-encoding.py

to translate the MIDI files into text files for the various notewise and chordwise encodings.

My dataset is available here (you can download any or all of the archives). Put the notewise files in data/composers/notewise and the chordwise files in data/composers/chordwise, then run tar -zxvf thisfilename.tar.gz to expand each one.

Training and Generation:

  • make_test_train.py - create the training and testing datasets (adjust notewise/chordwise, optionally create only a small sample size)
  • train.py - train an AWD-LSTM (adjust model parameters, dropout, and training regime)
  • generate.py - generate new samples (adjust generation size)
Each script has reasonable default settings, but use --help to see the options and parameters that can be modified.

The data files linked above are quite large and will take a long time to train on. If you are looking to experiment with different training networks, I'd highly recommend first using --sample .1 (10% of the data) with make_test_train.py, so that you have a much smaller dataset to play with and can iterate faster, as sketched below.
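As a rough sketch, a small end-to-end run might look like the following (my_model and my_samples are placeholder names, and train.py's exact flags aren't shown here, so check each script's --help before running):

python make_test_train.py --sample .1
python train.py
python generate.py -model my_model -output my_samples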

Playlist.ipynb is a simple Jupyter Notebook which creates a nicely formatted playlist for listening to all the generations.

Music Critic:

  • make_critic_data.py - create the training and test datasets (requires a trained generation model to create the fake data)
  • critic.py - train a classifier to predict whether a sample is human-composed or LSTM-composed
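A minimal sketch of that workflow (make_critic_data.py needs a trained generation model for the fake samples, so train or download one first; see --help on each script for how to point it at a specific model):

python make_critic_data.py
python critic.py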

Composer Classifier:

  • make_composer_data.py - create the training and test datasets (all from human-composed pieces)
  • composer_classifier.py - train a classifier to predict which human composed the piece
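As with the critic, a minimal sketch of the workflow (defaults assumed; see --help on each script for the options):

python make_composer_data.py
python composer_classifier.py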

Pretrained Models:

Sample pretrained models are included in this repository. They were trained using the default settings (all composers, notewise with a sample frequency of 12, chordwise with a sample frequency of 4).
  • notewise_generator
  • chordwise_generator
  • chamber_generator (uses notewise encoding)
  • notewise_critic
  • notewise_composer_classifier

For example, use:

python generate.py -model notewise_generator -output notewise_generation_samples --random_freq 0.8 --trunc 3

to generate musical samples.
