Make Donald Trump Speak Again -- a stacked LSTM RNN that speaks like Donald Trump by generating text one character at a time.
RNN training is generally computationally expensive, so you should run the program on a GPU if possible.
# run the following program to
# 1) generate an index
# 2) generate a model
# 3) generate a sample speech
# generate a model
python train_model.py
# choose the best model from the model-tmp folder and name it model-DT.hdf5
# generate sample speech from the model
python generate_speech.py
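
For reference, here is a minimal sketch of what a stacked character-level LSTM can look like in Keras. The layer sizes, window length, and vocabulary size below are illustrative assumptions, not the exact values used in train_model.py:

```python
# Sketch of a stacked character-level LSTM in Keras.
# maxlen and vocab_size are assumed values, not taken from train_model.py.
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

maxlen = 40        # length of each input character window (assumed)
vocab_size = 60    # number of distinct characters in the corpus (assumed)

model = Sequential()
# first LSTM layer returns the full sequence so a second LSTM can be stacked on top
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, vocab_size)))
# second LSTM layer returns only its final hidden state
model.add(LSTM(512))
# project onto the character vocabulary and normalize with softmax
model.add(Dense(vocab_size))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```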
- The input to the model is characters, not words, which is why you see some typos and gibberish in the text. However, in most cases the model is actually able to learn English, which is quite amazing.
- The model captures some phrases like "hillary clinton" and "thank you, and god bless!".
- I ran into a problem where the RNN generates repeated patterns if I stick with the raw softmax result. Therefore, I adjust the softmax output with a diversity factor and sample from a multinomial distribution instead (see the sketch after this list).
- The project is inspired by Andrej Karpathy's note on how effective RNNs can be.
- @RyanMarcus's Edgar Allan Poetry project inspired me to keep track of the character index and add a diversity factor with multinomial sampling.
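
Below is a minimal sketch of the diversity-adjusted sampling described above. The function name and the default diversity value are illustrative; the idea is simply to rescale the softmax output by a diversity (temperature) factor and then draw from a multinomial instead of taking the argmax:

```python
# Sketch of diversity-adjusted multinomial sampling over softmax output.
import numpy as np

def sample(preds, diversity=0.5):
    """Pick a character index from the model's softmax output `preds`."""
    preds = np.asarray(preds).astype('float64')
    # rescale log-probabilities; diversity < 1 sharpens the distribution,
    # diversity > 1 flattens it toward uniform
    preds = np.log(preds + 1e-8) / diversity
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    # draw a single sample from the adjusted distribution
    return np.argmax(np.random.multinomial(1, preds, 1))
```

With a diversity near 1 the samples follow the raw softmax distribution; lower values make the output more conservative but also more prone to the repeated patterns mentioned above.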