Music Generation
The basic idea is to teach an AI to compose like me.
Key points
- ML as a music instrument
- ML as a creativity explorer/augmenter
How do I compose?
- Think about some abstract idea I want to explore (a musical idea)
- Generally play with some audio material (synth, sample, etc.) and try to shape it into the idea I want to express
- It can be a rhythm, some lines, a sensation, or even an imitation of some other music piece
- I keep iterating on this audio material and adding new material
- I record musical notes, chain lots of effects, and draw automations
- I already know some patterns that work well, e.g. which chord progression to use, or a signal follower from the main lead/bass to the kick so we can hear both (see the sketch after this list)
- I generally build most of the sound core first, then stretch it out and build structure (e.g. longer lines, Intro-A-B-Outro)
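A minimal sketch of that signal-follower pattern, assuming plain numpy buffers for the kick and bass (function names, attack/release times, and the ducking depth are all illustrative): follow the kick's envelope and use it to duck the bass, so both stay audible.

```python
import numpy as np

def envelope_follower(x, sr, attack_ms=5.0, release_ms=120.0):
    """One-pole envelope follower on the absolute signal."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    prev = 0.0
    for i, v in enumerate(np.abs(x)):
        coef = att if v > prev else rel      # fast attack, slow release
        prev = coef * prev + (1.0 - coef) * v
        env[i] = prev
    return env

def duck(bass, kick, sr, depth=0.8):
    """Reduce the bass gain while the kick envelope is high (sidechain-style)."""
    env = envelope_follower(kick, sr)
    env = env / (env.max() + 1e-9)           # normalize to 0..1
    gain = 1.0 - depth * env                 # 1.0 when kick is silent, lower on hits
    return bass * gain

# usage: ducked_bass = duck(bass_samples, kick_samples, sr=44100)
```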
Pain points
- Automation is pretty powerful but time-consuming
- It is hard to enter each note/chord by hand (a MIDI keyboard helps); it would be better to have templates/phrases to select from (a sketch follows this list)
- Stretching the core idea into a whole song is usually where I give up on finishing the piece
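A hedged sketch of the templates/phrases idea, using a made-up phrase format of (MIDI pitch, start beat, duration in beats) tuples: keep a small library and pick/transpose phrases instead of entering every note by hand.

```python
import random

# Each phrase: list of (midi_pitch, start_beat, duration_beats) tuples.
PHRASES = {
    "bass_groove": [(36, 0.0, 0.75), (36, 1.0, 0.5), (43, 1.5, 0.5), (36, 2.0, 1.0)],
    "lead_arp":    [(60, 0.0, 0.25), (64, 0.25, 0.25), (67, 0.5, 0.25), (72, 0.75, 0.25)],
}

def pick_phrase(name=None, transpose=0):
    """Pick a stored phrase (random if no name given) and transpose it by semitones."""
    name = name or random.choice(list(PHRASES))
    return [(pitch + transpose, start, dur) for pitch, start, dur in PHRASES[name]]

# usage: pick_phrase("bass_groove", transpose=2)  # same groove, up a whole tone
```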
Challenges / Questions
- How to create a model that learns to add automation? (MIDI note data on its own captures little of that dynamic shaping; see the sketch below)
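One rough way to frame this, assuming the automation is resampled onto the same step grid as a piano roll (the grid size and control names below are illustrative): pair the binary note grid with continuous control curves, so a model can learn to predict the curves from the notes, or generate both jointly.

```python
import numpy as np

STEPS_PER_BAR = 16   # grid resolution (assumption)
N_PITCHES = 128      # full MIDI pitch range
N_CONTROLS = 4       # e.g. filter cutoff, resonance, reverb send, volume

def empty_bar():
    """One training example: a binary note grid plus continuous automation curves."""
    notes = np.zeros((STEPS_PER_BAR, N_PITCHES), dtype=np.float32)      # piano roll
    controls = np.zeros((STEPS_PER_BAR, N_CONTROLS), dtype=np.float32)  # 0..1 curves
    return notes, controls

notes, controls = empty_bar()
notes[0, 36] = 1.0                                      # kick-range note on step 0
controls[:, 0] = np.linspace(0.2, 0.9, STEPS_PER_BAR)   # rising filter sweep
# A model could take `notes` as input and regress `controls`, or generate both.
```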
Ideas to implement
- Use DDSP and forget about MIDI
- Maybe combine DDSP+MIDI as input/output, so we can learn which notes/automations generated that output (see the sketch after this list)
- Multitask models look like a great idea: https://github.com/bearpelican/musicautobot
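For the DDSP+MIDI idea, a sketch of the data pairing using librosa and pretty_midi rather than the DDSP library itself (file names and the frame rate are placeholders): extract the f0/loudness curves DDSP conditions on, and align them with the MIDI that produced the audio, giving (notes in, audio features out) training pairs.

```python
import librosa
import pretty_midi

FRAME_RATE = 250  # frames per second (DDSP typically conditions around this rate)

def audio_features(wav_path):
    """DDSP-style conditioning: fundamental frequency plus a loudness proxy per frame."""
    y, sr = librosa.load(wav_path, sr=16000)
    hop = sr // FRAME_RATE
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C1"),
                            fmax=librosa.note_to_hz("C7"), sr=sr, hop_length=hop)
    loudness = librosa.feature.rms(y=y, hop_length=hop)[0]  # RMS as a loudness stand-in
    return f0, loudness

def midi_roll(mid_path):
    """Piano roll at the same frame rate, so notes and audio features line up."""
    pm = pretty_midi.PrettyMIDI(mid_path)
    return pm.get_piano_roll(fs=FRAME_RATE)   # shape: (128 pitches, n_frames)

# pairing: (midi_roll("take.mid"), audio_features("take.wav")) -> one training example
```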
Refs
- https://musicautobot.com/
- https://towardsdatascience.com/a-multitask-music-model-with-bert-transformer-xl-and-seq2seq-3d80bd2ea08e
- https://github.com/ybayle/awesome-deep-learning-music
- https://medium.com/@snikolov/neuralbeats-generative-techno-with-recurrent-neural-networks-3824d7ba7972
- https://github.com/snikolov/neural-beats
- https://magenta.tensorflow.org/ddsp
- NeuralFunk: https://towardsdatascience.com/neuralfunk-combining-deep-learning-with-sound-design-91935759d628