Music Generation
The basic idea is to teach an AI to compose like me.
Key points
- ML as a music instrument
- ML as a creativity explorer/augmenter
How do I compose?
- Think about some abstract idea I want to explore (a musical idea)
- Generally play with some audio material (synth, sample, etc.) and try to shape
it into the idea I want to express
- This can be a rhythm, some lines, a sensation, or even an imitation of another music piece
- I keep iterating on this audio material and adding new material
- I record musical notes, chain lots of effects, and draw automation curves
- I already know some patterns that work well (e.g. a signal follower from the
main lead/bass to the kick, so we can hear both; which chord progression to use)
- I generally build most of the sound core first, then I stretch it out,
building structure (e.g. longer lines, Intro-A-B-Outro)
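The signal-follower pattern above (using the lead/bass level to duck another track so both stay audible) can be sketched as an envelope follower driving a gain. A minimal numpy sketch; `envelope_follower` and `duck` are hypothetical helper names, not from any DAW API:

```python
import numpy as np

def envelope_follower(signal, sr=44100, attack_ms=5.0, release_ms=50.0):
    """One-pole envelope follower: track the amplitude of `signal`."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal, dtype=float)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def duck(carrier, trigger, amount=0.8, **kw):
    """Attenuate `carrier` wherever `trigger` is loud (sidechain ducking)."""
    env = envelope_follower(trigger, **kw)
    gain = 1.0 - amount * (env / max(env.max(), 1e-9))
    return carrier * gain
```

For example, `duck(bass, kick)` lets the kick punch through the bass; swapping the arguments gives the lead-to-kick direction mentioned above.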
Pain points
- Automation is pretty powerful but time-consuming
- Entering each note/chord by hand is hard (a keyboard helps, but it would be
better to have some templates/phrases to select from)
- Stretching the core idea into a whole song is usually the point where I give
up on finishing the piece
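The templates/phrases idea could start very simply: a dictionary of named progressions expanded into note events. A Python sketch; the chord voicings and progression names are illustrative assumptions, not from the source:

```python
# Hypothetical templates: chord symbol -> MIDI pitches (voicings are examples).
CHORDS = {
    "Am": [57, 60, 64],   # A3, C4, E4
    "F":  [53, 57, 60],   # F3, A3, C4
    "C":  [48, 52, 55],   # C3, E3, G3
    "G":  [55, 59, 62],   # G3, B3, D4
}
PROGRESSIONS = {
    "pop":  ["Am", "F", "C", "G"],
    "dark": ["Am", "G", "F", "G"],
}

def expand(progression, beats_per_chord=4):
    """Expand a named progression into (start_beat, midi_pitch) note events."""
    events = []
    for i, chord in enumerate(PROGRESSIONS[progression]):
        start = i * beats_per_chord
        events.extend((start, pitch) for pitch in CHORDS[chord])
    return events
```

Selecting `expand("pop")` drops a whole four-bar progression into the piano roll at once, instead of entering twelve notes by hand.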
Challenges / Questions
- How to create a model that learns how to add automation? (Note-level MIDI
data carries no automation/dynamics curves)
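One possible starting point for the automation question: quantize each automation curve into discrete (delta-time, level) tokens, so a standard sequence model could learn them alongside note events. A sketch under assumed conventions (7-bit levels like a MIDI CC, a 16-steps-per-beat grid); nothing here is an established format:

```python
def automation_to_tokens(curve, resolution=16):
    """Quantize an automation curve, given as (beat, value) points with value
    in 0..1, into (delta_step, level) tokens for a sequence model."""
    tokens = []
    prev_step = 0
    for beat, value in curve:
        step = round(beat * resolution)
        level = min(int(value * 128), 127)  # clamp to 7 bits, like a MIDI CC
        tokens.append((step - prev_step, level))
        prev_step = step
    return tokens
```

The inverse mapping (tokens back to a drawn curve) would let a trained model emit automation the DAW can render.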
Ideas to implement
Refs