Teaching Computers about Music

I recently took an interest in music, and more specifically in algorithmic composition: designing computer programs that can compose music. I made some attempts at implementing such programs, based on the idea that it should be possible to come up with a set of rules for generating music that sounds pleasant. What's pleasant to one person might not be pleasant to everyone, but I thought I could easily conjure up rules for generating music that was pleasant to me.

Unfortunately, the more I looked into music theory, the more problematic this became. There are some well-known, if loose and imprecise, "rules" associated with classical music, but it seems to me that these "rules" are in no way sufficient to derive an algorithm from. Furthermore, I find that music theory largely fails to account for modern music. Some people might tell you that pop and dance music are trivial once you know about classical music. I would argue that this point of view is arrogant and uninformed. Modern electronic music has a much stronger focus on rhythm and unpitched sounds. It has integrated musical elements from non-European cultures and features instruments and composition techniques that didn't exist before the 1960s.

A few days ago, I started thinking about ways to flip the algorithmic composition problem on its head. I'm unable, at this point, to write out elaborate rules to compose musical pieces by hand; it seems like an overly complex, tiresome process. I do, however, have my own musical tastes. If you show me a snippet of music, I can tell you whether it sounds musical or discordant to me. My idea, then, is to have an algorithm that composes random musical patterns, and to have a human judge tell the algorithm what sounds best.
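To make the idea concrete, here is a minimal sketch of what such a human-in-the-loop feedback loop could look like. Everything in it is my own simplification for illustration: candidate patterns are fixed-length lists of MIDI note numbers, the judge rates them 0-5 at a text prompt, and evolution just keeps the favourite and mutates it. The actual program described below evolves a grammar and renders real audio rather than printing note numbers.

```python
import random

# Illustrative assumptions: fixed-length patterns of MIDI notes in a rough
# bass register, a 0-5 rating prompt, keep-the-favourite-and-mutate evolution.
PATTERN_LEN = 8
NOTE_RANGE = range(36, 60)

def random_pattern():
    return [random.choice(NOTE_RANGE) for _ in range(PATTERN_LEN)]

def mutate(pattern, rate=0.25):
    """Copy a pattern, randomly replacing some of its notes."""
    return [random.choice(NOTE_RANGE) if random.random() < rate else note
            for note in pattern]

def ask_rating(pattern):
    """Show a pattern and ask the human judge how good it sounds (0-5)."""
    print("pattern:", pattern)
    while True:
        try:
            score = int(input("rating (0-5): "))
            if 0 <= score <= 5:
                return score
        except ValueError:
            pass
        print("please enter a whole number between 0 and 5")

def evolve(rounds=10, pop_size=4):
    """Ask the judge to rate each candidate, keep the favourite,
    and refill the population with mutated copies of it."""
    population = [random_pattern() for _ in range(pop_size)]
    for _ in range(rounds):
        scored = sorted(((ask_rating(p), p) for p in population), reverse=True)
        best = scored[0][1]
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return population[0]

if __name__ == "__main__":
    print("best pattern:", evolve())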

I was able to prototype a simple version of this idea in about 6 hours of programming time. The program I implemented essentially "evolves" a simple musical grammar based on the user's tastes and renders the sounds using a synthesized bass patch. My girlfriend and I both tried teaching the program about our tastes.
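The post doesn't spell out what the grammar looks like, so the following sketch is only one plausible encoding, not the one the program actually uses: nonterminals that expand into short phrases of (MIDI note, duration) pairs, plus a mutation step that tweaks one production between rounds of user feedback. The rule names, note range, and fixed 0.25-beat note duration are all assumptions (fixed durations at least fit with variable-length notes being left as future work).

```python
import random

NOTE_RANGE = range(36, 60)  # roughly the register of a bass patch

def random_grammar(n_rules=3):
    """Build a random grammar: 'S' expands into a sequence of phrase
    nonterminals, and each phrase nonterminal has several candidate
    productions made of concrete (MIDI note, duration) pairs."""
    grammar = {"S": [["A", "A", "B"]]}
    for name in ("A", "B"):
        grammar[name] = [
            [(random.choice(NOTE_RANGE), 0.25) for _ in range(4)]
            for _ in range(n_rules)
        ]
    return grammar

def expand(grammar, symbol="S", depth=0, max_depth=8):
    """Recursively expand a symbol into a flat list of (note, duration) pairs."""
    if isinstance(symbol, tuple):
        return [symbol]          # terminal: an actual note
    if depth >= max_depth:
        return []
    notes = []
    for sym in random.choice(grammar[symbol]):
        notes.extend(expand(grammar, sym, depth + 1, max_depth))
    return notes

def mutate(grammar):
    """Return a copy of the grammar with one note in one production changed."""
    new = {k: [list(p) for p in prods] for k, prods in grammar.items()}
    prod = random.choice(new[random.choice(["A", "B"])])
    prod[random.randrange(len(prod))] = (random.choice(NOTE_RANGE), 0.25)
    return new

if __name__ == "__main__":
    g = random_grammar()
    print(expand(g))           # a phrase that would be sent to the bass patch
    print(expand(mutate(g)))   # a slightly different phrase after one mutation
```

In a setup like this, each round of user feedback would decide which mutated grammar to keep, so the listener shapes the space of phrases rather than rating individual notes.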

Here are some samples from a single run I did last night. The training took about 15 minutes:

Here is one sample from a run supervised by my girlfriend:

The first three samples sound similar because they are based on the same grammar, derived during a single run. The algorithm can generate rather different sounds, however, depending on the choices made during a given run and your own personal preferences.

So far, I find the results very encouraging. I think there is clear potential in the idea that someone can teach an algorithm how to compose based on their own tastes, even if they do not have formal knowledge of music theory. I plan to explore this concept further by trying to generate more complex music that integrates a drum kit and variable-length notes.