
Thoughts about Iamus and the composition of music by computer

This past Wednesday BBC News reporter Sylvia Smith filed an article that appeared on the BBC News Web site. The article concerned a project at the Universidad de Málaga called Iamus, which is also the name of the project’s computer program, itself named after a character from Greek mythology who could talk to birds. The objective of the project has been to apply the results of artificial life research to “evolve” a computer capable of composing music. The article includes a video clip in which Professor Francisco Vico gives a brief description of how the software works, followed by a bit of intriguing footage of the London Symphony Orchestra performing some of the results.

Violinist Cecilia Bercovich performing "Hello World!," "composed" by Iamus
screen shot from YouTube video

As might be imagined, that “bit” is far too brief to get any sense of what Iamus is capable of doing. Fortunately, Cristo Barrios, a clarinetist with the project, has uploaded two longer videos to YouTube, both of which are “Making of” reports. The first is about “Hello World!,” a trio for clarinet (Barrios), violin (Cecilia Bercovich), and piano (Gustavo Díaz-Jerez), which was Iamus’ “first composition.” (Those scare quotes are not meant to be pejorative. Artificial life techniques involve algorithms for selection from large populations based on “fitness” criteria. I am sure that considerable effort went into generating populations, establishing “fitness,” and modeling a suitable “evolutionary process” before the researchers could declare that “music” had emerged from their methods.) The second is a video prepared to promote the release of the first Iamus CD, and it offers a broader range of examples of what the project has achieved. Both of these videos are well worth viewing.
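For readers who want a feel for what such an “evolutionary process” looks like in the abstract, here is a minimal sketch in Python. It is emphatically not the Iamus algorithm, whose genome encoding and fitness criteria are not described in the article; the representation of a “piece” as a list of pitches and the toy fitness function are placeholders of my own invention.

```python
import random

# NOT the Iamus algorithm: its genome encoding and fitness criteria are
# not described in the article.  A "piece" here is just a list of MIDI
# pitches, and the fitness function is a deliberately naive placeholder.
POPULATION_SIZE = 50
PIECE_LENGTH = 16
GENERATIONS = 200

def random_piece():
    return [random.randint(60, 72) for _ in range(PIECE_LENGTH)]

def fitness(piece):
    # Toy criterion: reward stepwise melodic motion by penalizing leaps.
    return -sum(abs(a - b) for a, b in zip(piece, piece[1:]))

def mutate(piece):
    child = piece[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-2, -1, 1, 2])  # nudge one pitch
    return child

# The standard evolutionary loop: evaluate, select the fittest, breed.
population = [random_piece() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring

print(max(population, key=fitness))
```

Whatever the real system does, the shape of the computation is the same: generate a population, score it against “fitness” criteria, and let selection and mutation do the rest.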

Nevertheless, I feel it fair to observe that Iamus is far from the first major effort to get a computer to compose music. Indeed, to be more specific, it is not the first to generate scores to which audiences would be willing to listen (just to draw a distinction between the abstractions of computer output and the real world of music being made in a concert setting). Back in 1992 I published a review in IEEE Expert of a videotape entitled Bach Lives!...At David Cope's House, which provided a valuable profile of Cope’s Experiments in Musical Intelligence (EMI) project, whose software has produced compositions in the styles of over 100 composers. (Sadly, this video does not appear to have been rereleased on DVD, but Cope has had a YouTube account since last August and has been uploading videos to it … as recently as today.)

Even before that video was reviewed, in my capacity as Book Review Editor for Artificial Intelligence, I had arranged for a colleague, himself a composer, to review Cope’s first major book on his approach, Computers and Musical Style. My colleague used the phrase “musical Mad Libs” as a dismissive description of Cope’s technique. This refers to a game invented by Leonard Stern and Roger Price in 1953 in which one player creates a story, replacing many of the key words with blanks identified only by part of speech. The other players are then required to fill in the blanks without knowing their context. The story is then read aloud, often with hilarious results.

It is true that Cope worked with musical phrases (analogous to parts of speech), rather than trying to reduce his system to the logic of individual notes; but, in doing so, he was simply continuing a practice that had been around at least since the Middle Ages. In those days the practice was called centonization, cento being the Latin for a single patch in a patchwork quilt. When a new chant was required for some new text being added to a service, that chant would be created by piecing together fragments from chants that were already known. In his pioneering treatise on Gregorian chant, Estetica gregoriana ossia Trattato delle forme musicali del canto gregoriano, published in Rome in 1934, Paolo Ferretti codified this practice in terms of filling in three successive “blanks” (Mad Lib style, but without anything between the blanks): an opening formula, one or more central formulas, and a closing formula. He then enumerated the “patches” that could be used to fill each of these blanks, along with an elaborate set of constraints over how individual patches could be ordered. In terminology that did not exist in his time, Ferretti had devised a context-free (or context-limited, depending on how you choose to view the constraints) grammar for generating “sentences” that would be “new” Gregorian chants. (For the record, I implemented such a sentence generator, documenting it in my doctoral thesis.)
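To make that idea concrete, the following Python sketch generates “sentences” along the lines Ferretti describes: an opening formula, a chain of central formulas constrained in their ordering, and a closing formula. The formula inventories and the ordering constraint below are invented placeholders for illustration; they are not Ferretti’s actual catalog, nor the generator from my thesis.

```python
import random

# Hypothetical formula inventories -- placeholders, not Ferretti's catalog.
# Each "blank" (opening, central, closing) has its own lexicon of patches.
OPENINGS = [["d", "f", "g"], ["d", "e", "f", "g", "a"]]
CENTRALS = {
    "A": ["g", "a", "g", "f", "g"],
    "B": ["a", "c'", "a", "g", "a"],
}
# Assumed ordering constraint: which central formula may follow which.
ALLOWED_NEXT = {"A": ["B"], "B": ["A", "B"]}
CLOSINGS = [["f", "e", "d"], ["g", "f", "e", "d"]]

def generate_chant(n_central=3):
    """Fill in the three successive blanks of Ferretti's scheme."""
    chant = list(random.choice(OPENINGS))           # opening formula
    label = random.choice(list(CENTRALS))
    for _ in range(n_central):                      # one or more central formulas
        chant.extend(CENTRALS[label])
        label = random.choice(ALLOWED_NEXT[label])  # obey the ordering constraint
    chant.extend(random.choice(CLOSINGS))           # closing formula
    return chant

print(" ".join(generate_chant()))
```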

Leap forward to Europe in the eighteenth century, and we discover the Musikalisches Würfelspiel (musical dice game). This was another fill-in-the-blanks pastime, this time with entire measures as the blanks, each of which could be filled in with one of eleven options determined by tossing a pair of dice. Carl Philipp Emanuel Bach made one of these; and Nikolaus Simrock published one, attributing it to Wolfgang Amadeus Mozart (an attribution that earned it the catalog number K. 294d). Since there were no constraints as to what choice for any measure could follow any other, the system was far simpler than the one Ferretti had described.
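The selection procedure is simple enough to capture in a few lines of Python. The measure tables below hold placeholder labels rather than actual measures from the Mozart-attributed game, but the logic is the same: one throw of two dice per measure, with the eleven possible totals (2 through 12) indexing eleven pre-composed alternatives.

```python
import random

# One throw of two dice per measure selects among eleven alternatives.
# The sixteen-measure length follows the minuet in the game attributed
# to Mozart; the measure labels here are placeholders, not his music.
NUM_MEASURES = 16

# One lookup table per measure: dice total (2-12) -> pre-composed measure.
tables = [{total: f"bar{bar}-option{total}" for total in range(2, 13)}
          for bar in range(1, NUM_MEASURES + 1)]

minuet = []
for table in tables:
    roll = random.randint(1, 6) + random.randint(1, 6)  # two six-sided dice
    minuet.append(table[roll])

print(" | ".join(minuet))
```

Note that, unlike the chant generator, nothing here constrains which choice may follow which; every sequence of rolls yields a “valid” piece.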

Thus, at the simplest possible level of description, what Cope has been doing for about a quarter of a century amounts to representing both styles and individual composers in terms of a formal grammar (not necessarily context-free) and a lexicon of “terminals” (the symbols used to fill in the blanks) for that grammar. The Bach Lives! video explained this technique and showed examples of the results being performed in a concert setting. What was most interesting was that seeing performers approach his scores seriously, performing them with the same expressiveness they might bring to a Mozart sonata, was sufficient to persuade the listener that this was, indeed, a “musical” experience.

Now, from an algorithmic point of view, Iamus is following a path radically different from the one Cope has been pursuing. Nevertheless, as far as the results are concerned, the effect of watching the videos that Barrios has uploaded is not all that different from my recollections of watching Bach Lives! I would thus like to conclude that Iamus is the latest example of a distinction I wrote about almost exactly a year ago in an article entitled “Making music versus making music notation.” Both EMI and Iamus involve experiments in making instances of music notation; and, from that point of view, their work is not that different from the approach behind many of the pieces by John Cage. Cage differed only in how he made his choices, whether that involved tossing coins or seeking out imperfections in the paper on which he was writing.

However, where listening is concerned, the method leading to the notation is secondary. What is primary is the act of music-making in which the performers engage and how the listener responds to what those performers do. Put another way, the music is in the performance, rather than in the composition without which that performance would not take place. The issue is not, as Smith seems to imply at the end of her BBC report, whether “a computer could become a more prodigious composer than Mozart, Haydn, Brahms and Beethoven combined.” The computer is only prodigious at creating more documents, and what is most interesting about the documents generated by Iamus is their capacity to challenge the creative talents of performing musicians.
