“Cool”, Yael Kirschenmann says, nodding as she takes off her headphones. In the NU Building’s theatre hall, seven members of the VU Orchestra have just performed a computer-generated composition. It was supposed to sound like something Bach might have written – that was the prompt Kirschenmann gave AI software AIVA. “But it reminds me more of royalty-free timelapse music”, she says.
Kirschenmann is currently completing the Artificial Intelligence master’s at VU and plays the violin in the VU Orchestra. For her thesis, she’s exploring whether there’s a difference – or perhaps a similarity – in the emotional responses evoked by ‘classical’ music generated by AI versus that composed by humans. She chose Bach as a reference point because, in her view, his music is accessible to listeners even without a classical background. “Bach also has pieces you could clearly label as ‘happy’ or ‘sad’.”
Emphasize the weirdness
And so the VU Orchestra members also perform two AI-generated compositions: a ‘happy’ and a ‘sad’ one, opting for the live orchestra over AI instruments to make it harder for participants to single out the Bach pieces. Conductor Arjan Tien leads the group. They rehearse several times before Kirschenmann records the versions she’ll use in her research. The freshly composed music requires some trial and error. “Is that actually right?” Tien asks the harpist after she plays her part. “I think so”, she says. “It sounds kind of weird, but it’s what the sheet music says.” Tien: “Emphasize it – especially if it sounds weird. Otherwise, it just sounds like you’re making a mistake.”
Visually, the sheet music doesn’t seem too different from the orchestra’s regular repertoire, but the playing experience definitely is. “It’s not very difficult music, is it?” Tien asks. “Nope – thanks, AI”, the harpist jokes. Still, the ‘happy’ piece has some tricky phrasing. “You can tell this wasn’t written by a person”, says violinist Froukje Lycklama à Nijeholt afterwards. “The notes are really close together in some places, which makes it hard to play cleanly at this tempo. A human composer who understands the instrument might have handled that differently.” “Fortunately for AI, it landed skilled viola players”, Tien says.
Tight shower cap
A few weeks later, master’s student Lisa Hooftman sits in a small room high up in the NU Building wearing a tight shower cap. Electrodes connect her brain to Kirschenmann’s computer. To improve the EEG signal conduction, Kirschenmann squirts gel into the cap’s little sockets. “Free gel”, says co-researcher Eshwar Perumal Kumar. “You can style your hair for free any way you want afterwards”, he jokes. That is, if she manages to get the cap off again. It takes a bit of wiggling and tugging to get a good signal.
Once there’s a connection, Hooftman’s brain activity appears on Kirschenmann’s screen. Not much can be interpreted yet. “Basically, you can only really tell whether someone’s eyes are open or closed and if they’re in a calm or an alert state. To actually give the data meaning, we first need to cross-verify it with the questionnaire filled out by participants”, Kirschenmann explains.
“But even at this early stage, before the actual start, you already see clear differences between people”, Perumal Kumar adds. “Some brains are really hyperactive, while others are super calm.”
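For the technically curious: the kind of raw-signal check Kirschenmann describes often comes down to band power. Below is a minimal sketch in Python, assuming the EEG arrives as a NumPy array sampled at 256 Hz – the sampling rate and library use are illustrative assumptions, not the team’s actual pipeline. Power in the alpha band (8–12 Hz) typically rises when a participant’s eyes close, which is roughly the level of interpretation possible from the raw trace alone.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def alpha_power(eeg_channel: np.ndarray, fs: int = FS) -> float:
    """Average power in the 8-12 Hz alpha band for one EEG channel.

    Alpha power typically increases with closed eyes - about as much
    as can be read directly from an unprocessed EEG signal.
    """
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 12)
    return float(np.mean(psd[band]))

# Example: compare an eyes-open and an eyes-closed segment
# eeg_open, eeg_closed = ...  # 1-D arrays of raw samples
# print(alpha_power(eeg_closed) > alpha_power(eeg_open))  # usually True
```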
To get the most reliable comparison, the team has chosen to work only with participants who have what they call a ‘healthy brain’ – without, for instance, Alzheimer’s. The hope is that their results will eventually contribute to research into how music can be used as therapy. “But that’s still ten steps down the road”, Kirschenmann notes.
Emotional movie scenes
Perumal Kumar is working on the project together with Kirschenmann, but on top of EEG data, he’s also using a smartwatch to record physiological responses. It tracks things like heart rate and skin temperature. While Kirschenmann focuses on the emotional response to music, Perumal Kumar looks at participants’ reactions to movie clips. Their combined data will help train an AI model to classify certain emotions.
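How such a model might be trained can be sketched in a few lines. The snippet below is a hedged illustration, not the researchers’ actual code: it assumes one feature vector per clip (EEG band powers plus smartwatch readings) and uses the questionnaire’s six emotion labels as training targets. All names, shapes, and the choice of a random-forest classifier are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-clip feature vectors: EEG band powers plus
# smartwatch signals. Shapes and labels are illustrative only.
rng = np.random.default_rng(0)
X_eeg = rng.normal(size=(60, 8))    # e.g. 8 band-power features per clip
X_watch = rng.normal(size=(60, 2))  # mean heart rate, skin-temperature change
X = np.hstack([X_eeg, X_watch])     # fuse the two modalities
y = rng.integers(0, 6, size=60)     # 6 emotion labels from the questionnaire

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```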
Hooftman is asked to breathe slowly for two minutes with her eyes closed, to reach a meditative state. “This gives us a baseline in the data so we can better analyse brain activity once we introduce external stimuli”, Perumal Kumar says. The calm doesn’t last long – in come those external stimuli. Interspersed with short meditations to return to a neutral state, the participant is shown famous film scenes – ones identified in a 2010 study as highly emotionally evocative.
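One common way a resting baseline like this is used – sketched here as an assumption, since the team’s actual analysis isn’t detailed – is to subtract the resting-state average from features recorded during a clip, so that brain activity under a stimulus is expressed as a change from the neutral state.

```python
import numpy as np

def baseline_correct(stimulus_feats: np.ndarray,
                     baseline_feats: np.ndarray) -> np.ndarray:
    """Express stimulus-period features as deviations from rest.

    Both arrays hold one feature vector per time window; the
    two-minute eyes-closed recording supplies the baseline rows.
    """
    return stimulus_feats - baseline_feats.mean(axis=0)

# Example with made-up windowed features:
rest = np.random.normal(loc=1.0, size=(120, 8))  # 2-minute meditation
clip = np.random.normal(loc=1.3, size=(30, 8))   # during a film scene
print(baseline_correct(clip, rest).mean())  # about 0.3: above baseline
```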
Meg Ryan (spoiler alert) dies on the pavement in City of Angels, the same actress fakes an orgasm in When Harry Met Sally, and a scene in Sleepers suggests the sexual abuse of young boys. After each scene, participants are asked to reflect in a questionnaire on what they saw: Did they feel in control? How calm or excited were they? How comfortable were they with watching the clip? What emotions did they feel: joy/happiness, amusement, sadness, anger, disgust or neutral?
External variables
Of course, people can respond differently based on personal experiences – whether they have children, whether they’ve experienced grief, and so on – which makes the data harder to interpret. Perumal Kumar acknowledges this complexity. The fact that participants must immediately rate their emotions on a screen that the researchers can also see adds another complication. “I was very aware that they were watching what I entered”, Hooftman says after the session.
Kirschenmann had a feeling other participants also experienced this. “Sometimes they would laugh during a sad scene, just because they recognised an actor. But then they’d still label their emotion as ‘sad’ in the questionnaire – maybe because they felt that was the ‘right’ answer and fitting for the clip. That’s something we’ll need to note as a disclaimer in our analysis. In the end, the questionnaire will probably serve more as a supporting tool to provide the AI model with certain labels. The EEG and smartwatch give us the rawest data.”
Rising temperature
After the film scenes, it’s time for the music. All clips are played in random order for each new participant. The two AI compositions are played first during the session with Hooftman, followed by a piece by Bach. Turns out, Hooftman’s strongest responses came from the film clips. “Your skin temperature rose by two degrees during one of them”, Perumal Kumar points out. Also striking: Hooftman thought the second AI piece – the ‘sad’ one – was composed by a human. And although it was designed to evoke sadness, she selected ‘happy’ as her primary emotion.
During the orchestra’s recording, Kirschenmann had been convinced that most people could tell whether a piece was composed by a human or a computer. “But maybe that goes mostly for people with musical training”, she now says. “They might pick up more quickly when musical patterns don’t sound quite right, or when the music lacks a bit of emotional depth.”
Moral concerns
Processing the data will take time, but Kirschenmann has already noticed that several participants enjoyed the AI-generated pieces. “Not surprising, really – they’re full of repetition, and kind of catchy.” As for whether she thinks AI poses a threat to human composers, she doesn’t hesitate. “It already does. AI bots are releasing music on Spotify. That could eventually get in the way of people trying to release their own music. Still, I think that for the full experience, people will long for a human behind the music. Someone with lived experience who you can relate to. Someone you can go see in concert.”
She had also wanted to ask participants whether they’d even be open to listening to AI-generated music. “But that’s a whole separate study – more on the ethics side. Still, it’s an interesting question. Some people asked me, ‘Does AI even know what it’s making? It can manipulate people with music’. But honestly, music has always been able to manipulate our mood – you’re directly influencing how someone feels. Look at metro stations: they play classical music to calm people down, and repetitive music to make sure no one hangs around too long.”