My aunt told me about an evening of avant-garde improvisation featuring jazz and classical musicians engaging with a series of AIs. Given the purpose of my journey I obviously could not resist. We rolled up to a suitably moody student cafe space on the UCSD campus. The evening was organised by Shlomo Dubnov, a professor of computer music, and featured flutes, assorted percussion, a very small trumpet, keyboards and algorithms. Most of the computer stuff was wrangled by GĂ©rard Assayag from IRCAM, which sits in the Centre Pompidou in Paris. In performative terms it occupied a slightly uncomfortable space between concert, tech showcase and jam session.

The AI was listening to the human musicians and its sound was patched into the loudspeakers in the venue. As a human observer it was sometimes difficult to pick out what was human and what was machine. When the machine was generating flute sounds and the only human was on the flute, you had to wait for a moment when the human wasn’t playing to be sure which sounds were the machine’s. A simple way to cheat the musical Turing test. With eyes closed it would have been impossible. But then the music was very avant-garde.
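For the curious: I can’t say exactly which system was on stage that night, but Assayag’s group at IRCAM is best known for OMax-style improvisers built on a structure called the factor oracle. The machine listens, builds an automaton over everything it has heard so far, then improvises by replaying the stream with occasional jumps back to earlier points that share a recent context. Here is a minimal sketch in Python, assuming a symbolic note stream rather than live audio; the jump probability and the toy phrase are my own inventions:

```python
import random

class FactorOracle:
    """Incremental factor oracle over a sequence of symbols
    (the standard Allauzen-Crochemore-Raffinot construction)."""

    def __init__(self):
        self.transitions = [{}]   # state -> {symbol: next state}
        self.suffix = [-1]        # suffix links; state 0 has none

    def add(self, symbol):
        """Extend the oracle with one more 'heard' symbol."""
        new = len(self.transitions)
        self.transitions.append({})
        self.suffix.append(0)
        self.transitions[new - 1][symbol] = new
        # Walk back along suffix links, adding shortcut transitions
        k = self.suffix[new - 1]
        while k > -1 and symbol not in self.transitions[k]:
            self.transitions[k][symbol] = new
            k = self.suffix[k]
        self.suffix[new] = 0 if k == -1 else self.transitions[k][symbol]

    def improvise(self, length, jump_prob=0.2):
        """Replay the learned material, occasionally jumping via a
        suffix link to another spot that shares recent context."""
        state, out = 0, []
        for _ in range(length):
            if random.random() < jump_prob and self.suffix[state] > 0:
                state = self.suffix[state]    # recombination jump
            if not self.transitions[state]:   # ran off the end: restart
                state = 0
            symbol, state = random.choice(list(self.transitions[state].items()))
            out.append(symbol)
        return out

# 'Listen' to a phrase, then generate a variation of it
oracle = FactorOracle()
for note in ["C", "D", "E", "C", "D", "G", "E", "C"]:
    oracle.add(note)
print(oracle.improvise(16))
```

The suffix-link jumps are what give this family of systems its slightly uncanny quality: the machine never invents new material, it recombines the performer’s own phrases at points where they genuinely overlap.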

When humans improvise together it’s always lovely to watch the non-verbal, mostly visual negotiations as they work out who’s going to take which chorus and when to wind the whole thing up. Here the AI had no eyes – in two senses. Firstly, it couldn’t see, so it couldn’t read cues from the humans. More interestingly, the humans couldn’t see what it was doing. Lovely work from the late Jon Driver, among others, has shown that very primitive cues, for example cartoon eyes, can direct your attention. For that evening the computer scientist operating the AI also functioned as an interface, indicating starts and finishes and signalling modulations. In some ways it was the interface between that human and the algorithm that was most interesting to observe.