Interviewer: Guy Harries
Date of interview: 29 May 2013
Location of interview: Harvestworks, Manhattan
Q: I’ve seen videos of your performances, and it seems like you create an integrated experience of gesture with connection to the sound.
A: But that was never different… I mean, if you see a jazz band, it is a fundamentally different thing when you just listen to the music or you see them play. Also in classical music, for instance Gideon Kremer, when he plays the violin you know he’s exploding on stage, and you would never sit there and look away and think about something else.
Q: I think that with technology, the laptop and its ubiquity, people somehow get away with no performance, even though there is an approach according to which, as long as there’s someone on stage who is a sentient being, we’ll accept that the music being performed is live. The whole issue of liveness: technology has problematised that, especially the laptop.
A: There are, of course, different strands. I remember once coming from a discussion in Vienna where they found it very progressive to take the musician as a person out of the equation: it’s like there’s the audience and there’s the musician or composer on stage, and they felt that was very hierarchical. There’s even a quote by Marx that says that in Communism everybody is an artist, or something like this, and they transformed that into a new way to argue, to justify performing barely seen in a corner while doing the music…
I mean we can fight over what this has to do with a Marxist quote, but there’s this one trend. Then you have this endless discussion about whether the laptop performer is boring, so they need [projected] visuals to make it visually interesting, and then you have the people saying, ‘Ha, the laptop is great because it finally frees the music of the bodily constraints.’ I mean, I still think that there’s a difference between a concert, where there’s always somebody who acts even if they’re in the corner, or you go into a place that’s all dark and you don’t see anybody and you only hear stuff from the speakers. As soon as somebody is there, it has a visual aspect, there’s some totally different energy in the room.
I once played with a trio of mine, Dishraube, with a circuit bender and a guy with modified turntables, at a visual arts festival in Cologne. I realised, ‘hey, we’re not having any visual element here’, and it was a visual arts festival! The guy said, ‘Oh, you look better than all of the visuals put together, you don’t need any, you know’. We were like fun guys on stage, nice to look at, so that’s ok.
You have a visual element when I play or my trio is playing, but the visual element is a by-product of that. I do think that in a visual and audio project you have to spend some time on the visual element as well. On the other hand, VJ culture often uses visuals, the kaleidoscopic flurry things on screen behind the players, and they are often made as a by-product; it’s not that these people necessarily think this is an artwork that’s 50% visual and 50% audio. It’s just something generated in the background. So there’s a change of culture here that also has to do with how live music, DJing, partying and that kind of music are done today.
Q: I think it’s also about the immersion of the body of the listener as a total experience, so that the sound and visuals immerse the spectator.
A: That might be the case, but I don’t think it’s that way if the visuals are only constrained to this funny rectangle. The difference is that audio takes over the room: you can close your eyes, you can even try to put your fingers in your ears, but you cannot escape it, especially with the volume levels in a club or so. But the visuals are not taking over the room: they’re usually at the back of the stage in some kind of stupid rectangle, like the extension of your iPhone experience all day long.
So I like these things when artists really think about how to take over the room with them.
Q: Regarding the presence of the performer, it seems like even when you connect the guitar to the machine, to the computer, to the processing, it’s very much about gesture, very much a means of expression. Would you describe your approach as an instrumental approach? Is that something you find essential to your work?
A: Well, there’s my Endangered Guitar… If you talk about guitar, it’s of course my body movement, my gestures, all going into there. I spent basically 13 years of practising with that, with the software or various iterations of the software, to make it react properly to my body gestures translated through the guitar, and the other way around: I was practising with the software to see what works best and what makes sense. So the Endangered Guitar is clearly an instrument in the traditional sense, one that is very much connected to the body, to the gestures. The problem, of course, is that as computer programmers we like to invent every little thing ourselves. If you play saxophone, an instrument that’s 100 years old, or the violin, then you can tap into a tradition: you learn to play that thing, and there are only very few people who do extended techniques in the sense that they really create something new. I read that somebody said nothing new had happened in extended techniques in the last 20 years. I would agree as a guitarist, looking at guitar history, but also at other histories I would say there aren’t many things happening that are new. So there you just take a tradition and make it work for you. As a computer programmer you start fresh: you think dynamic change should be different, and then you have to build it… it sucks.
Q: And was that why you chose to work with electronics? Because that vocabulary of extended techniques was limited at some point?
A: I did like the sonic outcome, practically since I was listening to Stockhausen’s Gesang der Jünglinge when I was 18. I did not come up with my own voice: in the 1970s I worked with synthesizers and so on, but looking back it was what everybody did. I had done some other things in between, but even in my jazz style there was always some experiment with electronics, even a little Casio device or something like this; I always tried things. I think it changed towards the end of the 1980s, when all of a sudden I became really aware of sound versus melody and harmony. In the 1980s I wrote in a jazz style for large or small ensembles, and it was all about scales, pitch and harmony and so on, and rhythm was more like what you do anyway in jazz. It was more the Coltrane approach of the pulse, where rhythm is a thread you follow somehow but you always go around it. I was not very conscious of it: it was all melody and harmony, and then somehow it snapped. I remember — and I have mentioned this very often in interviews — towards 1988/89 I started to become more and more interested in these extended intros, where the band doesn’t really know what to do yet: they start slowly, they have an intro and everybody finds their place, they experiment, there’s no real development going on. And then somebody counts in the head and all the energy goes down; it doesn’t go up as it’s supposed to in jazz — for me it went all down. I usually tell the joke that from then on I only played the intros, nothing else. In hindsight I can say we became more aware of sound and started working with different sonic possibilities, and that leads naturally to extending your techniques, your sound palette on your guitar, which came first.
And then when electronics came in, I used tonnes of guitar pedals, but it wasn’t really working out well, and then I had a couple of years without any electronics, where I just used extended guitar techniques and the Fred Frith style with, you know, gadgets, motors, sticks, stones, mallets, whatever, on the guitar. Then towards the end of the 1990s I needed to extend it more, and live sound processing as a technique came into being — first with LiSa, the live sampling software from STEIM, just for a year or so, and then I started Max/MSP programming in 1999. From 2000 on it was basically part of every performance and grew from there exponentially. But it’s about extending the sound palette and sonic possibilities.
Q: Regarding your work with the Third Eye Orchestra, together with laptops… how do you go about writing for an orchestra when you aren’t really working with instruments?
A: This is the wrong question because these are all instruments.
Q: Yes, but it’s not an embodied instrument as we were talking about with the gesture and the presence of the performer.
A: This is a good point, but I think, let’s say the gestural approach is a necessary prerequisite for everything. My approach is often to try to distill this, and that was often the case with the Endangered Guitar. I have a musical idea. Let’s say the idea could be very simple: how do you do a crescendo from zero to fortissimo within 30 seconds or a minute? That is a gesture, a musical idea, that can be achieved in very different ways in terms of me and the Endangered Guitar. First I have to make it as straight as possible, to come up with a way of playing that allows as little variation as possible. The idea, for example, would be to have a resonant filter on the guitar, and then the volume is handled with a volume pedal — this is one way. Then I have to really practise to make it work over a minute, because — and this is often a problem even for experienced people — usually people go up and after 20 seconds they lose steam, they cannot go up any more. So this is something I have to practise with virtually everybody.
So now say we have just a laptop and the guy plays a sound, then he moves the mixer up slowly — here the gesture’s not that important; he can do it as well as I can on my guitar. You need these gestures to function in the way I want; somebody else who uses just a synthesizer can do it differently. So the Third Eye Orchestra works with these kinds of musical terms, and we translate them to the different instruments that are there. I had somebody play a box with metal spikes on it: she ran out after these street-cleaning cars every time they went by, because they have brushes made of metal and the metal spikes fall off all the time. So she was grabbing these long metal spikes from the brushes and putting them in this box, with a piezo mic on it, and that’s her instrument. Now you can say that’s not that flexible — you can probably only play one or two things with it — so it would probably be best to use that instrument to do exactly those things.
We did come up with a very good idea to create this long crescendo. We made it work, but the best was actually when she was hitting the instrument very loudly at some point. So part of the orchestra is also finding what is the best you can do with your instrument. You try the other musical ideas to make them work somehow, but for the final performance she would probably not be part of the long crescendo; she would probably be used for the loud bang that somebody has to do at some point. I mean, the score uses roughly 50 musical ideas, written very generally in graphic notation, that have to be translated to any kind of instrument. This means I could also do this with a saxophone choir.
Q: And the performers on the electronic instruments and the laptops - do they determine how they interpret the graphic score?
A: Yes. I mean we do it together.
Q: So they don’t use software that you give them?
A: That is also possible, for instance in my week-long intensive Max course. But generally people bring their own material; I had people just playing iTunes, you know. It’s all an instrument, because it’s a matter of attitude.
Q: And the main parameter is dynamics?
A: Fun. Let’s say it has to be a meaningful musical evening. And now I can say, after so many years (and I started this kind of stuff with a laptop orchestra in 2005), that I know I can make it work regardless, whether it’s beginners or totally professional musicians. I have more flexibility with professionals, of course, but the rest works out. It’s fun. It has to be a musically successful performance. Dynamics is an important part of that. Dynamics is a lot of what I’m thinking about when I’m playing solo with my synthesizer, with my Endangered Guitar. I have these extremely loud and extremely soft passages. I have developments, I have rhythms in there, I have very dense situations, I have very sparse situations. It’s a stand-out vocabulary, a vocabulary that I like, a vocabulary that I use in other situations, so I often say that it’s like any kind of instrument. The orchestra that I play is just a continuation of my music by other means.
Q: Is there an element of risk in your performance?
A: Yes, I drive everything deliberately to fall apart. The Endangered Guitar became more and more unpredictable over the years because I programmed it this way. The reason is that, as an improviser, I like to work with unknown situations, and it’s still the same when I play with my Third Eye Orchestra: I come on stage with an idea of what I should do, because I would also like to be safe, but at the last second I decide to do something else to surprise myself. That keeps me on the edge of my seat and keeps me concentrated; that’s why I do not have any presets in my system. I feel that when I go to presets I do it out of weakness, to be safe. I never failed in that sense, but it cannot be like everything is 100% perfect, you know. I don’t have a feeling that it is serious in the sense of, oh, now I’m embarrassed or something.
One situation I was proud of was when I once played a solo concert with a semi-hollow-body guitar that had a very loud acoustic aspect. I was banging, I had something between the strings, I was banging these chopsticks, and I ran my Max software to the ground. Suddenly the whole thing disappeared from the screen… and it was not just a crash, it just disappeared, you know, the whole thing disappeared without a trace!
Q: And the sound as well?
A: Of course, everything… I continued playing, because it was a small space, it was loud enough, and it was towards the end of the concert. I played for another five minutes, as I did in the 1990s when I played without electronics, and brought it to a close. Just one person asked me, when the sound went, ‘was this planned or not?’
So the feeling was that a total computer crash meant that I could make it look like this was planned. I was really proud of myself for being able to do that. It would not have worked in a larger space with a solid bodied guitar.
Q: Do you set any limitations to your instrument or set up?
A: The more I bring, the fuzzier I become. One funny limitation is ‘what does not fit in my backpack does not go on stage’. I might bring an interesting proximity-based sensor for the guitar, but then I cannot bring a certain pedal or a certain other controller. These kinds of limitations are great: something breaks, or forces me to think in a different way. I actually cherish that, as long as it doesn’t kill me. That thing with the acoustic guitar — if it had happened at the beginning of the concert, I would have run out of steam after five minutes, of course.
Limitations… I mean, even as a traditional composer you have a choice between the notes: do you play all the time with all 12 notes? No, you start by limiting yourself to only a few of them.