DAFNA NAPHTALI

Interviewer: Guy Harries

Date of interview: 3 June 2013

Location of interview: Konditori Café (near Bedford L), New York

Website: http://dafna.info

Q: Your performance involves a lot of interactive electronics. Could you describe your set-up?

 

A: Most of the work I’ve done has been me controlling live sound-processing using software I developed in Max. Initially, I used an Eventide H3000 as my main instrument, and all these years later it still sounds really great. I had Peavey sliders going into a Max patch which had routings and combinations of parameters saved as “presets” that I used for controlling and sequencing the processing I was doing with the Eventide. I made all these Max patches in grad school and then built more and more around them, occasionally throwing away things that were no longer needed or viable.

 

Usually I’m looking to make multiple simultaneous parameter shifts, because that gets interesting results: things like changing the pitch-shift and increasing the feedback suddenly, at the same time. Over time, I started grouping parameters and making them into my own presets. Then I figured out that it’s fun to sequence the presets, so I started sequencing these radical shifts in parameter changes, using a polyrhythmic metronome I had programmed.
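[The preset-sequencing idea can be sketched outside of Max. Below is a minimal Python sketch, not her actual patch: two metronome streams in a 3:2 relationship each step through a list of “presets”, where each preset is a group of simultaneous parameter changes. The parameter names, values and tempi are invented for illustration.]

    import itertools
    import threading
    import time

    # Hypothetical parameter groups ("presets"): each one makes several
    # simultaneous changes, e.g. a pitch-shift jump plus a feedback jump.
    PRESETS = [
        {"pitch_shift": +7,  "feedback": 0.9, "delay_ms": 125},
        {"pitch_shift": -5,  "feedback": 0.3, "delay_ms": 500},
        {"pitch_shift": +12, "feedback": 0.7, "delay_ms": 250},
    ]

    def apply_preset(name, preset):
        # Stand-in for sending the values to the processor (e.g. via MIDI/OSC).
        print(f"[{name}] {preset}")

    def metronome(name, interval_s, presets):
        # One pulse stream of the polyrhythmic metronome: on every tick,
        # step to the next preset in its own cycle.
        for preset in itertools.cycle(presets):
            apply_preset(name, preset)
            time.sleep(interval_s)

    # Two streams in a 3:2 relationship make the polyrhythm; each sequences
    # the presets at its own rate, so the combined parameter shifts interlock.
    threading.Thread(target=metronome, args=("3-pulse", 0.4, PRESETS), daemon=True).start()
    threading.Thread(target=metronome, args=("2-pulse", 0.6, PRESETS), daemon=True).start()
    time.sleep(2.4)  # let the polyrhythm run for two full 1.2 s cycles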

 

I often process the sound played by other musicians. For years I would get together with a group of musicians, put a microphone on everybody and then choose whose sound would be interesting to process at that moment. I’d just turn up the aux send on their channel and do something with their sound. That processed and manipulated sound might be something I’d work with over the next few minutes. I gradually got used to this technical set-up and started “playing” it like an instrument.

 

In addition to that, I created “tap-tempo”-based algorithms, so that I could lock into the rhythm of the ensemble. It’s never going to be perfect, but if I have something in the delay line and I tap the delay line to be the same length as something else that’s going on, things might shift a little bit, but you still hear the rhythmic connection. More importantly, if the drummer’s playing a pattern, I’ll tap it in so that my loop will be in this weird polyrhythmic relationship to that pattern or to other things going on. It may not last for a long time, because it’s really hard to be that accurate, but it’s as good as anything else that’s done by ear.
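[The tap-tempo logic she describes can be sketched in a few lines. The following Python sketch is an assumption rather than her actual algorithm: it averages the intervals between recent taps and uses the result as the delay-line length, which is why small tapping errors only shift the loop slightly rather than breaking the rhythmic connection.]

    import time

    class TapTempo:
        """Minimal tap-tempo sketch: average the gaps between recent taps
        and use the result as a delay-line length."""

        def __init__(self, max_taps=4, timeout_s=2.0):
            self.taps = []
            self.max_taps = max_taps    # how many recent taps to average
            self.timeout_s = timeout_s  # a long pause starts a new tap series

        def tap(self, now=None):
            now = time.monotonic() if now is None else now
            if self.taps and now - self.taps[-1] > self.timeout_s:
                self.taps = []          # stale taps: start over
            self.taps.append(now)
            self.taps = self.taps[-self.max_taps:]

        def delay_ms(self):
            if len(self.taps) < 2:
                return None             # need at least two taps for an interval
            gaps = [b - a for a, b in zip(self.taps, self.taps[1:])]
            return 1000 * sum(gaps) / len(gaps)

    # Tapping along with a drummer at ~120 bpm (about 0.5 s between taps):
    tt = TapTempo()
    for t in (0.0, 0.51, 0.99, 1.50):
        tt.tap(now=t)
    print(f"set delay line to ~{tt.delay_ms():.0f} ms")  # ~500 ms, close enough by ear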

 

 

Q: I’m interested in the idea of performative presence, because with live electronics so much can happen in an automated, extended way that doesn’t necessarily relate to the performer. I’m looking at the dramaturgy of live electronics. Can you see that in your work?

 

A: A long time ago, in 1995, I wrote my Master’s thesis about live sound processing in general. What I had already realised back then was that I could have a great degree of “intimacy” in controlling the sounds I create using my methods for sound processing. Unless I use a MIDI controller with a lot of latency, what I’m doing is obviously connected to everything else that’s happening, without my having to build any gesture controller. Sure, it’s more fun if there’s a cool gesture controller involved, but do you really need to see me make a movement? As long as you know it’s me, and you realise that that sound came from either my voice (easy to understand) or from the musician playing across the room (not as easy), that works fine.

 

I’ve experimented with Wii [game] controllers to have more gesture-based performance, but they’re almost like little toys and not really that serious, with a very one-to-one relationship between movement and sound.

 

I’ve spent a lot of time doing improvised music. Maybe a lot of what I’m describing to you has grown the way it has because the primary need was to react quickly to what was going on in the group performance.

 

When I play solo (processing only my own voice) I may use several audio processes at once. I can allow these processes to continue while I do something else, so once in a while I can sing without worrying about the electronics. In my Max patch I have six buffers that I can record into, play back and process. Sometimes I also play back and process field recordings.

When I’m playing by myself, I don’t want the loops to always keep going in a repetitive way, so I set up one patch in which the loop points are randomised. The way the patch functions depends on whether there is any audio activity: whether I’m singing or not, for instance. At first I set this mutating looper patch up so that whenever I was singing it would start mutating, but I quickly realised that what I really wanted was the opposite relationship: when I create a loop length and I like it, I want it to stay at that setting for a bit rather than changing immediately. So I have it change whenever I stop singing rather than when I start.

 

I like rhythmically interacting with other things that are happening musically and creating temporary grooves. However, if I start playing with somebody and they immediately change what they’re doing (which really is fine), it does mean that we’re always off balance. Maybe just for a moment we could do something together, and then move on to other ideas. So that’s what I decided to do when I programmed the patch: it looks for “am I playing?”, and if I’m not, it’s mutating; but when I sing, it stops and just plays out the loop, so that I can do something with it and then let go. That’s an example of a design consideration that I find interactive and useful.
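[A rough sketch of this gating logic, in Python rather than Max, might look like the following. The envelope-follower threshold, buffer size and minimum loop length are all invented for illustration; the point is only the interaction design she describes: mutate the loop points during silence, hold them steady while singing.]

    import random

    SR = 44100  # sample rate

    class MutatingLooper:
        """Sketch of the gating idea: loop points stay fixed while the
        input is active (singing) and mutate only during silence."""

        def __init__(self, buffer_len, threshold=0.05):
            self.buffer_len = buffer_len
            self.threshold = threshold
            self.start, self.end = 0, buffer_len  # current loop points

        def is_singing(self, block):
            # Crude envelope follower: mean absolute level of the audio block.
            level = sum(abs(x) for x in block) / len(block)
            return level > self.threshold

        def process_block(self, block):
            if not self.is_singing(block):
                self.mutate()  # silence: let the loop wander
            # While singing: leave the loop alone so it can be played against.
            return self.start, self.end

        def mutate(self):
            # Randomise loop points, keeping at least 50 ms of material.
            min_len = int(0.05 * SR)
            self.start = random.randrange(0, self.buffer_len - min_len)
            self.end = random.randrange(self.start + min_len, self.buffer_len + 1)

    looper = MutatingLooper(buffer_len=4 * SR)
    silence, voice = [0.0] * 512, [0.2] * 512
    print(looper.process_block(silence))  # mutates
    print(looper.process_block(voice))    # holds steady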

 

When I do a solo gig I always bring some pre-recorded sounds that I call “interstitials”, just in case my computer dies during the performance. I’ve done this for years, and I have had occasion to use them.

 

Q: Ten or twelve years ago it was a nightmare. Computers used to crash a lot.

 

A: Yes. I don’t want dead time while I’m on stage playing solo; it’s the worst. So I usually have a lot of stuff prepared, and I can put it on if the need arises. Because I’m an entertainer first. If you’re a performer you have to perform, right?

 

Q: Do you think of the narrative in your performance?

 

A:  I'm very abstract…

 

Q: Is the ‘cyberhumanity’ of your set-up a possible narrative?

 

A: Not for me so much. You know, there’s some stuff that happened in the early 1990s where that was a big thing. But I really just like the technology. I’m an electroacoustic musician as much as I am a singer. When I’m doing the electroacoustic part, I don’t want to sit in the back. I’m normally at the front or on the stage, not in the back doing the diffusion, unless I’m performing from the middle of the room. I think of myself as a performer even if I’m not opening my mouth.

 

I was commissioned to write a piano piece in 1999, for Disklavier and electronics. I needed to be on stage because I was controlling the Disklavier. There’s an improvisation that happens between the pianist and the algorithm: her part is being generated by this algorithm, and I take a version of it and she plays along with it. But I’m a control freak: while she’s playing, I’ll move the generated part up or down. I build in a lot of things to happen automatically, but then I still enjoy surprising her by moving them. So it becomes this weird kind of battle, you know. Even in this piece, I’m a musician; I need to be there. I don’t want the computer to have control of the fun part. I just don’t want to do the stuff that’s not necessarily fun, like having to constantly move a loop point, so I delegate that part to the computer.
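[The piece’s actual generative algorithm isn’t described here, so the Python sketch below only illustrates the control relationship she mentions: notes come from an automatic process (stood in for by a placeholder random walk), and a live transposition offset lets the performer move the generated part up or down mid-phrase.]

    import random

    class GeneratedPart:
        """Sketch of the control idea only: an algorithm emits notes, and a
        live transpose offset (the performer's intervention) shifts them
        before they go to the Disklavier. The note generator here is a
        placeholder random walk, not the algorithm from the piece."""

        def __init__(self):
            self.note = 60        # middle C
            self.transpose = 0    # semitones, changed live by the performer

        def next_note(self):
            self.note = max(36, min(96, self.note + random.choice([-2, -1, 1, 2])))
            return self.note + self.transpose  # live offset applied on output

        def nudge(self, semitones):
            self.transpose += semitones  # e.g. move the whole part up a fifth

    part = GeneratedPart()
    print([part.next_note() for _ in range(4)])  # algorithm plays on its own
    part.nudge(+7)                               # performer intervenes mid-phrase
    print([part.next_note() for _ in range(4)])  # same walk, shifted up a fifth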

 

Q: The cyber aesthetic seems to have moved on from that point in the 1990s where it needed to be visualised: the robotic arm, the gestural controller. It’s possibly more embedded and familiar…

 

A: Yes, people are more familiar with the technology. If I pick up a Wii controller and perform, people will say, 'Oh, she's using a Wii controller'. In the past, if they had never seen anything like that before, they might say it was weird.

 

I just had a piece performed by PLOrk, the Princeton Laptop Orchestra, where they were using the tilt sensors on their laptops. I’m against using technology in a context where you’re really saying “look at the technology”, pointing out your means like a gimmick. On the other hand, this was a laptop orchestra, in which the entire thing is based on a technology. As my students say, “It’s not a big deal”.

 

A couple of years ago I made a vocal piece (Panda Half-Life) for the Magic Names vocal ensemble, which I was in. It was based on the biblical text about the Tower of Babel. The piece was initially for six singers with Wii controllers, but ended up with three of us on iPhones. I hired somebody to sit in the audience and control the sound, but soon realised I could run the whole show from my own phone. Maybe if I do that piece again in ten years it’ll be done a different way, using a different technology.

 

When I started the group What is it Like to be a Bat? with Kitty Brazelton in the 1990s, we described what we were doing as “digital chamber punk”, which at the time seemed weird to our peers, but now it feels like everybody’s doing that kind of mixing of genres and technologies. The idea of mixing live processing and electronics plus noise rock plus chamber music with a score is completely normal now.

 

Q: I think it's also very normal in New York, but maybe not elsewhere…

 

A: I’m thinking not so much that experimental musicians have added these other things, but that pop musicians have added chamber music and electronics to their music, in interesting ways sometimes.

 

I’m not a songwriter, but I have collaborated with Kitty Brazelton for years, and she is a wonderful songwriter. This combination is why the project has always worked. She’d work on each piece a little bit differently. She’d say, “Give me a whole bunch of sketches”, and then she’d put them together. Or she’d be the architect and say, “I want you to write this melody” or “create a soundscape or sample”, and I’d take my “assignments” and feedback and go and create some material for us. The new piece we’ve been working on is called Stabat Mom instead of Stabat Mater. We use some of Pergolesi’s Stabat Mater in it, but we’re doing it with different instrumentation, including granular synthesis on our voices, which evokes the idea of tears for me.
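[Granular synthesis, the technique she mentions applying to the voices, works by scattering many short windowed “grains” of a recording across the output and overlap-adding them. The following is a generic numpy sketch of that idea, not the patch used in Stabat Mom; all parameter values are illustrative.]

    import numpy as np

    SR = 44100

    def granulate(voice, grain_ms=60, density=200, out_s=3.0, seed=1):
        """Generic granular synthesis sketch: scatter short Hann-windowed
        grains of a voice recording across the output and overlap-add them.
        Small, dense grains give a smeared, shimmering quality."""
        rng = np.random.default_rng(seed)
        grain_len = int(SR * grain_ms / 1000)
        window = np.hanning(grain_len)
        out = np.zeros(int(SR * out_s))
        for _ in range(int(density * out_s)):
            src = rng.integers(0, len(voice) - grain_len)  # where to read
            dst = rng.integers(0, len(out) - grain_len)    # where to write
            out[dst:dst + grain_len] += voice[src:src + grain_len] * window
        return out / max(1.0, np.max(np.abs(out)))         # scale down if it clips

    # A stand-in "voice": a decaying 220 Hz tone instead of a real recording.
    t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)
    voice = np.sin(2 * np.pi * 220 * t) * np.exp(-t)
    cloud = granulate(voice)  # write to disk or play back with an audio library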

 

For the last show I did with Kitty, she wanted to do a piece that was the polar opposite of what we’d been doing previously. We did a piece using only small electronic devices and phones. So we looped things on our phones, and I had a cracklebox and other small devices. I think I’m moving away from elaborate, large-scale electronics.