MATTHEW OSTROWSKI

Interviewer: Guy Harries

Date of interview: 4 June 2013

Location of interview: the artist’s back garden and home studio in Greenpoint, Brooklyn, New York

Website: http://www.ostrowski.info

 

MO: When I started doing improvised music back in the 1980s I was playing analog synths, and it became very clear to me that my stage presence was an issue, because I was behind this wall doing all this stuff that's invisible. When I was a kid, I did a lot of acting, and through that I realised that you had to really be aware of how you appeared on stage, especially when you didn't have a ‘prop’ in the way a guitar or a sax is a prop. You had to be thinking about your stage presence even if you're just sitting there doing this [small operational gesture]. But in those days, when there were maybe three people doing live electronics in the city, the appearance of doing nothing still had novelty value. At concerts, I would sometimes set up a patch that varied itself, leave the venue, go buy a beer and come back. The electronic setup had its own kind of energy, and that in itself was a kind of dramaturgy. But nowadays of course that's not so interesting to anyone – we’ve all become accustomed to that disconnect.

 

So I became very aware of the fact that even if you weren't doing much, it was very important to be aware of your presence. I was also lucky – I played an old ARP 2600, which has a spring reverb built in – so I could actually push the entire synthesizer, tilt it really hard and get a nice banging sound. I also worked a lot with amplified objects, partially for their sound, but also because that's an actual thing that you physically activate to make sound.

 

One of the weird things about the control panel, or for lack of a better word ‘interface’, is that it tends to drive you, as the musician, into the world of the control panel, and away from the world of the people you're actually performing with, as well as your audience. Whatever I had lying around with a contact mic on it was a way of keeping myself from getting locked into the dials on the control panel, and gave me another thing to focus on. All of this is much worse now with computers: the interfaces look much better, and it’s much more attractive to just stare into them. If I wasn't doing a straight improv gig, I also tended to do a lot of work that had some kind of theatrical aspect, because I was very aware of the fact that otherwise I'm just some guy sitting there doing…that [small operational gesture].

 

From a musical point of view, I also realised that because electronic gizmos use a source of energy other than your body, they have a tendency to just sort of go. You just turn them on, and they go without you… Also, because there are so many factors and parameters involved in creating your sound, it gets very hard to rapidly change the sound in a reasonably controlled way. This is one of the reasons I went to [The Institute of] Sonology [in the Netherlands], so I could learn computer music. When I was working there, my interests very much coincided with what was going on at Steim [Amsterdam-based organisation focusing on live electronic music and interfaces]. Back then Steim was taking half the improvisers in New York, giving them a Sensorlab [one of the first dedicated devices for mapping sensors to computer control data] for some project, and then, sadly, they would bring all this sensor gear home and it would sit unused in a closet.

 

GH: So it's the early 1990s we're talking about?

 

MO: Yeah, that's when I went to school there. Steim had a huge reputation in New York at that time. This was also before laptop music was a genre -- it was the early days of people sitting behind a computer and doing stuff. So I immediately got to work on thinking through interfaces, precisely because of this dramaturgical consideration. I wound up going through loads of low-budget hardware that was available at the time and cobbling together a few things, including a couple of different sensor gloves, which operated in a very different way from [Dutch composer, performer, instrument inventor] Michel Waisvisz’s.

 

Roughly speaking, my glove interface is based on two principles, which are both musical and dramaturgical at the same time. I wrote most of my code in such a way that a lot of what takes place is an ‘energy in – energy out’ system. I wanted there to be some sense of that from the audience's standpoint: that I was not simply exploiting the endless energy supply of the power company. Rather, my body was the driving force behind the sound.
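
[Editor's note: a minimal sketch of how an ‘energy in – energy out’ mapping of this kind might be coded. The constants and names are illustrative assumptions, not Ostrowski's actual software:]

```python
# One possible 'energy in - energy out' mapping: hand motion fills an energy
# reservoir that constantly leaks, and the reservoir level drives amplitude.
# If the body stops moving, the sound dies away on its own.

LEAK = 0.95   # fraction of stored energy kept each control frame (assumed)
GAIN = 0.10   # how strongly hand motion feeds the reservoir (assumed)

energy = 0.0

def control_frame(hand_speed: float) -> float:
    """Map one frame of glove motion (hand_speed >= 0) to an amplitude in [0, 1]."""
    global energy
    energy = energy * LEAK + GAIN * hand_speed   # the body supplies the energy
    return min(energy, 1.0)                      # clip to a usable amplitude
```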

 

There’s always that weird thing about how perceptible the relationship between action and event is. There’s the zero relation of action and event: the guy sitting behind his laptop. And then there's the stupid relationship where everything's too clear – ‘Mickey Mousing’, they call it – ‘you put your hand up, the pitch goes up’. I try to negotiate between those things, and the central idea that I aim for is to give the audience a sense of something that is ultimately not being processed through the machine, but through my body, the way a real musical instrument is a function of human energy and the human parameters of arms and legs and fingers and hands. That way the music is not this thing in a box which I manipulate in some unknown way where the physical relationship is trivial. I also set up my rig with as little visual feedback on my computer as I could possibly manage, so I don't look at it too much.

 

In terms of code, I use several different instruments that I switch between. I have one module that does one thing and another module that does another. They are all quite limited, but by making them limited, I have few enough parameters that I can map them to my hand and my fingers. I've got 8 or 12 possible modules, each one of which has a set of gestures and a physicality that goes with it – so the limitations of the module can translate into gestures I can hold in my head, and my body can remember. Whatever I am doing, I know what's going on, as opposed to having a massive control panel with 80,000 sliders and finding myself thinking 'what exactly is happening right now?'
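
[Editor's note: a hypothetical sketch of this module structure. The module names and gesture mappings are invented for illustration:]

```python
# Each module owns a small, fixed gesture vocabulary; switching modules swaps
# what the same physical gestures mean. (Hypothetical names and mappings.)

class GrainModule:
    def play(self, bend: float, height: float, pinky: float) -> None:
        # this module's fixed mapping: bend -> density, height -> pitch
        self.synthesize(density=bend, pitch=height, volume=pinky)

    def synthesize(self, **params) -> None:
        pass  # sound generation omitted; only the mapping structure matters

class NoiseModule:
    def play(self, bend: float, height: float, pinky: float) -> None:
        # same gestures, different meaning: bend -> cutoff, height -> grit
        self.synthesize(cutoff=bend, grit=height, volume=pinky)

    def synthesize(self, **params) -> None:
        pass

modules = {"grains": GrainModule(), "noise": NoiseModule()}  # imagine 8-12 of these
current = modules["grains"]         # one switch selects a whole gesture vocabulary

def on_glove_frame(bend: float, height: float, pinky: float) -> None:
    current.play(bend, height, pinky)   # few enough parameters for muscle memory
```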

 

GH: So it becomes what some people would call ‘instrument intimacy’ where you actually know the instrument down to the very last detail?

 

MO: Yes. That's what I'm aiming for: to have that sense of it really being an extension of my body and not something I have to necessarily think about too much. I used to work with this great Swiss cellist Alfred Zimmerlin, who once said to me, 'I don't play with my brain; I play with my spinal cord.' He could do that because of his physical skills on the instrument – they were such that he didn't have to think 'How do I accomplish this effect?’ in order to accomplish it. His hands know how to do it. So that's what I aim for in my instrument.

 

So, to answer your question, I’ve got a bunch of different ones [programmed instrument modules]. Most of them are sample-based, simply because I find those more interesting than purely synthesized sounds – not only in terms of timbral richness, but also in terms of recognisability and unrecognisability, which I find a very interesting axis of expression. My whole initial discovery of and fascination with experimental music came through composers like Pierre Henry. Twenty-five years ago my dream was to do real-time acousmatic music, which I have kind of accomplished, although unfortunately acousmatic music has become rather ossified as a form, so I’ve become bored with it to some extent…

 

GH: All technology has a narrative associated with it when you introduce it on stage. Now, with the technology being ubiquitous and people using interfaces like Wii controllers, it becomes less of a novelty, whereas I think in the early 1990s, when you were using a glove, it was probably associated with this cyber-human narrative. Did you ever consider that?

 

MO: Well, you know, it’s funny -- that particular narrative was not one that I especially cared for.

 

GH: But the audience might?

 

MO: Oh sure. I mean, that's the kind of comment you get after shows all the time, and my answer is always, ‘This is the solution to a musical problem I have.’ Obviously you can't control how people are going to respond. You know, it’s weird, because the story I just told you of having it all happen through my body can tie in to a lot of theoretical stuff about all this cyber stuff and cybernetically extended biology and all that kind of blah...

 

To me that narrative is not interesting -- it was really about, ‘Can I keep up with the guitarist?’ And remember also, I was raised improvising in the 1980s, when it was all [John] Zorn: fast-change, super-aggressive playing. Suddenly the guitarist will play something, and I have to be there now, not in five seconds. That was my concern – immediacy.

 

GH: In regard to the instrument model, there's the responsiveness that you're building into the instrument as an essential aspect. Two things: one, how is it different from acoustic instruments? And two: instruments have a certain resistance, either conceptual or physical – what's the resistance in your set-up?

 

MO: The one thing I don't like about this whole glove business is that there's no haptic feedback.

 

GH: And in terms of not being necessarily physical? 

 

MO: I was just thinking about that; that's one of the big differences. I model my code on musical instruments as much as I can. I don't exclusively parameterise: I don't tend to map one physical parameter to pitch, another parameter to volume, etc. I often write code which links parameters. For example, if I'm moving quickly, it will increase the pitch regardless of where my pitch position is, because the idea is that I'm putting more energy into the system, which creates more tension, right?
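
[Editor's note: a sketch of this kind of parameter linking. The ranges and weights are assumed values, not Ostrowski's:]

```python
# Pitch follows hand height, but movement speed is added on top, so playing
# fast pushes the pitch up wherever the hand happens to be.

PITCH_LO, PITCH_HI = 100.0, 2000.0   # assumed pitch range in Hz
ENERGY_WEIGHT = 300.0                # how strongly motion bends the pitch upward

_prev_height = 0.0

def pitch_from_glove(height: float, dt: float) -> float:
    """height in [0, 1]; dt = seconds since the previous control frame."""
    global _prev_height
    speed = abs(height - _prev_height) / dt   # energy being put into the system
    _prev_height = height
    base = PITCH_LO + height * (PITCH_HI - PITCH_LO)
    return base + ENERGY_WEIGHT * speed       # more energy -> more tension
```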

 

I don’t know how my instrument is different from a regular musical instrument because I never played any. (Actually, that’s not true; I was a percussionist when I was young, but not seriously.) I don’t really have anything to compare it to in my own experience. The resistance – if you mean by that the learning curve – depends on the instrument I’m using. I have a couple of things I’m working on: one which I’m happy with, and others which I’m not so happy with yet, where my interface controls a physics system, and the behaviour of that system controls the musical parameters. So the physics model serves as a layer between myself and the parameters. The simulation has qualities that actual physical things have, which is my attempt to create the kind of resistance built into acoustic musical instruments.
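
[Editor's note: a sketch of a physics layer of this kind – a simple spring-mass-damper with assumed constants, not Ostrowski's actual model:]

```python
# The hand never sets the musical parameter directly: it pulls on a simulated
# mass, and the mass's position is what the synthesis code actually reads.
# The mass's inertia stands in for the resistance of an acoustic instrument.

MASS, STIFFNESS, DAMPING = 1.0, 40.0, 4.0   # assumed physical constants
pos, vel = 0.0, 0.0                          # state of the simulated mass

def physics_step(hand_target: float, dt: float) -> float:
    """Advance the spring-mass-damper one step; return what the synth reads."""
    global pos, vel
    force = STIFFNESS * (hand_target - pos) - DAMPING * vel
    vel += (force / MASS) * dt
    pos += vel * dt
    return pos   # lags, overshoots and settles rather than tracking the hand
```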

 

GH: And what is the interface to control them?

 

MO: It’s the glove, so physically I’m still waving my hand around and there’s no physical resistance, but to some extent it has a will of its own. This is the kind of thing I think a lot about: how to balance this out. You want your instrument to have some kind of agency, but you don’t want it to have so much that you’re completely confused about what the hell is going on. I remember when I started using physical systems I was essentially still moving faders; I wasn’t using a fader box but a touch pad, but it was still basically the same model as faders. I then built some physics into the faders so that if you made a sudden change it would overshoot and then stabilise, but unfortunately it sounded really dumb – the oscillations as it came to rest were too regular. It sounded too mechanistic. So the question is: how to get some unpredictability without introducing something that’s too tiresomely unpredictable – like random noise, which is pretty dull – and you’re stuck with a bloody computer, which is still overly predictable.
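
[Editor's note: the ‘too regular’ problem has a concrete basis. In a linear spring-fader like the sketch above, the ringing frequency and decay rate are constants of the system, identical for every gesture. A quick check, reusing the assumed constants:]

```python
import math

MASS, STIFFNESS, DAMPING = 1.0, 40.0, 4.0   # same assumed constants as above

# For an underdamped linear oscillator, every overshoot rings at one fixed
# frequency and decays at one fixed rate; neither depends on the gesture.
ring_hz = math.sqrt(STIFFNESS / MASS - (DAMPING / (2 * MASS)) ** 2) / (2 * math.pi)
decay = DAMPING / (2 * MASS)   # amplitude falls as exp(-decay * t)
print(f"always rings at {ring_hz:.2f} Hz, always decays at {decay:.1f}/s")
```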

 

GH: Do you find that the way you program the instrument influences your gesture, or do you build the instrument to suit the gestures that you usually use?

 

MO: Interesting question…I think it’s kind of a dialectical process, one aspect determining parts of the other and then back again. It depends. Certain things I keep consistent so I can keep track. For example, my pinky always controls volume, and the height of my hand usually controls pitch. I keep certain things consistent across them, but ultimately I get to pick.

 

GH: It seems like there are various approaches within the electronic music scene in New York. There seems to be a lot of anti-performance and minimalism, though this is not necessarily a criticism…

 

MO: Here’s the thing: I think what happened was you had a shift once this technology became available to everybody and became relatively easy to use. Public enemy number one: Ableton Live. There was an interesting thing that happened: when I left New York in ’92, I was one of the relatively few electronics people on the free improv scene, right? When I came back, most of the people left of my generation on the free improv scene had basically gone jazz, so there was no place for me, and suddenly I was on the laptop scene – a totally different world.

 

GH: So you were in the jazz scene before?

 

MO: In the sense that they were free players whose training and/or tastes were based in jazz. That generation either dropped out of music completely or they started playing much more straight-ahead jazz, and there is no place for me in that scene. And so you’ve got this other bunch of people. I think the difference between those two groups is that those on the laptop scene were not musicians, so their developmental experience was not engaging with other people in a performance context, but sitting in their bedrooms thinking ‘oh, this sounds really cool’. So their whole mode of making music, even privately or for fun, was not going out, having a couple of beers with your friends and playing in your rock’n’roll band. It was this private experience between them and their machine. For me, the thing which is exciting about music is that you have this interaction with other people, a kind of sonic conversation. This was not part of their schooling, not part of their conception of what music is. Their conception was ‘me coming up with a sequence, coming up with my giant set of loops’ or whatever. The music-making was about them and the software. It wasn’t about them and the sax player. And so naturally an anti-performance thing is going to come out of that, because that whole notion that there’s anyone you’re interacting with, be it a musician or an audience, is not there so much.

 

GH: I’ve seen people who are from the background of rock performance who are not concerned about stage presence. That’s why I’m saying anti-performance rather than non-performance, because it seems like a reaction to the notion of the spectacle.

 

MO: I would be more convinced of that if that moment had passed, because let’s face it, this ‘checking your e-mail’ music has been going on for ten or more years now, and it’s still ‘checking your e-mail’ music. I feel like that’s an awful long time to keep a stance.

 

Another thing is it’s the easiest solution. I spend more time getting my interface algorithms into some smoothly operating state than I do writing the music code. It takes a lot of work, and you’ve got to cart this extra stuff around with you that doesn’t work half the time. It’s much fussier, and so I think a lot of people just don’t want to be bothered with it.

 

GH: Is that the measure of good performance or good music?

 

MO: No, but I think the anti-performance stance is a cover-up to some extent. You were asking me my opinion about the whole cyber blah blah, and I said I’m not interested in it because what I’m interested in is making music that works. And I think one of the things that makes music work is interacting with the audience. We can debate that, but to say, ‘I’m doing non-performance because I’ve got all this theoretical stuff about it’ – that’s nice, but maybe you’re just doing it because it’s a lot easier to sit there with your fader box or with your mouse than it is to actually make something that works. Because it’s a huge amount of work, and you’ve got to practice, and I suspect a lot of that impulse comes not so much from the theoretical rhetoric as from it being an easy solution, justified post facto.

 

GH: Do you think presence is necessarily connected with gesture? 

 

MO: No, because as I say, back when I was standing behind a synthesizer I was there. And I’ve seen people, like this guy who was sitting behind his laptop and I thought, ‘That guy knows how to sit behind his laptop…That guy knows how to create the drama of his energy in the work just by the way he moves his hand towards the fader box’. And you know, he had that presence. So no, it doesn’t necessarily have to be.

 

GH: But then the subtle hand movement is a gesture...

 

MO: When I was studying acting, one of the hardest exercises we had to do was to sit on stage and do nothing and, again, you can do nothing and hold people’s attention, or you can do nothing and not hold people’s attention. So I don’t think it’s 100% dependent on that, but it sure makes it a lot easier. To really be able to do that requires an incredible amount of fortitude and charisma and ability to radiate, because everything you do, people see. You can’t pretend that they don’t see it.

 

Update from Matthew (October 2016):

 

One interesting recent development in interface technology, which I think has a lot of potential to increase the use of more sophisticated controller interfaces, has been the appearance of relatively accessible machine-learning tools, such as Wekinator [http://www.wekinator.org] and IRCAM’s MuBu library for Max [http://forumnet.ircam.fr/product/mubu/]. These tools can simplify the somewhat grueling process of multimodal mapping and parameter linking, and I hope more artists become familiar with them.
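
[Editor's note: a rough illustration of the mapping-by-example idea behind such tools, using scikit-learn as a stand-in – this is not the actual Wekinator or MuBu API:]

```python
# The performer demonstrates a few glove poses together with the parameter
# settings they want at each pose; a regression model then interpolates the
# full mapping. (scikit-learn stand-in, NOT the Wekinator or MuBu API.)
from sklearn.neural_network import MLPRegressor

glove_poses  = [[0.0, 0.1, 0.0], [0.5, 0.4, 0.5], [1.0, 0.9, 1.0]]  # sensor frames
synth_params = [[0.0, 0.2], [0.5, 0.5], [1.0, 0.9]]                 # e.g. [pitch, volume]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(glove_poses, synth_params)

# At performance time, any new pose is mapped through the learned model:
print(model.predict([[0.7, 0.6, 0.8]]))
```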