Thursday, May 8, 2008

Final Project and Final Thoughts

Well, I never quite knew what to expect when I signed on for PhysComp. Several of the shorter assignments were helpful as a baseline introduction to the materials and methods that ITP artists use in their work. But, like following recipes, until you have the confidence to try out your own variations, you're not really thinking for yourself. I think there could have been some additional intermediary steps to make the experience feel less like being "thrown to the wolves". I can't speak for everyone, but I definitely could have used a simple assignment that required building a specific style of enclosure out of wood, or a hands-on primer in sensor wiring, securing, and strain relief; most of this was left to schematic abstractions on the board and cold readings. Ultimately, though, a lot depends on defining clearly what you'd like to do in the class, and aside from knowing that I wanted to focus on sound, I had no real game plan and not much prior exposure to tech innovations in the arts.

For my final project I went for something modular, a system that could be flexibly adapted to allow a multiplicity of choices in making music. In retrospect, this was probably too large in scope and too vague. Most successful projects seemed to have one or two clearly defined goals for what the user was supposed to achieve. In other projects, much of the emphasis was on making an object that was fun for a user to interact with. Music, by contrast, isn't necessarily fun, and by tradition it certainly isn't interactive with audiences in a physical sense. But I wanted a way to interact with myself so I could eventually perform as a wind soloist. No impressive visuals, just sound, and making something impressive in that way requires a huge learning curve that for me goes well beyond the time frame of one semester. At least the exposure turned me on to a new realm of possibilities.

I began with the premise of wanting to make sonic use of the extra body motion I was already using to play sax, an answer of sorts to earlier critics of my highly gestural playing. I didn't necessarily want an electronic wind controller; I wanted my pure tone to be processible, and other gestures to trigger some algorithmic possibilities in the music. I also wanted to trigger accompanying sound files in interesting, algorithmic ways - shape my own amplitude and rhythmic envelopes, add and vary delay lines, and play around with spatialization somewhat. My approach was to first learn some basic sound-sculpting procedures in MAX/MSP and then, once I had what I wanted, figure out a way to control them hands-free so I could also focus on playing the horn. Trouble was, the MAX platform itself has a steep learning curve, but having struggled with it once before, I wanted my progress this time around to be real and palpable.

I finished - with the help of Peter McCullough - a 4-channel DJ rig based on rhythmic buffers (reading from slices of those sound files with lookup tables set to multislider presets, so the effect is a loop, but with a varying probability of "slice" playback), and added some sax processing capabilities, including sample-and-hold record/loop functions (with random step walking), spatially driven delay and pitchshifting effects, and harmonizing. My comfort zone in providing a serial link to these functions from the outside world lay in MIDI controller messages, and I became interested in a Behringer foot module and Eric Singer's MIDItron as my primary interfaces, but only after previous attempts at designing my own had failed. Those failures included flex sensing at my elbow pit (the sensor snapped off in class and, even after subsequent reinforcement - heat shrinking, hot glue, stitching, velcro adhesive, the works - lost its responsiveness), FSRs (which worked well with foot pressure but limited my mobility because I was wearing the MIDItron transmitter), and accelerometers (I soldered these incorrectly and ran out of time before I could reverse my failure). I did, however, manage to make a stretch sensor part of my foot module, and it was able to provide varying feedback in one delay line. I also discovered the joys of the Wii, and with a translation program called the Osculator I could send MIDI CC messages in response to tilt and roll with the remote mounted to my arm. This became effective in controlling the speed of the delay on my sax and also the steps in pitchshifting taken in that delay. There was no ulterior motive in choosing my methods; I simply chose the components that worked best for what I wanted to accomplish.
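For a sense of the kind of translation the MIDItron (and the Osculator) handle for me, here's a minimal, hypothetical Arduino-style sketch that reads a stretch sensor and sends its value as a MIDI continuous controller message. The pin and CC numbers are placeholders, not my actual settings.

// Hypothetical sensor-to-CC translation: read a stretch sensor in a voltage
// divider on analog pin 0 and send its value as MIDI CC 11, so a MAX patch
// can map it to something like delay feedback.

const int sensorPin = 0;
const byte ccNumber = 11;     // placeholder controller number
int lastValue = -1;

void setup() {
  Serial.begin(31250);        // standard MIDI baud rate
}

void loop() {
  int reading = analogRead(sensorPin);           // 0-1023
  int ccValue = map(reading, 0, 1023, 0, 127);   // scale to the MIDI CC range
  if (ccValue != lastValue) {                    // only send when the value changes
    Serial.write((byte)0xB0);  // control change, channel 1
    Serial.write(ccNumber);
    Serial.write((byte)ccValue);
    lastValue = ccValue;
  }
  delay(10);
}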

The MIDI lab did provide some additional insight, and, time permitting, I might have chosen to build one or more retrofitted footswitches that communicate MIDI messages via the Arduino (a rough sketch of that idea follows below). None of the ground triggers would have to be wireless, and in fact one issue I couldn't resolve was how to make the MIDItron truly wireless (there is no connection from transmitter to receiver, but every sensor is wired to the transmitter), or at least less confining. If I had an FSR affixed to the ground, I couldn't easily wear the MIDItron and also have it connected to an arm sensor without risk of yanking wires free, and certainly any walking mobility would be compromised. So in a modular performance rig with many inputs there may need to be a separation between ground inputs and body ones (this is essentially the route I took, with the Behringer/stretch sensor/MIDItron off my body and the Wii on my body). The Wii works so well I could see attaching one to every limb. Since I understand it to have a gyroscope sensor, with some long-term work a smaller, stealthier version of it could be developed. I also think it's reasonable to develop a simpler wearable project that is designed to do just one or two things sonically and incorporates soft circuits (an Arduino Lilypad, conductive thread stitching) and a small wireless transmitter (an XBee system) sending to a speaker or some other output. That is more aligned with the spirit of PhysComp: an object-oriented interface (the one wearable item) that anyone can use and have fun with. That said, in future iterations of my modular music rig I could see giving the audience some control by handing out the Wiis.
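If I ever do build one of those retrofitted footswitches, the Arduino side might be as simple as this rough sketch (pin and note numbers are arbitrary): send a note-on when the switch is pressed and a note-off when it's released, and let the MAX patch decide what that note means.

// Rough footswitch sketch: a momentary switch on digital pin 2 sends a MIDI
// note-on when pressed and a note-off when released.

const int switchPin = 2;
const byte noteNumber = 60;            // arbitrary note for MAX to listen for
int lastState = LOW;

void setup() {
  pinMode(switchPin, INPUT);           // assumes an external pull-down resistor
  Serial.begin(31250);
}

void loop() {
  int state = digitalRead(switchPin);
  if (state != lastState) {
    if (state == HIGH) {
      midiMsg(0x90, noteNumber, 100);  // note on, channel 1
    } else {
      midiMsg(0x80, noteNumber, 0);    // note off
    }
    lastState = state;
    delay(20);                         // crude debounce
  }
}

// send a 3-byte MIDI message out the serial port
void midiMsg(byte cmd, byte data1, byte data2) {
  Serial.write(cmd);
  Serial.write(data1);
  Serial.write(data2);
}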

Raising my toe onto an FSR during a climactic note was part of my live demo in class, and it was fairly effective in terms of making my body movement work for me. I'd like one day to find a larger, more rugged FSR and make a bigger pad that, under foot pressure, applies the mechanical pressure needed to send out usable values (this vision is still a dreamy abstraction; I have no idea how to implement it practically, or I might have done so sooner). But these results can also be accomplished with the Wii affixed to a calf, so it all comes down to how much time I want to put into the physical interface vs. the programming vs. actually performing. If the point were to have a sellable product to mass-market, the physical interface would become more important, along with cost issues. For me, I'm happy with something that just I can use.

In my video demo I kept the trigger types to a minimum so it's clearer what I'm doing with what. I used mostly one channel of playback (Behringer switch triggered); a sample-and-hold loop of a recorded sax phrase being retriggered to random pitches (the record and loop functions are Behringer switch triggered); a sax delay line that varies in speed according to Wii tilt and varies the pitch of the echoes according to Wii roll; and a delay line on the playback channel whose feedback increases in proportion to the stretch of the sensor affixed to the Behringer's continuous controller pedal (this stretch sensor is routed to the MIDItron, also lying on the floor). Had I set up two stereo speakers, there would be an audible ping-pong effect on the panning of the sax delay (which in turn would be influenced by some other aspect of Wii motion, such as yaw). The CC pedal is itself multi-functional - depending on bank select, it can fade a channel in or out, or adjust the overall tempo.

There is a moment where I'm not sure what to do next, and it will require more actual practice for me to become fluent in performance. That said, the channel playback being continuous is a great feature that will enable me to take "breathers" while I plan my next sax move, switch to flute, load other sound buffers, or make some other transition.

Note: Sorry, at this time you will have to view my video and pics from my ftp site - type in mattsteckler dot com slash remoteuser slash PhysCompFinalDocumentation

http://www.mattsteckler.com/remoteuser/PhysCompFinalDocumentation

Sunday, May 4, 2008

MIDI Lab - Extra Credit

Having already tinkered on my own with MIDI in the past - and utilized it in my final project's first iteration - I understood the logic behind the schematics and the code in this lab far more quickly than in my previous labs. Above all, I can immediately see several applications for this circuit. For my 4-channel DJ + sax rig, for example, I can send notein messages into MAX to toggle on/off states for muting each channel, in lieu of the laptop key commands I had used before.

Most everything worked as indicated, save a couple of hiccups along the way. One was making sure the correct Arduino serial port and board were selected (they'd been changed in the application from when I borrowed someone's Diecimila). Another was that I connected my MIDI-to-USB cable to the Arduino circuit's MIDI jack using its MIDI In plug rather than its Out plug. I lapsed into thinking "going IN to the computer," but with MIDI you have to align like directions: OUT goes to OUT, and then the USB side goes IN to the computer.


I knew ahead of time to configure a new setup in my Audio/MIDI Setup control panel so it would listen for my new device, and I also knew that I needed a MIDI-triggerable sound source, so I opened Garage Band - which automatically opens to a MIDI piano track. The first program played an ascending chromatic scale at 2/10 of a second per note (1/10 for note on, 1/10 for note off) across 2 octaves, then repeated the sequence indefinitely. I recorded the results in Garage Band with a notation view and also filmed this result:
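For reference, here's roughly what I understood that first program to be doing, reconstructed as a minimal Arduino-style sketch rather than the actual lab code; the starting note of the scale is my own guess.

// Rough reconstruction (not the actual lab code): play a two-octave ascending
// chromatic scale, 1/10 second note on and 1/10 second note off, and repeat.

void setup() {
  Serial.begin(31250);                            // standard MIDI baud rate
}

void loop() {
  for (byte note = 48; note <= 72; note++) {      // two octaves up from C3 (my guess at the range)
    midiMsg(0x90, note, 100);                     // note on, channel 1
    delay(100);
    midiMsg(0x80, note, 0);                       // note off
    delay(100);
  }
}

// send a 3-byte MIDI message out the serial port
void midiMsg(byte cmd, byte data1, byte data2) {
  Serial.write(cmd);
  Serial.write(data1);
  Serial.write(data2);
}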



When I moved to the second program, I first tried to use an FSR, but I heard only a very faint, rather low sound when I pressed the sensor and then switched the toggle on. Perhaps if I had scaled the lowest analog reading to a higher pitch value I might have corrected this, but I knew the given code would work with a potentiometer, since it scaled directly from the pot's natural 0 to 1023 range, so I switched to that and the results were instantly clean. Most notes were within reach of a treble clef staff.
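To capture the gist of the second program (again a sketch of my own, not the lab code): the variable resistor picks the pitch and the toggle switch turns the note on and off. An FSR at rest reads near zero, which maps to a very low note under this direct scaling, consistent with what I heard.

// Sketch of the second program's idea: a pot on analog 0 picks the pitch,
// a toggle switch on digital 2 turns the note on and off.

const int potPin = 0;
const int switchPin = 2;
byte currentNote = 0;
boolean playing = false;

void setup() {
  pinMode(switchPin, INPUT);                      // assumes an external pull-down resistor
  Serial.begin(31250);
}

void loop() {
  int state = digitalRead(switchPin);
  if (state == HIGH && !playing) {
    // scale the pot's 0-1023 range directly onto MIDI notes 0-127
    currentNote = map(analogRead(potPin), 0, 1023, 0, 127);
    midiMsg(0x90, currentNote, 100);              // note on at the scaled pitch
    playing = true;
  } else if (state == LOW && playing) {
    midiMsg(0x80, currentNote, 0);                // note off
    playing = false;
  }
}

// send a 3-byte MIDI message out the serial port
void midiMsg(byte cmd, byte data1, byte data2) {
  Serial.write(cmd);
  Serial.write(data1);
  Serial.write(data2);
}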


For a real music application, I imagine the separate variable resistor-for-pitch vs. toggle switch-for-note-on/off scenario would get quite cumbersome. To make a more keyboard-like setup, I assume one would use pushbutton-type switches, each "tuned" in the code to a different pitch in a 12-half-step octave range or something like that. To send variable velocity messages with one keystroke, the pushbuttons would have to somehow mechanically apply pressure to FSRs located beneath them, and those values could then be scaled to an appropriate MIDI velocity range. But I'm not one to advocate reinventing the wheel; I think it's better to send messages that trigger something not so one-to-one as far as pitch goes, like my channel-muting idea, where each channel reads from a playback audio buffer. I may try 4 switches to this end, along the lines of the sketch below.
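A rough, hypothetical sketch of that four-switch idea (pin and note assignments are placeholders): each switch sends its own note-on, and the MAX patch treats each note as a mute toggle for one playback channel.

// Hypothetical four-switch sketch: each switch sends a distinct note-on,
// and a MAX patch maps each note to muting/unmuting one playback channel.

const int switchPins[4] = {2, 3, 4, 5};           // placeholder pin assignments
const byte notes[4]     = {60, 62, 64, 65};       // placeholder note numbers
int lastStates[4]       = {LOW, LOW, LOW, LOW};

void setup() {
  for (int i = 0; i < 4; i++) pinMode(switchPins[i], INPUT);
  Serial.begin(31250);                            // standard MIDI baud rate
}

void loop() {
  for (int i = 0; i < 4; i++) {
    int state = digitalRead(switchPins[i]);
    if (state == HIGH && lastStates[i] == LOW) {  // switch just pressed
      Serial.write((byte)0x90);                   // note on, channel 1
      Serial.write(notes[i]);
      Serial.write((byte)100);
    }
    lastStates[i] = state;
  }
  delay(20);                                      // crude debounce
}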