This week’s assignment is to make a music playback controller using MIDI. Seeing all the NIME performances last year intrigued me, so I want to be more open to experimentation and think outside of the box and buttons. I have very long wavy hair, and my fingers are constantly running through it to untangle all the knots, so naturally I decided I wanted to play my hair as a musical instrument. Initially I wanted to experiment with an electrostatic generator so that my hair would become conductive and stick up in all directions as I play, but that idea was shot down in an office hours session with Tom before I could even work out the logistics (given the two-week deadline, and mostly because it’s potentially dangerous). After speaking to Tom, I decided my best option was to use conductive thread to simulate hair. On the last game controller assignment, I regretted not getting the MPR121 breakout board for my capacitive touch arrow keys, so I decided to get it for this assignment to map 12 strands of “hair” to 12 notes.
Lauren and I decided to collaborate on this project together. On Thursday night, we got together to look at the class notes and wire the circuit for MIDI output on our Arduino UNO. After we got it working via a USB MIDISPORT 2x2 and GarageBand, we played with some of Aaron Montoya’s personal drum and bass synths, and the Yamaha tone generator, which had some nice string options with vibrato and pizzicato. Over the next three days, I pored over documentation and Instructables on the MPR121 and MIDI. I referenced Adafruit’s example code to understand how the capacitive touch sensor works and adapted the logic of Tom’s 4-key Piano Controller to generate sound. I am slowly learning how to understand and reformulate existing code to get it to do what I want, but I definitely want to get to a point where I can write my own code from scratch.
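The core idea of adapting the 4-key piano logic to 12 electrodes can be sketched as a function that compares the previous and current MPR121 touch bitmasks and emits MIDI note-on/note-off bytes for each strand whose state changed. This is a simplified standalone sketch, not our actual Arduino code: the base note, velocity, and function name are illustrative choices, and the real sketch would write these bytes out over serial instead of returning them.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical mapping: 12 "hair" strands to a chromatic octave
// starting at middle C (MIDI note 60).
const uint8_t BASE_NOTE = 60;

// prev and curr are 12-bit touch bitmasks as reported by the MPR121
// (bit i set = electrode i touched). For each strand whose state
// changed, append a 3-byte MIDI channel-1 message:
// 0x90 = note on, 0x80 = note off.
std::vector<uint8_t> touchesToMidi(uint16_t prev, uint16_t curr) {
    std::vector<uint8_t> out;
    for (int i = 0; i < 12; i++) {
        bool was = prev & (1 << i);
        bool now = curr & (1 << i);
        if (now && !was) {            // strand newly touched: note on
            out.push_back(0x90);
            out.push_back(BASE_NOTE + i);
            out.push_back(100);       // velocity
        } else if (!now && was) {     // strand released: note off
            out.push_back(0x80);
            out.push_back(BASE_NOTE + i);
            out.push_back(0);
        }
    }
    return out;
}
```

Polling the sensor in `loop()`, passing the last and current readings through this function, and streaming the resulting bytes to the MIDI jack is essentially all the controller has to do.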
Since we wanted to keep our board setup compact enough to possibly fit in a headpiece, Lauren suggested we use the MKRZERO, which is a board recommended on the Arduino website for sound, music, and digital audio data. We spent the weekend debugging, troubleshooting, and failing to generate sound with the setup below:
The screen on the Yamaha tone generator kept reading: Err IlglData. Our classmate Elizabeth ran into the same issue, but she was able to get some sound with a delay. In retrospect, I’m sure there were mistakes I didn’t catch, but eventually we went back to what we knew worked:
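We never tracked down the exact cause of the error, but one common way to produce "illegal data" on a hardware tone generator is sending bytes that break the MIDI framing convention: a status byte must have its high bit set, and the data bytes that follow must not (values 0–127). A wrong serial baud rate (MIDI expects 31250) can scramble bytes into the same situation. This checker is purely illustrative, a way of stating the rule, and the function name is my own:

```cpp
#include <cstdint>

// A well-formed note-on/note-off message is one status byte with the
// high bit set (0x9n = note on, 0x8n = note off on channel n),
// followed by two data bytes with the high bit clear. A tone generator
// that receives a data byte where it expects status (or vice versa)
// may flag the stream as illegal data.
bool isWellFormedNoteMessage(uint8_t status, uint8_t note, uint8_t velocity) {
    bool statusOk = (status & 0x80) != 0 &&
                    ((status & 0xF0) == 0x80 || (status & 0xF0) == 0x90);
    return statusOk && note < 0x80 && velocity < 0x80;
}
```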
This is our first prototype in action. The remaining challenge is that the “hairs” keep touching each other, which reduces control and makes it harder to distinguish individual notes. I would like to resolve this in a future iteration.
Special thanks to Tom Igoe, Tiri Kananuruk, Sebastian Morales, Jim Schmitz, Aaron Montoya-Moraga, and Aaron Parsekian!