Live Web Final Project: Teach Me Something

For my final project, I decided to focus on creating a tool for teachers and students to collaborate on a whiteboard and learn together. I believe having access to video/audio enables the teacher to read facial expressions and body language, and to better understand when the student is confused or struggling with a concept. This project uses HTTPS, WebRTC, socket.io, PeerJS, and the p5.js canvas.
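At its core, the shared whiteboard boils down to relaying stroke coordinates over a socket. Here is a minimal sketch of that idea, assuming a socket.io server that simply rebroadcasts a made-up 'draw' event (event and variable names are placeholders, not my working code):

```javascript
// server.js: rebroadcast each stroke to everyone else
io.on('connection', (socket) => {
  socket.on('draw', (d) => socket.broadcast.emit('draw', d));
});

// sketch.js: p5 client
const socket = io(); // connect back to the server that served this page

function setup() {
  createCanvas(600, 400);
  background(255);
  // mirror strokes arriving from other clients
  socket.on('draw', (d) => line(d.px, d.py, d.x, d.y));
}

function mouseDragged() {
  line(pmouseX, pmouseY, mouseX, mouseY); // draw my own stroke locally
  socket.emit('draw', { px: pmouseX, py: pmouseY, x: mouseX, y: mouseY });
}
```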

Special thanks to Alejandro Matamala, Vidia Anindhita, and Bruce Nguyen for your help and patience! Thank you!

Packet Sniffing

The assignment this week is to capture and analyze network traffic while going about our usual network activities. Before buying my first MacBook in 2016, I was still using a hulking 8 lb Sony Vaio that had been stretched past the end of its life. It was a present from my parents for college, and in its early years, it worked like a dream. About four years in, it started showing its first signs of decline when the battery charger suddenly blew out while I was living in Tokyo. I had to replace the charger three more times, and my encounters with the blue screen of death were so frequent that I found it unusual if my laptop didn't crash at least once a day. I thought it might be interesting to run the same applications I typically run on my Mac on the PC as well, and see whether anything interesting turns up in Herbivore and Wireshark.

Something I didn't expect to find out through Herbivore when I got back to my apartment today is that my roommate is not home. She is typically home before I am, but when I ran Herbivore, I saw that only my devices were connected to our router. It's probably best this way so there are no…temptations.

I started by clearing all the applications on both laptops so I could watch the packets come in on both Herbivore and Wireshark. I opened up a couple of websites that I apparently frequented in 2016 on my PC, then gave myself five minutes to just surf mindlessly on my MacBook. In Herbivore, I noticed that the packets I was seeing from my Mac and my PC were the same. At first, I was a little confused because I was under the impression I would only see activity filtered by IP address, but it turns out Herbivore picks up on all activity communicating with my router. I like that I'm able to see all the devices on my network, but it would be helpful if I could filter the data the way Wireshark allows.

The funny thing is, after I moved on to Wireshark, I tried filtering for my PC's activity and got nothing. I didn't understand, because the packets were all right there in Herbivore??

Then I remembered reading Ellen's blog post this morning (the best documentation/investigative journalism I've ever read) about a similar experience she had while doing this assignment. Apparently, Apple configures its network cards to capture only the packets addressed to them. Ellen then ran ARP to see the communication between her MacBook and her other Apple devices, and sure enough, she saw them. In my case, however, since my MacBook doesn't pick up on packets meant for other devices like my PC, I only see the communication between my MacBook and the router:

I went to download Wireshark on my PC and got a prompt saying that the new release doesn't support Windows Vista and to download Wireshark 2.2 instead. Unsurprisingly, after I tried to run it, my PC crashed. I attempted again and it crashed again, so I decided to give it a rest. Poor thing.

Something interesting I noticed is that the majority of my packets use the TCP protocol. This surprised me because I purposely opened up an insecure HTTP movie-streaming website hoping to see some UDP traffic. Considering how poor the video quality often is on these websites, I assumed the UDP protocol might have something to do with the pixelated and missing frames. When I filtered for HTTP, I came across some bad TCP packets for putlockers.cm:

From Tom and Shawn, I learned that TCP is a reliable protocol, so it surprised me to learn that there are "bad" TCP packets. I did some research, and it turns out this flag [TCP Spurious Retransmission] just means that the sender thought its packets were lost and sent them again, even though the receiver had already acknowledged them.
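For anyone retracing my steps, these are the Wireshark display filters involved, as far as I can tell (the first is the one I used; the other two isolate the flagged packets):

```
http                                  # show only HTTP traffic
tcp.analysis.flags                    # everything Wireshark flags as suspect
tcp.analysis.spurious_retransmission  # just the spurious retransmissions
```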

References:

Ellen Nickles

Spurious Retransmission

Traceroute from Coffee Shops

Blog post updated 10/05/2018.

The websites I most frequent are YouTube and Facebook. I decided to collect my data from the workspaces I frequent: the coffee shops La Colombe, Starbucks, and The Bean. When you type traceroute into the command line, it spits back a lot of information. I was only interested in the IP address of each hop, so I referenced ITP alum Patrick Presto's documentation and made the same modification to the command to print only the IP address of each hop:
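The gist of the modification is piping traceroute through awk; something along these lines (my paraphrase of the idea, not necessarily Patrick's exact command):

```
traceroute -n youtube.com | awk '{print $2}'
```

The -n flag skips the reverse-DNS lookups, so the second field of each output line is the hop's IP address.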

After collecting my data, I looked up all the IP addresses on ipinfo.io and converted the results to JSON files. I later learned from Vidia that you can generate CSV files in a matter of seconds, and I felt silly for spending so much time formatting and validating each JSON file.
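For context, each lookup is just a request to ipinfo.io, which returns JSON with the fields I cared about, particularly loc and org (the values below are illustrative):

```
$ curl ipinfo.io/8.8.8.8
{
  "ip": "8.8.8.8",
  "city": "Mountain View",
  "region": "California",
  "country": "US",
  "loc": "37.3860,-122.0838",
  "org": "AS15169 Google LLC"
}
```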

I then used p5.js to plot all the hops to better visualize each path. I later noticed that the results don't really vary with my location, since all the coffee shops I frequent are in close proximity to one another.
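The plotting itself is simple: ipinfo.io's loc field is a "lat,lon" pair, which I map onto the canvas. A simplified sketch of the approach (the file and field names here are illustrative, not my actual code):

```javascript
// plot each hop as a dot and connect consecutive hops with a line
let hops; // array of { loc: "lat,lon", org: "..." } from my lookups

function preload() {
  hops = loadJSON('hops.json'); // assumed filename
}

function setup() {
  createCanvas(800, 400);
  background(255);
  stroke(0);
  let prev = null;
  for (const hop of Object.values(hops)) {
    const [lat, lon] = hop.loc.split(',').map(Number);
    // crude equirectangular projection onto the canvas
    const x = map(lon, -180, 180, 0, width);
    const y = map(lat, 90, -90, 0, height);
    ellipse(x, y, 6, 6);
    if (prev) line(prev.x, prev.y, x, y); // connect to the previous hop
    prev = { x, y };
  }
}
```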

Facebook from La Colombe and Starbucks:

Time Warner (New York City) > Time Warner (Englewood, Colorado) > Cheney, Kansas > Facebook (Upper Peirce Reservoir, Singapore) > Facebook (Dublin, Ireland) > Cheney, Kansas > Facebook (Dublin, Ireland)

I was getting free wifi from Blink while at La Colombe, and Starbucks has its own free wifi. Both are on Time Warner, so the hops went to the same places. It makes sense that Dublin and Singapore are on the map because Facebook has offices in both countries, but to be honest, I initially assumed the route would be more straightforward and domestic, from New York City to Menlo Park, California.

Facebook from The Bean:

New York University (New York City) > Cheney, Kansas > Facebook (Dublin, Ireland)

This path was more straightforward: it bounced around the NYU network, made its way to Cheney, Kansas, and Facebook's Dublin office before appearing on my screen. I was wondering why I kept seeing the Cheney Reservoir in my results, and thanks to Lucas's investigation and explanation, it makes total sense: apparently, IP geolocation databases drop a pin there, near the middle of the country, when all they know about an address is "somewhere in the United States."

Youtube from La Colombe & Starbucks:

Time Warner (New York City) > Time Warner (Cheney, Kansas) > Time Warner (Englewood, Colorado) > Time Warner (Cheney, Kansas) > Time Warner (Englewood, Colorado) > AS6453 TATA Communications (New York City) > Google LLC (Cheney, Kansas)

Again, both were on Time Warner, so the paths were the same.

YouTube from The Bean:

New York University (New York City) > New York University (Cheney, Kansas) > AS6453 TATA Communications (Jersey City, New Jersey) > AS6453 TATA Communications (Cheney, Kansas) > AS15169 Google LLC (Herriman, Utah)

Final Proposal: Continuation of Live Chat – Horoscope Compatibility

Update 11/19:

For the final, I plan to continue working on my Horoscope Compatibility Match. I want to use all the elements I've learned in this class: HTTPS, live chat, live canvas drawing, WebRTC, PeerJS, and databases if I can. I'm a bit concerned about time because of NIME, so I will do as much as I can, but at minimum I have to get the canvas and video/audio chat working. So far, I've been able to do these separately, but everything seems to break when I try to put them together. I know I'm really close, so I look forward to finishing this project, and I hope I can do some user testing once it's done.
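For reference, this is the rough shape of the video/audio piece with PeerJS; a sketch under my current understanding, with placeholder element IDs, not my working code:

```javascript
// connect to the public PeerServer by default
const peer = new Peer();

navigator.mediaDevices
  .getUserMedia({ video: true, audio: true })
  .then((stream) => {
    document.querySelector('#me').srcObject = stream; // show my own feed

    // answer incoming calls with my stream and display the remote one
    peer.on('call', (call) => {
      call.answer(stream);
      call.on('stream', (remote) => {
        document.querySelector('#them').srcObject = remote;
      });
    });

    // or place a call once my partner's peer ID arrives over the socket
    function callPartner(partnerId) {
      const call = peer.call(partnerId, stream);
      call.on('stream', (remote) => {
        document.querySelector('#them').srcObject = remote;
      });
    }
  });
```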

This is how the chat will look after two students are matched and start collaborating on a canvas:

Besides collaboration, it could be used for teaching, for communicating with people who speak different languages, as an accessibility tool, and more.

__

Last week's assignment was to get familiar with Node.js on Digital Ocean and extend the example live chat application into something more fun. I like that Shawn's approach for all our homework assignments is to make things fun and silly, because I typically associate coding with dull, confused, and frustrated sentiments. I realize, however, that those sentiments inevitably arise because my ideas always seem straightforward in my head, but tend to be ambitious and difficult to implement in code.

First idea: Jinx! Personal Jinx!

I thought it might be fun to create a chatroom that penalizes users based on the rules of Taboo and the playground game Jinx. Everyone who joins the chatroom is invited to contribute to a topic likely to trigger them into typing the same keywords. When the same keywords are detected between two or more users in the chat, a prompt appears on each of their screens, giving them the option to jinx the other users. Whoever clicks the Jinx! button the quickest freezes the others out of the chat. The objective of the game is to figure out the keywords and manipulate as many other users as possible into typing them within a designated time frame, and/or to be really fast at clicking the Jinx! button when the prompt appears. Last man standing wins.

If you don't know the game, you can reference the wikiHow article here. [Side note: Apparently wikiHow illustrators do their illustrations by tracing stock photos. That's why they all have that "I'm smiling but I'm really dead inside" look about them.]

I still think it's a fun concept, but in the process of brainstorming with resident Alejandro, I landed on another idea:

Horoscope Compatibility Match!

The problem:

I've noticed that in a lot of my classes involving collaborative projects, professors will pair up students by running names through a randomizer. As is typical when fate is left up to chance, some collaborations are more successful than others, and some get downright ugly. Well, what if horoscope compatibility charts actually had some merit? I personally don't believe in this pseudoscience hocus pocus, but I know there are many people who do. I have several friends who have noticed patterns in the people they befriend/date, and who feel validated in their choices because of it. So I wonder: could we be slightly more strategic about pairing up students from the get-go? Unsuccessful collaborations tend to involve clashes of egos and ideas, a lack of communication, and a buildup of resentment over an imbalanced division of labor. Astrology compatibility charts tout that because certain signs share similar personality traits and priorities, they are able to foster better relationships, communicate with one another, and make compromises. Is it possible to create balanced, compatible teams so that no one person ever feels the overwhelming responsibility of doing all or most of the work? I think it's a worthy cause to find out. I want to conduct an experiment and follow-up study comparing the experiences of teams formed by a plain randomizer versus a randomizer that takes horoscope compatibility into account.

Hypothesis:

Our personalities are definitely more nuanced than our birth months and whatever stars and planets are rising, aligned, in retrograde, blah blah blah. On the other hand, I can't help but be reminded of Malcolm Gladwell's case study on the relative age effect among Canadian ice hockey players. Could there be something there? In all honesty, I doubt it, but I will develop this project further because it makes for a good platform to build upon the weekly homework assignments for this class. At worst, this project makes no considerable difference but gets a laugh from everyone who chooses to participate. At best, people are paired with more compatible collaborators for interesting projects. For me, that's a win-win.

What I want it to do in pseudocode:

  1. All students join the chatroom.
  2. They type their name in the text field, choose their sign from the drop-down menu, and click submit.
  3. The program collects the person's data (name and sign) and pushes it into an array of all participants.
  4. After submit is clicked, the number of people who have submitted their signs is logged and broadcast to everyone.
  5. To find a match, the program loops the user's sign through the compatibility list to collect everyone whose sign is compatible with the user's, making sure not to match users with themselves (in case a sign is compatible with itself). When the user clicks the Find my match! button, the program returns a random person from that pool and sends the match back to the user (see the sketch below).
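Here is the pseudocode above as a server-side sketch. It assumes the usual socket.io setup from class, and the compatibility entries are illustrative placeholders, not my researched chart:

```javascript
// server.js: sketch of the matching pseudocode
const io = require('socket.io')(server); // `server` = my https server

const compatibility = {
  aries: ['leo', 'sagittarius', 'gemini'],
  leo: ['aries', 'sagittarius', 'libra'],
  // ...one entry per sign
};

const people = []; // everyone who has submitted { name, sign }

io.on('connection', (socket) => {
  // steps 3 & 4: store the submission, broadcast the running count
  socket.on('submit', (person) => {
    people.push(person);
    io.emit('count', people.length);
  });

  // step 5: pick a random compatible person, excluding the user
  socket.on('find match', (me) => {
    const candidates = people.filter(
      (p) => p.name !== me.name && compatibility[me.sign].includes(p.sign)
    );
    const match = candidates[Math.floor(Math.random() * candidates.length)];
    socket.emit('match', match); // goes back to the requesting user only
  });
});
```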

With the help of my favorite people, here is where the project stands so far:

I put it up on Digital Ocean, but for some reason, the matching function only works on my local server, so that's one more thing to debug! Here is a screenshot to give you the idea.

To Do:

Debug: Currently, each user is matched with a random person from the list of everyone whose sign is compatible with theirs. I need to configure it so that two compatible people get matched to each other, mutually. I am still thinking this through… one possible direction is sketched below.
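One direction I'm considering, purely a sketch and not a deployed fix: require the compatibility to run both ways before pairing, and pull the partner out of the waiting pool so neither person gets matched twice.

```javascript
// hypothetical fix: a match counts only if each person's sign appears
// in the other's compatibility list
function findMutualMatch(me, waiting) {
  const i = waiting.findIndex(
    (p) =>
      p.name !== me.name &&
      compatibility[me.sign].includes(p.sign) &&
      compatibility[p.sign].includes(me.sign)
  );
  if (i === -1) return null;      // nobody compatible is waiting yet
  return waiting.splice(i, 1)[0]; // remove the partner from the pool
}
```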

What it *hopefully* will do as the project develops:

  1. Users that get matched together will be transported to their own private chat rooms (see the sketch after this list).
  2. They can draw on a “whiteboard” canvas together to brainstorm ideas for collaboration. (Class 3)
  3. They can see/hear each other over audio/video. (Class 4/5)
  4. Link to a Google survey to rate the compatibility of their team member, the quality of the collaboration, whether they noticed a significant difference between this matching tool and a randomizer, whether they believe in horoscopes, etc.
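For step 1, socket.io's rooms seem like the natural fit. A sketch of what I have in mind (the room-naming scheme is made up for illustration):

```javascript
// once two users are matched, put their sockets in a private room
function moveToRoom(socketA, socketB) {
  const room = `match-${socketA.id}-${socketB.id}`;
  socketA.join(room);
  socketB.join(room);
  io.to(room).emit('matched', { room }); // tell both users where they are
}

// then scope chat and whiteboard traffic to that room only
socket.on('draw', (data) => socket.to(data.room).emit('draw', data));
```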

Additional Notes:

To get my compatibility list array, I compared the top three cheesy charts that came up in my Google search results, attributed three compatible signs to each sign, then compared the result against a few more charts to make sure the pairings checked out and to eliminate any potentially unfavorable matches.

Special thanks to:

Alejandro, Mithru, Lin and Lin’s husband, Joey, Jenna, and Shawn.

Ball Drop Socket Game

Our assignment this week was to build a game controller that connects to a WiFi network and a server over a TCP socket to play a multiplayer game that resembles Pong. Once a player is connected, their IP address appears on a bar that acts as a paddle to keep the ball from hitting the ground. The objective of the game is to score points by bouncing the ball back and forth between players.

This assignment reminded me of the first assignment from Tangible Interactions last semester: the Lunar Lander game controller. Since I had a lot of fun building that game controller, I was comfortable with starting on this project and thinking about the UI/UX pretty early on. After Tom set up the server in the hallway, I was able to successfully connect and play the game after changing the IP address in Tom's Joystick Client code and uploading it to a MKR1000.

Next up was designing the user interaction and interface. This is always my favorite part of physical computing. I always want to challenge myself to go beyond conventional sensors and switches so I can learn something new. I originally liked the idea of using a hand-controlled potentiometer for LEFT/RIGHT and a foot-pedal potentiometer for UP/DOWN; it's an interface I'm very comfortable with, given my experience with sewing machines. Unfortunately, when I tried wiring it up on a breadboard with a 3.5mm TRS audio jack connector, I kept getting erratic readings in the serial monitor, which printed not only 'u' or 'd' but also random ASCII characters. This really concerned me, and I couldn't find related documentation online, so I decided to set the foot pedal aside for a future project. In retrospect, there's a chance I didn't ground it properly (although I followed Adafruit's instructions to connect one side to ground and the other side to a 10K pull-up resistor to 5V). I'm certain now that had I switched from jumper wires to 22 AWG hook-up wires at this prototyping stage, it would have saved me a lot of headaches later on.

I ended up going with an ultrasonic sensor (HC-SR04). The clear distinction between the twisting motion mapped to left/right and the hovering motion mapped to up/down makes for an intuitive interface:

The problem was, the two sensors didn't work together. I can't recall how many times I rewired, altered the code, and tested the sensors individually. Koji was very patient and helped me try every combination to debug this controller. Nothing seemed to work until he suggested we use different wires (offering his own anecdote about dealing with flimsy, unreliable wires in the past). Haley stopped by and also suggested connecting the power rails because, apparently, they are not always internally connected on the long breadboards!

We replaced some of the wires, but it still didn't resolve the issue. It was midnight at this point, so I told Koji I was giving up and that he should go home. Half asleep, looking at my breadboard and thinking I can't let Tom see this hot mess of wires tomorrow morning, I cut and stripped new wires, trimming them all down so everything lay flat. As a last attempt, I connected the controller to the server and, motherf*****, it worked.

Thank you Koji and Haley for your help!

You Are What You Watch:
Interactive Self-Portrait

I have a nightly ritual of watching YouTube videos before going to bed. Looking at my YouTube suggestions, I noticed that what puts me at ease before losing consciousness can be divided into four categories: Cooking/Food, Animals/Animal Rescue, Parasites, and Humor.

As you can see, I've been hardcore getting back into Seinfeld recently. I think these categories are pretty indicative of my personality, so I decided to take the basics of what Shawn went over in class and create a page with a video from each category, so I can just open it up before bed and choose whatever I'm in the mood for. For now, I've just downloaded one video per category, but eventually I'd like to learn how to compile playlists for each category and be able to shuffle in new content from the channels I'm subscribed to. This is the first iteration:

When a category is clicked, it triggers that category's video.
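Under the hood, it's just a click listener per category. A simplified sketch of the idea (the class names and data attributes are placeholders):

```javascript
// play the matching <video> when its category label is clicked
document.querySelectorAll('.category').forEach((label) => {
  label.addEventListener('click', () => {
    const video = document.getElementById(label.dataset.video);
    video.play(); // assumes one downloaded <video> element per category
  });
});
```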

Code

Link

It looks pretty sad and embarrassing, which made me laugh, so I decided to make it prettier. This is version 2:

Code 

Link

Thank you Mithru for your help!

Chatroulette

From 2010 to 2012, Chatroulette was a thing. It was a popular pastime to gather around a laptop with a group of friends and get randomly paired with strangers on the internet for a video chat. You heard stories from friends of friends about someone who matched with some hot guy from Europe, had the most amazing conversation, and stayed in touch until eventually one of them bought a plane ticket to visit the other. (Talk about a meet-cute!) Videos of celebrity sightings on the site also surfaced and went viral, and you got to see amazing stuff like this every now and again:

Chatroulette is a live video-chat website that pairs random strangers from around the world. Users can enable their webcam and/or microphone, in addition to a text box, to communicate with one another. It was developed by a 17-year-old high school student named Andrey Ternovskiy, who wrote its first version in "two days and two nights". It works by utilizing Adobe Flash's peer-to-peer capabilities via RTMFP (Real-Time Media Flow Protocol), which lets multimedia travel efficiently between users' computers without consuming server bandwidth.

Although the site is still active today, Chatroulette received criticism and backlash within the first year of its launch and gradually lost its popularity as it became the obvious venue for people to exhibit offensive, obscene, and pornographic material and acts. According to this Verge article, a small community of male users persists on Chatroulette. I know part of the assignment is to try it out, but I'm going to take a hard pass…

Sources:

https://en.wikipedia.org/wiki/Chatroulette

https://en.wikipedia.org/wiki/Real-Time_Media_Flow_Protocol

Material Exploration: Conductive Crystallized Textiles

I had the pleasure of visiting the Material Connexion library this semester, and during my visit, a particular textile innovation caught my eye: MUUNA's "artificilae/matter". After my visit, I did some research and learned that Hannah Croft, the textile designer behind "artificilae/matter", created an extensive collection of samples in which crystals are grown onto woven and embroidered textiles to create mineral surfaces of 'cultivated embellishment.' The concept is genius, given how abundantly crystals are used for embellishing surfaces, even credited with evoking nostalgia and/or healing. Conventional methods of crystal application typically involve hand-sewing or adhesives, which can be labor-intensive and expensive. Growing crystals is an obvious and elegant alternative. As evidenced below, the compositions are breathtaking and organic:

Inspired, I began experimenting with growing crystals on textiles myself. I submerged some lace and trims in a concentrated bath of Borax and distilled water. I was happy with the result, but found that after leaving the pieces to dry for a few days, the crystals would flake off and leave powdery residue everywhere. That is not ideal.

More hours spent down a YouTube rabbit hole led me to crystallized metals and electrochemistry. For my final project, I will combine this with my Soft Robotics final to cultivate conductive crystallized textiles by means of electrolysis. I will focus on copper crystallization, with the goal of growing traces for a working circuit.

^ Measuring Copper Sulphate Pentahydrate Crystals to make Copper Sulphate solution.

^ Submerging a bundle of feathers coiled with copper wire

^ Applying current for electrolysis

This is a sample of one of my swatches. As I predicted, the crystals adhere best to napped surfaces. I also examined the crystal formations under a microscope and they look fantastic. More photos to come as I work on my next iteration of swatches.

To be continued…

Wearable Voice Assistant

For the first wearable project, I wanted to examine my struggle with self-censorship. I imagine it stems from a culmination of conservative culture and upbringing: lessons and rules of etiquette my parents imparted to me as a child that continue to prevent me from vocalizing my thoughts, opinions, and desires well into adulthood. The saying goes, "If you have nothing nice to say, don't say anything at all." This is a rule I generally try to follow, but there are always instances when I'm tried beyond the limits of my patience. Yet even at my limit, I still find it quite challenging to voice negativity without worrying about repercussions. Therefore, I wanted to explore mediums of expression through visualization and/or layers of encryption.

My original sketch was of an overcomplicated headgear piece that transmits thoughts from the brain (visualized through NeoPixel strips) down to the voice box (where a microphone is mounted, symbolic of the lumps that form in our throats in the act of self-censorship) and translates them onto an LCD screen mounted over the mouth, to both encrypt and visualize the words I want to express.

In essence, it is a very cowardly project, as I frantically attempt to cover all the bases, avoiding my own vulnerabilities and eliminating any possibility of offending others. In the weeks leading up to the presentation, I talked to a number of people (Jingwen, Vidia, Elizabeth, Nicolas, Aaron Montoya-Moraga, and Alejandro) about the concept to resolve its glaring holes and my discomfort with putting the whole thing together. Nicolas was particularly insightful in suggesting I look into Lyrebird, a voice-imitation algorithm capable of generating a digital version of your voice from as little as one minute of recorded audio samples. This suggestion opened up a new door of possibilities for me, because it allows for self-expression but shifts the blame onto an alias/machine…

The interesting thing about Lyrebird is that it claims the more audio samples you record, the better the quality and the closer the likeness to your voice. This was not the case for me. I was surprised to find that my digital voice sounded more human at around 109 audio samples than at 502, when it sounded downright robotic. My voice has a slight raspiness due to smoking, which is more evident when I'm either just waking up or extremely exhausted. That particular nuance disappeared the more recordings I made. When I tried to get my digital voice to say "I love you," the output was muffled and unclear, so I began deleting recordings until those words were discernible again.

Given a tool like this, the obvious thing to do is to get your voice to say naughty things. (I do curse every now and again for emphasis, out of frustration, or when I’m late for an appointment, but I tend to keep my speech clean for the most part.) I began plugging in filthy gangster rap lyrics and it was hilarious. I decided this direction would make for a better interaction, so I continued to develop its potential for humor instead.

To fabricate this wearable, I repurposed an old cardigan that I never wear into a turtleneck crop top. For the hardware, I used Adafruit's Audio FX Sound Board, a JST connector and LiPo battery, perf board and female headers (so the sound board can be easily removed and reprogrammed), and a thin plastic speaker. I specifically wanted to use a knit fabric to insulate my soft circuit of conductive thread and conductive-fabric buttons, which trigger the recordings I edited in Audacity. The fold-over of the oversized turtleneck covers the hardware so everything is hidden in plain sight, virtually indistinguishable from any regular garment. So much of wearable tech prefers to put the technology on display, but I don't believe the general masses are prepared to embrace that aesthetic quite yet.

I created three scenarios for this wearable to play out and demonstrate its versatility:

1. Introduction to my Voice Assistant
2. Catcalling
3. ER Shift

[Video documentation]

On the day of the presentation, I tested it repeatedly, and everything worked right up until I got in front of the class. The wearable ultimately failed for the following reasons:

1. Placing the microcontroller at the back of the neck was a strategic decision to hide the hardware, but it also made it impossible for me to reach the LiPo battery connector to power the piece up while wearing it.

2. Although I calculated some ease into the conductive thread circuit, it was not enough to accommodate the stretch needed to fit over my head as I was putting it on, which compromised the circuit.

Lessons learned:

1. I could have easily used thin, silicone-coated wires instead of conductive thread for the sake of reliability. Conductive thread has failed me so many times that I don't know why I thought this time would be any different.
2. Always place the microcontroller somewhere accessible, or integrate an accessible on/off switch.

Overall, I'm slightly disappointed that the wearable failed during my presentation, but I'm happy the interaction was successful, given that the class erupted in laughter!

Special thanks to Jingwen Zhu, Vidia Anindhita, Elizabeth Ferguson, Nicolás Peña-Escarpentier, Aaron Montoya-Moraga, and Alejandro Matamala Ortiz for talking through this project with me and giving me valuable feedback!

Soft Circuit: Bee Pollination

In the first class, Jingwen introduced us to soft circuitry, and our assignment this week was to create a soft circuit with a switch and elements of embroidery. I started playing with soft circuits last semester, so the basics are not completely foreign to me. I'm looking forward to moving beyond getting an LED to light up in the upcoming weeks. I've never embroidered with embroidery floss before, so I decided to start with that. I found some fabric with a floral print in the soft lab and decided to use it as a guide. The process was very therapeutic, and I can definitely see myself doing this in my retirement.

I made sure to keep my multimeter on hand and tested the circuit at each step to make sure it was working. The LED acts as a bee pollinating the two embroidered flowers.

I sewed a pocket behind the first flower to house the battery, and laid conductive sheer fabric over the second flower to function as a switch. When the second flower is pressed, the yellow bee lights up!

To make the circuit more practical, I folded the fabric in half and stitched up the side seams to create a pencil case.