Archive for the 'Fall 2009' Category


How Data Gets to Asia

Asian Data Routes Visualization

Regardless of where I start, it always takes me forever to get to Asia, though given a choice, I prefer flying overland from Western Europe. Data packets seem to make the same trip pretty effortlessly, so I was curious to see what route they take.

I selected thirty Asian websites and traced the route the data took from my computer in New York to get to them using Tellurian.net’s handy traceroute script (I’m behind an NYU firewall that makes tracerouting pretty difficult). The results were pretty crappy. Half the time, the trace timed out before it even reached Asia.


I ended up using this nifty visual traceroute tool, which uses Google Maps to plot an approximate route, to figure out how packets got from here to there. I discovered a number of interesting things:

  • Most data heads to China from the Los Angeles area, though interestingly enough, Baidu always seems to go through Mexico. I’m guessing this has something to do with the wires it favors.
  • The latency between hops once the data reaches China jumps from between 4 and 40ms to well over 200ms (an effect of the Great Firewall, I assume).
  • Because of this, most data that is bound for Asian destinations other than China tends to avoid China, with the notable exception of SK Telecom’s website which is routed through Suide, a Chinese city I’d never heard of.
  • The majority of the data lines belong either to Verizon or to AT&T, though other providers, such as Cogentco, also pop up occasionally.
  • Many of the Indian and Vietnamese sites I looked up are hosted in the US, so they didn’t make it onto my visualization.
  • Traceroutes are not all that reliable.

Interesting stuff. I’ll do the same exercise from Shanghai the next time I’m in China just for comparison’s sake.

Painting Pong

IMG_4590

For my networked pong game controller, I thought I’d have a go at using an accelerometer and a paint roller. Instinctively, everyone knows how to use a roller, so it seemed like a natural interface for a paddle that moves either up and down or left and right. I initially thought I was going to be using the accelerometer to measure movement. It turns out that accelerometers measure acceleration (including the constant pull of gravity), not position, which makes them kind of sucky for anything but determining orientation in space. Plan B was to use a photo sensor, an LED, and some black tape to make a rotary encoder.

IMG_4598

To get my encoder working, I counted the transitions from light to dark and dark to light and timed the intervals between them, figuring that the longer interval would correspond to the larger piece of tape and thus tell me which direction we were moving in. It sort of works.

The Arduino Code
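The gist of the transition-timing logic looks roughly like this (the pin number, the light/dark threshold, and the direction labels are placeholders; the real values depend on the wiring):

    // Rough sketch: time each piece of black tape as it passes the
    // photo sensor. Two tape pieces of different lengths mean that
    // comparing consecutive dark intervals hints at direction.
    const int SENSOR_PIN = 0;    // photoresistor on analog pin 0
    const int THRESHOLD  = 512;  // light/dark cutoff, needs tuning

    boolean wasDark = false;
    unsigned long darkStart = 0;
    unsigned long lastDarkLen = 0;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      boolean isDark = analogRead(SENSOR_PIN) < THRESHOLD;

      if (isDark && !wasDark) {
        // light-to-dark transition: start timing this piece of tape
        darkStart = millis();
      } else if (!isDark && wasDark) {
        // dark-to-light transition: how long were we over tape?
        unsigned long darkLen = millis() - darkStart;
        if (lastDarkLen > 0) {
          // short-then-long reads differently than long-then-short
          Serial.println(darkLen > lastDarkLen ? "one way" : "other way");
        }
        lastDarkLen = darkLen;
      }
      wasDark = isDark;
    }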

I need to sit on this for a while. I'm sure there are plenty of documented ways of doing this (Tom Gerhardt used this method with his awesome spinning plates synthesizer) but I sort of want to figure it out on my own. I can easily get the orientation of the roller using the accelerometer but I might not be able to get the direction of its rolling using the method I'm using. Some sort of rotary switch attached along the actual axis of rotation would probably do the trick. I like the idea of doing it with light, though, so I'm going to keep on thinking about this, though I may give in to the Google soon.
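The orientation part, at least, really is easy: at rest the accelerometer mostly reports gravity, so the roller’s angle falls out of two axes with an atan2. A sketch, assuming the two relevant axes sit on analog pins 1 and 2 and read around 512 when level:

    const int X_PIN = 1;  // accelerometer x axis (assumed pin)
    const int Y_PIN = 2;  // accelerometer y axis (assumed pin)

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // center the raw 0-1023 readings around zero
      float x = analogRead(X_PIN) - 512;
      float y = analogRead(Y_PIN) - 512;

      // angle of the gravity vector in the x-y plane, in degrees
      float angle = atan2(y, x) * 180.0 / PI;
      Serial.println(angle);
      delay(100);
    }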

IMG_4589

In-class Visualization

IMG_4604

After the debacle at The Smith last weekend where my shoddy GSR crapped out on me, I had a chance to rethink what I was trying to sense, how, and why. Accelerometers, I learned, aren’t so good at measuring a turning head (though a turn does register slightly on one axis), so I had to consciously tilt my head to one side when looking to the left and to the other when looking to the right to ensure I got good readings. That meant I was conscious at all times of which way I was looking, which is not ideal. It also meant that I could discard two of the three accelerometer axes and focus on how the GSR readings matched up with where (and at whom) I was looking. To that end, I made a new GSR sensor, which more than makes up in robustness what it lacks in subtlety.

My short-lived attempt last week was enough to establish that there is no direct correlation (at least not one my setup can detect) between my feelings towards a person and my micro-sweating. So instead, this week I re-hot glued my glasses and attempted to measure my engagement in the discussion going on during this week’s class. I thought a bunch about how I wanted my data to look and decided the visualization should graphically represent what I was actually measuring—as opposed to a more abstract rising and falling line. The eyeballs approximate where I was looking and the size of the mouth represents my GSR. I would have liked to have the eyes grow wider at local maxima and blink at local minima but I couldn’t figure out how to access these values in code. I would also have liked to give the viewer control over the playback, but this too proved too daunting a programming task.
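In hindsight, catching those local maxima and minima probably doesn’t need anything fancier than watching for the GSR curve to change direction, though a noisy signal would want smoothing first. A rough version of the idea, with hypothetical names and sample values:

    #include <cstdio>

    int prev2 = 0;  // sample before last
    int prev1 = 0;  // last sample
    int seen  = 0;  // samples taken in so far

    void checkSample(int current) {
      if (seen >= 2) {
        if (prev1 > prev2 && prev1 > current)
          std::printf("local max: %d\n", prev1);  // widen the eyes here
        else if (prev1 < prev2 && prev1 < current)
          std::printf("local min: %d\n", prev1);  // blink here
      }
      prev2 = prev1;
      prev1 = current;
      seen++;
    }

    int main() {
      int readings[] = { 310, 340, 360, 355, 330, 335, 350 };
      for (int r : readings) checkSample(r);
      return 0;
    }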

I’m not sure I can derive any solid conclusions other than that I spent a lot of time looking at Dan O, and that I’m apparently obsessed with changing facial expressions. Here’s a sample of the output:

The Rest of ‘Em

Smith

To commemorate my dinner with family and impending in-laws on the eve of my wedding, I simultaneously logged my galvanic skin response (GSR) and which way I was looking, using a three-axis accelerometer mounted on my glasses (overkill, I know). This involved having my computer in my lap while wearing wired glasses and trying to carry on eleven conversations at once! It was a bold plan to discover whether I responded predictably (and differently) to my own family and my wife’s, and it seemed to be working until, three minutes in, an over-enthusiastic waitress howled at a nearby joke, making me jump and pulling the wires out of my GSR sensor. Without a hot glue gun, I was helpless to continue, so I closed up the computer, resolved to continue next week, and enjoyed my beer.

Please note that my accelerometer and ribbon wire matched my shirt. Because that’s how I roll.

Click for a sample of the data I logged.
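The logging side is simple enough: assuming the accelerometer’s three axes and the GSR sensor each feed an analog pin (the pin numbers here are placeholders), it comes down to printing comma-separated readings over serial:

    const int X_PIN   = 0;  // accelerometer x
    const int Y_PIN   = 1;  // accelerometer y
    const int Z_PIN   = 2;  // accelerometer z
    const int GSR_PIN = 3;  // homemade GSR sensor

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // one comma-separated line per sample for the laptop to capture
      Serial.print(analogRead(X_PIN));
      Serial.print(",");
      Serial.print(analogRead(Y_PIN));
      Serial.print(",");
      Serial.print(analogRead(Z_PIN));
      Serial.print(",");
      Serial.println(analogRead(GSR_PIN));
      delay(50);  // roughly twenty samples a second
    }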

My glasses with accelerometer:
IMG_4600

IMG_4601

My trusty Arduino, wired up to lie flat:
IMG_4602

My homemade GSR sensor:
IMG_4603

DoorSob

door

DoorSob is a door that doesn’t want you to leave a room. A Processing sketch plays back on a screen a human face’s progression from ecstatic happiness to utter misery, controlled by a potentiometer that turns with the doorknob. Depending on the state of the face (and, by extension, the potentiometer), a voice repeats either “yes” or “no” more or less emphatically. The volume of the voice and the brightness of the face are governed by the amount of ambient light falling on a photoresistor. My intention is to install the photo sensor next to the doorknob so that when someone puts a hand on the knob, it blocks the light and brightens the screen, making the video visible and the sound audible. The pot is moved by the knob, so that as a person starts to turn the knob to open the door, the face reacts, getting more and more distraught the closer the person comes to opening the door (and leaving the room).
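On the Arduino side, all this requires is handing two numbers to the Processing sketch. Roughly, with the pin assignments being guesses:

    const int KNOB_PIN  = 0;  // potentiometer turned by the doorknob
    const int LIGHT_PIN = 1;  // photoresistor next to the knob

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int knob  = analogRead(KNOB_PIN);   // how close the door is to open
      int light = analogRead(LIGHT_PIN);  // a hand on the knob blocks light

      // Processing maps these to the face's state and the voice's volume
      Serial.print(knob);
      Serial.print(",");
      Serial.println(light);
      delay(50);
    }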

A week reading about the location of consciousness (behind the eyes, according to most people, with a minority locating it in their upper chest) and our dubious awareness of our own perceptual and cognitive shortcomings has left me scratching my head. I haven’t done huge amounts of reading in the cognitive sciences, but I’ve done enough to feel that Julian Jaynes’s arguments against the necessity of consciousness in The Origin of Consciousness in the Breakdown of the Bicameral Mind and Dan Ariely’s TED talk about the limits of free will are a series of cleverly erected straw men. I’ve never heard anyone claim that consciousness is as ubiquitous and constant as implied in Jaynes’s refutation, nor do I buy Ariely’s claim that people’s laziness and susceptibility to influence constitute proof of sensory and cognitive deficiencies. The self-awareness and introspection that these men refer to as consciousness seem to me a response to complicated social structures: essential not to the survival of the individual but to the survival of the group. It’s no wonder, then, that it tends to lag a little when considered in conjunction with the senses.

It was while thinking about the conniving, scheming, backroom dealing, weighing, and planning to which consciousness presumably emerged as a response that I started thinking about all the unconscious social and physical cues that US Weekly body language experts and NLP practitioners are constantly harping on about. We like it when people laugh at our jokes and praise us; for the most part, we don’t like making people unhappy or getting yelled at. How would we feel if everyday objects called our attention to the actions we perform unconsciously?
