
In-class Visualization

After the debacle at The Smith last weekend, where my shoddy GSR sensor crapped out on me, I had a chance to rethink what I was trying to sense, how, and why. Accelerometers, I learned, aren't so good at measuring a turning head (though a turn does register slightly on one axis), so I had to consciously tilt my head to one side when looking left and to the other when looking right to ensure I got good readings. That meant I was conscious at all times of which way I was looking, which is not ideal. It also meant I could discard two of the three accelerometer axes and focus on how the GSR readings matched up with where (at whom) I was looking. To that end, I made a new GSR sensor, which more than makes up in robustness what it lacks in subtlety.
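The head-tilt trick boils down to thresholding a single accelerometer axis into a gaze direction. Here's a minimal sketch of that idea; it's not my actual sensor code, and the axis values and threshold are made-up placeholders that would need calibrating against the real hardware.

```python
def gaze_direction(tilt, threshold=0.3):
    """Map one accelerometer-axis reading to a coarse gaze direction.

    `tilt` is a normalized reading from the single useful axis;
    `threshold` is a hypothetical calibration value.
    """
    if tilt < -threshold:
        return "left"
    if tilt > threshold:
        return "right"
    return "center"

# Example readings (invented for illustration):
readings = [-0.5, 0.1, 0.6]
print([gaze_direction(r) for r in readings])  # ['left', 'center', 'right']
```

The deliberate, exaggerated tilt is what makes a crude threshold like this workable at all.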

My short-lived attempt last week was enough to establish that there is no direct correlation (at least none my setup can detect) between my feelings toward a person and my micro-sweating. So instead, this week I re-hot-glued my glasses and attempted to measure my engagement in the discussion going on during this week's class. I thought a bunch about how I wanted my data to look and decided the visualization should graphically represent what I was actually measuring, as opposed to a more abstract rising-and-falling line. The eyeballs approximate where I was looking, and the size of the mouth represents my GSR reading. I would have liked the eyes to grow wider at local maxima and blink at local minima, but I couldn't figure out how to access those values in code. I would also have liked to give the viewer control over the playback, but that too proved too daunting a programming task.
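For what it's worth, finding local maxima and minima in a recorded series of GSR readings can be done with a simple neighbor comparison. This is a generic sketch, not the code behind my visualization, and the sample values are invented:

```python
def local_extrema(values):
    """Return (maxima, minima) as lists of indices where a reading is
    strictly greater/less than both of its neighbors."""
    maxima, minima = [], []
    for i in range(1, len(values) - 1):
        if values[i] > values[i - 1] and values[i] > values[i + 1]:
            maxima.append(i)
        elif values[i] < values[i - 1] and values[i] < values[i + 1]:
            minima.append(i)
    return maxima, minima

gsr = [2, 5, 3, 1, 4, 6, 2]  # hypothetical GSR samples
print(local_extrema(gsr))  # ([1, 5], [3])
```

On noisy sensor data you'd want to smooth the series first (a moving average, say), or the comparison will flag every jittery sample as an extremum.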

I’m not sure I can derive any solid conclusions other than that I spent a lot of time looking at Dan O, and that I’m apparently obsessed with changing facial expressions. Here’s a sample of the output:

