Archive for the 'Rest of You' Category

Scents and Sensibilities

Scratch&Sniff

I am hideously behind in my documentation, so much so that I didn’t have anything up in time for the first day of the Winter Show. Which turns out to have been kind of a blessing, as I hadn’t really devoted much time to thinking about the project beyond trying to get it working. That’s uncharacteristic for me, though this project has been nothing if not atypical.

The projects [1][2] I submitted to previous shows were polished and worked exactly as intended but were complicated to explain. They came with well-rehearsed spiels that explained the hows and whys and, at least in my mind, completely airtight rationalizations for their existence.

But smells don’t bend to the will as easily as wire and LEDs. They’re invisible, often uncooperative, and, as I learned yesterday, highly subjective. For me, the beauty of Scratch&Sniff has been the ease of its explanation (partly because the concept is simple: it’s a scratch and sniff television screen; and partly because I just haven’t had time to rationalize its construction beyond “wouldn’t it be cool if”), coupled with the way its meaning and interpretation have grown as I’ve watched it in use.

Experiencing the wonders of scratch and sniff TV

In the past I’ve been afraid of allowing too much of a human element into my work. Both T.H.A.W. (a synthesizer interface that used a hairdryer and “melting” ice cubes) and Al-Gorithm (the paper output of a computer program written for a human being) minimized the subjective experience of the viewer/participant. They were both hermetic systems. You could like them or not, but you couldn’t really argue about how they worked or your role within them.

Scratch&Sniff, on the other hand, eschews all discourse in favor of an extremely simple concept that the viewer’s experience validates (or doesn’t, depending on the viewer). As at every show, the people I’ve spoken to have fallen into the same three rough camps they always seem to:

  1. The Wowers: “This is so cool, how did you do it?”
  2. The Insiders: “This reminds me of a project by x at y, and have you thought about z?”
  3. The Skeptics: “What is the point of a smelly TV? How are you going to monetize it?”

The fascinating thing, though, has been how many people have taken issue with the project based not on how it works but on the smells themselves. “This smells like air freshener, not grass.” The technology vanishes! And what exactly does your TV smell like when you scratch it, I want to ask.

Another surprising thing has been the trepidation with which people approach scratching the screen (“Can I really push hard? Won’t it hurt the screen?”) coupled with the brute force to which they subject the obviously delicate surrounding foam core.

In any case, it’s been really interesting. I will post more once I get caught up with my documentation (and sleep) in the coming week.

EyeR

I’m working with a small side-looking infrared emitter/receiver pair that I got from Sparkfun to see if I can detect blinks. My theory is that the surfaces of my cornea and of my eyelid will reflect detectably different amounts of IR light, thus allowing me to sense blinks. I’m getting great values from the pair using a 10k resistor: readings from 25 to 1000, almost the full possible range of the analog-to-digital converter (0–1023).
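
For reference, this is roughly the kind of loop involved in reading the receiver; the pin assignment here is illustrative rather than my actual wiring, with the phototransistor sitting in a voltage divider with the 10k resistor:

    // Read the IR phototransistor and print the raw value.
    // Pin assignment is illustrative; the receiver sits in a
    // voltage divider with the 10k resistor.
    const int IR_RECEIVER_PIN = A0;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int reading = analogRead(IR_RECEIVER_PIN); // ~25-1000 in my tests
      Serial.println(reading);
      delay(10); // ~100 samples per second
    }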

There’s not much reliable research online about the effects of long-term IR exposure. Some people say that because it doesn’t cause the pupil to contract as bright visible light does, it will blind you if you look at it too long. Other people say that’s nonsense, claiming that the IR light in question would have to be much brighter than an LED to cause any damage. Still others (my favorite group) post frantically to medical forums after having spent hours staring at their remote controls while pushing the buttons (?!), suddenly panicked that they may have caused themselves irreparable damage.


I tested various brightnesses by varying the resistance and checking the output with a digital camera (whose sensor picks up IR), and I’m using a highly directional light that will be aimed at the side of my cornea rather than directly at my retina, so I’m not too worried.

To detect reflection, the emitter and receiver need to be installed completely flush to the same surface and about 10mm apart. I get good readings when holding them up to my eye: a variation of between 50 and 100, good enough to work with.
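
A swing that size suggests a simple detection scheme: track a slowly adapting baseline and flag a blink whenever a sample departs from it by more than a threshold. This is just a sketch of the idea, with an assumed threshold that would need tuning against a real eye:

    // Blink detection sketch: compare each sample against a slowly
    // adapting baseline. The threshold of 40 assumes the 50-100
    // swing measured above and would need tuning in practice.
    const int IR_RECEIVER_PIN = A0;
    const int BLINK_THRESHOLD = 40;

    float baseline = 0;

    void setup() {
      Serial.begin(9600);
      baseline = analogRead(IR_RECEIVER_PIN);
    }

    void loop() {
      int reading = analogRead(IR_RECEIVER_PIN);

      // Exponential moving average: follows slow drift (ambient IR)
      // but not fast changes like an eyelid closing.
      baseline = 0.99 * baseline + 0.01 * reading;

      float diff = reading - baseline;
      if (diff < 0) diff = -diff;
      if (diff > BLINK_THRESHOLD) {
        Serial.println("blink");
      }
      delay(10);
    }

The moving average is there to absorb gradual ambient changes; as the next update shows, ambient IR turned out to vary more than that can handle.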

One hour later…

Mounted on the glasses, it doesn’t really work. There’s too much infrared variation in ambient light. I may need to use a camera. And my eye feels like I’ve been staring into the sun for too long. It might be fine to stare at an IR LED from a distance, but right up against your eye, it starts to feel not so good after very little time. I am nixing this particular plan.

In-class Visualization

After the debacle at The Smith last weekend where my shoddy GSR sensor crapped out on me, I had a chance to rethink what I was trying to sense, how, and why. Accelerometers, I learned, aren’t so good at measuring a turning head (though a turn does register slightly on one axis), so I had to consciously tilt my head to one side when looking to the left and to the other when looking to the right to ensure I got good readings. That meant I was conscious at all times of which way I was looking: not ideal. It also meant that I could discard two of the three accelerometer axes and focus on how the GSR readings matched up with where (at whom) I was looking. To that end, I made a new GSR sensor, which more than makes up in robustness what it lacks in subtlety.
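
The logging side of this is simple; a minimal sketch of the kind of loop involved, with pin assignments that are illustrative rather than my actual wiring:

    // Log one accelerometer axis (the one that registers head tilt)
    // together with the GSR reading as timestamped CSV lines, to be
    // replayed later by the visualization. Pins are illustrative.
    const int ACCEL_X_PIN = A1;
    const int GSR_PIN     = A2;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.print(millis());
      Serial.print(",");
      Serial.print(analogRead(ACCEL_X_PIN));
      Serial.print(",");
      Serial.println(analogRead(GSR_PIN));
      delay(100); // GSR changes slowly; 10 samples/second is plenty
    }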

My short-lived attempt last week was enough to establish that there is no direct correlation (at least not one my setup can detect) between my feelings towards a person and my micro-sweating. So instead, this week I re-hot-glued my glasses and attempted to measure my engagement in the discussion going on during this week’s class. I thought a bunch about how I wanted my data to look and decided the visualization should graphically represent what I was actually measuring, as opposed to a more abstract rising and falling line. The eyeballs approximate where I was looking and the size of the mouth represents my GSR. I would have liked to have the eyes grow wider at local maxima of the GSR signal and blink at local minima, but I couldn’t figure out how to access these values in code. I would also have liked to give the viewer control over the playback, but this too proved too daunting a programming task.
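
For what it’s worth, one way to get at those extrema, sketched after the fact: a sample counts as a local maximum if it’s the largest value in a window around it. This is C-style code, but the same logic drops into a Processing sketch nearly unchanged (bool becomes boolean); the window size w is a guess that trades responsiveness against noise.

    // True if gsr[i] is the largest value within +/- w samples of i.
    bool isLocalMax(int gsr[], int n, int i, int w) {
      int lo = i - w; if (lo < 0) lo = 0;
      int hi = i + w; if (hi > n - 1) hi = n - 1;
      for (int j = lo; j <= hi; j++) {
        if (gsr[j] > gsr[i]) return false;
      }
      return true;
    }

    // True if gsr[i] is the smallest value within +/- w samples of i.
    bool isLocalMin(int gsr[], int n, int i, int w) {
      int lo = i - w; if (lo < 0) lo = 0;
      int hi = i + w; if (hi > n - 1) hi = n - 1;
      for (int j = lo; j <= hi; j++) {
        if (gsr[j] < gsr[i]) return false;
      }
      return true;
    }

During playback, the eyes would widen on frames where isLocalMax comes back true for the current sample and blink where isLocalMin does; widening w smooths out noise at the cost of responsiveness.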

I’m not sure I can derive any solid conclusions other than that I spent a lot of time looking at Dan O, and that I’m apparently obsessed with changing facial expressions. Here’s a sample of the output:

The Rest of ‘Em

To commemorate my dinner with family and impending in-laws on the eve of my wedding, I simultaneously logged my galvanic skin response (GSR) and which way I was looking using a three-axis accelerometer mounted on my glasses (overkill, I know), which involved having my computer in my lap while wearing wired glasses and trying to carry on eleven conversations at once! It was a bold plan to discover whether I responded predictably (and differently) to my own family and my wife’s, and it seemed to be working until, three minutes in, an over-enthusiastic waitress howled at a joke nearby, which made me jump and pulled the wires out of my GSR sensor. Without a hot glue gun, I was helpless to continue, so I closed up the computer, resolved to continue next week, and enjoyed my beer.

Please note that my accelerometer and ribbon wire matched my shirt. Because that’s how I roll.

Click for a sample of the data I logged.

My glasses with accelerometer: [photos]

My trusty Arduino, wired up to lie flat: [photo]

My homemade GSR sensor: [photo]

DoorSob

DoorSob is a door that doesn’t want you to leave a room. A Processing sketch plays back, on a screen, a human face’s progression from ecstatic happiness to utter misery, with the playback position controlled by a potentiometer that turns with the doorknob. Depending on the state of the face (and, by extension, the potentiometer), a voice repeats either “yes” or “no” more or less emphatically. The volume of the voice and the brightness of the face are affected by the amount of ambient light falling on a photoresistor. My intention is to install the photo sensor next to the doorknob so that when someone puts their hand on the knob, it blocks the light and brightens the screen, making the video visible and the sound audible. Because the pot is moved by the knob, as a person starts to turn the knob to open the door, the face reacts, getting more and more distraught the closer the person gets to opening the door (and leaving the room).
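
On the sensing side, the Arduino’s job would just be to ship the two readings to the Processing sketch; a minimal sketch of that, with illustrative pin assignments:

    // DoorSob, Arduino side (illustrative pins): read the doorknob
    // pot and the photoresistor and send both over serial for the
    // Processing sketch to map onto playback position, brightness,
    // and volume.
    const int KNOB_POT_PIN  = A0; // potentiometer turned by the knob
    const int PHOTOCELL_PIN = A1; // photoresistor beside the knob

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.print(analogRead(KNOB_POT_PIN));    // 0-1023: knob angle
      Serial.print(",");
      Serial.println(analogRead(PHOTOCELL_PIN)); // drops when covered
      delay(33); // ~30 updates per second, enough for video playback
    }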

A week reading about the location of consciousness (apparently behind the eyes, according to most people, with a minority locating it in their upper chest) and our dubious awareness of our own perceptual and cognitive shortcomings has left me scratching my head. I haven’t done huge amounts of reading in the cognitive sciences, but I’ve done enough to feel that Julian Jaynes’s arguments against the necessity of consciousness in The Origin of Consciousness in the Breakdown of the Bicameral Mind and Dan Ariely’s TED talk about the limits of free will are a series of cleverly erected straw men. I’ve never heard anyone claim that consciousness is as ubiquitous and constant as implied in Jaynes’s refutation, nor do I buy Ariely’s claim that people’s laziness and susceptibility to influence constitute proof of sensory and cognitive deficiencies. The self-awareness and introspection that these men refer to as consciousness seem to me a response to complicated social structures: essential not to the survival of the individual but to the survival of the group. It’s no wonder, then, that it tends to lag a little when considered in conjunction with the senses.

And it was while thinking about the conniving, scheming, backroom dealing, weighing, and planning to which consciousness presumably emerged as a response that I started thinking about all the unconscious social and physical cues that US Weekly body language experts and NLP practitioners are constantly harping on about. We like it when people laugh at our jokes and praise us; we don’t, for the most part, like making people unhappy or getting yelled at. How would we feel if everyday objects called our attention to the actions we perform unconsciously?