Commentary: Take out your old Sharper Image Truth.Quest


This past week, a provocative article about a new invention was published in The Washington Post online. Throwing on a parka and sweats Saturday morning, I headed out to Target (where I paid cash!) to purchase a new 9-volt battery for my old Sharper Image Truth.Quest.

Well over a decade old but still holding its place in an upstairs closet instead of being relegated to the basement, the machine had a battery that conked out long ago. The now-classic model, still being sold on eBay, registers changes in the voice and responds by lighting up green, yellow, and red plastic nubbins, like a prop from a 1950s space show. A simple Google search brought up a frank assessment: “It simply doesn’t work” for detecting truths and untruths.

The Washington Post article dissected a newer invention in the field of “affective computing,” an MIT laboratory initiative. According to author Arthur Allen, researcher Rosalind Picard and her team wanted to create “caring robots,” and believed that machines displaying our emotions in some visible form might be a good first step.

Those who remember the mood rings of the 1970s already have a handle on the worth of displaying their supposed moods along with those of fellow faddists. Created by Josh Reynolds and Maris Ambats in 1975, mood rings gauged the heat from the hands where they perched and triggered color changes in liquid crystals. Moods, it was thought, changed the body temperature, a new idea only to those not conversant with the mind-body connection.

Enter a very tiny, not very random experiment this morning with my Truth.Quest from Sharper Image: Sunday morning PBS music and commentary. What could be a better gauge of moderation? But the Truth.Quest saw things differently. The red lights, the ones meaning “untruthful,” flickered on when the notes were higher or more emphatically played, or when a commentator’s voice rose higher on the treble scale or emphasized a syllable.

When the voice, female or male, dropped to a lower pitch or deemphasized syllables, the line of red lights flickered off and the yellow or green ones flickered on. The machine equated untruth with a voice that rose or emphasized, and truth with one that didn’t.

Both musical notes, which are incapable of truth-telling or deception, and the spoken word, presumably disseminating factual material, were actually being gauged on nothing more than higher or lower pitch and emphasis or its absence.
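For readers who want the logic spelled out, here is a rough, hypothetical sketch in Python of that kind of pitch-and-emphasis rule. The cutoffs and names are my own invention for illustration; Sharper Image never published the circuitry.

# Hypothetical sketch of the pitch-and-emphasis heuristic described above.
# Cutoffs and names are invented; the Truth.Quest's actual workings are not published.
def classify(pitch_hz, loudness_db):
    """Map a sound's pitch and emphasis onto the device's three lights."""
    HIGH_PITCH = 300.0   # assumed cutoff, in hertz
    EMPHATIC = 70.0      # assumed cutoff, in decibels
    if pitch_hz > HIGH_PITCH or loudness_db > EMPHATIC:
        return "red (untruthful)"
    if pitch_hz > 0.8 * HIGH_PITCH or loudness_db > 0.9 * EMPHATIC:
        return "yellow"
    return "green (truthful)"

# A high, emphatic note trips the "untruthful" light even though a note
# carries no claim that could be true or false.
print(classify(440.0, 72.0))   # red (untruthful)
print(classify(180.0, 55.0))   # green (truthful)

Nothing in a rule like that knows anything about facts; it knows only pitch and volume, which is exactly the problem.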

Feedback mechanisms aren’t new, and they have a place in good science. Taking blood pressure at home on a reliable machine can help patients learn to keep their blood pressure down continuously. Pain management uses biofeedback. So does anxiety reduction.

But a mechanism that psychologist Mary Czerwinski was wearing while arguing with her boyfriend in the car takes the discussion in a different direction. According to Allen’s article, Czerwinski’s boyfriend’s phone rang during that argument. A text message had arrived that said, “Your friend Mary isn’t feeling well. You might want to give her a call.”

The butterfly-shaped device she was wearing, which interprets heart rate and skin changes, had sent a message to her cell phone, and the phone had then sent it on to an already established network. One of the people on that network, Allen wrote, was her boyfriend.
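Spelled out roughly, in the same sketch style, the chain works something like this. The readings, thresholds, and contact list below are hypothetical; only the wording of the alert comes from the article.

from dataclasses import dataclass

# Hypothetical sketch of the notification chain: wearable readings go to the
# phone, and the phone relays an alert to a pre-established contact network.
@dataclass
class Reading:
    heart_rate: int          # beats per minute
    skin_conductance: float  # arbitrary units

def looks_distressed(reading):
    # Assumed, oversimplified criterion for "isn't feeling well."
    return reading.heart_rate > 100 or reading.skin_conductance > 5.0

def relay_to_network(reading, contacts):
    """Phone-side step: forward an alert to everyone on the established network."""
    if not looks_distressed(reading):
        return []
    alert = "Your friend Mary isn't feeling well. You might want to give her a call."
    return ["to " + person + ": " + alert for person in contacts]

print(relay_to_network(Reading(heart_rate=112, skin_conductance=6.2),
                       ["boyfriend", "coworker"]))

The wearer configures the network once; after that, the decision to notify someone rests with the machine, which is the point the rest of this column worries about.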

Working with coworkers who include Asta Roseway, a Microsoft senior research designer, Czerwinski had created the wristband, which incorporates fake butterfly wings that flap in concert with emotional distress. In a flashback to mood rings, the article refers to them as “Mood Wings.” The butterfly idea sounds cute, sure, but, the article notes, “The developers also made a jacket that resembles a chain-mail vest whose bendable, wired ‘leaves’ use 40 motors to flap when the wearer is happy. To show stress, similar motors on the back of the vest literally raise its hackles.”

When does the robotic become the "No-botic" in the minds of skeptical observers? For one, when the difference between man and machine becomes blurred, not in the sense of the bygone Bionic Man television show, but in the sense of the mind that lies behind the body. A jacket resembling a chain-mail vest, with hackles where one’s own hackles reside, is enough to raise, well, one’s own hackles. It substitutes its own interpretation, and its own rights even to the place of its physical presence, for those of the person it quite literally covers.

No matter how useful projects like these may be in science labs or in Czerwinski’s concepts for understanding autism, any possible off-label uses of the technology should be carefully examined. Ethicists should weigh in on these matters and discuss seriously what the meaning of “human” is, and in fact, what people are and are for in the first place. That’s a tough subject, but it is timeless and worth tackling, again and again, even if it triggers red lights to flash in the process.

Privacy is important in self-determination, and The Washington Post article underscores that by what it didn’t say. Czerwinski couldn’t remember what the argument with her boyfriend was about, the article notes, and the boyfriend’s name isn’t mentioned. My guess, without looking for flashing lights or fluttering wings, is that both were omitted because they are, rightly, private. These are the same privacy issues that are at stake in preserving the “who we are” that is part and parcel of what makes us human.

Linda Chalmer Zemel also writes the Buffalo Books column. She teaches in the Communication Department at SUNY Buffalo State College. She received the Exceptional Performance Award from the National Guild of Hypnotists as a member of their faculty.

All opinions expressed in this column are those of the author. Contact Linda at writer14221@yahoo.com