Meta Spaceglasses prototype demoed at AWE 2014 (image courtesy of Greg Maltz)

The major theme at the Augmented World Expo in Santa Clara, CA, last week was best articulated by Robert Scoble and Shel Israel in their book, The Age of Context. In it, as Scoble summarized in his keynote address on day one of the expo, the authors make the case that devices must learn everything about us. Our behavior, buying habits, contacts, favorite hangouts, location, biometrics, method of transportation, taste in foods, beverages, music, film--all will be uploaded in a constant flow of data, automatically defining us without our knowledge.

Much of this information is already being fed from our handhelds, but it will also enable the devices of the future, packed with more powerful sensors and biometric capabilities, to spit out useful data that enhances reality and enriches our lives. Or so proponents of augmented reality postulate.

Software and hardware vendors alike at AWE are trying to solve the problem of how to add visual and informational data to real experiences. But very few of these vendors are thinking about wearable devices and applications as a way to empower individuals to make their own choices and to select the data flowing into and out of the device. Until they do, wearables will seem like tools of Google and Big Brother to the public, and the fear of augmented reality applications such as facial recognition will remain palpable among consumers.

After Scoble's talk, another author and startup consultant, Nir Eyal, took the stage and presented slides that in some sense delivered the opposite message of the keynote address. Drawing heavily on his book, Hooked, Eyal described a user experience driven by ingrained, habit-forming behavior that cascades from trigger to action to reward to investment and back to trigger in an infinite loop of user engagement.

These seemingly opposite views of UX--one in which the user's own choices hook them, and the other in which data not consciously selected by the user automatically defines them--point to two possible kinds of killer apps for wearables: passive and active. Will wearers of these devices want their choices made for them, or will they want to be active participants in the data they output and receive? More realistically, what combination of the two will provide the right UX so that wearables can contribute to humanity in a way handhelds cannot?

That question will be answered in the future, but AWE 2014 showcased many inventive companies working hard on their piece of the equation. Right now, most of the technology presented at the expo assumes a passive user whose choices are mostly made by the device and its AR applications.