Finding new metaphors for interaction

One of the slowest technological areas to advance has been the means by which we interact with our computers. Inherited typewriter-based metaphors have prevailed as the most familiar, although often most impractical, means of interaction. GUIs offer users a friendlier interface than a stark DOS command line, but often introduce a new set of tribulations.

Although advertisers would have us believe that we will all be conducting business in the park wearing our voice-activated, wireless eyeglass monitors and annoying the pigeons with our shouted market orders to "Buy! Buy! Buy!", the practicality of large-scale business adoption remains elusive.

We have recently seen a surge in nifty devices for storing personal data and accessing e-mail anytime, anywhere. Light-emitting polymers, digital inks, and digital books will provide us with new possibilities for presenting and accessing information.

Yet advancements in UI technology have not kept pace with other innovations. Most of these new hardware items remain cost-prohibitive, but more significantly they are still hampered by an inherited need for a tactile interface. Changing the way we relate to our computers will require a fundamental shift away from the typewriter-based metaphor. Several key technologies show promise for enabling tomorrow's interfaces, but each has its own limitations for practical business application.

Most UI redesign on the horizon focuses on speech recognition. The technology is good at filling short, "burst" recognition needs, such as command-and-control implementations, but its weaknesses have impeded effective data transfer. The natural-language capabilities required for on-the-fly speech recognition are still a couple of years away.
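
To make that distinction concrete, here is a minimal sketch (not from the original article) of the command-and-control style of interaction, written in Python and assuming a hypothetical recognizer that already returns a plain-text transcript. The point is that matching short, "burst" utterances against a small, fixed vocabulary is tractable today, whereas free-form dictation would require the natural-language parsing that is still some way off.

```python
# Illustrative sketch only: a command-and-control layer on top of a speech engine.
# The transcripts below are simulated; a real recognizer would supply them.
# Matching against a small, fixed grammar is far easier than open-ended dictation.

COMMANDS = {
    "open mail": lambda: print("Opening mail client..."),
    "next message": lambda: print("Moving to next message..."),
    "buy": lambda: print("Placing market order..."),
}

def handle_utterance(transcript: str) -> bool:
    """Dispatch a short utterance to a known command; ignore anything else."""
    action = COMMANDS.get(transcript.strip().lower())
    if action is None:
        return False  # outside the grammar: no natural-language parsing attempted
    action()
    return True

if __name__ == "__main__":
    # Simulated recognizer output, since no real audio front end is assumed here.
    for utterance in ["Open mail", "schedule a meeting with Bob next Tuesday", "Buy"]:
        if not handle_utterance(utterance):
            print(f"Unrecognised (would need natural-language parsing): {utterance!r}")
```

A fixed vocabulary keeps recognition errors contained, which is why command and control works today while dictation of arbitrary business prose does not.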

Voice activation also plays an important part in the development of ubiquitous, or pervasive, computing.

Ubiquitous computing builds an environment in which human-computer interaction is no longer tied to a single device such as a keyboard.

Further fuelling these possibilities are rapidly shrinking computational devices, low-power design, and critical wireless technologies and standards such as Bluetooth. These will enable devices embedded within our appliances, furniture, and clothing to interact, helping to make communicating with computers a largely transparent endeavour.

Interesting UI possibilities have also grown from experimentation in biofeedback and stimulus response on the human brain. Doctors at Emory University Medical Centre in the US have successfully implanted electrodes in a patient's head that capture neural impulses and translate them into software commands.

Work continues on less intrusive brain-to-computer interfaces, such as helmets that harness brain impulses, which are being developed at the University of Rochester, also in the US.

Brain waves are difficult to isolate and even harder for subjects to control, yet brain-to-computer interfaces hold astounding potential in accessibility applications. Still, the prospect of everyone at work donning a brain-wave cap might prove unrealistic.

For all the improvements made in computing in the past several decades, it doesn't look like we'll be tossing out our keyboards and monitors yet.

User interfaces

Improvements to computer interfaces, such as voice recognition, will streamline UIs and expand the opportunities for ubiquitous computing and other hands-off implementations. Many of these interfaces show promise for consumer and speciality markets, but their practical impact on businesses will be marginal.
