Anyone who doubts voice-driven computers are the way of the future should talk to Stephen Reed. The Auckland University of Technology master's student is working on a project that will voice-enable handheld PCs so business users will be able to retrieve messages and other information with a voice command.
"Speech is the next killer application," says Reed. "Why use a keyboard when you can tell the computer what to do?"
Reed's immersion in speech technology has been deep enough to take him to Stanford University, where last month he did a stint with the Archimedes Project team led by expatriate New Zealander Neil Scott. Scott's goal is to develop interfaces through which disabled people can use computers.
According to Reed, Scott's 15 years of voice recognition experience and background as an electronics engineer have convinced him that hardware-based solutions hold the best promise.
"Voice applications in software are too slow," Reed says.
One of the fruits of Scott's efforts is a $US100 device with a vocabulary of about 40 words for interfacing with PCs and ATMs. More elaborate systems enable common systems in the home - lighting, opening and closing of curtains, switching on appliances - to be controlled by voice.
Reed's search for voice-enabled project management tools led him to mainstream speech recognition software, which he describes as "too cumbersome".
So the goal of his master's thesis, due at the end of next year, is to create software that will have initial application for the disabled, with potential for wider use.
His development tool of choice is VoiceXML or Microsoft's Visual Studio .NET.
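To give a sense of what VoiceXML development involves, here is a minimal sketch of a dialogue that prompts for and acts on a spoken command. The form structure, prompts, and the grammar file name (commands.grxml) are illustrative assumptions, not details of Reed's actual software:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- A single form that listens for one voice command -->
  <form id="messages">
    <field name="command">
      <prompt>Say "read messages" or "next message".</prompt>
      <!-- Hypothetical SRGS grammar defining the accepted phrases -->
      <grammar type="application/srgs+xml" src="commands.grxml"/>
      <filled>
        <prompt>Retrieving your messages.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

A VoiceXML interpreter on a server or device would play the prompt, match the user's speech against the grammar, and fill in the command field — the same request-and-respond pattern a keyboard-driven application would handle with forms and buttons.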
"I want my software to be small and cheap," Reed says.