IBM officials convened at the company's offices today to discuss its new "freeform command recognition," a technology that uses statistical language modeling and semantic interpretation to enable natural communication between the user and the voice recognition system.
Until now, speech-activated electronic devices have relied on preset commands to perform such tasks as calling up specific songs, selecting a specific radio station, or requesting directions in a car. Consumers had to memorize exact commands to do what they wanted to do.
Now IBM's Embedded ViaVoice 4.4 software allows consumers to communicate with their devices using natural speech.
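The difference between the two approaches can be sketched in a few lines of code. This is a hypothetical illustration only, not IBM's actual engine: the command strings, intents, and keyword sets below are invented for the example, and the "free-form" side is reduced to simple keyword overlap rather than true statistical language modeling.

```python
from typing import Optional

# Preset-command approach: the utterance must match a stored phrase exactly.
PRESET_COMMANDS = {
    "radio on": "RADIO_ON",
    "temperature up": "TEMP_UP",
}

def preset_match(utterance: str) -> Optional[str]:
    """Return an action only on an exact command match."""
    return PRESET_COMMANDS.get(utterance.lower().strip())

# Free-form approach (crudely approximated): map keywords to intents
# so the same request can be phrased many different ways.
INTENT_KEYWORDS = {
    "RADIO_ON": {"radio"},
    "TEMP_UP": {"warmer", "heat", "temperature"},
}

def freeform_match(utterance: str) -> Optional[str]:
    """Pick the intent whose keywords overlap the utterance most."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

# The preset matcher rejects natural phrasing; the keyword matcher accepts it.
print(preset_match("could you turn on the radio"))    # None
print(freeform_match("could you turn on the radio"))  # RADIO_ON
```

The point of the sketch is the user experience, not the algorithm: with preset commands, "could you turn on the radio" fails unless the user memorized the exact phrase, while an interpretation-based system can recover the intent from varied wording.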
Igor Jablokov, program director of multimodal and voice portals at IBM, said many of the 2 billion telephone users in the world are becoming Internet users if they aren't already.
Increasingly, these users want converged devices that let them both access the Internet via keystrokes and voice-activate their handheld devices or other machines that act on speech. These are the types of user behaviors IBM wants to target with its information-on-demand and information-as-a-service strategies.
"Our goal is to deliver information on demand for people who want one-button access to the Web," Jablokov said during his turn as speaker for the event.
IBM engineers also provided demos of how the Embedded ViaVoice speech recognition engine works.
In one demo, IBM officials showed how the free-form technology can be used to control automobile functions such as radios, CD players and climate control systems. In another, WebSphere Voice Server was used to let consumers access a bank account and complete transactions.