Now imagine a future world in which microphones and screens are everywhere. When you want to see something -- watch a video, have a video call with someone -- you could simply tell your assistant to show you, and, knowing your location and having access to the nearby TV, PC, refrigerator screen, in-dash screen or whatever, it would display the content.
This is where I believe we're heading. The convergence of ubiquitous computing, wearable computing and, above all, AI-powered virtual assistants will make the voice-based interface -- call it hearable computing -- the default way we interact with computers and the Internet.
Why Hearable Computing?
There's one and only one reason hearable computing is the future: talking and listening are what the human brain is hard-wired for. It's the ultimate human user interface.
If someone is sitting at the same table as you, you don't send them an email. You just talk to them. If that person knows a fact you'd also like to know, you don't look it up. You just ask.
The easiest, most natural way to interact is to talk and listen, to have conversations.
The technology for in-ear devices small and comfortable enough to be worn all the time is already here. And the habit of wearing such a device is reinforced by the ability to make and receive calls easily, and to get audio notifications.
The only missing ingredients for the hearable computing future are better voice recognition and natural language processing, better virtual assistant technology, far more app integration with virtual assistants, and better in-ear Bluetooth devices. And of course, all of those are coming soon.
You heard it here first: Hearable computing is coming to an ear near you.