I believe the next big thing in computing is the audio interface. Call it hearable computing.
Hearable computing is not a sub-genre of wearable computing, nor mobile computing, nor desktop computing. It’s a thing unto itself. And I believe it will take two forms: First, the always-listening, ubiquitous microphone; and second, the always-listening user.
The Always-Listening Microphone
Forrester Research posted a report this week about what it calls the future of voice control. Forrester calls it voice control and monitoring, or “vox” (it’s not clear how they arrived at “vox,” but there it is).
The idea is that companies like Amazon, Apple, Facebook, Google and Microsoft are evolving their products toward a default mode where they will be listening 24/7, and harvesting data from what they hear, as well as accepting commands.
Low-cost microphones will be ubiquitous, in every room in our homes, in the car, in the office, clipped to a shirt and, of course, in our phones, smartwatches, smart glasses and elsewhere.
Whenever we want something, we just talk no matter where we are: “I need more Nutella,” and the product is delivered the next day.
To take just one example: Amazon makes tablets, has announced a TV box and will soon announce a smartphone. Why would an online retailer make consumer gadgets? Because they’re storefronts for buying things from Amazon.
“Vox” serves two primary Amazon objectives -- harvesting user data so Amazon knows exactly what to promote to each user, and making ordering as brain-dead easy as humanly possible.
Baby steps toward the always-listening future are emerging. For example, one of the most popular smartphones on the market is Google’s (soon Lenovo’s) Moto X phone. And the main reason it’s popular is that it’s always listening.
Another example is Microsoft’s Xbox One, which is always listening.
To be clear, both the Moto X and the Xbox One are listening only for a specific trigger phrase that switches each device into command mode. But I believe that this limited listening is merely an interim step toward a future where always listening is the default mode for many of our devices.
They will always be ready to receive commands, and they will always be harvesting data, just as the words you type while searching the web, using social networks and sending email are saved and collected today.
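The trigger-phrase mode described above can be sketched as a simple loop. This is a hypothetical illustration only -- the phrase, function names and text-based "utterances" are invented for clarity, and real devices do this on dedicated low-power audio hardware, not in application code:

```python
# Sketch of trigger-phrase ("hotword") listening, as on the Moto X and
# Xbox One: everything is ignored until the wake phrase is heard, then the
# next utterance is treated as a command. All names here are illustrative.

WAKE_PHRASE = "ok device"  # hypothetical trigger phrase

def handle_stream(utterances):
    """Consume a stream of transcribed utterances; return the commands acted on."""
    commands = []
    armed = False  # True only immediately after the wake phrase is heard
    for text in utterances:
        if armed:
            commands.append(text)  # this utterance is treated as a command
            armed = False          # drop back to passive listening
        elif text.strip().lower() == WAKE_PHRASE:
            armed = True           # wake phrase detected: listen for a command
        # anything else is simply ignored
    return commands

# Example: only the utterance following the wake phrase is acted on.
stream = ["what a nice day", "ok device", "order more Nutella", "thanks"]
print(handle_stream(stream))  # -> ['order more Nutella']
```

The point of the sketch is the asymmetry: in this interim mode the device discards everything except the wake phrase, whereas the always-listening default I describe would retain and act on the whole stream.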
That sounds like an unacceptable invasion of privacy, but of course people will continue to accept new encroachments in the future just as they have in the past.
This idea of the always-listening Internet will become more appealing as voice recognition technology improves -- and of course it will.
The Always-Listening User
In the movie “Her” (starring Joaquin Phoenix and Amy Adams), the lead character falls in love with a Siri-like virtual assistant.
The only difference between Siri and the assistant in “Her” is that the movie version is simply more advanced -- as advanced as such assistants will inevitably become. It’s only a matter of time before Siri, Google Now, Cortana and others can all pass the Turing test.
In the movie, Joaquin Phoenix's character develops what he believes is a satisfying relationship with the virtual assistant entirely through a small Bluetooth earpiece. He puts it in his ear and forgets about it. He talks, the assistant listens. The assistant talks, he listens. They have conversations.
If you can imagine sufficiently advanced A.I., you can imagine that this interface to the world of computers and the Internet is just about all you would need. Think about what you do with computers -- browse the Internet, do social networking, make calls, buy things, schedule meetings, maintain contacts, create business reports -- and nearly all of it could, and I believe will, be handled by talking to a virtual assistant.
Now imagine a future world in which microphones and screens are everywhere. When you want to see something -- watch a video, have a video call with someone -- you could just tell your assistant to show you, and the assistant, knowing your location and having access to the nearby TV, PC, refrigerator screen, in-dash screen or whatever, would simply display the content.
This is where I believe we're heading. The convergence of ubiquitous computing, wearable computing and, above all, artificial intelligence virtual assistants will make voice-based interface -- call it hearable computing -- the default way we interact with computers and the Internet.
Why Hearable Computing?
There's one and only one reason why hearable computing is the future: Because talking and listening is what the human brain is hard-wired for. It's the ultimate human user interface.
If someone is sitting at the same table as you, you're not going to send them an email. You'll just talk to them. If that someone knows a fact that you would also like to know, you don't look it up. You just ask them.
The easiest, most natural way to interact is to talk and listen, to have conversations.
The technology for in-ear devices that are small and comfortable enough to be worn all the time is already here. And the habit of wearing such a device is further reinforced by the ability to make and receive calls easily, and also to get audio notifications.
The only missing ingredients for the hearable computing future are better voice recognition and natural language processing, better virtual assistant technology, far more app integration with virtual assistants and better in-ear Bluetooth devices. And of course all of those are coming soon.
You heard it here first: Hearable computing is coming to an ear near you.