Amazing advances in the speed of Internet searches have brought the gap between query and result down to fractions of a second.
How could Google and other search companies possibly make them even faster?
The answer is for search engines to harvest so much information about you that they know what you’re going to search for before you do – and then provide the results as soon as you look at your screen.
Here Comes MindMeld
A San Francisco-based company called Expect Labs introduced an iPad app called MindMeld this week at the TechCrunch Disrupt conference.
The app is a voice-call and video-chat program. Based on what it hears people say during an online conversation, it initiates its own searches without any other action by the user.
As you talk, the app continuously pops up topics you may want to search. If you mention a restaurant, a map of its location appears and the restaurant’s web site opens.
When you move on to another topic, new search results appear, and so on throughout the conversation.
You can capture and share search results, leaving behind a kind of journal of topics based on the conversation.
The app also improves your results by preemptively harvesting your Facebook data to learn more about you.
MindMeld is not available yet, but when it does go online next month it may cost $1.99, according to Expect Labs CEO Tim Tuttle.
Initially, the app will work only with voice calls. Video comes later. Other platforms besides the iPad are also likely, and the company is working on a developer API.
And Then There’s Google
Earlier this year, Google launched its Knowledge Graph, a first step toward providing answers rather than search results for you to sift through.
One of the most impressive applications of the Knowledge Graph is Google Now, which is a voice-based predictive search engine built into the Jelly Bean version of Android.
Google Now uses the location sensor in your phone and extracts information from your behavior to predict what information you’re going to want. And it learns.
You can say “What’s 453 divided by 11?”, “Who won the game last night?” or “Show me pictures of bacon,” and Google Now will pop up the answers.
Google Now already does minimal preemptive search: just turn it on, and it will pop up the current weather for your location.
To interact with Google Now, you tap a button and talk. But it’s easy to imagine a near future in which no button tapping is necessary, where the app just listens to people talking and provides a constant stream of answers to questions plucked out of the chatter.
And this, I believe, is the best way to imagine the future of search: constant, preemptive and contextual.
It won’t “feel” like a search engine, but like an omniscient personal assistant.
It will look at your to-do list and see that “Call David” is on it. It will figure out which David in your contacts you mean, look at David’s public calendar and notice that nothing is scheduled. It will notice that you’re not doing anything either, and ask: “Right now would be a good time to call David. Would you like me to dial the phone?”
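To make that scenario concrete, here is a minimal sketch in Python of what such an assistant rule might look like. Everything in it is hypothetical: the to-do, contact and calendar interfaces are invented for illustration, not drawn from any real product.

```python
from datetime import datetime, timedelta

def suggest_call(todo_items, contacts, calendar, user_is_idle):
    """Hypothetical rule: propose a call when a to-do item, the contact's
    public calendar and the user's own free time all line up."""
    for item in todo_items:
        if not item.startswith("Call "):
            continue
        name = item[len("Call "):]
        # Disambiguate "David": prefer the contact you interact with most.
        candidates = [c for c in contacts if c["name"].startswith(name)]
        if not candidates:
            continue
        person = max(candidates, key=lambda c: c["interaction_count"])
        # Check the next half hour on the contact's public calendar
        # (calendar.is_free is an assumed interface, not a real API).
        now = datetime.now()
        if user_is_idle and calendar.is_free(person["id"], now,
                                             now + timedelta(minutes=30)):
            return f"Right now would be a good time to call {person['name']}. Dial?"
    return None
```

The point of the sketch is that each step, disambiguation, calendar lookup, idleness check, is ordinary plumbing once the assistant has harvested enough context.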
If you’re blathering to a friend about movies and can’t remember the name of an actress, just glance at your phone and there she is, with photo, name, biography and so on.
Is Knowledge Obsolete?
I think a good descriptive label for this thing I’m talking about would be “contextual preemptive search.” And it’s a set of capabilities that will show up all over the place — built into car dashboards, built into search engines and, most of all, built into Siri-like personal assistants in our phones.
Contextual preemptive search has five components (a rough sketch of how they might fit together follows the list):
1. Data harvesting. The idea is to harvest as much contextual information as possible about you and your life — when you sleep, what you buy, who you know, what you like and so on, as well as where you are and where you’ve been.
2. Answer streams. As you live your life, your devices will constantly display new information based on your context, including what’s going on where you are. They will listen to your conversations, and even to the show you’re watching on TV, and pop up helpful and interesting facts related to whatever you’re currently experiencing.
3. Initiative. The future of search won’t just respond passively; it will also act on its own, occasionally alerting you to rare opportunities, new events and important facts.
4. Agency. In some cases, the search engine of tomorrow will do things on your behalf. For example, it may add things to your calendar, or even buy things and have them sent to you.
5. Learning. These won’t be off-the-shelf capabilities; they’ll be shaped by your own behavior. The system will learn which preemptive search results you interact with and favor more like those in the future. It will get the hint when you repeatedly turn down certain types of suggestions.
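As promised above, here is a rough sketch, again in Python, of how these five components might fit together. Every name, signal and scoring rule below is an assumption invented for illustration; no shipping product works this way.

```python
class ContextualPreemptiveSearch:
    """Hypothetical sketch of a contextual preemptive search engine.
    All names, signals and heuristics are invented for illustration."""

    def __init__(self):
        self.profile = {}   # 1. Data harvesting: everything known about you
        self.weights = {}   # 5. Learning: per-topic preference weights

    def harvest(self, signal_type, value):
        # 1. Data harvesting: fold a new observation (location, purchase,
        # contact, "like") into the user's profile.
        self.profile.setdefault(signal_type, []).append(value)

    def answer_stream(self, overheard_text):
        # 2. Answer streams: turn overheard conversation into a ranked
        # list of (topic, score) results, favoring past interests.
        topics = self._extract_topics(overheard_text)
        ranked = [(t, self.weights.get(t, 1.0)) for t in topics]
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)

    def take_initiative(self, opportunity):
        # 3. Initiative: alert the user unprompted, but only when the
        # learned weight for this kind of opportunity is high enough.
        return self.weights.get(opportunity, 1.0) > 1.5

    def act(self, task):
        # 4. Agency: perform a task on the user's behalf (add a calendar
        # entry, place an order). Stubbed out here as a logged action.
        self.profile.setdefault("actions_taken", []).append(task)

    def feedback(self, topic, engaged):
        # 5. Learning: boost topics the user taps; decay ones they dismiss.
        current = self.weights.get(topic, 1.0)
        self.weights[topic] = current * (1.25 if engaged else 0.8)

    @staticmethod
    def _extract_topics(text):
        # Placeholder topic extraction: naive long-word split.
        return [w.strip(".,?!").lower() for w in text.split() if len(w) > 4]

# Example: the engine overhears talk of a restaurant it has learned you like.
engine = ContextualPreemptiveSearch()
engine.harvest("location", "San Francisco")
engine.feedback("restaurant", engaged=True)
print(engine.answer_stream("Let's try that new restaurant downtown tonight"))
```

The design choice worth noticing is the feedback loop: the same weights that rank the answer stream also gate the system’s initiative, so everything it volunteers is trained by what you actually touch.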
With these capabilities, the behavior known as “searching the web” will go away. We won’t do searches, for the most part. Relevant results will be generated constantly and will come to us when we need them.
All this raises a disturbing question: if knowledge is always presented to us in real time, why learn?
Yes, it would be nice if we all went to school and learned how to think. But why study facts, when correct facts are always right in front of us?
To illustrate the point, imagine contextual preemptive search running on the cell phones of kids in science class. When the teacher asks if anyone can state the second law of thermodynamics, the phones hear the question and pop up the answer before the teacher even calls on one of the raised hands.
The teacher might want to force the students to turn off their phones. But why? In what future scenario will these answer machines not be present in the kids’ lives?
One possible outcome is that kids will forget how to ask questions. And why wouldn’t they, when asking questions is no longer necessary to get answers?
When everybody has the answers all the time, we’ll ask: Is knowledge obsolete?
I wonder what Google will tell us.