When Search Takes Fewer Than Zero Seconds

The future of search will give you the answer before you even ask.
Posted September 12, 2012
By Mike Elgan



Amazing advancements in the speed of Internet searches have brought the gap between search and result down to fractions of a second.

How could Google and other search companies possibly make them even faster?

The answer is for search engines to harvest so much information about you that they know what you’re going to search for before you do – and then provide the results as soon as you look at your screen.

Here Comes MindMeld

A San Francisco-based company called Expect Labs introduced an iPad app called MindMeld this week at the TechCrunch Disrupt conference.

The app is a voice-call and video-chat program. Based on what it hears people say during an online conversation, it initiates its own searches without any other action by the user.

As you talk, the app continuously pops up topics you may want to search. If you mention a restaurant, a map pops up with its location and the restaurant’s website opens.

When you move on to another topic, new search results appear, and so on continuously through the conversation.

You can capture and share search results, leaving behind a kind of journal of topics based on the conversation.
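
Expect Labs hasn’t published how MindMeld works under the hood, but the behavior it demos maps to a simple loop: transcribe speech, spot topics, search before anyone asks. Here’s a minimal Python sketch of that loop; the topic list, the extract_topics matcher and the preemptive_search stub are my own illustrative assumptions, not the company’s code.

```python
# Hypothetical sketch of conversation-driven preemptive search, in the
# spirit of what MindMeld demos. lookup functions are stand-in stubs.

def extract_topics(utterance, known_topics):
    """Naive topic spotting: flag any known phrase mentioned in the speech."""
    text = utterance.lower()
    return [t for t in known_topics if t in text]

def preemptive_search(topic):
    """Stub: a real app would query a search or maps backend here."""
    return {"topic": topic, "results": f"top results for '{topic}'"}

def run_session(transcript, known_topics):
    seen = set()            # don't re-search a topic already shown
    journal = []            # shareable record of surfaced results
    for utterance in transcript:          # each chunk of recognized speech
        for topic in extract_topics(utterance, known_topics):
            if topic not in seen:
                seen.add(topic)
                journal.append(preemptive_search(topic))
    return journal

transcript = [
    "we should grab dinner at zuni cafe on friday",
    "after that maybe a movie downtown",
]
for card in run_session(transcript, {"zuni cafe", "movie"}):
    print(card)
```

The point of the sketch is the control flow, not the matching: the user never issues a query, yet results accumulate into exactly the kind of conversation journal described above.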

The app also improves your results by preemptively harvesting Facebook data to know more about you.

MindMeld is not available yet, but when it does go online next month it may cost $1.99, according to Expect Labs CEO Tim Tuttle.

Initially, the app will work only with voice calls. Video comes later. Other platforms besides the iPad are also likely, and the company is working on a developer API.

And Then There’s Google

Earlier this year, Google launched its Knowledge Graph, a first step toward providing answers rather than search results for you to sift through.

One of the most impressive applications of the Knowledge Graph is Google Now, which is a voice-based predictive search engine built into the Jelly Bean version of Android.

Google Now uses your phone’s location sensor and mines your behavior to predict what information you’re going to want. And it learns.

You can say “what’s 453 divided by 11,” “who won the game last night” and “show me pictures of bacon,” and Google Now will pop up answers.

Google Now already does minimal preemptive search. Just turn it on and it will pop up the current weather for your current location.
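
None of Google’s internals are public, but the “zero-query” pattern the weather card illustrates is easy to sketch: on activation, read whatever context the phone has and decide which cards to show before the user types or says anything. Below is a hypothetical Python sketch; the get_weather stub and the commute rule are invented for illustration, not Google’s code.

```python
# Hypothetical sketch of zero-query, context-triggered cards.

from datetime import datetime

def get_weather(lat, lon):
    """Stub for a weather lookup keyed on the phone's location sensor."""
    return f"62F and foggy near ({lat:.2f}, {lon:.2f})"

def cards_on_activation(lat, lon, now=None):
    now = now or datetime.now()
    # The weather card is unconditional, as the article notes.
    cards = [("weather", get_weather(lat, lon))]
    if 7 <= now.hour <= 9:   # a learned commute window (assumed, not real)
        cards.append(("commute", "22 min to work via US-101"))
    return cards

for kind, body in cards_on_activation(37.77, -122.42):
    print(kind, "->", body)
```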

To interact with Google Now, you tap a button and talk. But it’s easy to imagine a near future in which no button tapping is necessary, where the app just listens to people talking, providing a constant stream of answers to questions plucked out of the chatter.

And this, I believe, is the best way to imagine the future of search. The future of search will be constant, preemptive and contextual.

It won’t “feel” like a search engine, but like an omniscient personal assistant.

It will look at your to-do list and see that “Call David” is on it. It will figure out which David in your contacts you mean, look at David’s public calendar and notice nothing scheduled. It’ll notice that you’re not doing anything either, and tell you: “Right now would be a good time to call David. Would you like me to dial the phone?”
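
That scenario is really just a small decision procedure over data the phone already holds. Here’s a toy Python sketch of it; the contact-disambiguation rule, the free/busy flags and every data structure in it are assumptions for illustration only.

```python
# Toy sketch of the "Call David" scenario. A real assistant would sit on
# top of contacts, calendar, and activity APIs; these are invented stand-ins.

def pick_contact(name, contacts, call_history):
    """Disambiguate 'David' by picking the most-called matching contact."""
    matches = [c for c in contacts if c.split()[0] == name]
    return max(matches, key=lambda c: call_history.get(c, 0), default=None)

def suggest_call(todo, contacts, call_history, busy):
    for item in todo:
        if item.startswith("Call "):
            who = pick_contact(item.split()[1], contacts, call_history)
            # Suggest only when neither party has anything scheduled.
            if who and not busy.get(who) and not busy.get("me"):
                return f"Right now would be a good time to call {who}. Dial?"
    return None

todo = ["Buy milk", "Call David"]
contacts = ["David Smith", "David Jones", "Alice Wong"]
call_history = {"David Smith": 14, "David Jones": 2}
busy = {"me": False, "David Smith": False}
print(suggest_call(todo, contacts, call_history, busy))
```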

As you’re blathering to a friend about movies, if you can’t remember the name of an actress, just glance at your phone and there she is, with photo, name, biography and so on.

Is Knowledge Obsolete?

I think a good descriptive label for this thing I’m talking about would be “contextual preemptive search.” And it’s a set of capabilities that will show up all over the place: built into car dashboards, built into search engines and, most of all, built into Siri-like personal assistants in our phones.

