Google Now is great. But how do you make it better?
I believe Google is building a software-based artificial human brain, capable of interacting with people convincingly as if it were a real friend.
And Google isn’t alone in this quest. Here’s what’s going on.
Google buys Wavii
Google outbid Apple to acquire a company called Wavii this week for $30 million. The Seattle-based startup makes software that summarizes information, such as news.
The Wavii app, which I use on my iPhone, takes an article and reduces it to a single sentence. That sounds easy, because it’s easy for humans to do. But it’s really hard for software.
For starters, how can software scan words in a story and choose which of those words are the important ones and which are filler or supporting context rather than the main idea?
Second, how can software concisely summarize the main idea of a story in language that sounds natural, rather than clunky computer-speak?
These are computing problems researchers have been working on for decades, and Wavii is an impressive, usable example of that.
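Wavii’s actual pipeline is proprietary, but the classic baseline for this kind of extractive summarization is frequency-based sentence scoring: words that recur across the article are treated as “important,” and the sentence densest in those words wins. A minimal sketch, with an illustrative (not exhaustive) stop-word list:

```python
import re
from collections import Counter

# Common filler words to ignore when scoring -- a tiny illustrative list.
STOP_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "it",
              "that", "for", "on", "was", "with", "as", "at", "by"}

def summarize(text, max_sentences=1):
    """Pick the sentence(s) whose words are most frequent in the whole
    article -- a crude stand-in for 'choosing the important words'."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        if not tokens:
            return 0.0
        # Average importance per word, so long sentences don't win by default.
        return sum(freq[t] for t in tokens if t not in STOP_WORDS) / len(tokens)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:max_sentences])
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)
```

This only *selects* existing sentences; the second, harder problem — generating a fresh, natural-sounding sentence — is what makes products like Wavii notable.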
Wavii also does some impressive work around content categorization and social sharing.
Just like Yahoo bought Summly
Google’s acquisition should be compared to Yahoo’s recent acquisition of Summly, a news summary app that uses something called a “genetic algorithm” to extract meaning from blather. Not only is the technology conceptually similar, the price was identical — $30 million.
Summly takes articles and summarizes them into 400 characters.
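Summly’s genetic-algorithm internals aren’t public, but the 400-character limit itself is a simple constraint to picture: keep whole sentences greedily until the budget would be blown. A hypothetical sketch of just that budgeting step:

```python
import re

def trim_to_budget(summary, budget=400):
    """Greedily keep whole sentences until the character budget
    (Summly's reported limit is 400) would be exceeded."""
    sentences = re.split(r"(?<=[.!?])\s+", summary.strip())
    out = []
    used = 0
    for s in sentences:
        cost = len(s) + (1 if out else 0)  # +1 for the joining space
        if used + cost > budget:
            break
        out.append(s)
        used += cost
    return " ".join(out)
```

Cutting at sentence boundaries, rather than mid-word at character 400, is part of what makes a summary read as natural language rather than clunky computer-speak.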
When that acquisition was reported recently, everybody focused on the age of the founder — 17-year-old Nick D’Aloisio, who started the company and launched the app when he was 15.
The most impressive technology in Summly wasn’t created by a kid with acne, but by Inderjeet Mani, a man who has been researching natural language processing for significantly longer than D’Aloisio has been alive, and who has published 92 research papers on the subject.
Summly has some amazing technology, and I expect Yahoo to apply it broadly as an interface for using many of the company’s services.
MindMeld is Next
I predict that one of Google’s next acquisitions will be a company called Expect Labs, which is currently alpha-testing an iPad product called MindMeld.
I told you about this product back in September.
The MindMeld app is fundamentally different from Wavii and Summly in what you use it for. Instead of summarizing news and other content into concise, meaningful, natural-language nuggets, MindMeld eavesdrops on your conversations and shows you related information.
But in concept, the technology is doing something similar behind the scenes. It’s scanning content, separating out the meaning from the fluff, and using the important information to take action in the same way a person would.
MindMeld does something every human does — it listens to and “understands” people’s conversations, then adds to the conversation relevant information based on what it knows. You and I do this every day.
The difference is that what MindMeld “knows” is: all knowledge on the Internet.
MindMeld offers up search results, for lack of a better term, on the topics discussed. The idea is to add context and information about your conversation while you’re still having it. As you talk about a movie, information about the movie pops up (no more “what-else-was-that-actress-in?” madness — MindMeld will tell you).
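Expect Labs hasn’t published how MindMeld does this, but the basic loop is easy to imagine: keep a sliding window of recent utterances, tally which non-filler terms are currently “hot,” and turn the hottest ones into a search query. A toy sketch, with hypothetical class and method names:

```python
import re
from collections import Counter, deque

# Conversational filler to ignore -- a tiny illustrative list.
STOP_WORDS = {"the", "a", "an", "i", "you", "we", "it", "is", "was",
              "that", "this", "what", "which", "in", "on", "to",
              "and", "of", "so"}

class ConversationListener:
    """Keeps a sliding window of recent utterances and proposes a
    search query from the currently 'hot' terms -- a toy version of
    MindMeld-style anticipatory search."""

    def __init__(self, window=5, query_terms=3):
        self.window = deque(maxlen=window)  # only recent talk matters
        self.query_terms = query_terms

    def hear(self, utterance):
        self.window.append(utterance.lower())

    def suggest_query(self):
        words = re.findall(r"[a-z']+", " ".join(self.window))
        hot = Counter(w for w in words if w not in STOP_WORDS)
        return " ".join(term for term, _ in hot.most_common(self.query_terms))
```

The sliding window is the key design choice: as the conversation drifts from a movie to the weather, old terms fall out of the window and the suggested results drift with you.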
Expect Labs is mostly focused on creating an ecosystem around the technology. However, it plans to ship the iPad app, as well as versions for iPhone and Android phones, at some point in the future.
Google is a major investor in Expect Labs, and I think it’ll acquire the startup at some point.
So what’s all this about?
Researchers at Google, Yahoo, and no doubt Apple, Microsoft and others know that in order for interfaces to function like users want them to, they must interact like humans. And the only way to make them interact like humans is to get them to “think” like humans.
The key to thinking like a human, it turns out, is closely related to language. If you can get a computer to scan information, summarize it, and then act on that summary in various ways, including producing a concise explanation of what it’s about, then you can make phones, tablets and PCs that interact with people in a way that makes users happy.
Google is already working on an artificial human assistant in the form of Google Now and the Google Knowledge Graph, which no doubt Wavii will be assimilated into.
Apple is evolving Siri into a real artificial human, which is almost certainly why they bid on Wavii.
But I think Google’s project is most interesting.
Where to create a mind – Google
Inventor and futurist Ray Kurzweil has been writing for years about the coming “singularity,” when machine intelligence surpasses human intelligence. One of Kurzweil’s books is titled “How to Create a Mind: The Secret of Human Thought Revealed.”
Kurzweil isn’t just predicting the creation of the mind, he intends to actually do it. That’s why he was hired to work on machine learning and language processing as the new Director of Engineering at Google.
Kurzweil said as much in an interview, recounting how he was recruited by Google CEO Larry Page. The two had met to discuss Kurzweil’s “How to Create a Mind” book, and Kurzweil told Page he intended to start a company to, essentially, “create a mind.”
But Page convinced Kurzweil to do it at Google instead, because Google “uniquely” had the resources — the Knowledge Graph and the engineering expertise.
Self-driving cars. Augmented reality glasses. Google is working on all kinds of cool stuff. But I think the most breathtaking project at Google is its larger goal of creating a mind — one that’s human, capable of interacting with people in a natural way, but also a machine, with total information awareness and all the knowledge of mankind at its disposal.
And it will probably be free and used by all of us. How cool is that?