Artificial intelligence is often seen as a technology of the future, yet AI is already having an enormous effect on our lives. An in-depth look at AI, automation and robotics.
It’s a beautifully sunny day on the campus of UC-Berkeley, students rushing between classes, backpacks and smartphones everywhere. Here in the Robot Learning Lab it’s pure geek heaven. Students code software at desktops, while others assemble odd machines with wires and multi-colored boxes. Earning a spot at this elite university isn’t easy; UC-Berkeley accepted a mere 14.8 percent of applicants for the class of 2020. So this young crew will likely be tomorrow’s tech leaders and pioneers.
Despite all the promise, it appears that BRETT is struggling. Actually, even failing. BRETT is a robot, and he – or she, or it – is attempting to place a small wooden block into a small hole. Again and again, BRETT swings his arm over the opening, attempts to place the block, but fumbles. Just can’t make it fit.
However, as robots go, BRETT has a huge advantage: he can learn. (BRETT’s name is a playful acronym: Berkeley Robot for the Elimination of Tedious Tasks.) Every time BRETT swings his arm and fails, he calculates what went wrong. In essence he’s doing what we humans do: he’s failing, and in response he’s deciding how to improve the next effort.
I stand watching for about 15 minutes, and finally BRETT succeeds – a lengthy period given the simple task. But the astounding point is that the robot really did learn. He’s not merely a machine repeating a single task. He’s evolving, he’s improving himself.
If you’re a pessimist, BRETT’s slow success on this sunny day casts a shadow over those of us who aren’t robots: Will BRETT, or his next generation peers, eventually learn far better than the humans who program him?
That is, will there come a day when BRETT no longer needs us?
UC-Berkeley’s robot BRETT is capable of learning, if slowly.
Artificial Intelligence and Frankenstein
It’s the defining question of Artificial Intelligence: can a computer attain the human mind’s ability to conceptualize?
Can a computer not just power the system, but conceive of the system? Most significantly, can a computer look at itself critically, self-assess, and devise a novel solution?
As of 2017, the answer is very much no. AI is in its infancy, despite high-profile wins like Google DeepMind’s victory against a world-class player in Go, and Watson’s $1 million prize for winning Jeopardy. Compared with the human mind, computers are powerful but limited workhorses.
Computers, to be sure, have a vast advantage in raw processing power. IBM’s Watson can digest upwards of 500 gigabytes – or more than a million books – in a single second. Google’s DeepMind was “trained” to compete at Go by being fed 30 million examples. This massive compute power will only grow ever greater.
Still, Watson’s Jeopardy win was, at its core, simply data retrieval. And while DeepMind’s Go victory required considerably more cognitive agility, it wasn’t creative. It was advanced logical reasoning powered by brute-force compute power.
At the risk of flattering us humans, we not only have intelligence, we have meta-intelligence. We create new and unforeseen leaps in thinking; we turn the framework on its side, squeeze it, shatter it, invent something surprising. Developing artificial intelligence is so difficult because (among many reasons) we don’t know exactly how the human mind works. We are a mystery to ourselves, so how can we replicate ourselves?
And yet we see AI replicate aspects of ourselves every year. Humans are an eccentric lot, but the tasks we do are mostly reducible to routine. Assistive robots like the iRobot Roomba 650 help us clean the house. Self-driving cars are being developed by automakers from BMW to Hyundai. Autonomous drones will be delivering our online purchases. AI computers can now recognize images (in limited settings) and respond to natural language (awkwardly).
Indeed, the foundational tools of AI all perform some function akin to human thought. Machine learning uses an algorithm that “learns” to respond to changing inputs; it typically outputs a prediction or some higher-level summary.
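To make that concrete, here is a minimal, purely illustrative sketch in Python of the idea: a tiny learner that adjusts an internal estimate after every new input and outputs a prediction for what comes next. The numbers and learning rate are invented for the example; real machine-learning models learn far richer patterns.

```python
# A minimal, purely illustrative "learner": it updates an internal estimate
# after each new input, then outputs a prediction for the next input.
def make_predictor(learning_rate=0.3):
    estimate = 0.0
    def observe_and_predict(value):
        nonlocal estimate
        estimate += learning_rate * (value - estimate)  # adjust toward the new data
        return estimate                                  # prediction for what comes next
    return observe_and_predict

predict = make_predictor()
for reading in [10, 12, 11, 30, 32, 31]:  # the inputs shift partway through
    print(round(predict(reading), 2))      # the prediction adapts to the shift
```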
A neural net is software that’s loosely analogous to the network of neurons in the human central nervous system, including the brain. A neural net employs an adaptive architecture: layers of simple interconnected nodes whose connection strengths adjust with experience, handling multi-variable inputs and outputs. The neural net “learns” and can generate output from diverse, non-linear inputs – which is exactly what the human mind does.
Deep learning combines neural nets into a sophisticated responsive structure that can produce an abstract data model. Deep learning – powered by today’s ultrafast GPU computer processors – is AI’s furthest frontier. In a famous example of deep learning, AI pioneer Andrew Ng fed 10 million photos from YouTube videos into a neural network, enabling a computer to recognize cat images.
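For readers who want to peek under the hood, here is a tiny, self-contained neural net written in Python with NumPy that learns the XOR function by gradient descent. It is a toy of my own construction – not Ng’s experiment, and nothing like DeepMind’s scale – but it shows the core mechanic: connection strengths that adjust, example after example, until the outputs match the targets.

```python
import numpy as np

# A toy two-layer neural network learning XOR by gradient descent.
# Illustrative only: deep-learning systems stack many such layers and
# train on millions of examples using GPUs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden strengths
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output strengths
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)   # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y              # how wrong the net currently is
    # Backpropagation: nudge every connection strength to shrink the error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # approaches [[0], [1], [1], [0]]
```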
The AI advances enabled by these tools suggest that the fear, once the domain of pulp sci-fi, is now oddly plausible: an AI-equipped robot could at some point surpass a human. Filling the robot’s mind would be all the knowledge of the Library of Congress, Wikipedia, and billions of example patterns. Enabling its mind to “think” would be deep learning neural networks. This mind (if you can call it that) could synthesize learning based on past experience to create novel, unique outputs.
This self-learning robot could then spark the singularity, the turning point – inspired by a physics concept in which the known physical laws no longer apply – when AI transcends human intelligence. At that point super-intelligent machines could direct their own future, striding forward in ways we can no longer predict or control. In this scenario, robots could indeed “rebel.” Or, more accurately, be fully independent actors. To follow the scenario to its dystopian end, we humans would be mere Help Desk support for our technological overlords.
This fear is longstanding. We humans have a deep apprehension of being overtaken by some science we ourselves create. In Mary Shelley’s Frankenstein, published in 1818, the young scientist finds a way to give human awareness to his oversized lab experiment, and the monstrous creature escapes from the lab, wreaking havoc in his creator’s life.
This theme of the rebelling humanoid invention would repeat itself in countless sci-fi novels, movies and TV shows. George Jetson’s co-worker robot Uniblab turns out to be a duplicitous rival, tricking him into trash-talking his boss. In 2001: A Space Odyssey, HAL 9000 refuses to let the spaceman back in the ship, famously intoning “I’m sorry, Dave. I’m afraid I can’t do that.” More recently, the robot Ava in Ex Machina liberates herself, and on TV’s Westworld, the robots – abused by humans – turn the tables on their supposed masters.
Will artificial intelligence actually surpass its human creators? Noted futurist Ray Kurzweil forecasts the singularity for 2045, or about one human generation from now. Kurzweil’s film The Singularity is Near explores the possibilities. Many leading technologists dismiss the singularity as overheated sci-fi fantasy – or so distant as to be hardly worth discussing. The human mind, in their view, is so multi-faceted that no computer system will ever encompass it. Yet the history of science and technology suggests that the exponential leap is the norm. In 1927 it was global news that Lindbergh crossed the Atlantic in an airplane; his flight took 33 hours. In 1969 man walked on the Moon; Apollo 11 reached the Moon in just under 76 hours.
And even if the singularity is distant, or an impossibility, the surging progress in artificial intelligence will create myriad possibilities. What about genetic engineering with AI, in an effort to create super humans? What about some Frankensteinian combination of AI with the human brain? A USB port to our brains? AI combined with virtual reality to form an entirely new reality?
So while we can’t know exactly how artificial intelligence will affect human life, it’s certain that AI will affect us profoundly, and unpredictably.
In short, when in doubt, it’s best to be kind to your robot.
The trailer for Ray Kurzweil’s film The Singularity is Near.
Artificial Intelligence and Your Personal Life
Although AI is often viewed as otherworldly whiz-bang technology, it’s already pervasive in our everyday lives, even commonplace. Every time you Google something, use mapping software, shop on Amazon or speak to your smartphone’s voice recognition software, you’re using artificial intelligence. Every time you log on to Facebook and enjoy those lovable baby photos, AI shapes your experience.
All these applications leverage an algorithm, which at its most basic is a set of rules that form an analytic process, capable of responding to variable input. Today’s algorithms – especially those from giants like Amazon and Facebook – are responsive and constantly learning. They are programmed to harvest better responses from users; that is, results that serve the vendors who control the algorithm.
When you shop on Amazon, behind the scenes the algorithm is making fantastically advanced calculations – based on a huge database of buying patterns – about what to show you. It’s responding in real time to your trail of clicks. You might think that having a human personal shopping assistant is the deluxe choice; she knows trends, she knows you personally. But she can’t compete with Amazon, says Daniel Druker, CMO at Ayasdi, an AI vendor. Amazon is “using AI to figure out, from a million items, what’s going to be most interesting to you right now, from everything you’ve ever done. No human could ever do that.”
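Amazon’s actual engine is proprietary and vastly more elaborate, but the underlying idea – items bought by similar shoppers are likely to interest you – can be sketched in a few lines of Python. Every product name and purchase below is invented for illustration.

```python
import numpy as np

# Toy purchase matrix: rows are shoppers, columns are products.
# A 1 means that shopper bought that product. All data is invented.
products = ["tent", "sleeping bag", "headlamp", "novel", "coffee maker"]
purchases = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(viewed_item, top_n=2):
    """Rank other products by how often they are bought alongside the viewed one."""
    i = products.index(viewed_item)
    scores = [
        (cosine_similarity(purchases[:, i], purchases[:, j]), products[j])
        for j in range(len(products)) if j != i
    ]
    return [name for _, name in sorted(scores, reverse=True)[:top_n]]

print(recommend("tent"))  # likely ['sleeping bag', 'headlamp']
```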
On Facebook, very few of your friends show up in your feed; the Facebook AI algorithm knows you’d be overwhelmed if your feed was too long. So Facebook uses AI to sensitively respond to your signals about your circle of personal relationships, shaping your feed to promote a more effective emotional connection. In case you thought AI was cold and scientific, Facebook uses it to peer into your heart (and the hearts of Facebook’s other 1.23 billion daily users). And it’s powerful: It’s no exaggeration to say that Facebook AI influenced the recent election.
Despite AI’s enormous current impact, it continues to be seen as a magical technology that looms over a distant horizon. “It doesn’t matter how fascinating and cool and powerful the algorithm or the app,” says Babak Hodjat, founder and chief scientist at Sentient Technologies, an AI vendor. “Often when I go out and describe these systems, always people say, ‘Yeah, that’s smart and that’s cool, but it’s not AI.’”
The reason for this skepticism, he says, is that “AI is often, by the general public, not by the practitioners, confused as being human level general intelligence that includes emotional intelligence, creativity, autonomy, a whole slew of things.” Consequently, AI is “always lurking as the next big thing that we will invent,” Hodjat says. “I think that is going to continue being the case even 10-15 years from now.”
In truth, in many current applications, “AI is more powerful than humans,” he says. “You name that facet and I will tell you how that particular facet is implemented and is more powerful than humans.” At the very least, “AI is faster, and so the decision and action cycle for AI in today’s world is much faster than how humans react to the world.”
Still, he runs into the attitude: “It’s really cool but it’s not AI – it can’t tell me a funny joke.”
Artificial Intelligence: Behind the Curtain
The last few years have seen big leaps forward for AI. Adam Coates, Director, Baidu Silicon Valley AI Lab, points to many examples, including IBM’s Watson. The AI supercomputer can answer a complex question based on a natural language query. “That’s something that would have been very hard to do ten years ago,” he says. However, he notes, echoing Hodjat, “I also think there’s a lot of hype out there about what AI is and what it’s going to do.”
To be sure, “over the next few years a lot of problems that we’ve thought of as the core AI problems, that humans have been very good at and historically computers have been very bad at,” will see major advances, Coates says. “For example, recognizing objects in images or understanding speech and responding to spoken language, those are problems where deep learning and AI technology are going to keep getting better over the next few years.”
What functionality fuels these advances, and what functionality must AI attain to move forward?
First, an AI system – robot or computer – “needs to be able to learn by itself without human input,” says Pieter Abbeel, a professor at UC-Berkeley’s computer science department, and co-founder of Gradescope, an AI-based education startup.
Furthermore, “it also should be able to communicate and understand when it’s told things like, ‘maybe when you stack your block coming from this angle it will work more easily.’ If it can’t incorporate things like that we wouldn’t think of it as a real intelligence.”
Humans (in theory at least) can use past experiences to extrapolate and deal with new environments. Robots, much less so. It’s far easier to program a robot to assist in a limited environment; factory robots repetitively perform the same task.
What AI scientists want is to program robots to deal with related variations. “They will need to use experience they’ve had in the past and generalize to new situations that are not the same but similar, and understand the connection,” Abbeel says. “What I’m very interested in is how a robot can really learn to do things from scratch.” Learning from scratch is a particularly human ability; if a robot could truly fill its own blank slate, it could be an independent actor.
But AI robot “learning” can be defined many different ways, some of which are the mundane “trial and reward” style, akin to teaching a dog new tricks. AI reinforcement learning, for instance, is coding the robot’s software to learn from trial and error. UC-Berkeley’s BRETT robot uses reinforcement learning, based on receiving high or low reward after an action. “The variation in reward allows it to distinguish what’s desired and not desired and zone in on strategies that achieve high reward,” Abbeel says.
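The simplest form of this idea is tabular Q-learning, sketched below on a toy “reach the slot” task in Python. BRETT’s real training uses deep reinforcement learning over camera images and motor torques, so treat this only as a caricature of the reward-driven learning Abbeel describes.

```python
import random

# Tabular Q-learning on a toy task: an "arm" at positions 0..4 must reach
# the slot at position 4. A caricature of reinforcement learning, not
# BRETT's actual algorithm.
N_POS, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or right
Q = {(s, a): 0.0 for s in range(N_POS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly pick the action with the highest learned value; sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_POS - 1)
        reward = 1.0 if next_state == GOAL else -0.01  # high reward only on success
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy from positions 0..3 is to move right (+1).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_POS)])
```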
Similarly, AI scientists employ supervised learning, which feeds the computer many examples of a labeled input (these are cats, these are dogs), with a clear target output (is this a cat or a dog?). Unsupervised learning feeds the computer unlabeled data (say, photos of many animals), and the computer categorizes or otherwise defines a structural model for this data (these animals are much furrier than these other animals). Unsupervised learning, Coates says, is “an active area of research that is really important, because we know what humans do to a large degree is unsupervised learning.”
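Here’s a compact sketch of both styles in Python using the scikit-learn library, with invented animal measurements: a classifier trained on labeled cats and dogs (supervised), then a clustering algorithm handed the same data with the labels stripped away (unsupervised).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Toy animal data: [weight in kg, ear length in cm]. All numbers invented.
X = np.array([[4, 7], [5, 6], [3, 8],         # cats
              [25, 12], [30, 10], [22, 13]])  # dogs

# Supervised learning: every example comes with a label.
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]
classifier = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(classifier.predict([[4.5, 7.5]]))  # -> ['cat']

# Unsupervised learning: no labels; the algorithm finds the structure itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # e.g. [0 0 0 1 1 1]: two groups emerge without being told
```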
At the core of AI “learning” is the neural net, which, as noted earlier, is roughly analogous to the human mind. Like the mind, the neural net alters itself in response to more input. “You show enough of those [examples] then the neural net will adapt itself and say, Well, I guess for that input I needed that output, so the only way to do that is, I need to adjust some strengths of connections so that I get that mapping right,” Abbeel says. “So, in some sense, when you’re training a neural net, you have the computer learn its computer program rather than you having programmed it into it.”
Yet creating a neural net isn’t easy, Coates explains. “The big challenge is that we don’t have very good ideas for how to train a neural network from just a bunch of unlabeled and unstructured data. We don’t know how to quantify what is a good neural net versus what is a bad neural net in these kinds of tasks,” he says. “And when we discover that, that will be a big improvement. But we’re not there yet. And again, this is a far cry from human intelligence.”
Though AI isn’t human intelligence, AI leaders like Google’s DeepMind show how responsive AI learning can be. Performing well at, for instance, Tic-Tac-Toe requires no special intelligence; the game is so simple that a computer wins with brute force. In contrast, when DeepMind plays the vintage Atari video game Breakout, it “actually has to learn concept,” Abbeel says. As DeepMind learns to play, “it needs to learn a visual system. It needs to learn motor control,” in the form of joystick actions, he says. In real time, its neural network rivals human responsiveness to multiple variables.
DeepMind’s performance at Breakout demonstrates responsive and agile AI learning. Here’s a tool to help you build your own AI bot to play Atari games.
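If you want to tinker yourself, a bare-bones agent loop for Breakout might look like the sketch below, assuming OpenAI’s Gym toolkit and its Atari extras are installed (environment names and the exact step API vary by version). The agent here acts at random; a learning agent like DeepMind’s replaces the random choice with a trained neural network that maps screen pixels to joystick actions.

```python
import gym

# Bare-bones Atari Breakout loop (assumes gym with Atari support; the reset/step
# API shown here follows older Gym versions and may differ in newer releases).
env = gym.make("Breakout-v0")
observation = env.reset()          # raw screen pixels
total_reward, done = 0.0, False

while not done:
    action = env.action_space.sample()                  # random joystick action
    observation, reward, done, info = env.step(action)  # play one frame
    total_reward += reward                               # score from broken bricks

print("Episode score:", total_reward)
env.close()
```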
As neural net technology improves, AI learning becomes more lifelike. Still, Abbeel, as an AI futurist who dreams of what might be, imagines some day teaching a robot with all the nuance and personal insight of a human expert teaching a human student. Like, for instance, a professional basketball player coaching a novice: “You’d say, well, it’s good to keep your eyes on the rim while you shoot…using the backboard is going to be beneficial.” That is, with the countless variations of human responsiveness. He notes: “That is well beyond what is possible right now, but that’s the kind of things you would want in the future.”
But Does AI Lead to the Singularity?
The recent leaps in AI performance have produced countless cases of “human level” performance. But in most cases, only in single, isolated tasks.
Even passing the Turing Test, proposed by Alan Turing in 1950, remains elusive. A computer would pass the Turing Test if it can fool humans into thinking that it’s human; if it can imitate true human intelligence. In the test, human evaluators have a conversation, text only, with a computer. If the computer convinces a given number of evaluators that they’re speaking with a human, it has successfully played “the imitation game.” (The Imitation Game is also the title of a film about Turing’s code-breaking work in World War II.)
In 2014 an eloquent chatbot dubbed Eugene Goostman fooled one third of the judges at the University of Reading into thinking it was a 13-year-old Ukrainian boy. Yet AI professionals largely dismissed the event as pointless, a publicity stunt that runs counter to true accomplishment in AI. Over the years the Turing Test itself has lost some of its credibility; fooling a human via text wouldn’t necessarily demonstrate true intelligence.
The core AI challenge remains: while computers excel at specific tasks in limited settings, they remain unable to achieve the larger awareness of the human mind.
“What’s still fundamentally missing is putting it together into a larger, cognitive, architecture where [the AI system] does lots of things that humans are still really good at and computers are not,” says Zsolt Kira, a research scientist at Georgia Tech Research Institute.
A key limit of AI involves memory, he says. The human mind makes countless decisions involving what to focus on enough to remember and what to discard, “meta level things that are not really conscious decisions that we make, but certainly our brain does,” that an AI system cannot replicate, Kira says. Overcoming this difficulty would require solving problems in both long and short term memory. “A lot of these notions, right now, are really not being tackled – or, definitely not solved.”
In sum, it’s the mysterious synthesis that the human mind excels at that eludes artificial intelligence. Human intelligence, Hodjat notes, “is a very particular configuration that has been brought about through millennia of evolution. You might actually end up with a robot that talks and understands and can sense your feelings and be funny, but it will still disappoint.”
Coates speaks glowingly of the many leading-edge advances in AI, but as for the rise of truly sentient AI – the singularity? “I think that’s much further out,” he says. “Right now, there’s no realistic plan for how we build technology like that. A lot of the active areas of research are problems that point in that direction, but I still think it’s quite a distance away.”
For Abbeel, the singularity is an interesting question to ponder. The human brain, he notes, is essentially a combination of storage and compute power, with sensory inputs and outputs. If scientists were to assemble a digital system with equivalent compute, storage and sensory inputs/outputs, at that point “it’s really a matter of having a program that’s comparable to something intelligent that lives inside our brains. So, when that exists then it could be quite comparable to human intelligence.”
He conjures the futuristic notion of humans downloading skills directly to their brains, as in The Matrix. If humans could ever do this, he notes, then certainly AI systems could freely download skills and databases from other AI systems.
This scenario of linked super-systems may suggest a future AI breakthrough: while one single AI system has limits, what if several AI systems were linked together? If, say, a system like IBM’s Watson interfaced with a system like Google’s DeepMind? In theory, each unit in the AI super-network would add its learning tools – its array of neural networks – creating a merged entity that transcended human cognition.
Notes Abbeel: “I think there are some very interesting things that could happen that are kind of hard to wrap our heads around.”
I spoke with UC-Berkeley’s Pieter Abbeel about the future of artificial intelligence.
AI Produces Helpful Robots – Too Helpful
Artificial intelligence offers benefits in virtually any field, from medicine to education to finance. It’s likely that human life needs AI to reach its highest potential in achievement and well-being. The list of upsides is long: Self-driving cars never get distracted. Robots could allow the elderly to live healthier independent lives. AI-assisted data analytics will enable smarter, faster decisions.
Among the near term improvements, “AI is learning to understand people and how to interact with us on our terms,” Coates says. Deep learning algorithms will enable remarkable use of natural language in interacting with computers and robots. We’ll run our world with simple voice commands.
In fact, Coates is concerned that some areas of AI aren’t progressing fast enough.
“You have a problem that growth is very low in the world right now. You have countries with large retiring populations and shrinking labor forces,” he says. “To grow and be wealthier in the future, we actually need a huge boost in productivity. If AI comes sooner it will be a benefit to a lot of those places.”
Yet if AI’s promise is rosy, the potential threat of AI is a gaping maw of disruption. In 2016, American manufacturing produced more output than ever, yet US factories employ one-third fewer workers. If you hear someone say “we don’t build anything anymore,” you might correct them – we build more than we ever did. We just do it with automation.
“If Watson can answer Jeopardy questions, why can’t Watson answer every question that somebody calling into a call center might ask?” says Druker. “Those are knowledge worker jobs. It’s not a $200,000 a year job, but that’s fairly repetitive, level one support. There’s millions of people doing that.”
As AI proceeds, “you’re talking about the [highly paid] quant who’s used to running the show who now has a computer doing much of what they’ve done,” Druker says. “A large bank can have 10,000 people trying to track down money laundering or terrorist-related financial transactions. Computers can easily do 90% of that.”
A McKinsey report entitled Where Machines Could Replace Humans (And Where They Can’t, Yet) notes that “currently demonstrated technologies could automate 45 percent of the activities people are paid to perform, and that about 60 percent of all occupations could see 30 percent or more of their constituent activities automated, again with technologies available today.”
A McKinsey report identifies which types of occupations are most vulnerable to job losses from AI.
At first glance, an AI report from research firm Forrester is less worrying, forecasting that AI and Robots will replace 7% of American jobs by 2025. Yet that number doesn’t reflect the churn involved. The report predicts that “16% of US jobs will be replaced, while the equivalent of 9% jobs will be created,” hence the 7% overall replacement number.
AI advocates point out that AI will create jobs, not merely eliminate them, and this is certainly true. Yet the new jobs will be skilled jobs – the Forrester report lists robot monitoring professionals, data scientists, automation specialists, and content curators.
So lower-skilled displaced workers will need to retrain, which in most cases will require advanced education. That creates a dire problem for the larger percentage of the workforce that can’t afford this education.
This conclusion is detailed in a report from the University of Oxford entitled The Future of Employment: How Susceptible are Jobs to Computerization. The report describes “the current trend towards labour market polarization, with growing employment in high-income cognitive jobs and low-income manual occupations, accompanied by a hollowing-out of middle-income routine jobs.”
Included is a shocking prediction that has prompted great debate: it forecasts that 47% of US jobs are “highly automatable.” Other experts downplay the potential losses, noting that automation replaces certain tasks of jobs, but not always the entire job itself.
Perhaps that rosier view has merit, yet note that business spending on AI is expected to climb steadily. Surely this level of spending indicates that businesses expect to lower labor costs.
An IDC report forecasts heavy spending on AI in the years ahead.
Peruse the headlines and it seems there are few jobs a robot can’t take. MIT’s Technology Review reports that a robot bricklayer – or SAM, semi-automated mason – lays three times as many bricks as a human bricklayer, without those pesky overtime regulations. “The robot is able to do all of this using a set of algorithms, a handful of sensors that measure incline angles, velocity, and orientation, and a laser,” the article notes.
Farm workers will see fewer openings due to agriculture robots that, for now, are cost-effective only for certain tasks, but those tasks will expand. The graph from Lux Research looks at weeding lettuce, yet a similar graph could be drawn for many similar lower-skilled jobs:
While robots pick the fruit, their colleague robots will milk the cows. A Swedish company, DeLaval International, is debuting an automated milking machine that lets the bovine herd saunter up when ready (lured by feed) and be milked by a computerized system – no humans needed. The rise of self-driving vehicles threatens legions of transport workers, from long-distance truckers to Uber drivers; self-driving cars are currently cruising on streets near you.
White collar jobs aren’t immune: the Deloitte Insight Report forecasts that 39% of jobs in the legal field could be displaced by automation within ten years. In what might be the biggest blow to the human ego, a handful of companies are developing AI systems to compose music. Efforts range from startup Jukedeck to Sony’s Flow Machines. To my ears, the results prove robots don’t have much soul. Then again, the technology is still new.
In January 2017 I attended the Virtual Assistant Summit, in San Francisco, which focused on how AI systems would support or replace humans.
There I spoke with Bart Selman, a professor at Cornell University’s computer science department.
Having studied AI for twenty-some years, Selman is not optimistic about the future of human employment. He once thought that AI would erode low-skilled employment yet leave most white-collar jobs alone. His view on that has changed, he tells me. “You start analyzing a mid-level job that seems to require quite a bit of knowledge and skills, now it looks like [AI] might be able to do that job successfully,” he says.
Even as wages have fallen, partially due to automation, company valuations have risen to record highs. If this formula continues it may lead to social unrest, he opines.
Furthermore, he disagrees with the widely held opinion that society can save itself from AI-related job losses by stressing STEM education. “I’m not a big fan of STEM in education,” he says, explaining that too small a sliver of the workforce could ever find employment in science and technology. Data from the U.S. Department of Labor reports that only 5.9 percent of the U.S. workforce is employed in STEM occupations. “So society needs to step back and say, Are these [STEM jobs] real solutions, or are they solutions that sound good?”
The Future of Frankenstein
In a critical turning point in Frankenstein, the oversized monster-man, having attained human consciousness, discovers loneliness. He demands that his creator, the scientist Victor, create a female counterpart for him. This seizes Victor with a horrible worry: if he creates a female companion, she and the man-creature may create spawn, which would then imperil all of mankind. Victor, like today’s artificial intelligence developers, faces the unpredictable consequences of his creation.
Victor refuses the monster’s request but, alas, learns that reversing course is impossible once you’ve created independent, sentient life. The monster, enraged, comes after Victor, murdering his new wife and fleeing. Toward the end, Victor attempts retribution – pursuing the creature all the way to the Arctic – but dies in the pursuit. The creature is then grief-stricken over Victor’s death; the scientist was the only one who understood him. The monster decides he must kill himself and is last seen drifting off on an ice floe.
As today’s AI developers create systems with ever increasing independence, we have to wonder if the outcome will be happier than Victor’s. Of course Victor’s experiment plagued only his own life, while current AI advances will affect the entirety of culture and society. And so humanity peers forward, perhaps optimistically but with a definite unease. At this point we can only hope for the best.