The notion of a technological singularity brought about by advances in Artificial Intelligence (AI) has entertained academics, inventors, futurists, and journalists since the idea first surfaced around the middle of the last century. The singularity posits that, at some point in the near or distant future, we humans will create AI that outsmarts us, leading to all sorts of unhappy results for our kind.
Depending on whether you prefer utopian or dystopian science fiction, the singularity will either be absolutely fabulous or downright horrifying. Mathematician and science-fiction writer Vernor Vinge is thought to have coined the term “the singularity.” Vinge’s view in his 1993 essay is decidedly dystopian: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
While Vinge’s timeline may be a little off (we assure you this was written by a human, not a robot), the ideas he espoused have taken root and caused consternation for great minds such as Stephen Hawking, Ray Kurzweil, and Elon Musk, among many others. Meanwhile, AI marvels such as IBM’s Watson and Google’s AlphaGo have performed a variety of amazing tasks. Watson famously beat human champions on Jeopardy! in 2011, and AlphaGo defeated a top professional at the ancient and complex game of Go in 2016.
But AI is not all fun and games. IBM is applying Watson AI to solve major medical challenges. In a presentation at the World Economic Forum in Davos, Switzerland, on January 17, IBM CEO Ginni Rometty spoke about AI advancements. For example, she said Watson can now spot some forms of cancer better than a panel of human experts.
One of the first new machine-learning products from Google, according to Re/Code, is a jobs API designed to help companies looking to hire hundreds of workers at a time. Re/Code reports that CareerBuilder, Dice, and FedEx are planning to use the new service.
Anyone buying into the singularity might find it ironic (or scary) that one of the first big AI products is designed to help companies hire humans en masse. Anyone currently looking for a job who has already run up against the brick wall of resume bots is probably filled with dread at this idea. After all, AI is only as smart as the humans creating it (for now). Still, we’re getting better at it, technological advancements are accelerating to help us along, and competition in the AI space is heating up.
MIT’s Andrew McAfee and Erik Brynjolfsson tackle the technological, business, and ethical advances and challenges presented by the singularity in the July/August 2016 issue of Foreign Affairs. They write:
“The costs of processing, memory, bandwidth, sensors, and storage continue to fall exponentially. Cloud computing will make all these resources available on demand across the world. Digital data will become only more pervasive, letting us run experiments, test theories, and learn at an ever-greater scale. And the billions of humans around the world are growing increasingly connected; they’re not only tapping into the world’s knowledge (much of which is available for free) but also expanding and remixing it. This means that the global population of innovators, entrepreneurs, and geeks is growing quickly and, with it, the potential for breakthroughs.”
Where AI excels, according to McAfee and Brynjolfsson, is in jobs that involve pattern recognition: tasks such as recognizing street signs, parsing human speech, identifying credit fraud, and modeling how materials will behave under different conditions. They predict that “[J]obs that involve matching patterns, in particular, from customer service to medical diagnosis, will increasingly be performed by machines.”
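To make the pattern-recognition point concrete, here is a minimal sketch of the kind of supervised pattern matching the authors describe, applied to one of their examples: flagging possible credit fraud. The data, feature names, and model choice are illustrative assumptions, not anything taken from the Foreign Affairs article.

```python
# Illustrative pattern recognition: a classifier learns to flag suspicious
# transactions from labeled examples. The data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per transaction: [amount ($), hour of day, km from home]
legit = rng.normal(loc=[40.0, 14.0, 5.0], scale=[20.0, 4.0, 3.0], size=(500, 3))
fraud = rng.normal(loc=[400.0, 3.0, 800.0], scale=[150.0, 2.0, 300.0], size=(50, 3))

X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))  # 1 = fraudulent

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

# Fit a simple model and check how well the learned pattern generalizes
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The same recipe, swapping in different features and labels, underlies many of the pattern-matching jobs the authors list, from speech transcription to diagnostic triage.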
Back To The Real World
While it’s fun to ponder how AI and the singularity might transform our future, it’s far more meaningful for most of us working in IT to consider what’s happening right now as the AI rubber meets the corporate road. Infosys attempts to provide some answers in its January 2017 research report, “Amplifying Human Potential: Towards Purposeful Artificial Intelligence,” which is based on a survey of 1,600 business and IT decision makers worldwide conducted in November 2016 by Vanson Bourne.
According to the report, AI is perceived as a long-term strategic priority for innovation, with 76% of survey respondents citing AI as fundamental to the success of their organization’s strategy, and 64% believing their organization’s future growth is dependent on large-scale AI adoption.
The IT department is the leading adopter of AI in the enterprise, according to 69% of respondents, followed by operations (34%), business development (33%), marketing (29%), and commercial, sales, and customer services (28%).
How are organizations preparing for AI deployment and use? Here’s how it breaks down (multiple responses were allowed):
· Investing in supporting IT infrastructure (60% of respondents)
· Developing knowledge/skills (53%)
· Using external support to assist with planning (46%)
· Building AI into company ethos (43%)
· Using external support for knowledge gathering (40%)
· Gathering feedback from customers (32%)
· Assessing customer/industry approach (25%)
The Infosys report offers a fairly rosy outlook for employees. In 80% of cases where companies are replacing roles with AI, the study finds, organizations are redeploying or retraining staff to retain them in the business. Furthermore, 53% are specifically investing in skills development. Organizations with fewer AI-related skills are more likely to redeploy workers impacted by AI adoption, whereas those with more AI-related skills are more likely to retrain employees, according to the study. The leading industries that plan to retain and retrain their workers are fast-moving consumer goods (94% of respondents); aerospace and automotive (87%); energy, oil and gas (80%); and pharmaceutical and life sciences (78%).
Respondents say their organizations have deployed, or are planning to deploy, AI in the following technology areas (multiple responses were allowed):
· Big data automation (65% of respondents)
· Predictive/prescriptive analytics (54%)
· Machine learning (51%)
· Expert systems, software that leverages databases and repositories to assist decision-making, as sketched after this list (44%)
· Deep learning neural networks (31%)
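For readers unfamiliar with the term, here is a minimal, hypothetical sketch of what an expert system boils down to: a set of if-then rules evaluated against a fact base to support a decision. The domain (loan screening), the rules, and the thresholds are invented for illustration and are not drawn from the Infosys report.

```python
# Minimal rule-based "expert system" sketch: facts + if-then rules -> recommendations.
# The loan-screening domain and every rule below are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    recommendation: str

RULES: List[Rule] = [
    Rule("high_debt_ratio", lambda f: f["debt_to_income"] > 0.45, "refer to human underwriter"),
    Rule("thin_credit_file", lambda f: f["credit_history_years"] < 1, "request more documentation"),
    Rule("low_risk", lambda f: f["debt_to_income"] <= 0.30 and f["credit_history_years"] >= 5, "approve automatically"),
]

def evaluate(facts: dict) -> List[str]:
    """Return the recommendation of every rule whose condition matches the facts."""
    return [r.recommendation for r in RULES if r.condition(facts)]

print(evaluate({"debt_to_income": 0.25, "credit_history_years": 7}))
# -> ['approve automatically']
```

In practice, the rules would be authored by domain experts and the fact base drawn from corporate databases, which is what distinguishes expert systems from the statistical learning approaches elsewhere on this list.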
On average, the companies surveyed by Vanson Bourne for the Infosys report have invested $6.7 million in AI in the last year, and have been actively using AI for an average of two years.
Respondents in pharmaceuticals/life sciences are most likely to say their organizations have fully and successfully deployed AI technology (40%), while respondents in the public sector are most likely to say their organizations have no plans to use it (27%).
The Infosys report also explores which skills organizations will seek from future generations in the workforce. More than half of respondents (58%) cite active learning, and 53% cite complex problem-solving, as key skills. Other important skills are critical thinking (46% of respondents), creativity (46%), and logical reasoning (43%). The most important academic subjects respondents see as focus areas for future generations are computer sciences (72%), business and management (47%), and mathematics (45%).
When asked about which skills their current employees offer to implement and use AI, it’s clear most organizations have room for improvement. Roughly half of all respondents say their current workforce has the following necessary skills (multiple responses were allowed):
· Development skills (58% of respondents)
· Security skills (58%)
· Implementation skills (57%)
· Training skills (47%)
· Customer-facing skills (37%)
· No AI skills (10%)
Does this mean AI will usher in a period of sustained job growth and economic advancement? Accenture researchers analyzed 12 developed economies and found that AI has the potential to double their annual economic growth rates by 2035.
In their report “Why AI Is The Future Of Growth,” Accenture analysts Mark Purdy and Paul Daugherty write: “With AI as the new factor of production, it can drive growth in at least three important ways. First, it can create a new virtual workforce—what we call ‘intelligent automation.’ Second, AI can complement and enhance the skills and ability of existing workforces and physical capital. Third, like other previous technologies, AI can drive innovations in the economy. Over time, this becomes a catalyst for broad structural transformation as economies using AI not only do things differently, they will also do different things.”
McAfee and Brynjolfsson also suggest positive changes through AI, as long as we humans can avoid standing in our own way. “In times of rapid change, when the world is even less predictable than usual, people and organizations need to be given greater freedom to experiment and innovate,” they write in their Foreign Affairs article. “In other words, when one aspect of the capitalist dynamic of creative destruction is speeding up—in this case, the substitution of digital technologies for cognitive work—the right response is to encourage the other elements of the system to also move faster. Everything from individual tasks to entire industries is being disrupted, so it’s foolish to try to lock in place select elements of the existing order. Yet often, the temptation to try to preserve the status quo has proved irresistible.”
Perhaps if we could be more like AI, we wouldn’t have to overcome what the Infosys report identifies as the top five barriers to adoption: employee fear of change; a lack of in-house skills to implement and manage AI; a lack of knowledge about where AI can be applied; concerns about handing over control; and cultural acceptance.
Still, as this New York Times article notes, creating AI that can play a game like Go better than a human requires the work of thousands of humans. The article is likely to allay many fears because, as author George Johnson writes, “Computer scientists are experimenting with programs that can generalize far more efficiently. But the squishy neural nets in our heads — shaped by half a billion years of evolution and given a training set as big as the world — can still hold their own against ultra-high-speed computers designed by teams of humans, programmed for a single purpose and given an enormous head start.”