Many experts in the field firmly believe 2017 will be a breakout year for both artificial intelligence and robotics, since the two often go together. Spoiler alert: it's all good.
AI Makes Robots Smarter
Robots use an increasing number of sensing modalities, including taste, smell, sonar, IR, haptic feedback, tactile sensors, and range-of-motion sensors. They are also becoming better at picking up on facial expressions and gestures, so their interactions with humans become more natural, said Kevin Curran, IEEE senior member and professor of cyber security at Ulster University.
"Basically, AI is crucial for all their learning and adaptive behavior so they can adapt existing capabilities to cope with environmental changes. AI is key to helping them learn new tasks on the fly by sequencing existing behaviors," he said.
Karsten Schmidt, head of technology at the Innovation Center Silicon Valley for SAP Labs, echoed this sentiment. "In 2017, we will see AI gain greater acceptance and momentum as humans come to increasingly rely on, trust and depend on AI-driven decisions, and question them less. This will happen as a direct result of improved AI learning due to more usage and a broader user base, and as the quality and usefulness of AI software in turn improves," he said.
Meet Your AI Co-Worker
Many people fear losing their jobs to robots, but more than likely you will have a robot for a co-worker. Then again, if you've been in the workforce long enough, you've probably already had a robot for a co-worker, just in human form.
"In 2017, we are seeing a growing emergence of robots designed to operate alongside people in everyday human environments. Autonomous service robots that assist workers in warehouses, deliver supplies in hospitals, and maintain inventory of items in grocery stores are emerging onto the market," said Sonia Chernova, assistant professor at Georgia Tech College of Computing.
These systems still need humans because one thing robotics researchers continue to struggle with is manipulation: there is no substitute for the human arm and hand when it comes to picking things up and handling objects. "[Robot arms] have of course been used successfully for decades in manufacturing, but current techniques work reliably only in controlled factory environments, and are not yet robust enough for the real world," said Chernova.
This could lead to the rise of "AI supervisors," said Tomer Naveh, CTO of Adgorithms, an AI-based digital marketing platform. Robots have already taken on many of the labor-intensive, manual (read: boring) tasks of everyday life, but they will get smarter, and they will need AI to do it, he said.
"AI systems will get better at communicating their decisions and reasoning to their operators, and those operators will respond with new rules, business logic, and feedback that make it more and more useful in practice over time. As a result we will see people shifting from doing tasks by themselves, to supervising AI software on how to do it for them," he said.
That's actually a disturbing thought.
AI and robotics will slowly move into another area where human error is common: retail. To some degree there is already automation in optical scanners and retail tracking used by stores to manage inventory, but it will be considerably improved.
The retail industry, for example, has been unable to address the problem of non-scanned items at checkout, which accounts for 30% of retailers’ annual losses. Retailers typically discover the inventory loss well after the fact.
"AI is stepping in to address issues of this caliber across industries, and as a result, it’s often gathering just as much data as it’s processing. This resulting data is becoming a secondary benefit to businesses that use AI. AI Apps created to detect these non-scans are now also providing retailers with information about their origins, whether they’re fraudulent or accidental, and how customers and cashiers are gaming the system," said Alan O’Herlihy, CEO of Everseen, developer of AI products for point of sale systems.
And as consumers have positive experiences with drone deliveries, public opinion may go a long way towards opening up regulations for further drone use, said Jake Rheude, director of business development for Red Stag Fulfillment, an eCommerce fulfillment provider.
"Consumers are already fully on board with the concept of drone delivery. According to The Walker Sands Future of Retail 2016 Study, 79% of US consumers said they would be 'very likely' or 'somewhat likely' to request drone delivery if their package could be delivered within an hour. And 73% of respondents said that they would pay up to $10 for a drone delivery. This is an unprecedented level of acceptance for a new technology with so little real-world experience from consumers," he said.
AI in Your Home
Another prediction made by umpteen science fiction movies, usually with an alarmist tone, is that AI will come into the home in a big way. It already has, if you use Siri on an iPhone or Cortana on Windows 10. Gradually it will move into other devices, the experts predict.
"Alexa, Cortana and Siri are great, but they still lack the sophistication and accuracy to be relied upon as a utility. In 2017, advances in natural language processing and natural language generation will transform what digital assistants understand and how they analyze and respond with legitimately useful information. The era of just opening a related Wikipedia page is over," said Matt Gould, AI expert and co-founder of Arria NLG, which develops technology that translates data into language.
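At its simplest, the natural language generation Gould describes is data-to-text: turning structured numbers into a readable sentence. The sketch below is a toy illustration of that idea, not Arria's technology; the function name and data are hypothetical.

```python
def describe_sales(region: str, current: float, previous: float) -> str:
    """Toy data-to-text generation: render two numbers as a sentence.

    Real NLG systems add grammar handling, lexical variety, and
    document planning on top of this basic idea.
    """
    change = current - previous
    if change == 0:
        return f"Sales in {region} held steady at {current}."
    direction = "rose" if change > 0 else "fell"
    pct = abs(change) / previous * 100
    return f"Sales in {region} {direction} {pct:.1f}% to {current}."

print(describe_sales("EMEA", 1100, 1000))  # Sales in EMEA rose 10.0% to 1100.
```

Production systems layer far more linguistic machinery on top, but the core contract is the same: structured data in, fluent language out.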
To make these devices work optimally, they need to develop an emotional quotient, or EQ, predicts Dr. Rana el Kaliouby, CEO and co-founder of Affectiva, which develops emotion-recognition software. "We expect to see Emotion AI really come to the fore this year, and once AI systems develop social skills and rapport, AI interfaces will be more engaging and sticky, and less frustrating for their users, driving even wider adoption of the technology," she said.
She predicts that in the future, all of our devices will be equipped with a chip that can adapt our experiences to our emotions in real time, by reading facial expressions, analyzing tone of voice and possessing built-in emotion awareness. "The ability of technology to adapt to our mood and preferences could enhance experiences ranging from driving a car to ordering a pizza," she said.
And this should mean less typing, said Scott Webb, president of Avionos. "Physical interaction with hand-to-keyboard commands will give way to more organic input methods like voice and physical response as we move forward," he said.
It's been said before but is worth repeating: AI will improve security because, as in so many other cases, security AI won't be prone to the human failings of boredom, fatigue, illness and disinterest that so often cause a security lapse. It will also have much faster reaction times and much better recognition of unusual patterns.
"Machine learning and the models generated through processes around machine learning are helping enterprises analyze massive amounts of data and identify trends, anomalies, and things not detectable through standard modeling. Machine learning algorithms are helping security researchers dynamically identify threats, helping airlines improve the maintenance and reliability of their aircraft, and providing the backbone for self-driving cars to analyze data in real time and make decisions," said David Dufour, senior director of engineering at antimalware vendor Webroot.
That immediacy is needed for catching data breaches, as well. The average time to discover a network attacker is about five months, giving attackers plenty of time to achieve their goals, said Peter Nguyen, director of technical services at LightCyber, which develops behavior-based security software.
"Finding signs of an attacker is difficult and demands the use of AI. Instead of trying to encounter, identify and block threats by their known characteristics, the way to find an active attacker is through their operational activities. Using machine learning, it’s possible to learn the good behavior of all users and devices and then find anomalies. Then, AI can be focused to find those anomalies that are truly indicative of an active attack," he said.