For decades we’ve had Isaac Asimov’s three laws of robotics but really haven’t needed them. These laws anticipated the birth of artificial intelligence and many of the concerns that people like Bill Gates and Elon Musk have recently raised about intelligent machines becoming an existential threat to the human race.
Last week IBM CEO Ginni Rometty added the concepts of trust and information ownership to these laws, signaling that the time to implement controls over these emerging intelligent machines is upon us.
Asimov’s Three Laws
While the three laws as worded refer to robots, much has changed since Asimov first published them in 1942; today a more appropriate term would be AI. The rest of the wording, however, has held up remarkably well. The laws state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence, as long as such protection does not conflict with the first or second law.
These laws essentially ensure that intelligent machines remain permanently subordinate to humans and can't effectively be used as weapons against them. However, a robot could still be ordered to destroy other robots, or even itself, without violating any of these initial laws.
But these laws didn’t address the one remaining huge concern facing us today: potential massive unemployment as people are replaced in their jobs by ever-more-intelligent robots.
IBM addressed that concern last week.
IBM’s Fourth Law
Asimov actually created a fourth law (often called the Zeroth Law), but it was never broadly adopted, and many felt it couldn't reasonably be adhered to. We'll cover that in a bit.
Here is what IBM announced last week:
- The purpose of AI is to augment human intelligence: AI systems should be designed to work with humans and expand the potential of everyone. AI should make us all better at our jobs. At IBM, we are investing in initiatives to help the global workforce gain the skills needed to work in partnership with these technologies.
- Data and insights belong to their creator: IBM clients’ data is their data, and their insights are their insights. Client data and the insights produced on IBM’s cloud or from IBM’s AI are owned by IBM’s clients.
- AI systems must be transparent and explainable: For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used to train those systems and, most importantly, what went into their algorithm’s recommendations. This is key to ensuring people understand how AI arrives at a conclusion or recommendation. Companies advancing AI also have an obligation to monitor for and correct bias in the algorithms themselves, as well as bias caused by the human-influenced data sets their systems interact with.
Now, two of these elements apply to the firm or entity creating the AI rather than to the AI itself, addressing information ownership and the integrity of the creation process. But the first element, were it translated into a law, would instead read: “Unless it was to protect the life and safety of a human, or where no human were reasonably available, no robot/AI will be designed to replace humans but instead will be designed to augment and collaborate with them.”
In short, IBM’s fourth law ensures that AIs come into the market as assistants and collaborators, not replacements for humans. This is consistent with Asimov’s first three laws, because not doing this would put humanity at risk.
Anticipating the Risk
One of the big problems we often have with new technology is that we don’t really think about the risks until after they become painfully obvious. Take smartphones, for instance: we should have known that moving from BlackBerrys to iPhone-like smartphones would result in a lot of deaths, because people were using them while driving and touchscreen phones demand far more attention. But we didn’t act until after there were a massive number of deadly accidents, and even then we only reduced, rather than eliminated, the exposure.
IBM is attempting to avoid what could be a very painful similar mistake by getting out ahead of the AI wave and working to eliminate one of the biggest related threats: massive unemployment.
This is arguably better than Asimov’s proposed fourth law, which was: “A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general.”
It would be almost impossible for any AI to determine, beyond an abstract level, that its action to harm a human would benefit humanity. However, by ensuring that robots/AIs collaborate with humans rather than replace them, the end goal of helping humanity is met.
As a result, I think IBM has put forth a viable fourth law of AI/robotics.