Companies like IBM that pioneered artificial intelligence (AI) and have the power and reach of a multinational also have the resources to create an internal ethics board, and IBM has what I believe is a storied history of behaving ethically.
Most companies lack both IBM's resources and the internal controls needed to take an ethics effort seriously, and even companies that do have the resources and have formed internal ethics efforts have fallen short.
Without resources, you can’t create a viable ethics board, and without protection, the ethics board will consist of more form than substance. This makes a company appear to behave ethically while having no real impact on its behavior, which could expose it to avoidable liability and government intervention.
To address this problem, the Institute for Experiential AI at Northeastern University is driving an impressive effort to create an independent AI Ethics Advisory Board, addressing the need for a low-cost ethics board that can stand up to pressure and provide accurate ethical assessments.
Let’s explore the need for an external ethics board:
Ethics refers to moral behavior. Google’s scrapped ethical guideline “Don’t be evil” showcases a common problem with ethics: It can be fluid in companies large and small.
I have been both an internal auditor and a competitive analyst. In internal audit, I was a compliance officer, and while working for one of the most ethical companies in the world, I ran into a huge number of internal crimes, often against governments, that wiped out entire divisions and did massive damage to the company. As a competitive analyst, part of my training was reviewing how a variety of companies were ultimately destroyed by the unethical behavior of groups like mine that had not been restricted from acquiring competitive information unethically.
Ironically, our competitive intelligence group was replaced by a peer group that did not have these ethical constraints. That group outperformed us tactically and, in turn, was instrumental in the eventual destruction of the company. Traumatic as it was, the lesson hit hard enough that when I was later asked to create a set of false records, I refused, despite the risk to my career. Decisions like the one I refused to make proved instrumental in the organization's destruction after I left.
While it may seem like disregarding ethics is a fast path to advancement, that view ignores the increasing risk posed by whistleblowers and redundant digital records, which can make unethical behavior nearly impossible to conceal and instead highlight the attempt at concealment. The resulting risk can massively outweigh any related financial benefit.
In short, strong ethics are simply good business.
Why the independent AI ethics board is critical
An AI makes decisions at machine speeds and, by its very nature, creates a record of not only what it did, but why it did it.
An AI can perform years of work in minutes, so an unethical AI can do far more damage than any human could possibly achieve. Therefore, it is critical that companies have access to an ethics resource that is itself incorruptible, well-staffed, and well-funded, so that its recommendations are unbiased and can help prevent, rather than cover up, unethical behavior.
If a problem does occur, and the ethics board’s recommendations were followed, the board provides powerful protection against accusations that the problem was intentional rather than accidental. Penalties are typically more than an order of magnitude greater if a company is found to have behaved unethically on purpose.
Therefore, the AI Ethics Advisory Board hosted by the Institute for Experiential AI at Northeastern University is a critical resource for companies that want to ride the AI wave without being buried because their AI behaved unethically.