I had a nice chat recently with Steve Mills, chief AI ethics officer at BCG, about its “AI Ethics Code of Conduct,” which governs the firm’s efforts to develop and sell AI systems to its customers.
AI represents one of the biggest opportunities for massive automation, but it also carries substantial risk because it operates at machine speed: it’s unlikely that any human operator could mitigate the damage an unethical AI could cause. And as we know from the robotics market, when a problem results, liability flows right back to the creator of the system.
When BCG started this effort, it was surprised by the lack of existing finished work on ethical AI, so it had to compile and create a process that could not only define the ethical rules surrounding its AI efforts but also come up with ways to ensure those rules would be followed and enforced once in place.
Let’s look at how BCG created its AI ethics rules:
Building out of company values
Ethical rules need to be built out of accepted company values or principles. BCG has seven of these that are unusually well defined, which I’m sure significantly helped it create its AI ethics rules.
These are the corporate values: transparency and explainability; accountability; fairness and equity; assuring privacy; minimizing social and environmental adverse impacts; assuring security and robustness; and assuring that AI efforts enhance, but do not replace, human workers.
These values were then combined with BCG’s research into the industry’s ethical work, which looked at IEEE and OECD ethical baselines, resulting in 30 internal standards and policies that must be followed.
BCG has implemented an internal, audit-like compliance process run by a group of 10 people representing engineering, security, AI, compliance, and other areas. This group samples low-risk efforts infrequently but approaches 100% review of high-risk projects, helping mitigate problems before they become larger issues.
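That risk-tiered sampling approach can be sketched in a few lines. This is a minimal illustration, not BCG's actual process: the article only states that low-risk work is sampled infrequently and high-risk work approaches 100% review, so the tier names and the low/medium sampling rates below are hypothetical.

```python
import random

# Hypothetical review rates per risk tier. Only the high-risk rate
# (~100%) is stated in the article; the others are illustrative.
REVIEW_RATES = {"low": 0.10, "medium": 0.50, "high": 1.00}

def select_for_review(projects, rng=random.random):
    """Return the subset of projects the compliance group should audit.

    High-risk projects are effectively always selected; lower-risk
    projects are spot-checked according to their tier's rate.
    """
    return [p for p in projects if rng() < REVIEW_RATES[p["risk"]]]

projects = [
    {"name": "pricing-model", "risk": "high"},
    {"name": "chat-summarizer", "risk": "low"},
]
```

With an ordinary random source, the high-risk project is always reviewed while the low-risk one is picked only about one time in ten.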
The resulting AI is structured to provide information on the broader ramifications of a decision, not just to recommend the decision in a vacuum. For instance, where you might typically close an underperforming grocery store, if that store were the only one in a depressed area, the application would surface those dependencies so the decision maker could weigh the financial benefits against the harm closing the store might do to the local community.
In the end, the resulting AI is not meant to make decisions for managers but to ensure those managers make the most informed decisions possible.
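The idea of pairing a recommendation with its wider impacts, rather than presenting it in a vacuum, can be sketched with the grocery-store example above. BCG has not published its data model, so every name and field here is a hypothetical illustration of the pattern.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommended action plus the broader ramifications surfaced
    alongside it, so a manager can weigh both. Illustrative only."""
    action: str
    financial_impact: float                 # e.g., annual savings in dollars
    community_impacts: list = field(default_factory=list)

def recommend_store_closure(store):
    rec = Recommendation(
        action=f"Close {store['name']}",
        financial_impact=store["annual_loss"],  # savings from closing
    )
    # Surface the dependency instead of hiding it behind the numbers.
    if store["only_store_in_area"]:
        rec.community_impacts.append(
            "Only grocery store in a depressed area; closing it would "
            "remove local food access."
        )
    return rec

store = {"name": "Store #12", "annual_loss": 250_000, "only_store_in_area": True}
rec = recommend_store_closure(store)
```

The point of the structure is that the financial case and the community dependency arrive together; the human still makes the call.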
Simulation and engagement
Like any complex effort, it isn’t wise to release the resulting AI into a work process untested.
While BCG hasn’t spun up fully simulated test environments for its AI projects yet, it does test them in a secure sandbox, so any problems can be discovered and corrected before the related AI becomes commercially viable.
When creating an AI for a client, BCG has found that it needs to stay engaged throughout the project to ensure its success, and it favors working with firms that can internally understand what AIs can and cannot do.
Making employees better
A key part of the AI effort at BCG is its focus on enhancing rather than replacing employees, which is critical to the safe and effective deployment of AI technologies.
Enhancing employees allows them to grow to become more valuable, whereas replacing them can destroy the economy in an entire region, depending on how many are laid off, and create long-term image problems for the company and its brand.
BCG has 1,500 people, 600 of whom are data scientists, in its AI practice. Assuring the ethics of the AI this practice helps create makes the future world just a little bit safer and more empathetic than it would otherwise be. We need this ethical focus on AI to ensure that its damage potential is never realized and we see only the good it can do. BCG’s practice helps with that, and I think it helps make for a better future world.