I’m at Qualcomm’s annual analyst event in California this week, and they have been talking about their solution for artificial intelligence (AI) at the edge.
But edge AI has one huge problem we are currently ignoring: it carries a far higher likelihood of bias.
IBM Research appears to have a viable way to significantly reduce that risk through the IBM Causal Inference 360 Toolkit, an approach that could significantly increase the effectiveness of any at-scale AI inference effort that is at risk of bias. And since every AI effort is at risk of bias, it is a fascinating answer to what could otherwise become a source of huge AI accuracy problems over time.
See more: Addressing Bias in Artificial Intelligence (AI)
The Problem With Distributed AI
The issue with distributed AI is that most of these efforts are hybrid in nature. To reduce latency and network traffic, the AI at the edge sends only the portion of the data it determines is needed to the central AI, which then makes the actual decision. This means the central AI never sees the entire data stream, only what that remote, far more limited AI has been trained to send.
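A minimal sketch of how that filtering can go wrong (all field names here are hypothetical, invented for illustration, not drawn from any real deployment): the edge node forwards only an allow-list of fields its developers thought mattered, so anything outside that list never reaches the central AI, no matter how causally relevant it turns out to be.

```python
# Hypothetical edge-side filter: only fields the developers anticipated
# are ever forwarded to the central AI for the actual decision.
EDGE_ALLOW_LIST = {"temperature", "vibration", "timestamp"}

def edge_filter(reading: dict) -> dict:
    """Keep only the fields the edge node was configured to forward."""
    return {k: v for k, v in reading.items() if k in EDGE_ALLOW_LIST}

reading = {
    "temperature": 71.2,
    "vibration": 0.03,
    "timestamp": "2023-05-01T12:00:00Z",
    "operator_override": True,  # causally relevant, but never forwarded
}

forwarded = edge_filter(reading)
print(sorted(forwarded))                  # ['temperature', 'timestamp', 'vibration']
print("operator_override" in forwarded)   # False: the central AI never sees it
```

The central model cannot weigh a signal it never receives, which is exactly the failure mode in the shipping story that follows.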
A few years ago, I was briefed on a big problem a shipping company had with timely deliveries. The company is highly automated and instrumented, allowing far fewer people to manage a very complex operation with impressive effectiveness. It is also a leader not only in cloud-based AI training and inferencing for its customers but in its own use of AI, suggesting that at some point many of the related management decisions will be made by ever-more-capable AIs rather than by managers or administrators.
The company employed a data forensics firm, which I interviewed, to figure out what the problem was, because its own systems reported there was no problem, even though many customers were clearly upset that their packages were turning up days or even weeks later than expected.
What that firm found was that, for much the same reason that remote inference AIs don't send all their data, the distribution centers only sent the data that the people who set up the system thought needed to be sent. Because these centers weren't, by policy, supposed to ship directly to individuals, but rather to the smaller remote distribution centers that had that responsibility, shipping information to customers wasn't reported. A manager at one of these central distribution centers had decided to ship directly, thinking it would be faster because that centralized resource was closer to large numbers of customers than any of the smaller centers, but it wasn't set up for that work. Not only was the data never sent, but those shipments took far longer to arrive.
So the company had a big problem, particularly for anything perishable like food or critical like medications, that only end customers were aware of. And because the company couldn't see the problem, it couldn't fix it.
The core issue that carries over into this distributed computing model is that if the remote inference AI fails to send some critical piece of information, the centralized AI will either act, or fail to act, in error. And if you are talking about a critical problem, the AI will not only start making mistakes, it will make them at machine speed.
See more: The Artificial Intelligence Market
Causal Inference 360 as a Solution
Causal Inference 360 is an approach that analyzes the AI model and makes a far more effective determination of what information should and should not be considered.
The creation of this effort appears somewhat connected to the problems with facial recognition engines trained on incomplete or biased data sets that included things like skin color as a determining factor in human-versus-animal decisions. It forces the developer to look only at causal data, and to ensure that all causal data is included in the process, so the resulting decision can be better assured. It is designed specifically to address the common mistake of treating correlated data as causal, but it should also help ensure that all causal information is included in the resulting model.
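The correlated-versus-causal trap the toolkit targets can be sketched with a toy simulation (this is my own illustration, not code from Causal Inference 360, and every probability in it is invented): a hidden confounder makes a non-causal feature look strongly predictive of the outcome, and adjusting for the confounder reveals the feature has no effect at all.

```python
import random

random.seed(0)
n = 100_000
rows = []
for _ in range(n):
    z = random.random() < 0.5                  # hidden confounder (e.g., lighting)
    x = random.random() < (0.9 if z else 0.1)  # feature correlated with z, no causal effect
    y = random.random() < (0.8 if z else 0.2)  # outcome driven by z alone
    rows.append((z, x, y))

def p_y_given(x_val, z_val=None):
    """Empirical P(y | x [, z]) from the simulated data."""
    sel = [y for z, x, y in rows if x == x_val and (z_val is None or z == z_val)]
    return sum(sel) / len(sel)

# Naively, x looks strongly "predictive" of y (roughly 0.74 vs 0.26 here):
print(p_y_given(True), p_y_given(False))

# Stratifying on the confounder (backdoor adjustment) shows x has no effect
# (both conditional rates land near 0.8 when z is true):
print(p_y_given(True, True), p_y_given(False, True))
```

A model trained on the naive correlation would lean heavily on `x`, exactly the way a biased facial recognition model leans on skin color; conditioning on the true cause removes the feature's apparent influence.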
The result should be a far more effective AI, one that makes fewer mistakes, and that could be critical for the endpoint inferencing capability Qualcomm and others are developing, assuring that endpoints send all the causal information. While no approach is perfect and any approach can miss a critical piece of information, this one should substantially reduce the number and magnitude of any mistakes that are made. And if the method remains in place to forensically analyze any resulting mistakes, the system will gain accuracy over time.
Solving AI Bias
We are entering a critical period in which we are turning more and more things over to AIs, many of them hybrid in nature, potentially resulting in unintended bias from capturing either too much merely correlated information or too little causal information.
IBM’s Causal Inference 360 effort, once applied, should significantly reduce, and over time potentially eliminate, the probability of biased decisions being created at scale, preventing potentially catastrophic outcomes.
It is a fascinating approach to what is undoubtedly a massive problem for our AI future, regardless of the type or nature of the AI deployed.