Tuesday, March 19, 2024

Could AI Fix the Real Problem Behind the CIA IoT Leak?


A lot of us have been looking at the recent WikiLeaks drop of Central Intelligence Agency (CIA) files related to hacking Internet of Things (IoT) and personal technology devices.

It includes a great deal of data, so it is easy to get lost in the weeds regarding what these tools actually do. But apparently they can break into devices and make the resulting intrusion look like Russian work, which I believe is problematic in any number of ways.

I think the real problem with this is the CIA’s risk assessment process.

We typically expect public companies, not the CIA, to favor relatively small tactical benefits over relatively large strategic exposures. I think this decision-process problem is far larger than just the CIA and reflects on security in general. I also think that an AI system like IBM's Watson could, if placed in the decision process, help prevent bad decisions like this.

Parsing the Problem

In my opinion, there were clearly a number of questionable decisions behind the creation of these tools, which are largely based on vulnerabilities in U.S. companies' products. The first bad decision was to exploit the vulnerabilities rather than report them and have them corrected. It should be obvious that the national exposure from exploits like these likely exceeds the benefit of hacking any one individual's phone. Put differently, the trade-off was between keeping our spouses, children, politicians and families safe and gaining the ability to hack a foreign agent.

But the CIA isn't tasked with keeping domestic citizens safe; it is tasked with gathering intelligence. The decision may have been easy because the agency traded away something it wasn't responsible for to get something it was.

When you realize this, you might conclude that the core of this problem lies with how the CIA is measured. This decision seems to be the direct result of training executives not to see the bigger picture.

I believe this also showcases that the CIA hasn’t adjusted to its new reality. Given the number of recent leaks, the new reality is that the CIA apparently can’t keep a secret. That means the tools they create to exploit others are likely being stolen and used against U.S. citizens — possibly including CIA operatives. This would suggest that the more prudent path, until the leaks can be decisively addressed, would be to create no more tools like this. They represent an excessive risk to the agency and the country. But that thought process does not seem to have been internalized.

Finally, with control of these tools lost, anyone using them could appear to be the CIA. That might allow a third party to orchestrate a hack that could potentially trigger a declaration of war from a state like North Korea, which might shoot first and ask critical questions later.

In my opinion, all of this suggests that the CIA shouldn't be creating tools like this. Instead, it should be working with the industry to correct security exposures in order to keep the nation safer. It should acquire hacking tools from the outside, both to ensure it isn't significantly making the hacking problem worse and to position itself as a defensive rather than an offensive organization, at least until it can effectively address the leaks.

Managing Risk

It seems obvious that if you can’t contain a weapon, you shouldn’t create it — unless you have a viable and ready defense that can mitigate it.

This is the real problem with the CIA leak: the agency has lost control of its own tools. If not quickly mitigated, this kind of problem could do damage to the U.S. that outstrips anything a hostile entity could accomplish alone. And that could be dire both for the U.S. and for the CIA as an agency.

But it isn't only the CIA that has misjudged the balance of risk and reward. Volkswagen with its diesel-emissions scandal, Samsung with the Galaxy Note 7 and Takata with its faulty airbags all made decisions that looked good tactically but put the entire company at risk strategically. In each case, the tactical benefit was overwhelmed by the strategic risk.
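To make that imbalance concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration; the point is only that a modest, near-certain tactical gain is easily dwarfed once a large strategic cost is weighted by even a small probability of it being realized.

```python
# A back-of-the-envelope model of the tactical-vs-strategic trade-off.
# Every number below is hypothetical, chosen only to illustrate the shape
# of the comparison.

def expected_strategic_cost(p_worst_case: float, cost_if_realized: float) -> float:
    """Expected value of the strategic downside: probability times cost."""
    return p_worst_case * cost_if_realized

tactical_benefit = 50_000_000       # hypothetical: value of the short-term win
p_worst_case = 0.05                 # hypothetical: chance the decision blows up
strategic_cost = 10_000_000_000     # hypothetical: recalls, fines, lost trust

downside = expected_strategic_cost(p_worst_case, strategic_cost)
print(f"Tactical benefit:        ${tactical_benefit:>15,.0f}")
print(f"Expected strategic cost: ${downside:>15,.0f}")
print("Worth it" if tactical_benefit > downside else "Not worth it")
```

Even with a 95 percent chance of getting away with it, the expected downside here is ten times the upside, which is roughly the arithmetic Volkswagen, Samsung and Takata each got wrong.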

It is my fervent hope that eventually artificial intelligence (AI) systems like IBM’s Watson will be positioned to help executives keep from making foolish decisions like this. But in the meantime, I think executives likely need regular training on balancing reasonable risk and reward (not to mention an ethics refresher course).

From a broader perspective, what I envision with Watson is a communications monitor that raises an alert when an executive appears to be about to do something unwise. Alerts might say, "That could be considered sexual harassment; please reconsider the wording and do not send," or "What you propose would be considered illegal in these countries and, if caught, would result in an estimated cost of $Xb with a 30 percent probability of jail time," or even "That has to be the stupidest thing any executive has ever done; seriously consider working for our largest competitor, a gold-star recommendation will be in your inbox shortly."
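To show roughly where such a monitor would sit, here is a toy sketch in Python. It is emphatically not Watson or any real product API; the rule labels and trigger phrases are invented placeholders for the natural-language analysis a real system would have to supply.

```python
# A toy stand-in for the communications monitor described above.
# This is NOT Watson or any real API; the rules and phrases below are
# invented placeholders for genuine natural-language understanding.

RISK_RULES = {
    "possible harassment": ["date me", "sweetheart"],
    "possible legal exposure": ["off the books", "delete the records"],
    "possible regulatory fraud": ["only during testing", "detect the test"],
}

def review_draft(text: str) -> list[str]:
    """Return warnings for a draft message; an empty list means no flags."""
    lowered = text.lower()
    return [
        f"{label}: please reconsider the wording before sending"
        for label, phrases in RISK_RULES.items()
        if any(phrase in lowered for phrase in phrases)
    ]

# The monitor sits between "compose" and "send":
draft = "Make sure the engine runs clean only during testing."
for warning in review_draft(draft):
    print("ALERT:", warning)
```

The interesting design question is the same one the column raises: the check has to run before the message is sent or the decision is executed, because after the fact the strategic damage is already done.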

I agree with Ginni Rometty that Watson, properly applied, could significantly improve decision-making (or get poor decision-makers to change companies) before the next scandal occurs.

