
The Critical Importance Of The Intel DARPA GARD AI Initiative


We are awash in fake news, and it is adversely impacting elections and people ranging from politicians to executives to everyday people. But the growing concern is what it will do to our increasing population of ever more capable artificial intelligence deployments.

Because these AIs increasingly control the world around us. And while humans make bad decisions at a relatively glacial pace compared to computers, AIs make decisions at machine speed. This creates the opportunity for future cascading catastrophes directly tied to bad information, both intentional (as in an attack) and unintentional (because the information is people-sourced and people are flawed).

To address this problem, Intel is taking the lead, along with the Georgia Institute of Technology.

Let’s talk about that this week. 

GIGO

Back when I first started studying Computer Science, the instructors used to remind us of the saying “Garbage In, Garbage Out,” though, in my case, it mainly referred to some typing error on my punch cards. 

I hated punch cards. Looking back, Computer Science wasn't great: everything was batch, turnaround was glacial, and by the time you got an answer to a math problem, you could have worked it out yourself by hand, without even needing a calculator, in less time. But those machines could handle what seemed then to be massive amounts of data and provide at least some insight into what the data was telling you.

But if the data was corrupted, so was the answer. One field mistake could have you arguing that women were huge football fans and men were Oprah's largest dedicated audience. Just one binary mistake that switched the sexes, and suddenly you were in front of executives looking like an idiot.
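To make that concrete, here is a minimal sketch with made-up survey records and field names of my own invention, showing how one flipped bit in the field that encodes sex inverts the conclusion:

```python
# Minimal, hypothetical sketch of GIGO: the data and field names are invented.
# Each record stores sex as a single bit: 0 = male, 1 = female.
records = [
    {"sex": 0, "watches_football": True},
    {"sex": 0, "watches_football": True},
    {"sex": 1, "watches_football": False},
]

def football_fans_by_sex(rows, flip_sex_bit=False):
    """Count football fans per group; optionally simulate a one-bit data error."""
    counts = {0: 0, 1: 0}
    for row in rows:
        sex = row["sex"] ^ 1 if flip_sex_bit else row["sex"]
        if row["watches_football"]:
            counts[sex] += 1
    return counts

print(football_fans_by_sex(records))                     # {0: 2, 1: 0} -- correct
print(football_fans_by_sex(records, flip_sex_bit=True))  # {0: 0, 1: 2} -- garbage out
```

The analysis code is identical in both runs; only one bit of input changed, and the report now says the opposite of the truth.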

What happened to me was this: I worked in Internal Audit at a multinational, and it made no sense that, at year-end, we had to guess what annual sales were going to be, because by the time we made the announcement the company knew the exact answer; we just hadn't processed the data yet.

The practice was to uplift the actual numbers we had calculated by around 20%.  So a bunch of us worked to fix the timing problem, and that year the internal report that had always been about 20% low was accurate, only to have a Controller then uplift that number, making us 20% over and costing the CFO his job.

Now we have been aggressively moving to replace people with AIs, particularly in areas like accounting. Still, if those AIs get bad information, get a bad directive, or are intentionally messed with, we are going to be in a world of hurt, and not just financially. Jobs, corporate performance, lives (think of the current pandemic and its logistics issues), and even national defense will increasingly depend on AIs getting the accurate information they need so that we can trust both the advice they provide and the decisions they make.

And it isn't only that people can make coding and data-entry mistakes; we also have hostile players, from criminals to disgruntled employees to hostile governments, actively trying to mess things up. We need to get in front of this because, if we fall behind, we are pretty much screwed as AIs scale.

The GARD Initiative

GARD (Guaranteeing AI Robustness against Deception) is a government-driven program, run under the DARPA (Defense Advanced Research Projects Agency) umbrella with industry in a leadership role, to do precisely that: get ahead of this problem and craft robust defenses against those who want to compromise our data and put our jobs and lives at risk.

It will focus both on ensuring data integrity and on defending against any adversarial attempt to alter or corrupt the algorithms used to interpret that data. Granted, it doesn't address the corruption of the individual interpreting the result, but that is a known problem that predates computers, and policies going back decades exist to deal with corrupted officers, executives, and other employees.
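The program's scope goes well beyond this, but the most basic building block on the data-integrity side is simply knowing when a dataset has been silently altered. Here is a minimal, entirely hypothetical sketch of that idea (the records and the digest check are my illustration, not anything from GARD):

```python
import hashlib

def fingerprint(dataset_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw dataset contents."""
    return hashlib.sha256(dataset_bytes).hexdigest()

# A dataset as it looked when it was reviewed and approved (invented values).
reviewed = b"customer_id,region,annual_sales\n1001,NA,250000\n1002,EU,310000\n"
trusted_digest = fingerprint(reviewed)

# Later, someone -- or something -- quietly edits one number.
tampered = reviewed.replace(b"310000", b"910000")

if fingerprint(tampered) != trusted_digest:
    print("Dataset changed since review; do not feed it to the model.")
```

A check like this catches silent tampering after review; it does nothing about data that was wrong or poisoned to begin with, which is where the harder research lives.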

People haven't been sitting idly by, but the defenses currently in existence are designed to address pre-defined adversarial attacks and can't adjust to attacks beyond their designed parameters. This shortfall means an attacker using a novel attack, or designing an attack to circumvent a known defense, could still do substantial damage.
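To see why that shortfall matters, here is a toy sketch of my own (not code from GARD or from any deployed defense): a linear model whose decision is provably safe against the perturbation budget its designers anticipated, and an attacker who simply exceeds that budget:

```python
import numpy as np

# Toy illustration: a defense tuned to one pre-defined attack budget,
# and an attacker who steps outside that budget. Entirely hypothetical.

w = np.array([1.0, -2.0, 0.5, 1.5])   # a frozen toy linear model

def score(x):
    return float(x @ w)               # score > 0 -> "approve", score < 0 -> "reject"

def certified_robust(x, designed_epsilon=0.2):
    """For a linear model, a per-feature perturbation of at most epsilon can move
    the score by at most epsilon * sum(|w|); a margin larger than that guarantees
    the decision cannot flip -- but only within the designed budget."""
    return abs(score(x)) > designed_epsilon * np.abs(w).sum()

def attack(x, epsilon):
    """Gradient-sign style attack: push every feature against the score."""
    return x - epsilon * np.sign(w)

x = np.array([0.6, -0.2, 0.2, 0.2])    # clean input, score = 1.4
print(score(x), certified_robust(x))   # 1.4 True: certified safe for epsilon <= 0.2

print(score(attack(x, 0.2)))   #  0.4: the anticipated attack fails to flip the decision
print(score(attack(x, 1.0)))   # -3.6: a larger, unanticipated attack flips it anyway
```

The defense is perfectly sound inside the parameters it was designed for and says nothing at all outside them, and that is exactly the gap GARD is meant to close.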

GARD is designed to approach this problem differently, covering a far broader range of attack types and being far more agile in its ability to both identify and respond to an attack.

I see this as an AI-driven defense against AI-targeted threats, and as critical given the growing potential for an AI-driven attack that could circumvent existing defenses.

Wrapping Up: GARD Is Critical

We are entering a new age, but we already see huge problems with the massive proliferation of false information and equally massive attempts to corrupt information-gathering systems with that false data. To combat this, DARPA has defined a program called GARD, and both Intel and Georgia Tech have stepped up to help make us safe. Here's hoping this effort is successful, because if it isn't, the outcome could be extremely dire.
