Thursday, April 18, 2024

Microsoft and Partners Producing Competition to ‘Attack’ AI Security


REDMOND, Wash. — The artificial intelligence (AI) and security communities are being invited to participate in a competition to “attack” critical AI systems.

The GitHub-based competition runs Aug. 6 to Sept. 17 and is being produced by Microsoft and a group of partners: NVIDIA, CUJO AI, VMRay, and MRG Effitas, according to a blog post this week by Hyrum Anderson, principal architect of trustworthy machine learning (ML) at Microsoft.

Dubbed the Machine Learning Security Evasion Competition (MLSEC), the contest will reward participants who “efficiently evade” AI-based malware detectors and AI-based phishing detectors.

Security for AI is Lagging 

Anderson notes that machine learning is “powering critical applications” across industries as well as infrastructure and cybersecurity. 

Microsoft is reporting an “uptick of attacks” on commercial AI systems that could “compromise the confidentiality, integrity, and availability guarantees” of the systems, he says. 

Anderson cites several resources that reinforce the need for the competition and to “democratize knowledge to secure AI systems.”

Specifically, ML can be “manipulated to achieve an adversary’s goals,” as detailed in several ML security case studies from MITRE ATLAS that Anderson cites.

When it comes to AI, however, security is the “biggest hurdle” facing companies, an issue cited by over 30 percent of senior IT leaders, according to a CCS Insight survey Anderson references.

Yet 25 of 28 organizations surveyed do not have the right tools to secure their AI systems, according to a Microsoft survey.

Raising Awareness

In academia, researchers have studied “how to attack” AI systems for about two decades, but “awareness among practitioners is low,” according to Anderson.

Christopher Cottrell, AI red team lead at NVIDIA, says there’s “a lack of practical knowledge about securing or attacking AI systems in the security community.”

MLSEC will “highlight how security models can be evaded by motivated attackers and allow practitioners to exercise their muscles attacking critical machine learning systems used in cybersecurity,” Anderson says.


MLSEC Details

  • MLSEC runs from Aug. 6 to Sept. 17, 2021. Registration remains open for the duration of the competition.
  • Winners will be announced on Oct. 27, 2021, and contacted via email.
  • Prizes for first place, honorable mentions, and a bonus prize will be awarded in each of the two tracks.

Two Competition Tracks

1. Anti-phishing evasion track: play the role of an attacker and attempt to evade a suite of anti-phishing models

2. Anti-malware evasion track: change an existing malicious binary in a way that disguises it from the anti-malware model
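The core idea behind both tracks can be illustrated with a toy example. The sketch below is not one of MLSEC’s actual defender models; it is a hypothetical hand-weighted linear “phishing scorer” with invented feature names, weights, and threshold. It shows the evasion principle both tracks test: rewriting an input so it keeps its function but sheds the surface cues the model relies on.

```python
# Hypothetical toy phishing detector: a linear score over surface features
# of a URL. Weights and threshold are invented for illustration only.

def features(url: str) -> dict:
    """Extract a few surface cues a naive detector might use."""
    return {
        "length": len(url),
        "digits": sum(c.isdigit() for c in url),
        "hyphens": url.count("-"),
        "at_sign": int("@" in url),
    }

WEIGHTS = {"length": 0.02, "digits": 0.10, "hyphens": 0.15, "at_sign": 1.0}
THRESHOLD = 1.0  # flag as phishing when the score exceeds this

def score(url: str) -> float:
    f = features(url)
    return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)

# A lure URL packed with suspicious cues, and an "evasive" rewrite of the
# same lure with the cues removed (both addresses are fabricated examples).
original = "http://paypa1-login-secure-update.example@198.51.100.7/acct-94"
evasive = "http://paypalloginsecure.example.com/acct"

print(score(original) > THRESHOLD)  # True: flagged
print(score(evasive) > THRESHOLD)   # False: slips past the toy model
```

The anti-malware track applies the same principle to binaries: the attacker must preserve the file’s malicious behavior while altering the features the model scores.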

AI Market Growth

In 2021, the AI market is projected to grow 16.4% to $327.5 billion, according to IDC.

By 2024, the market is expected to reach $554.3 billion, with a compound annual growth rate (CAGR) of 17.5%.

“The global pandemic has pushed AI to the top of the corporate agenda, empowering business resilience and relevance,” said Ritu Jyoti, program VP for AI research at IDC.

“AI is becoming ubiquitous across all the functional areas of a business.”

