
Artificial Intelligence Security

The Basics of AI Security

Adversaries are developing algorithmic and mathematical approaches to degrade, deny, deceive, and/or manipulate AI systems. As organizations continue to operationalize AI across mission sets, often to automate processes and decision making, they must implement defenses that impede adversarial attacks.

In broad terms, adversaries employ five types of attacks to debase AI systems, evade their defenses, and exfiltrate their predictions and private information.

As agencies seek methods to limit or eliminate these attacks, it’s important to recognize that AI threats are highly asymmetric:

  • Adversaries can reap rewards with a single successful attack, while defenders must implement controls that are resilient to all attacks.
  • Defenders often need as much as 100 times the compute power of an attacker.

AI Security Services from Booz Allen

AI Risk and Vulnerability Assessments

With AI risk and vulnerability assessments, we identify attack vectors and vulnerabilities, evaluate risk and exposure, provide recommendations for remediation, align proposed guardrails with federal policy requirements, and establish a roadmap for implementation.

AI Security Engineering

Our AI security engineering best practices, tools, and automations can be incorporated into model development pipelines, allowing AI security to be seamlessly integrated into existing continuous integration/continuous delivery processes.
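
As a purely illustrative sketch of what such a pipeline check might look like (not Booz Allen’s actual tooling), the following pytest-style gate fails a build when accuracy on perturbed inputs degrades beyond an agreed budget; the synthetic data, scikit-learn model, and random-noise “attack” are stand-ins that a real pipeline would replace with its own model and a crafted adversarial attack.

import numpy as np
from sklearn.linear_model import LogisticRegression

def test_model_robustness_gate():
    # Train a stand-in model on synthetic data (a real pipeline would load its own).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Placeholder "attack": random noise; substitute a crafted adversarial attack in practice.
    X_perturbed = X + rng.normal(scale=0.2, size=X.shape)
    clean_acc = model.score(X, y)
    perturbed_acc = model.score(X_perturbed, y)

    # Fail the build if robustness degrades beyond the agreed budget.
    assert perturbed_acc >= clean_acc - 0.15, f"robustness gate failed: {clean_acc:.2f} -> {perturbed_acc:.2f}"

test_model_robustness_gate()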

AI Security Research

Through client-directed AI security research, we help agencies rapidly and precisely define and explore approaches to address real-world AI security concerns.

AI Red Teaming

Using a suite of general-purpose and/or bespoke penetration testing tools, we quantify model robustness, and thus risk exposure, by exercising client models in a simulated (i.e., non-production) environment.

Differential Privacy

Differential privacy (DP) enables the statistical use of sensitive datasets while safeguarding the privacy of personally identifiable information (PII) and other protected data.

Watch the video to learn how differential privacy is a powerful strategy for protecting sensitive information.

Video Transcript

Imagine you’re a medical researcher examining patient outcomes for a particular treatment. You need access to a vast amount of data to correlate outcomes with treatment methods, but you don’t want to risk violating any patient’s privacy rights. Differential privacy can help.

Differential privacy is a powerful strategy for protecting sensitive information. Following the aggregation of large amounts of data for a machine learning model, a calibrated amount of random ‘noise’ is added to the collected information. This minimizes the risk of revealing information about individuals from the dataset.

Sometimes, malicious actors will try to extract certain data about a topic or person from a dataset using a “membership inference attack.” They do this by training a classifier to discriminate between outputs of models that specifically include or exclude these data. The random noise added to the data reduces the chances of a successful membership inference attack by obscuring the detailed information of each individual. Differential privacy strikes a balance that safeguards individual information through obfuscation while still enabling the creation of accurate and useful predictive models in aggregate.

As the adoption and scope of AI increases, the urgency to protect individual privacy demands the use of privacy-protecting tools. Differential privacy is key to enabling individuals to safeguard their personal information while allowing data to be used in useful and predictive models. Booz Allen can help. We work closely with our clients across the federal and commercial sectors to develop and deploy machine learning methods that prioritize privacy, safety, and security. Find out more today.
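
To make the idea of calibrated noise concrete, here is a minimal sketch assuming the standard Laplace mechanism applied to a simple counting query; the “patient outcome” data, threshold, and epsilon values are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, threshold, epsilon):
    # A counting query has sensitivity 1: adding or removing one record changes
    # the true count by at most 1, so Laplace noise with scale 1/epsilon yields
    # epsilon-differential privacy for this query.
    true_count = int(np.sum(values > threshold))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative "patient outcome scores"; smaller epsilon means more noise and stronger privacy.
outcomes = rng.normal(loc=50, scale=10, size=1000)
print(dp_count(outcomes, threshold=60, epsilon=0.5))   # noisier, more private
print(dp_count(outcomes, threshold=60, epsilon=5.0))   # closer to the true count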

Case Studies

Static Malware Detection and Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus Systems

Challenge: Understand the weaknesses of commercial and open-source machine learning malware detection models to targeted injections of bytes under threat of an adversary with only black-box query access.

Solution: An efficient binary search that identifies 2,048-byte windows whose alteration reliably changes detection model output labels from “malicious” to “benign.”

Result: A strong black-box attack and analysis method capable of probing vulnerabilities of malware detection systems, resulting in important insights toward robust feature selection for defenders and model developers.
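
As a rough, hedged sketch of how such a black-box search might proceed (the toy query_detector below is a stand-in for black-box query access to a real model, and the exact procedure used in the study may differ), one can repeatedly bisect a file’s byte range and test which half’s alteration flips the label:

def query_detector(file_bytes: bytes) -> str:
    # Toy stand-in detector: flags the file when a fake "signature" is present.
    return "malicious" if b"\xde\xad\xbe\xef" in file_bytes else "benign"

def find_sensitive_window(file_bytes: bytes, min_window: int = 2048):
    """Return byte offsets (lo, hi) of a window whose zeroing flips the label."""
    data = bytearray(file_bytes)
    lo, hi = 0, len(data)
    while hi - lo > min_window:
        mid = (lo + hi) // 2
        for start, end in ((lo, mid), (mid, hi)):
            candidate = bytearray(data)
            candidate[start:end] = bytes(end - start)   # overwrite the window with zeros
            if query_detector(bytes(candidate)) == "benign":
                lo, hi = start, end                     # recurse into the half that matters
                break
        else:
            return None                                 # neither half alone flips the label
    return lo, hi

# Example: a 64 KiB "file" with the signature buried at offset 40,000.
sample = bytearray(65536)
sample[40000:40004] = b"\xde\xad\xbe\xef"
print(find_sensitive_window(bytes(sample)))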

A General Framework for Auditing Differentially Private Machine Learning

Challenge: More accurately audit the privacy of machine learning systems while significantly reducing computational burden.

Solution: Novel attacks to efficiently reveal maximum possible information leakage and estimate privacy with higher statistical power and smaller sample sizes than previous state-of-the-art Monte Carlo sampling methods.

Result: A set of tools for creating dataset perturbations and performing hypothesis tests that allow developers of general machine learning systems to efficiently audit the privacy guarantees of their systems.
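
For intuition, the sketch below shows the generic arithmetic behind empirical privacy auditing rather than the specific attacks developed in this work: an attack’s observed true and false positive rates over repeated trials translate directly into a lower bound on the epsilon actually delivered.

import math

def empirical_epsilon_lower_bound(true_positives, false_positives, trials):
    # Observed attack rates over repeated membership-inference-style trials.
    tpr = true_positives / trials
    fpr = false_positives / trials
    bounds = [0.0]
    # Pure epsilon-DP requires TPR <= exp(eps) * FPR and (1 - FPR) <= exp(eps) * (1 - TPR);
    # any observed violation lower-bounds the privacy loss actually incurred.
    if tpr > 0 and fpr > 0:
        bounds.append(math.log(tpr / fpr))
    if tpr < 1 and fpr < 1:
        bounds.append(math.log((1 - fpr) / (1 - tpr)))
    return max(bounds)

# Example: flagging the target's presence 90% of the time with a 20% false-alarm
# rate implies the mechanism leaks at least epsilon of about 2.08.
print(empirical_epsilon_lower_bound(true_positives=90, false_positives=20, trials=100))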

Adversarial Transfer Attacks with Unknown Data and Class Overlap

Challenge: Quantify the risk associated with adversarial evasion attacks under a highly realistic threat model, which assumes adversaries have access to varying fractions of the model training data.

Solution: A comprehensive set of model training and testing experiments (e.g., more than 400 experiments on mini-ImageNet data) under differing mixtures of “private” and “public” data, as well as a novel attack that accounts for data class disparities by randomly dropping classes and averaging adversarial perturbations.

Result: Important and novel insights concluding, counterintuitively, that adversarial training can increase total risk under threat models in which adversaries have gray-box access to training data.
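
The following sketch illustrates the perturbation-averaging idea in simplified form; the surrogate models, the loss_gradient helper, and the perturbation budget are assumptions for illustration, not the study’s exact procedure.

import numpy as np

def averaged_transfer_perturbation(x, surrogates, loss_gradient, epsilon=0.03):
    # Each surrogate is assumed to be trained only on the data and classes the
    # attacker holds; loss_gradient(model, x) returns d(loss)/d(input).
    directions = [np.sign(loss_gradient(model, x)) for model in surrogates]
    avg_direction = np.sign(np.mean(directions, axis=0))   # consensus FGSM-style direction
    return np.clip(x + epsilon * avg_direction, 0.0, 1.0)  # keep inputs in a valid range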

AI Security Research Papers

Since 2018, Booz Allen has been a leader in advancing the state of the art in machine learning methodologies that safeguard systems against adversarial attacks. Methods range from adversarial image perturbation robustness for computer vision models and differentially private training to behavior-preserving transformations of malware.


Partnerships

Booz Allen’s partnerships with leading vendors enable us to bring our mission expertise together with the market’s most innovative AI security tools.

HiddenLayer

HiddenLayer offers a security platform to safeguard AI and machine learning models without requiring access to raw data and algorithms. Booz Allen’s AI and cybersecurity professionals use HiddenLayer's software to augment AI risk and vulnerability assessments, strengthen managed detection and response, and enhance AI security engineering.

NVIDIA

NVIDIA is the premier provider of processors optimized for AI and deep learning tasks. Booz Allen teams with NVIDIA to support high-performance compute needs, such as those used in our research.

Contact Us

Contact Booz Allen to learn more about advanced AI security strategies to safeguard trusted information and AI from adversarial attacks.