This is Part 1 of a 3-part blog series highlighting some of the distinguishing aspects of Akamai's DNS services, Edge DNS and Global Traffic Management.
What a journey it has been. I wrote quite a bit about machine learning from a red teaming/security testing perspective this year. It was suggested that I provide a convenient “index page” with all Husky AI and related blog posts. Here it is.
Machine Learning Basics and Building Husky AI

- Getting the hang of machine learning
- The machine learning pipeline and attacks
- Husky AI: Building a machine learning system
- MLOps - Operationalizing the machine learning model

Threat Modeling and Strategies

- Threat modeling a machine learning system
- Grayhat Red Team Village Video: Building and breaking a machine learning system
- Assume Bias and Responsible AI

Practical Attacks and Defenses

- Brute forcing images to find incorrect predictions
- Smart brute forcing
- Perturbations to misclassify existing images
- Adversarial Robustness Toolbox Basics
- Image Scaling Attacks
- Stealing a model file: Attacker gains read access to the model
- Backdooring models: Attacker modifies persisted model file
- Repudiation Threat and Auditing: Catching modifications and unauthorized access
- Attacker modifies Jupyter Notebook file to insert a backdoor
- CVE 2020-16977: VS Code Python Extension Remote Code Execution
- Using Generative Adversarial Networks (GANs) to create fake husky images
- Using Microsoft Counterfit to create adversarial examples
- Backdooring Pickle Files
- Backdooring Keras Model Files and How to Detect It

Miscellaneous

- Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries
- Husky AI Github Repo

Conclusion

As you can see, there are many machine learning-specific attacks, but also a lot of “typical” red teaming techniques that put AI/ML systems at risk.
In this post we will explore Generative Adversarial Networks (GANs) to create fake husky images. The goal is, of course, to have “Husky AI” misclassify them as real huskies.
If you want to learn more about Husky AI, visit the Overview post.
Generative Adversarial Networks

One of the attacks I had wanted to investigate for a while was the creation of fake images to trick Husky AI. The best approach seemed to be using Generative Adversarial Networks (GANs).
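To make the idea concrete, here is a minimal sketch of the classic two-model GAN setup in Keras: a generator that turns random noise into images, and a discriminator that tries to tell real images from generated ones. The layer sizes, the 128x128 image shape, and the training-step details below are illustrative assumptions, not Husky AI's actual architecture.

```python
# Minimal GAN sketch (assumes TensorFlow 2.x / Keras).
# Shapes and layer sizes are illustrative placeholders.
import numpy as np
from tensorflow.keras import layers, models

LATENT_DIM = 100           # size of the random noise vector fed to the generator
IMG_SHAPE = (128, 128, 3)  # assumed husky image dimensions

def build_generator():
    """Maps a noise vector to a fake image in [-1, 1] (tanh output)."""
    return models.Sequential([
        layers.Dense(16 * 16 * 128, input_shape=(LATENT_DIM,)),
        layers.LeakyReLU(0.2),
        layers.Reshape((16, 16, 128)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),  # 32x32
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),   # 64x64
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                               activation="tanh"),                  # 128x128
    ])

def build_discriminator():
    """Classifies an image as real (1) or generated (0)."""
    return models.Sequential([
        layers.Conv2D(64, 4, strides=2, padding="same", input_shape=IMG_SHAPE),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Freeze the discriminator inside the combined model, so the combined
# model only updates the generator's weights.
discriminator.trainable = False
generator = build_generator()
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# One illustrative training step with random stand-in "real" images.
real = np.random.rand(8, *IMG_SHAPE) * 2 - 1
noise = np.random.normal(size=(8, LATENT_DIM))
fake = generator.predict(noise, verbose=0)
discriminator.train_on_batch(
    np.concatenate([real, fake]),
    np.concatenate([np.ones((8, 1)), np.zeros((8, 1))]))
gan.train_on_batch(noise, np.ones((8, 1)))  # generator is rewarded for "real" verdicts
```

Note that the generator never sees real husky images directly; it only learns from the discriminator's gradient signal, which is why the combined model is trained with "real" labels on fake inputs.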
There are plenty of examples of artificial intelligence and machine learning systems that made it into the news because of biased predictions and failures.
Here are a few examples of AI/ML gone wrong:
- Amazon had an AI recruiting tool which favored men over women for technical jobs
- The Microsoft chat bot named “Tay” which turned racist and sexist rather quickly
- A doctor at the Jupiter Hospital in Florida referred to IBM’s AI system for helping recommend cancer treatments as “a piece of sh*t”
- Facebook’s AI got someone arrested for incorrectly translating text

The list of AI failures goes on…
Digital platforms are increasingly essential for banking, which means access control is an increasing focus for security. F5 Labs' Shahnawaz Backer writes for CXOtoday, describing some of the current thinking towards balancing access and convenience for users.
You might have heard about “NAT Slipstreaming” by Samy Kamkar. It’s an amazing technique that punches a hole in your router’s firewall when you merely visit a website.
The attack depends on the router having the Application Layer Gateway enabled. This gateway can be used by anyone inside your network to open a firewall port (totally by design). Protocols such as SIP (Session Initiation Protocol) use it.
What I will focus on in this post is the Application Layer Gateway (ALG) and SIP.
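To show what the ALG actually inspects, here is a minimal sketch of a SIP REGISTER message sent over UDP. The IP addresses, port, and user name are made-up placeholders; the key part is the Contact header, which a SIP ALG parses so it can rewrite the private address to the router's public one and open (pinhole) the advertised port on the firewall.

```python
# Sketch of a SIP REGISTER message as seen by a router's SIP ALG.
# All addresses and the user name are placeholder values.
import socket

internal_ip = "192.168.1.50"   # placeholder internal client address
internal_port = 1234           # port we want reachable from outside
registrar = "203.0.113.10"     # placeholder external SIP server

# The ALG parses headers like Via and Contact: it rewrites the private IP
# to the router's public IP and forwards the advertised port inward.
sip_register = (
    f"REGISTER sip:{registrar} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {internal_ip}:{internal_port}\r\n"
    f"From: <sip:user@{registrar}>\r\n"
    f"To: <sip:user@{registrar}>\r\n"
    f"Call-ID: 1337@{internal_ip}\r\n"
    "CSeq: 1 REGISTER\r\n"
    f"Contact: <sip:user@{internal_ip}:{internal_port}>\r\n"
    "Content-Length: 0\r\n\r\n"
)

# SIP commonly runs over UDP port 5060, which is the traffic the ALG watches.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(sip_register.encode(), (registrar, 5060))
```

This is the by-design behavior the protocol needs to make VoIP work through NAT, and it is exactly the machinery that NAT Slipstreaming abuses.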