Machine Learning Attack Series: Overview
What a journey it has been. I wrote quite a bit about machine learning from a red teaming/security testing perspective this year. It was suggested that I provide a convenient “index page” with all Husky AI and related blog posts. Here it is.
Machine Learning Basics and Building Husky AI
- Getting the hang of machine learning
- The machine learning pipeline and attacks
- Husky AI: Building a machine learning system
- MLOps - Operationalizing the machine learning model
Threat Modeling and Strategies
- Threat modeling a machine learning system
- Grayhat Red Team Village Video: Building and breaking a machine learning system
- Assume Bias and Responsible AI
Practical Attacks and Defenses
- Brute forcing images to find incorrect predictions
- Smart brute forcing
- Perturbations to misclassify existing images
- Adversarial Robustness Toolbox Basics
- Image Scaling Attacks
- Stealing a model file: Attacker gains read access to the model
- Backdooring models: Attacker modifies persisted model file
- Repudiation Threat and Auditing: Catching modifications and unauthorized access
- Attacker modifies Jupyter Notebook file to insert a backdoor
- CVE-2020-16977: VS Code Python Extension Remote Code Execution
- Using Generative Adversarial Networks (GANs) to create fake husky images
- Using Microsoft Counterfit to create adversarial examples
- Backdooring Pickle Files
- Backdooring Keras Model Files and How to Detect It
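To give a flavor of the pickle backdooring posts above, here is a minimal sketch of the underlying technique. This is my own illustration of how `__reduce__` lets a pickled object schedule an attacker-chosen call at load time, not the exact payload from the post; a real attacker would substitute something like `os.system` for the harmless `eval` used here.

```python
import pickle

class BackdooredObject:
    """Illustrative object that executes code when unpickled."""
    def __reduce__(self):
        # __reduce__ tells pickle which callable to invoke at load time.
        # A harmless eval makes the effect observable; an attacker would
        # use os.system or similar instead.
        return (eval, ("6 * 7",))

# "Persist" the object, as a model file might be persisted to disk.
payload = pickle.dumps(BackdooredObject())

# The victim loads the file -- the embedded call runs here.
result = pickle.loads(payload)
print(result)
```

This is why loading untrusted pickle files (including many persisted model formats built on pickle) is equivalent to running untrusted code, and why the detection and auditing posts in this series matter.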
Miscellaneous
- Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries
- Husky AI GitHub Repo
Conclusion
As you can see, there are many machine learning specific attacks, but also a lot of “typical” red teaming techniques that put AI/ML systems at risk. For instance, well-known attacks such as SSH Agent Hijacking, weak access control, and widely exposed credentials will likely help achieve objectives during red teaming operations.
I hope the content is helpful, and maybe even inspires others to start building, breaking, and better protecting AI/ML systems.
Reach out if there are specific topics you would like me to cover, or if you have any feedback. Also, if you enjoyed this series, I’d appreciate a note. :)
Also, if you’d like to build Husky AI yourself, the resources are available at the Husky AI GitHub Repo.
Stay safe, Johann.
Twitter: @wunderwuzzi23
PS: Don’t forget to check out Cybersecurity Attacks - Red Team Strategies for more red teaming goodies.