The other day I read this blog post about “The Death of Manual Red Teams” and I thought I’d take a moment to comment on it to provide an alternative perspective.
In my opinion the premise of the blog post is backwards, and it reflects a misunderstanding of what red teaming is about.
For instance, the following sentence in the post seems quite incorrect: “Red teaming is the process of using existing, already known security bugs and vulnerabilities to hack a system.”
In my view a red team might use an existing vulnerability, discover a new one, or circumvent a system by entirely different means, because what matters is the objective (and entertaining ideas about what an adversary would do to reach that objective).
An alternative perspective
Imagine you had access to 18000 organizations but only actively engaged with maybe 1% of them (possibly due to resource constraints).
Afterwards you might think quite a bit about which parts of such an operation would benefit from automation to scale better next time. Or you might at least want some form of “emergency exit” button to grab as much as possible once detected, before someone pulls the plug.
Red teaming is there to explore the unknowns and provide alternative perspectives, which is why AI/ML might lead to some quite interesting realizations when used by red teams. I’m also a bit worried that some seem to equate red teaming with cybersecurity operations only - it can be much more strategic and broad.
I don’t think red teaming as a discipline is automatable, although I do think adversaries (and hence red teamers) will continue to invest in automation, including AI/ML-assisted tooling and decision making.
Edit: The title of the original post seems to have been updated to be less clickbaity.