Red Team Methodology

If red teaming is authorized, adversary-based assessment for defensive purposes (where adversary-based means accounting for the motivation, goals, knowledge, skills, tools, and means of one or more adversaries), how can we know whether a red team accurately models the adversaries we are concerned about? And how can we know whether the red team can model these adversaries repeatably?

Red teams gain knowledge and skills from each assessment, and that learning often influences future assessments. Usually this learning effect is desirable, but in some cases it gets in the way of answering key assessment questions. How can the red team account for this learning?

These and many other questions amount to asking, “Why does red team methodology matter?”

The figure below suggests one answer to this question. In the universe of all adversary actions, that is, all the bad things someone might do to a system you are defending, no single adversary or single red team is likely to have the knowledge, skills, and experience to perform all of them. In the figure, Red Team A can model both Adversary A and Adversary B but not Adversary C. Similarly, Red Team B can model Adversary B but not Adversary A or Adversary C.

If your only adversary of concern is Adversary B, then you have a choice of red teams. But if your adversary of concern is Adversary C, you will have to find another red team, or perhaps pair two or more red teams that together can model Adversary C.
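To make the coverage idea concrete, here is a minimal sketch in Python. The capability names and team assignments are hypothetical stand-ins for the figure; the point is only that a team, or a pairing of teams, can model an adversary when its combined capabilities cover the adversary's.

```python
# Minimal sketch of the coverage idea: adversaries and red teams are
# modeled as sets of capabilities. All names below are hypothetical.
from itertools import combinations

adversaries = {
    "Adversary A": {"phishing", "web exploitation"},
    "Adversary B": {"web exploitation", "lateral movement"},
    "Adversary C": {"lateral movement", "supply chain", "hardware implants"},
}

red_teams = {
    "Red Team A": {"phishing", "web exploitation", "lateral movement"},
    "Red Team B": {"web exploitation", "lateral movement"},
}

def can_model(team_caps, adversary_caps):
    """A team (or pairing) can model an adversary if its combined
    capabilities cover everything the adversary can do."""
    return adversary_caps <= team_caps

for adv, caps in adversaries.items():
    # Single teams whose capabilities cover the adversary.
    solo = [t for t, tc in red_teams.items() if can_model(tc, caps)]
    # Pairings whose combined capabilities cover the adversary.
    pairs = [
        (t1, t2)
        for (t1, tc1), (t2, tc2) in combinations(red_teams.items(), 2)
        if can_model(tc1 | tc2, caps)
    ]
    print(adv, "->", solo or pairs or "no coverage")
```

In practice a capability inventory is far richer than a flat set of labels, but the subset test captures the figure's logic: here, Adversary C remains uncovered even by the pairing, so another team is needed.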

Beyond showing the importance of choosing a red team that can model your adversaries of concern, the figure points to the importance of adversary modeling, a key aspect of red team methodology. Just because a red team has the knowledge, skills, and experience to model an adversary does not mean it exercises the full range of adversary behavior in each assessment. And without adversary models and the collection of relevant red team metrics, you would never know that the red team had failed to fully model the adversary.
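One way to surface that gap, sketched below under the assumption that assessment records capture which techniques were actually exercised (the technique names are illustrative), is a simple coverage metric comparing the adversary model against the assessment:

```python
# Illustrative coverage metric: compare the techniques the adversary
# model calls for against the techniques the red team exercised.
adversary_model = {"phishing", "web exploitation", "lateral movement"}
exercised = {"phishing", "web exploitation"}  # drawn from assessment records

coverage = len(adversary_model & exercised) / len(adversary_model)
missed = adversary_model - exercised

print(f"Model coverage: {coverage:.0%}")   # Model coverage: 67%
print(f"Not exercised: {sorted(missed)}")  # Not exercised: ['lateral movement']
```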

Adversary models span a spectrum of outsider and insider threats, characterized by both measurable capabilities, such as knowledge, access, and resources, and intangibles such as risk tolerance and motivation. These models are used to screen attack possibilities and to support threat-based prioritization of protection strategies. Their principal advantage is the adversary perspective: a view of information systems different from the defenders' view, one that yields critical insights into the security of critical systems.
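As a rough illustration only, an adversary model might be encoded along these lines; the fields follow the description above, but the rating scales and scoring rule are invented for the example:

```python
# Hypothetical adversary model combining measurable capabilities with
# intangibles; the scoring rule is arbitrary and purely illustrative.
from dataclasses import dataclass

@dataclass
class AdversaryModel:
    name: str
    # Measurable capabilities, each rated 0 (none) to 5 (expert/extensive).
    knowledge: int
    access: int
    resources: int
    # Intangibles, each rated 0.0 to 1.0.
    risk_tolerance: float
    motivation: float

    def threat_score(self) -> float:
        """Toy prioritization score: capability scaled by willingness."""
        capability = (self.knowledge + self.access + self.resources) / 15
        willingness = (self.risk_tolerance + self.motivation) / 2
        return capability * willingness

models = [
    AdversaryModel("Disgruntled insider", 3, 5, 1, 0.6, 0.9),
    AdversaryModel("Organized crime", 4, 2, 4, 0.7, 0.8),
    AdversaryModel("Curious outsider", 2, 1, 1, 0.3, 0.4),
]

# Rank adversaries to help prioritize protection strategies.
for m in sorted(models, key=AdversaryModel.threat_score, reverse=True):
    print(f"{m.name}: {m.threat_score():.2f}")
```

A fixed scoring rule like this is only a starting point for prioritization; the intangibles in particular usually call for analyst judgment rather than a formula.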
