
Beyond the Kill Chain - Hunting Disinformation Threats


What are the advantages of DISARM?

DISARM is an open-source, threat-informed, and community-driven tool based on the MITRE ATT&CK framework. Because of its widespread adoption and detailed frameworks (Red for offense and Blue for defense), we have chosen DISARM as our model for counter-disinformation operations.

DISARM Red, which was developed first, describes “all the things that an incident creator will need to do to create, run, and assess the effectiveness of a disinformation incident.” DISARM Blue seeks to provide a toolkit of detection tactics and countermeasures.

DISARM Red is broken down into four phases: Plan, Prepare, Execute, and Assess. Each phase comprises a number of tactics that a threat actor will implement; tactics are further broken down into techniques, and finally tasks. Like ATT&CK, DISARM’s grid presents tactic stages sequentially from left to right, so “the further left that a tactic is ... the earlier that it’s likely to be met by an incident creator.” (A sketch of this hierarchy in code follows the phase list below.)

Phase One: Plan
Phase Two: Prepare
Phase Three: Execute
Phase Four: Assess
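
To make the phase → tactic → technique nesting concrete, here is a minimal Python sketch of one way the hierarchy could be represented. The class names are our own; the phase and tactic labels match the DISARM Red grid reproduced later in this post, and the lone Technique entry is a placeholder, not an official framework item.

```python
from dataclasses import dataclass, field

# Illustrative representation of DISARM Red's nesting: phases contain
# tactics, and tactics contain techniques. Only the phase and tactic
# labels below come from the framework; the technique is a placeholder.

@dataclass
class Technique:
    tech_id: str
    name: str

@dataclass
class Tactic:
    tactic_id: str
    name: str
    techniques: list[Technique] = field(default_factory=list)

@dataclass
class Phase:
    name: str
    tactics: list[Tactic] = field(default_factory=list)

disarm_red = [
    Phase("Plan", [
        Tactic("TA01", "Plan Strategy"),
        Tactic("TA02", "Plan Objectives"),
        Tactic("TA13", "Target Audience Analysis"),
    ]),
    Phase("Prepare", [
        Tactic("TA14", "Develop Narratives", [
            Technique("T0xxx", "placeholder technique"),  # illustrative only
        ]),
        # ...remaining Prepare tactics elided
    ]),
    Phase("Execute", []),  # TA08-TA11 elided
    Phase("Assess", [Tactic("TA12", "Assess Effectiveness")]),
]

# Reading the grid left to right: the earlier the phase, the earlier an
# incident creator is likely to employ its tactics.
for phase in disarm_red:
    print(phase.name, "->", [t.tactic_id for t in phase.tactics])
```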

DISARM Blue articulates procedures to detect and counter specific tactics in the Red framework. By its own description, DISARM is a work in progress, with much left to do. Yet to be assessed are which detections and counters are most effective against which tactics or in defense of which target audiences. This is an area where cognitive modeling may have a significant impact, which we plan to address in the future. While Red is a thorough outline of the phases and TTPs of a disinformation campaign, Blue remains relatively unstructured, more of a toolbox for defenders.
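
As a sketch of how that toolbox character might be handled in practice, a defender could maintain a simple lookup from Red tactics to candidate responses. The Red tactic IDs below come from the framework; the response labels are hypothetical placeholders, not official DISARM Blue counters, precisely because that effectiveness mapping is still unassessed.

```python
# Hypothetical lookup from DISARM Red tactic IDs to candidate defensive
# responses, in the spirit of DISARM Blue's toolbox. The response labels
# are placeholders, not official DISARM Blue entries.
blue_toolbox: dict[str, list[str]] = {
    "TA14": ["flag emerging narrative", "draft counternarrative"],       # Develop Narratives
    "TA15": ["report inauthentic accounts", "refute asset legitimacy"],  # Establish Social Assets
    "TA09": ["rapid fact-check", "platform takedown request"],           # Deliver Content
}

def candidate_counters(tactic_id: str) -> list[str]:
    """Return candidate defensive responses for a detected Red tactic."""
    return blue_toolbox.get(tactic_id, [])

# Example: a detection of TA09 (Deliver Content) surfaces two options.
print(candidate_counters("TA09"))
```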

How can a threat-hunting model and DISARM be combined to counter disinformation?

What counter-disinformation frameworks have lacked thus far is an actionable model for implementing continuous operations. Instead of looking to cybersecurity kill chains, we propose using a threat-hunting model to detect and defeat disinformation campaigns.

Continuous operations have been modeled in other realms, perhaps most famously in U.S. Air Force Col. John Boyd’s OODA (Observe-Orient-Decide-Act) loop. Others, such as the intelligence cycle (intelligence requirements, planning and direction, collection, processing, analysis and production, and dissemination), are designed to build upon one another; the outcome of one intelligence cycle feeds the next. The terrorist attack cycle brings us closer to the more fluid model that countering disinformation requires: it begins with target selection, then moves to surveillance, planning, rehearsal, and execution before escape and exploitation. Not only does this cycle most closely mirror the sequence of a disinformation campaign, but, much like disinformation, its steps do not have to be taken in order. The threat actor can step back and reevaluate, skip steps to expedite, or spin off other campaigns at various points.

So then what does the counter-disinformation cycle look like?

At Booz Allen, we have found success using the threat-hunting model to detect active threats in the cyber domain. Based on DISARM Red’s phases (Plan, Prepare, Execute, and Assess), we seek to apply a threat-hunting methodology in the following corollary phases: Hypothesis, Collection and Analysis, Trigger, Investigation and Response, and Follow Up (a minimal code sketch of the loop follows this list):

  • In the Hypothesis phase, based on the contemporary understanding of the threat landscape, the counter-disinformation operator conducts target audience analysis and postulates potential adversarial courses of action.
  • This is followed by the Collection and Analysis phase, in which information is gathered and assessed to support or refute the proposed hypotheses. A clear difference between traditional cyber threat hunting and disinformation threat hunting is scope: the former focuses on the internal networks of the organization involved, while the latter looks not only inward but also outward to the global information environment, including cyberspace, social media, and beyond.
  • These first two phases should be continuous until phase three, the Trigger. This is when the execution of a disinformation campaign is detected, and the counter-team swings into action.
  • The trigger starts phase four, the Investigation and Response. The investigation is ideally expedited and informed by the first two phases. The key in this phase is rapid response: not only must the disinformation itself be addressed, but its sources, methods, and goals must be attributed to the threat actor as quickly as possible so that counternarratives can be identified and deployed before the false narrative takes hold.
  • Finally, the loop closes with the Follow Up phase, where lessons learned, TTPs identified, and damage assessed inform the next round of hypotheses. Much like disinformation itself, this cycle is constant, fluid, and self-reinforcing.
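
To make the control flow concrete, below is a minimal Python sketch of the cycle. All detection and response logic is stubbed out and every function name is hypothetical; the only thing the sketch asserts is the loop structure described above, in which Follow Up feeds the next round of hypotheses.

```python
# Hypothetical skeleton of the Disinformation Threat Hunt cycle.
# Phase logic is stubbed; only the control flow mirrors the model above.

def generate_hypotheses(lessons: list[str]) -> list[str]:
    """Phase 1 (Hypothesis): target audience analysis plus postulated
    adversarial courses of action, seeded by prior lessons learned."""
    return ["adversary amplifies narrative X toward audience Y", *lessons]

def collect_and_analyze(hypotheses: list[str]) -> dict:
    """Phase 2 (Collection & Analysis): gather and assess information from
    the global information environment to support or refute hypotheses."""
    return {"evidence": [], "campaign_detected": False}  # stub

def investigate_and_respond(findings: dict) -> list[str]:
    """Phase 4 (Investigation & Response): attribute sources, methods, and
    goals; deploy counternarratives. Returns lessons for Follow Up."""
    return ["TTP observed: coordinated reposting"]  # stub

def hunt_cycle(rounds: int = 3) -> None:
    """Run the hunt loop. In practice the cycle is continuous; `rounds`
    exists only so this sketch terminates."""
    lessons: list[str] = []
    for _ in range(rounds):
        hypotheses = generate_hypotheses(lessons)        # Phase 1: Hypothesis
        findings = collect_and_analyze(hypotheses)       # Phase 2: Collection & Analysis
        if findings["campaign_detected"]:                # Phase 3: Trigger
            lessons = investigate_and_respond(findings)  # Phase 4: Investigation & Response
        # Phase 5: Follow Up -- lessons learned feed the next round

hunt_cycle()
```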

The DISARM framework, given its current traction and its detailed attacker and defender tactics, techniques, detections, and counters, serves as a practical playbook for the Disinformation Threat Hunt model, and its basis in the familiar MITRE ATT&CK framework enables rapid and effective widespread adoption. The Disinformation Threat Hunt model also aligns with U.S. Cyber Command’s 2018 vision of “persistent engagement,” which acknowledges that, like information, cyberspace is “a fluid environment of constant contact and shifting terrain,” and therefore our defenses must be aligned accordingly.

A framework to counter mis-, dis-, and malinformation (MDM) can only be as effective as an operator’s ability to implement it. The model must be fluid, cyclical, and persistent, just like the threats we seek to counter. We hope security teams will gain value from this new approach.

Disinformation Threat Hunt Model

Based on DISARM Red, a proposed Disinformation Threat Hunt Model (incident creator phases first, hunter phases below):

Incident Creator (DISARM Red)

Phase 1: Plan
  • TA01: Plan Strategy
  • TA02: Plan Objectives
  • TA13: Target Audience Analysis

Phase 2: Prepare
  • TA14: Develop Narratives
  • TA06: Develop Content
  • TA15: Establish Social Assets
  • TA16: Establish Legitimacy
  • TA05: Microtarget
  • TA07: Select Channels and Affordances

Phase 3: Execute
  • TA08: Conduct Pump Priming
  • TA09: Deliver Content
  • TA10: Drive Offline Activity
  • TA11: Persist in the Information Environment

Phase 4: Assess
  • TA12: Assess Effectiveness

Disinformation Hunter (Threat Hunt Model)

Phase 1: Hypothesis
  • Anticipate Adversarial Strategy
  • Anticipate Adversarial Objectives
  • Anticipate Adversarial Narratives
  • Identify Potential Targets

Phase 2: Collection & Analysis
  • Identify Narratives
  • Identify Artifacts
  • Refute Legitimacy
  • Conduct Target Audience Analysis
  • Prepare Counternarratives

Phase 3: Trigger
  • Test Hypotheses

Phase 4: Investigation & Response
  • Deploy Counternarratives
  • Attribution
  • Expose Online Harms
  • Link Offline Activity to Online Narratives
  • Disrupt Adversary's Ability to Remain in the Environment

Phase 5: Follow Up
  • Catalog Information
  • Develop New Hypotheses
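
As a small illustration of how the two columns line up, the sketch below pairs each DISARM Red phase with the hunter phases that answer it. The pairings are our reading of the figure above, not an official DISARM mapping.

```python
# Our reading of the figure: which hunter phases answer each DISARM Red
# phase. Note the hunter model adds a fifth phase, Follow Up, that closes
# the loop back into Hypothesis.
red_to_hunter = {
    "Plan":    ["Hypothesis"],
    "Prepare": ["Hypothesis", "Collection & Analysis"],
    "Execute": ["Trigger", "Investigation & Response"],
    "Assess":  ["Follow Up"],
}

for red_phase, hunter_phases in red_to_hunter.items():
    print(f"{red_phase:8} -> {', '.join(hunter_phases)}")
```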


Interested in Additional Technical Content?

This blog series is brought to you by Booz Allen DarkLabs. Our DarkLabs is an elite team of security researchers, penetration testers, reverse engineers, network analysts, and data scientists, dedicated to stopping cyber attacks before they occur.

This article is for informational purposes only; its content may be based on employees’ independent research and does not represent the position or opinion of Booz Allen. Furthermore, Booz Allen disclaims all warranties in the article's content, does not recommend/endorse any third-party products referenced therein, and any reliance and use of the article is at the reader’s sole discretion and risk.
