Countering disinformation is about to get much harder: In the near term, ChatGPT and similar chatbots powered by large language models, or LLMs, will let threat actors master a range of malicious activities, including manufacturing more believable lies at scale. As the National Security Agency (NSA) has warned, ChatGPT will enable malicious foreign actors to craft highly convincing, native-language English text for phishing schemes, false backstories, and even malign influence operations.
ChatGPT and similar LLM-powered tools use artificial intelligence (AI) to ingest massive quantities of data from the internet, identify patterns in language, and answer user questions with text that is difficult to distinguish from human writing. Disinformation threats created by the abuse of these tools would fall under the umbrella of "information-to-human (I2H)" attacks, as opposed to cyberattacks aimed at stealing information.
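To make the low barrier to entry concrete, the sketch below generates several fluent continuations of a prompt in a few lines of Python. It is a minimal illustration only, assuming the Hugging Face transformers library and the small, freely downloadable GPT-2 model rather than ChatGPT itself, which is a closed, hosted service; the prompt and all parameter choices here are arbitrary examples.

```python
# A minimal sketch, assuming the Hugging Face transformers library and the
# open-source GPT-2 model (pip install transformers torch). ChatGPT is far
# more capable, but the workflow is the same: prompt in, fluent text out.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fixed seed so the demo is reproducible

# One benign prompt, several variants per call: each request can return many
# distinct, human-sounding continuations, which is what makes generating
# content at scale so cheap.
results = generator(
    "The city council announced today that",
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
)
for r in results:
    print(r["generated_text"], "\n---")
```

Even this small model produces passable prose on commodity hardware; the state-of-the-art models behind tools like ChatGPT are dramatically more fluent, which is what makes machine-generated text so hard to distinguish from human writing.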
Even before the advent of ChatGPT, disinformation posed a serious risk to election integrity and public safety. The Department of Homeland Security (DHS) is responsible for countering the triple-headed threat of misinformation, disinformation, and malinformation (MDM). The Cybersecurity and Infrastructure Security Agency (CISA), an operational component of DHS, has warned about MDM threats to critical infrastructure.
Threat actors are investing heavily in spreading falsehoods. By one estimate, investment in disinformation campaigns in 2022 was triple what it had been just one year prior. Now, LLMs have the potential to significantly increase the “bang for the buck” of disinformation campaigns by making it quicker and simpler to create vastly larger quantities of inauthentic content and distribute it to ever-wider audiences.