Every day, companies rely on photos, videos, audio, and other media to shape their brand, enable decisions by business leaders, and carry out key functions while protecting their networks, communications, and sensitive data. But what if, hidden among all the authentic media, there are deepfakes—highly realistic synthetic media made with artificial intelligence (AI)? Now more than ever, it’s clear that deepfakes pose business risks. In fact, misinformation and disinformation rank as the most severe near-term global risk in the World Economic Forum’s Global Risks Report. Like government leaders, commercial chief information security officers, executives, and boards want to understand and mitigate deepfake risks.
It’s easy to imagine how criminals might use deepfakes to undermine a brand, impersonate leaders and financial officers, and compromise vital data and systems. Threat actors are already creating deepfake images, audio, and video featuring lifelike facsimiles of real people. Celebrities, the public, and businesses are being targeted, and fake imagery is being used to cause reputational harm, exact revenge, and commit fraud. The barrier to entry for this malicious activity is low because the tools needed to create deepfakes are widely available and easy to use.