Cybersecurity Awareness Month: Deepfakes, when seeing is no longer believing

Imagine the dangerous ramifications of a technology, available to anyone, that can be used to threaten personal privacy, national security, and even democracy itself.  That technology is video.  Combined with the viral power of the internet, believable fake videos can compromise privacy, harm corporations, sow societal and political discord, and spread disinformation.  Disinformation is false information meant to mislead, whereas misinformation is false information shared without malice.

Deepfakes are videos or audio recordings digitally manipulated by artificial intelligence (AI) with the intent to deceive.  Deepfakes exploit people’s inclination to believe what they see and have the potential to fundamentally change how individuals perceive the world.  The term deepfake is derived from deep learning, which sits at the bottom of the following hierarchy:

  • AI empowers machines to accomplish tasks that normally require human intelligence.
  • Machine learning is a subset of AI that enables computers to learn and acquire skills without being explicitly programmed by humans.
  • Deep learning is a further subset in which machines use layered algorithms to “train” on, or learn from, large amounts of data.
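The “learn from data” idea in the last bullet can be shown with a toy example.  This is a deliberately minimal sketch, not real deep learning: a single-parameter model repeatedly adjusts its weight to better fit example pairs, which is the same train-on-data loop that deep learning scales up across millions of parameters.

```python
# Toy illustration of "learning from data": a one-parameter model
# adjusts its weight via gradient descent to fit example pairs.
# (Illustrative only -- real deep learning stacks many such layers.)

def train(pairs, lr=0.05, epochs=200):
    w = 0.0  # the model's single learnable parameter
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error
            w -= lr * grad             # step against the gradient
    return w

# The data follows y = 2x; the model learns w ~ 2 from examples alone,
# without that rule ever being programmed in.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 3))  # prints 2.0
```

No one told the model the rule was “multiply by two”; it recovered that pattern purely from the examples, which is the essential property deepfake software exploits at much larger scale.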

How a Deepfake is made
Essentially, a deepfake is created by a computer equipped with AI software that watches many videos to learn how a human face or body moves, then applies those movements to a single picture or a small group of pictures.  A voice track is then added and synchronized with the facial movements.  Access to a larger data set improves the quality of the result; however, Samsung AI researchers have already demonstrated an alternative technique that can train a deepfake from a single still image.
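The classic face-swap pipeline described above can be sketched structurally: one shared encoder learns to compress any face into identity-agnostic features (pose, expression), and a separate decoder per person learns to render those features as that person’s face.  The swap happens by encoding person A’s frame and decoding it with person B’s decoder.  The code below is a purely illustrative stand-in — the dictionaries and function names are assumptions for demonstration, not real image processing.

```python
# Structural sketch of the face-swap deepfake pipeline (stand-in data,
# no real images): shared encoder + one decoder per identity.

def encoder(face):
    """Compress a face into identity-agnostic features (pose, expression)."""
    return {"expression": face["expression"], "pose": face["pose"]}

def make_decoder(identity):
    """Build a decoder that renders any features as this identity's face."""
    def decoder(features):
        return {"identity": identity, **features}
    return decoder

decode_a = make_decoder("person_a")
decode_b = make_decoder("person_b")

# Source frame: person A smiling, facing left.
frame = {"identity": "person_a", "expression": "smile", "pose": "left"}

# The deepfake step: A's movements, rendered with B's face.
fake = decode_b(encoder(frame))
print(fake)  # prints {'identity': 'person_b', 'expression': 'smile', 'pose': 'left'}
```

In a real system the encoder and decoders are neural networks trained on thousands of frames of each person, but the swap itself works exactly like this: encode one person’s motion, decode it as the other.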

Anyone with some technical ability can download deepfake software and inexpensively create a convincing video in a relatively short time.  Most deepfakes are currently detectable, but as AI rapidly evolves, their quality will continue to improve and rival that of special-effects studios, making reality harder to discern.

Why they’re a threat
Because we typically trust what we see, video is the evidence we rely on to validate the truth.  Defending against deepfakes is difficult because once the seeds of fear and doubt have been planted, some viewers will never forget them and will remain skeptical.  Conversely, unscrupulous claims that authentic videos are deepfakes will propagate doubt and lead people to question reality.

Corporations must be prepared to respond rapidly if they are the target of a deepfake.  In June 2019, a deepfake of Facebook founder Mark Zuckerberg making alarming comments about controlling the public’s privacy went viral.

Below are more examples of the potentially dangerous implications of deepfakes:

Personal Threats:

  • Bullying and harassment
  • Adult content videos

Corporate Threats:

  • Convincing spear phishing attacks
  • Imposter executives attempting to get employees to commit fraud
  • Extortion
  • Fake promotional material
  • Attempts by competitors to damage a company’s reputation or drive down the share price of a public company

Social Chaos, Public Safety and National Security:

  • Reframing history
  • Election and evidence tampering
  • Conspiracies pushed by foreign adversaries
  • Incitement of violence
  • Manipulated emergency alert warnings and public service announcements
  • Dissemination of false information
  • Disinformation delivered by deepfaked news anchors
  • Manipulation of money markets or stocks

Defending against deepfakes
The Pentagon considers the issue so serious that the Defense Advanced Research Projects Agency (DARPA) established the Media Forensics (MediFor) program, which is working to develop forensic technologies that can automatically detect manipulated media.

In September 2019, Facebook announced it was teaming up with the Partnership on AI, Microsoft, and academics to launch the Deepfake Detection Challenge (DFDC).  The goal is to produce tools and technology that can detect videos and other media manipulated by AI, in an effort to combat disinformation.  Facebook has committed $10 million to the effort.

There is currently no federal law protecting against deepfakes; however, the Deepfakes Accountability Act, introduced in June 2019, is Congress’s first attempt to provide legal protections.  Several states have also introduced their own legislation.

Author: Sasha Aronson
