What are deepfakes and how are they made?

The next phase of misinformation is being fuelled by AI, leading some, like the Wall Street Journal, to take matters into their own hands and launch a deepfakes task force, the WSJ Media Forensics Committee.

Most deepfakes are based on a machine learning technique called “generative adversarial networks” (GANs). Because the technology is publicly available, the ability to generate deepfakes is spreading: all that is needed is some technical savvy and a good enough graphics card.
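To make the idea concrete, here is a minimal sketch of the adversarial setup, assuming PyTorch is installed; the network sizes and the training data are purely illustrative. Two networks are trained against each other: a generator that produces fakes and a discriminator that tries to tell them from real samples.

```python
# Minimal GAN training loop (illustrative sizes; random data stands in
# for real face imagery).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(BATCH, DATA_DIM)   # stand-in for real samples
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Discriminator: learn to label real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

In real deepfake tools both networks are far larger and are trained on images of the target’s face, but the adversarial loop is the same.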

There are multiple techniques used by creators to make deepfakes. A face-swap algorithm inserts the face of one person into a target video, while a lip-sync algorithm allows forgers to graft a lip-syncing mouth onto someone else’s face. Facial reenactment transfers facial expressions from one person onto another in a video, and it is even possible to transfer the body movements of one person to another. It is therefore possible not only to put people into situations they were never actually in, but also to make them say and do things they have never said or done.
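The insertion step at the heart of a face swap can be illustrated with classical tools. The sketch below assumes OpenCV and NumPy are installed and uses hypothetical file names; it detects one face in a source image and blends it into the face region of a target frame. Real deepfake pipelines replace this crude resize-and-blend with learned per-identity encoders and decoders, but the “put this face there” step is the same in spirit.

```python
# Simplified face-swap illustration: detect, resize, blend.
# File names are hypothetical; assumes a face is found in each image.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

source = cv2.imread("source_face.jpg")   # face to insert
target = cv2.imread("target_frame.jpg")  # frame to insert it into

(sx, sy, sw, sh) = detector.detectMultiScale(
    cv2.cvtColor(source, cv2.COLOR_BGR2GRAY))[0]
(tx, ty, tw, th) = detector.detectMultiScale(
    cv2.cvtColor(target, cv2.COLOR_BGR2GRAY))[0]

# Resize the source face to the target face's size, then blend it in
# place with Poisson (seamless) cloning so the seams are less visible.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = np.full(face.shape[:2], 255, dtype=np.uint8)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped_frame.jpg", swapped)
```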

How do we identify deepfakes?

Fortunately, it is possible to detect these fakes. Here are a few ways of doing so:

  1. Examine the source. If the video or footage is suspicious, it is wise to contact the source and ask them questions about its origin. If the video was uploaded online by an unknown source, even more questions should be asked. Checking the metadata of the video or image with tools like InVID or other metadata viewers can provide answers (see the metadata sketch after this list). Collaboration with other verification organisations such as Storyful can also help. However, technology alone won’t solve the problem, and real people in the newsroom are at the centre of the process.
  2. Reverse image search. Since deepfakes are often based on existing footage, reverse image search engines like Tineye or Google Image Search can help find older versions of the video and work out which aspects were doctored (the hash-comparison sketch after this list shows one way to compare frames programmatically).
  3. Examine the footage. Editing programs can help journalists slow the footage down, zoom in on the image and inspect it frame by frame to identify glitches. These may include differences between skin tones, fuzziness, glimmering, unnatural light, irregular breathing or metallic-sounding voices (a frame-extraction sketch follows this list).
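For the metadata check in step 1, a few lines of Python can surface whatever EXIF data a still image carries. This is a minimal sketch assuming the Pillow library and a hypothetical file name; note that manipulated or re-encoded material has often had its metadata stripped, and that absence is itself worth noting.

```python
# Minimal EXIF dump with Pillow; the file name is hypothetical.
from PIL import Image, ExifTags

img = Image.open("suspicious_still.jpg")
exif = img.getexif()

if not exif:
    # Stripped metadata is common in re-encoded or manipulated material.
    print("No EXIF metadata found")
for tag_id, value in exif.items():
    # Map numeric tag IDs to readable names where known.
    tag = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{tag}: {value}")
```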
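For step 2, once a reverse image search has turned up an older version of the footage, perceptual hashing offers a quick programmatic way to compare frames. A sketch assuming the imagehash and Pillow libraries, with hypothetical file names:

```python
# Compare a suspect frame against a frame from older footage using
# perceptual hashing; file names are hypothetical.
import imagehash
from PIL import Image

suspect = imagehash.phash(Image.open("suspect_frame.jpg"))
original = imagehash.phash(Image.open("original_frame.jpg"))

# The difference is a Hamming distance: 0 means visually identical,
# while small non-zero distances can flag localised edits worth a
# closer manual look.
print("hash distance:", suspect - original)
```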
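And for the frame-by-frame examination in step 3, OpenCV can dump a clip into individual images for close inspection. A sketch with a hypothetical file name:

```python
# Extract every frame of a clip for frame-by-frame inspection (OpenCV).
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("suspicious_clip.mp4")  # hypothetical file name

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of video (or unreadable file)
        break
    # Saved frames can be zoomed and compared for skin-tone seams,
    # fuzziness, glimmering or unnatural light.
    cv2.imwrite(f"frames/frame_{frame_idx:05d}.png", frame)
    frame_idx += 1
cap.release()
```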

What are the ramifications?

These manipulations can be used not only to put politicians or other public figures into compromising positions, but also to deceive the public or, worse, cause international incidents. They can also be used to intimidate the very people who seek to uncover them.