As deepfake technology continues to proliferate across media platforms, it becomes increasingly important for people to equip themselves with the knowledge and techniques necessary to identify and combat this threat.
In this first blog post of a series on detecting deepfakes, we will dig into telltale signs that a piece of content may not be genuine. Here are some of the more common cues to watch out for:
1. Facial Artifacts and Unnatural Coloring
Pay attention to the subject’s eyes, especially for inconsistencies in reflections and gaze patterns. Also note anomalies in facial features, such as overly smooth skin, unnatural reflections, missing details like the outlines of individual teeth, or irregular facial hair. Inconsistent skin tone, odd lighting effects, or shadows falling in the wrong place can likewise indicate fake content.
⚠️ However, keep in mind that compression, watermarks, and other post-processing techniques may alter the original visual quality, potentially making it difficult to distinguish between genuine manipulation artifacts and those caused by post-processing.
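If you want to probe an image more systematically, a simple error-level-analysis (ELA) pass can help highlight regions that recompress differently from their surroundings, which sometimes marks edited areas. The sketch below is a minimal illustration using Pillow; the file names are hypothetical, and because compression and post-processing cause false positives (as noted above), treat the output as a pointer for closer inspection rather than a verdict.

```python
# Minimal error-level-analysis (ELA) sketch: re-save the image at a known JPEG
# quality and amplify the per-pixel difference. Edited regions often recompress
# differently from the rest of the image.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("suspect.jpg").convert("RGB")  # hypothetical file name
original.save("resaved.jpg", quality=90)             # re-save at a fixed quality
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)
# Scale brightness so subtle compression mismatches become visible.
max_diff = max(channel_max for _, channel_max in diff.getextrema())
scale = 255.0 / max(max_diff, 1)
ImageEnhance.Brightness(diff).enhance(scale).save("ela.png")
```

Bright, sharply bounded patches in `ela.png` (for example, around a face that was swapped in) are worth a closer look, while uniform noise across the frame usually just reflects re-encoding.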
2. Body Posture and Behavior Inconsistencies
Deepfake technology often focuses on altering facial features, which can lead to a disconnect between the head and body. You might notice the head, or certain parts of it (especially around the mouth, as in this report), not aligning well with the body or moving in a way that doesn’t match the person’s natural body language. Watch for jerky or unnatural motions, particularly during head turns or shifts in posture. If facial expressions seem flat or disconnected from the speaker’s words, that is another red flag that the video may have been manipulated.
⚠️ Keep in mind that age or medical conditions can alter normal behaviors, so unnatural expressions need to be paired with contextual verification. A good example is when the president of Gabon made a video appearance addressing the country while concealing his recovery from a stroke that had paralyzed parts of his body. The public quickly judged the video to be fake, and the suspicion helped spark an attempted coup.
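One rough way to put a number on “jerky or unnatural motion” is to track facial landmarks across frames and look for sudden spikes in their displacement. The sketch below uses MediaPipe Face Mesh and assumes a single visible face in a hypothetical clip; camera shake and low frame rates also produce spikes, so, as with the Gabon example, pair this signal with contextual verification.

```python
# Hedged sketch: measure frame-to-frame jitter of facial landmarks.
# Large spikes can accompany the jerky head motion described above.
import cv2
import numpy as np
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
cap = cv2.VideoCapture("clip.mp4")  # hypothetical file name
prev, jitter = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        prev = None  # face lost; do not compare across the gap
        continue
    pts = np.array([(lm.x, lm.y) for lm in res.multi_face_landmarks[0].landmark])
    if prev is not None:
        # Mean landmark displacement between consecutive frames (normalized coords).
        jitter.append(np.linalg.norm(pts - prev, axis=1).mean())
    prev = pts
cap.release()

if jitter:
    print(f"mean motion: {np.mean(jitter):.4f}, max spike: {np.max(jitter):.4f}")
```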
3. Audio-Visual Discrepancies
Pay close attention to whether the audio syncs seamlessly with the speaker’s mouth movements. Imperfect audio-visual synchronization is a common giveaway of lip-sync-based deepfake manipulation. For example, during the 2020 U.S. presidential election campaign, a deepfake video surfaced depicting candidate Joe Biden seemingly making inflammatory remarks. Despite the convincing appearance, experts quickly identified subtle inconsistencies between the audio and Biden’s mouth movements.
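Research systems such as SyncNet learn audio-visual correspondence directly, but a crude first pass is to correlate mouth opening with audio energy: in genuine speech, the two tend to rise and fall together. The sketch below assumes the audio track has already been extracted to a WAV file (for example with ffmpeg) and that OpenCV, MediaPipe, and librosa are installed; the file names are hypothetical, and a low correlation is only a hint, never proof of manipulation.

```python
# Rough lip-sync plausibility check: correlate mouth opening with audio energy.
import cv2
import numpy as np
import librosa
import mediapipe as mp

VIDEO, AUDIO = "clip.mp4", "clip.wav"  # hypothetical paths

# 1. Mouth-opening signal: inner-lip distance per frame via Face Mesh
#    (landmarks 13 and 14 are the inner upper and lower lip).
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
mouth = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.multi_face_landmarks:
        lm = res.multi_face_landmarks[0].landmark
        mouth.append(abs(lm[13].y - lm[14].y))
    else:
        mouth.append(0.0)
cap.release()

# 2. Audio energy signal, one RMS value per video frame.
y, sr = librosa.load(AUDIO, sr=None)
hop = int(sr / fps)
energy = librosa.feature.rms(y=y, frame_length=hop * 2, hop_length=hop)[0]
n = min(len(mouth), len(energy))

# 3. Genuine speech tends to show a clearly positive correlation; values near
#    zero are a heuristic cue to look closer, not a verdict.
corr = np.corrcoef(np.array(mouth[:n]), energy[:n])[0, 1]
print(f"mouth/audio correlation: {corr:.2f}")
```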
4. Contextual Analysis
Beyond inspecting the main subject of a video or image for artifacts, consider the broader context of the media, including its source, timestamps, and accompanying narratives such as the account’s previous post history, followers, and comments. Deepfakes often lack the contextual coherence present in authentic media, and many of these contextual elements can be pieced together from the scene itself. For instance, is a sign or logo that appeared in the background of the original video or image now missing? Performing a reverse image search for similar content from reliable sources may also lead you to the original.
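Embedded metadata is one contextual element you can check directly. The minimal sketch below reads EXIF tags (capture time, camera model, editing software) with Pillow; the file name is hypothetical, and keep in mind that metadata is easily stripped or forged, and most social platforms remove it on upload, so its absence proves nothing on its own.

```python
# Inspect EXIF metadata for contextual clues such as DateTime, Model, Software.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect.jpg")  # hypothetical file name
exif = img.getexif()
if not exif:
    print("No EXIF metadata (common after social-media re-encoding).")
for tag_id, value in exif.items():
    # Map numeric tag IDs to their human-readable names.
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```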
5. Media Provenance
Another way to verify the authenticity of media is by examining its provenance: tracking the origin and history of the content. The C2PA (Coalition for Content Provenance and Authenticity) standard is being adopted by organizations to establish a verifiable chain of custody for digital content. By checking for tamper-evident records of a media item’s source and modification history, viewers can better assess whether the content is genuine or potentially a deepfake.
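In practice, you can inspect a file’s Content Credentials with the open-source c2patool CLI from the Content Authenticity Initiative (https://github.com/contentauth/c2patool). The sketch below shells out to it from Python, assuming c2patool is installed on your PATH and prints the manifest as JSON; the file name is hypothetical, and a missing manifest alone does not imply a fake, since most media today carries no credentials at all.

```python
# Hedged sketch: read a file's C2PA manifest via the c2patool CLI.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "photo.jpg"],  # hypothetical file name
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    # c2patool reports the manifest (signer, actions, edit history) as JSON.
    manifest = json.loads(result.stdout)
    print(json.dumps(manifest, indent=2))
else:
    # Absence of credentials alone does not mean the media is fake.
    print(result.stderr.strip() or "No Content Credentials found.")
```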
Although C2PA Content Credentials is the provenance technique with the broadest adoption so far, other approaches, such as model watermarking, are still being researched and could eventually reach the mainstream, though they will need further refinement first.
Final Thoughts
It is also important to note that many of these indicators may become less reliable over time. The absence of blinking, for example, is no longer a dependable way to identify deepfake video, and the latest version of Midjourney has notably improved hand rendering, among other details. As generative technologies evolve, visual cues may fade further. Many people have turned to AI-supported detectors for help, but how reliable are they? Keep an eye out for the next blog post in this series, where we will review some popular AI detection tools!