Tutorial 3: Image verification and fake detection

Learning objectives

By the end of this tutorial, you will understand:

  • Advanced techniques for detecting manipulated and fake images
  • How to identify deepfakes and AI-generated content
  • Methods for verifying the authenticity of viral images and breaking news visuals
  • Technical approaches to analyzing image compression, lighting, and pixel-level inconsistencies
  • Tools and platforms specifically designed for image verification
  • How to distinguish between innocent editing and malicious manipulation
  • Strategies for combating visual misinformation campaigns
  • Building a systematic verification workflow that stands up to professional scrutiny

Introduction: The war against visual deception

We live in an age where seeing is no longer believing. In 2023, a fake image of an explosion at the Pentagon briefly sent stock markets tumbling and caused widespread panic before being debunked. This incident illustrates why image verification has become one of the most critical skills in modern OSINT.

The rise of powerful image editing tools, along with AI-generated content and deepfake technology, has created a situation where false visuals spread faster than the truth. Fake images are now weapons in information battles, used for everything from political propaganda to financial market trickery.

Take the 2022 conflict in Ukraine as an example. Authentic war footage and reused images from past conflicts flooded social media at the same time. Professional investigators had to work tirelessly to sort out real documentation from recycled content, staged photos, and intentional falsehoods. The stakes are incredibly high; fake images can shape public opinion, justify military actions, and change the course of history.

This tutorial will transform you from a passive viewer into an active verifier who can spot complex deceptions. You will learn to think like both a forger and a detective, understanding how images are altered and how to reveal those alterations.

The psychology of visual deception

Before diving into technical detection methods, it’s crucial to understand why fake images are so effective and how our brains process visual information.

Why we are vulnerable to visual deception

  • Processing speed: Our brains process visual information far faster than text, making us prone to accepting images before critical analysis.
  • Emotional impact: Images trigger immediate emotional responses that can override logical thinking. A disturbing photo can provoke outrage before we question its authenticity.
  • Confirmation bias: We’re more likely to accept images that confirm our existing beliefs and less likely to scrutinize them carefully.
  • Social proof: When we see others sharing and commenting on images, we assume they have been verified, fueling the viral spread of unverified content across social media platforms.

The motivation behind fake images

Understanding why someone would create fake images helps OSINT investigators know what to look for:

  1. Political Manipulation: Fabricated evidence to support political narratives or discredit opponents.
  2. Financial Fraud: Fake product images, fabricated financial documents, or manipulated charts.
  3. Social Engineering: Creating false personas or fake evidence for confidence schemes.
  4. Attention Seeking: Viral content creation for social media engagement and monetization.
  5. Historical Revisionism: Altering or fabricating historical evidence to support alternative narratives.

Categories of image manipulation

Not all image alterations are created equal. Understanding the different types helps prioritize verification efforts and choose appropriate detection methods.

Type 1: Innocent editing and enhancement

Characteristics: Basic adjustments like brightness, contrast, color correction, and cropping.

Intent: Improve image quality or aesthetic appeal without changing fundamental content.

Detection Challenge: Low – these modifications rarely affect investigative value.

Examples: Instagram filters, basic photo retouching, noise reduction

Real-World Context: A journalist might enhance the contrast of a protest photo to make signs more readable. While the image is technically altered, this doesn’t constitute deceptive manipulation if disclosed appropriately.

Type 2: Misleading context and recycling

Characteristics: Authentic images presented with false dates, locations, or circumstances.

Intent: Deceive viewers about when, where, or why something occurred.

Detection Challenge: Medium – requires extensive background research.

Examples: Using disaster photos from previous years, presenting training exercises as real conflicts

Case Study: During the 2020 Portland protests, images from 2016 demonstrations were repeatedly shared as current events. The images themselves were authentic, but the temporal context was deliberately falsified to inflate the apparent scale of ongoing unrest.

Type 3: Content manipulation and composite images

Characteristics: Adding, removing, or altering elements within images.

Intent: Create false evidence or alter the meaning of authentic scenes.

Detection Challenge: High – requires technical analysis and expertise.

Examples: Adding smoke to explosion photos, inserting people into crowds, removing identifying features

Cautionary Example: The 2013 Boston Marathon bombing investigation showed how internet users mistakenly identified innocent people as suspects, aided by manipulated images that highlighted certain individuals while obscuring others.

Type 4: Complete fabrication and AI generation

Characteristics: Entirely artificial images created using AI or advanced compositing.

Intent: Create evidence of events that never occurred.

Detection Challenge: Extremely High – requires specialized tools and expertise.

Examples: Deepfakes, AI-generated faces, synthetic satellite imagery

Emerging Threat: Services like ThisPersonDoesNotExist.com can generate photorealistic faces of non-existent people, making it trivial to create fake social media profiles with convincing profile pictures.

Technical detection methods

Error Level Analysis (ELA): Exposing compression inconsistencies

Error Level Analysis examines how JPEG compression affects different parts of an image. When an edited image is resaved, the altered regions acquire a different compression history and therefore show different error levels than the untouched content.

How It Works:

  1. The image is resaved at a specific JPEG quality level
  2. The difference between the original and resaved version is calculated
  3. Areas with different compression histories appear as bright regions
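A minimal sketch of this procedure using the Pillow library (the filenames are hypothetical; any local JPEG works):

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    """Resave the image at a known JPEG quality and amplify the difference."""
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")

    # Pixel-wise difference between the original and the recompressed copy;
    # regions with a different compression history stand out.
    diff = ImageChops.difference(original, resaved)

    # The raw difference is faint, so brighten it for visual inspection.
    return ImageEnhance.Brightness(diff).enhance(scale)

error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

Interpret the bright patches cautiously: high-contrast edges and textured areas also brighten under ELA, so the result is a pointer to regions worth examining, not proof of manipulation.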

Best Tools:

  • FotoForensics: Free online ELA analysis with detailed tutorials
  • JPEGSnoop: Desktop software for advanced JPEG analysis
  • Ghiro: Open-source automated image forensics

Noise Analysis: Reading the digital fingerprints

Every digital camera sensor produces a unique pattern of electronic noise. Consistent noise patterns throughout an image suggest authenticity, while inconsistent patterns indicate manipulation.

Noise pattern analysis:

  • Uniform Distribution: Authentic images show consistent noise across the entire frame.
  • Irregular Patterns: Copied or pasted elements often show different noise characteristics.
  • Sensor Fingerprints: Advanced analysis can identify specific camera models.

Tools for Noise Analysis:

  • Amped FIVE: Professional forensic software with noise analysis capabilities.
  • PhotoME: Free metadata and noise pattern analyzer.
  • Python Libraries: scikit-image and OpenCV provide programming interfaces for custom noise analysis.
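As a rough illustration of the OpenCV route, the sketch below estimates per-block noise strength by subtracting a median-filtered copy of the image. The block grid and the filename are arbitrary assumptions, and recompression alone can also shift these estimates, so outliers only flag blocks for closer manual review:

```python
import cv2
import numpy as np

def block_noise_map(path, blocks=8):
    """Estimate noise strength per image block; strong outliers can hint at splicing."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Residual = image minus a denoised copy; what remains is mostly sensor noise.
    residual = gray.astype(np.float32) - cv2.medianBlur(gray, 3).astype(np.float32)

    h, w = residual.shape
    bh, bw = h // blocks, w // blocks
    stds = [
        residual[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].std()
        for i in range(blocks) for j in range(blocks)
    ]
    return np.array(stds).reshape(blocks, blocks)

print(block_noise_map("suspect.jpg").round(2))  # outlier blocks merit a closer look
```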

Case Application: Intelligence agencies use noise analysis to verify satellite imagery authenticity. Inconsistent noise patterns can reveal where legitimate satellite photos have been altered to hide military installations or activities.

Lighting and shadow analysis

Physical laws governing light and shadow remain constant, making lighting analysis one of the most reliable detection methods.

Key indicators:

  • Shadow Direction: All shadows in a scene should point away from the light source.
  • Shadow Length: Shadows should be proportionally consistent across the image.
  • Lighting Temperature: Color temperature should match the supposed light source.
  • Reflection Patterns: Reflective surfaces should show consistent environmental lighting.

Analysis Tools:

  • SunCalc: Calculate sun position and shadow angles for any location and time (the sketch after this list shows the underlying geometry).
  • Adobe After Effects: Professional software with advanced lighting analysis tools.
  • Blender: Free 3D software capable of lighting simulation and verification.
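The geometry behind shadow-length checks is basic trigonometry. A minimal sketch, assuming you have the sun’s elevation for the claimed time and place from a tool like SunCalc:

```python
import math

def expected_shadow_length(object_height_m, sun_elevation_deg):
    """Expected shadow length: height divided by the tangent of the sun's elevation."""
    return object_height_m / math.tan(math.radians(sun_elevation_deg))

# A 1.8 m person under a 35-degree sun should cast a shadow of roughly 2.6 m;
# a large mismatch with shadows measured in the image warrants scrutiny.
print(round(expected_shadow_length(1.8, 35), 2))
```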

Pixel-level analysis

Advanced manipulation leaves traces at the pixel level that human eyes cannot detect but algorithms can identify.

Clone Detection: Identifies areas where pixels have been copied from elsewhere in the image (a minimal sketch follows below).

Resampling Detection: Finds evidence of resizing or rotating operations.

Double JPEG Compression: Detects multiple save cycles that suggest editing.

Content-Aware Fill Detection: Identifies areas where missing content has been algorithmically filled.
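A rough clone-detection sketch using ORB keypoint self-matching in OpenCV. Dedicated forensic tools use more robust block-based methods, and the distance thresholds here are arbitrary assumptions; repetitive natural textures will produce false positives:

```python
import cv2
import numpy as np

def copy_move_candidates(path, min_pixel_distance=40, max_descriptor_distance=10):
    """Flag pairs of near-identical ORB keypoints that sit far apart in the image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(img, None)

    # Match the image against itself; k=2 lets us skip the trivial self-match.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = []
    for match_pair in matcher.knnMatch(descriptors, descriptors, k=2):
        if len(match_pair) < 2:
            continue
        best, second = match_pair
        m = second if best.queryIdx == best.trainIdx else best
        p1 = np.array(keypoints[m.queryIdx].pt)
        p2 = np.array(keypoints[m.trainIdx].pt)
        # Nearly identical descriptors at well-separated locations suggest cloning.
        if m.distance < max_descriptor_distance and np.linalg.norm(p1 - p2) > min_pixel_distance:
            pairs.append((tuple(p1.astype(int)), tuple(p2.astype(int))))
    return pairs

for src, dst in copy_move_candidates("suspect.jpg"):
    print(src, "<->", dst)
```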

Professional tools such as Amped FIVE (noted above) cover many of these checks; open-source alternatives include:

  • OpenCV: Computer vision library with forensic applications.
  • ImageIO: Python library for reading and writing images in many formats.
  • GIMP: Free alternative to Photoshop with some forensic capabilities.

Understanding deepfake technology

Deepfakes are synthetic media, including images, videos, or audio, created or changed with artificial intelligence techniques to mimic real people convincingly. They pose serious challenges to information accuracy, digital forensics, and online trust. Key methods include:

Generative Adversarial Networks (GANs) 

Most deepfakes rely on GANs, which are machine-learning frameworks made up of two neural networks that compete with each other. The generator produces increasingly realistic fake content, while the discriminator tries to detect the forgery. Through this back-and-forth, the generator improves until its output can fool even sophisticated detection systems.
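A toy, single-step sketch of that adversarial game in PyTorch, using small vectors as stand-ins for images. This is purely illustrative of the two-network dynamic, not a working deepfake pipeline:

```python
import torch
from torch import nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))              # generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()) # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(8, 32)      # stand-in for a batch of real samples
fake = G(torch.randn(8, 16))   # generator forges a batch from random noise

# The discriminator learns to label real samples 1 and fakes 0...
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# ...while the generator learns to make the discriminator call its fakes real.
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```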

Face swapping 

This technique digitally replaces one person’s face with another in images or videos. Modern face-swapping tools can keep the lighting, head movement, and skin tone consistent, making the composite look real even under close inspection.

Face reenactment

Instead of swapping out an entire face, reenactment changes the expressions, eye movements, or mouth shapes of the original subject. This allows an actor’s performance to drive the target’s facial movements, creating realistic but fabricated speech or emotions.

Speech synthesis 

Using deep-learning voice models, attackers can clone a person’s voice to produce convincing audio that matches fake visuals or standalone phone calls. When combined with facial manipulation, this can create persuasive but completely false audiovisual content.

Technical deepfake detection methods

Identifying deepfakes requires a mix of forensic analysis and machine-learning methods. The following techniques focus on specific signals that reveal synthetic media:

Temporal inconsistencies
Video deepfakes are generated frame by frame, which can introduce subtle differences over time. Analysts search for unnatural transitions, such as inconsistent lighting, mismatched shadows, or slight changes in facial geometry between consecutive frames. These details are easy for the human eye to miss, but automated detection tools can identify them.
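A simple starting point, assuming OpenCV and a hypothetical local video file: score each frame by its mean difference from the previous one and flag spikes. Ordinary scene cuts also trigger this, so flagged frames still need manual review:

```python
import cv2

def frame_jump_scores(path, threshold=12.0):
    """Flag frames whose mean absolute difference from the previous frame spikes."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    flagged, index = [], 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        score = cv2.absdiff(frame, prev).mean()
        if score > threshold:
            flagged.append((index, round(float(score), 1)))
        prev = frame
    cap.release()
    return flagged

print(frame_jump_scores("suspect_clip.mp4"))
```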

Biological impossibilities
Human physiology imposes natural limits that are hard to duplicate perfectly. Some examples include:

  • Blinking and eye movement: Deepfake subjects may blink less often or with irregular timing.
  • Head and neck movement: Certain angles or sudden changes in posture can go beyond normal human biomechanics.
  • Skin and aging signs: Pores, wrinkles, or hair consistency may not realistically change throughout a sequence.

Artifact detection
Synthetic images often have digital artifacts, including pixel-level irregularities, color mismatches, or boundary distortions caused by the generative process. Forensic tools examine compression patterns, chromatic aberrations, and noise distributions to find these irregularities, even when they are not visible to the naked eye.

Frequency domain analysis
The creation of deepfakes can leave identifiable marks in the frequency spectrum of an image or video. By converting media into the frequency domain using methods like the Discrete Fourier Transform, analysts can detect abnormal periodic patterns or spectral peaks that differ from those found in real camera captures.
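A minimal sketch using NumPy’s FFT (filename hypothetical): compute the log-magnitude spectrum and inspect it for regular off-centre peaks, ideally alongside spectra from known-real camera captures of similar scenes:

```python
import cv2
import numpy as np

def log_magnitude_spectrum(path):
    """Shifted 2-D log-magnitude spectrum of a grayscale image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum))

# Scale to 0-255 for viewing; regular off-centre peaks that real camera
# captures lack can point to generator upsampling artifacts.
spec = log_magnitude_spectrum("suspect.jpg")
spec = cv2.normalize(spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("suspect_spectrum.png", spec)
```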

Specialized detection tools

  • DeeperForensics: Large-scale research dataset and benchmark for deepfake detection.
  • Sensity AI: Commercial deepfake monitoring and detection platform.
  • Microsoft Video Authenticator: Enterprise-level deepfake detection tool.

Social context deepfake detection

Technical detection must be combined with contextual analysis:

  • Source Analysis: Who posted the content and what’s their track record?
  • Timing Analysis: Does the content appear at a suspiciously convenient time?
  • Behavioral Analysis: Do the person’s actions match their known behavior patterns?
  • Cross-Reference Verification: Can the claimed event be verified through other sources?

Case Study: In 2022, a deepfake video of Ukrainian President Zelensky calling for surrender was quickly debunked not just through technical analysis, but because the message contradicted all of Zelensky’s public statements and behavior patterns during the conflict.

Platform-specific verification techniques

Different social media platforms present unique challenges and opportunities for image verification.

Facebook and Meta Platforms

Challenges:

  • Aggressive compression that removes metadata
  • Multiple upload paths that affect image quality
  • Privacy settings that limit access to original sources

Verification Strategies:

  • Check the “About” section of posting accounts
  • Look for original higher-quality versions on Instagram
  • Cross-reference with Facebook’s Third-Party Fact-Checking program

Tools:

  • Meta Business Suite: Access to enhanced analytics and verification features
  • Hoaxy: Tracks how information spreads on social media

Building professional verification workflows

Phase 1: Initial assessment

  • Screenshot the image with source information
  • Perform basic reverse image searches across at least three engines
  • Check publication metadata if available
  • Assess the claim being made about the image

Phase 2: Technical analysis

  • Run ELA analysis using FotoForensics
  • Extract and analyze any available EXIF data (see the sketch after this list)
  • Perform lighting and shadow consistency checks
  • Look for obvious signs of manipulation
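For the EXIF step, a minimal sketch using Pillow (filename hypothetical). Note that most social platforms strip metadata on upload, so an empty result is common and not in itself suspicious:

```python
from PIL import Image, ExifTags

def read_exif(path):
    """Return whatever EXIF metadata survives, keyed by human-readable tag names."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for key, value in read_exif("suspect.jpg").items():
    print(f"{key}: {value}")
```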

Phase 3: Contextual verification

  • Research the source account or website
  • Look for corroborating evidence from other sources
  • Check weather and environmental conditions for claimed time/location
  • Verify any people, places, or events shown in the image

Combating large-scale visual misinformation

Identifying coordinated campaigns

  • Pattern recognition: Look for identical or similar images being shared across multiple accounts simultaneously.
  • Bot network detection: Identify non-human sharing patterns and artificial amplification.
  • Source tracking: Trace images back to their first appearances online.
  • Cross-platform analysis: Monitor how false images spread across different social media platforms.

Proactive monitoring systems

  • Hashtag monitoring: Track specific hashtags known for spreading misinformation.
  • Keyword alerts: Set up notifications for claims about breaking news events.
  • Reverse search automation: Use APIs to automatically search for new appearances of known false images (see the sketch after this list).
  • Cross-platform tracking: Monitor how content spreads across different platforms.
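One way to sketch this kind of automation locally is perceptual hashing with the imagehash library. The hash value and label below are hypothetical placeholders for a real database of debunked images:

```python
from PIL import Image
import imagehash

# Hypothetical 64-bit pHash values of images already debunked.
KNOWN_FAKES = {
    imagehash.hex_to_hash("fa5c1e0f93b06ad2"): "pentagon_explosion_fake",
}

def matches_known_fake(path, max_bit_distance=8):
    """Perceptual hashes survive recompression, resizing, and mild edits, so
    re-uploads of a known fake usually land within a few bits of its hash."""
    candidate = imagehash.phash(Image.open(path))
    for fake_hash, label in KNOWN_FAKES.items():
        if candidate - fake_hash <= max_bit_distance:  # Hamming distance in bits
            return label
    return None

print(matches_known_fake("new_upload.jpg"))
```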

Tools for scale operations:

  • Google Alerts: Basic keyword monitoring.
  • Mention: Social media monitoring across platforms.
  • Brand24: Professional social media monitoring.
  • Custom Python Scripts: Automated monitoring using platform APIs.

Conclusion

Image verification is a mix of technology and investigation. It needs both technical skills and analytical thinking. With artificial intelligence making it easier to create realistic fakes, the skills you’ve learned in this tutorial are becoming more important.

To verify images successfully, you should combine several methods. Technical analysis can reveal the digital signs of manipulation, while contextual investigation brings in the human insight that machines can’t replicate. Neither method alone is enough; together, they form a robust approach to uncovering visual truth.

Keep in mind that verification is an ongoing process. Your first analysis might point to one conclusion, but more evidence can change what you think. Stay open-minded and be ready to update your findings as new details come in.

The importance of image verification keeps growing. In conflicts, elections, legal cases, and public health issues, your ability to tell truth from falsehood can have serious effects. This duty requires not just technical skills but also ethical judgment and integrity.

Next steps

In Tutorial 4, we will explore search engine fundamentals, diving deep into the different types of search engines available and how to use them effectively for OSINT investigations. You will learn about specialized search engines beyond Google, understand how search algorithms affect your results, and develop advanced search strategies that most investigators never master.

The image verification skills you’ve learned will complement your search capabilities, creating a comprehensive toolkit for information discovery and verification.

Further reading