Generative AI is eroding trust in visual media. FocalProof is building the infrastructure to cryptographically verify that video content is authentic — from capture to consumption.
After viral claims of his death, Israeli PM Netanyahu posted a video from a Jerusalem café. Instead of ending the speculation, social media users and the AI chatbot Grok labeled the verified footage a deepfake, and the conspiracy grew.
A fabricated image of a Pentagon explosion went viral on Twitter. Within minutes the S&P 500 dipped and the Dow dropped 80 points — the first documented case of an AI-generated image directly affecting markets.
A finance worker at global engineering firm Arup joined a video conference with the company's CFO and colleagues. Every other participant on the call was an AI-generated deepfake. The employee authorized 15 transfers before the fraud was discovered.
When people can't verify what they see, the systems that depend on visual evidence begin to break down.
Video evidence of police misconduct, election irregularities, or human rights abuses can now be dismissed as AI-generated. The footage that democratic accountability depends on is losing its authority.
The existence of deepfakes gives bad actors a ready-made defense: "that video is fake." Misinformation researchers call this the liar's dividend and warn it is as damaging as the fakes themselves: it lets people reject authentic evidence at will.
Studies show that constant exposure to content that may or may not be real leads to disengagement — not better judgment. People stop trying to evaluate information at all, weakening the information environment for everyone.
Markets have already moved on a single fake image. Identity verification is increasingly vulnerable to deepfakes. Voice cloning, video impersonation, and document forgery are becoming industrialized.
Detection is a losing game. AI generation and AI detection are locked in an arms race that detection cannot win.
In the Netanyahu case, an AI chatbot declared authentic, verified video "100% deepfake" — amplifying the exact conspiracy the footage was meant to dispel. This pattern will repeat: as generative AI improves, detection tools will produce more false positives, and each false positive erodes trust further.
The answer isn't guessing what's fake. It's proving what's real.
FocalProof verifies that video was captured on a specific physical device — and that proof stays with the content no matter where it goes.
Device-level attestation. Cryptographic proof that content originated from a real, specific device — not generated by software.
Survives compression and re-encoding. Verification persists through the processing that platforms like Instagram, TikTok, and WhatsApp apply on upload.
Resilient to light editing. Cropping, color correction, and standard post-production don't break the chain of authenticity.
Cross-platform verification. Anyone can verify a piece of content regardless of where it was shared — no special software required.
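The features above hinge on two operations: signing content at the moment of capture with a key bound to the device, and letting anyone verify that signature later. FocalProof has not published its protocol, so the following is an illustrative sketch only. The device key, the HMAC construction, and the exact content hash are all simplifying assumptions: a production system would use an asymmetric key pair in secure hardware (e.g. an ECDSA key in a secure enclave) and a perceptual digest robust to re-encoding rather than SHA-256.

```python
import hashlib
import hmac
import json

# Assumption: a secret provisioned into the device at manufacture.
# A real system would use an asymmetric key pair in a secure enclave,
# so verifiers never need the signing secret.
DEVICE_KEY = b"device-embedded-secret"

def attest_capture(frames: bytes, metadata: dict) -> dict:
    """Produce a capture attestation binding the content to this device."""
    digest = hashlib.sha256(frames).hexdigest()
    payload = json.dumps({"content_digest": digest, **metadata}, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_capture(frames: bytes, attestation: dict) -> bool:
    """Check the signature, then check the frames match the attested digest."""
    expected = hmac.new(DEVICE_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["signature"]):
        return False  # attestation was forged or tampered with
    digest = hashlib.sha256(frames).hexdigest()
    return json.loads(attestation["payload"])["content_digest"] == digest

video = b"\x00\x01raw-frame-bytes"
att = attest_capture(video, {"device_id": "cam-01", "ts": 1700000000})
print(verify_capture(video, att))         # True: untouched content verifies
print(verify_capture(video + b"x", att))  # False: any alteration breaks it
```

Note the deliberate limitation of this sketch: an exact SHA-256 digest breaks under the compression and re-encoding that platforms apply on upload. Surviving those transformations, as the feature list claims, would require a robust or perceptual content digest in place of the exact hash, with the same signing flow around it.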
FocalProof serves any context where knowing a video is real has material consequences.
Verify that content was captured by a real person on a real device
Prove that reporting footage is authentic before publication
Ensure surveillance and monitoring footage hasn't been tampered with
Authenticate official communications and evidentiary material
Verify identity claims, insurance evidence, and financial documentation
If you're an investor, platform, or organization exploring content authenticity — let's talk.
Get in Touch