NY Politician Alex Bores: Cryptography, Not Human Training, Can Fix the Deepfake Crisis

Weighing in on the growing threat of AI-generated misinformation, a New York politician and former tech executive has argued that the deepfake crisis has a clear technological solution. The answer, he says, lies not in training people to spot fakes but in adopting the same kind of cryptographic verification that secured the early internet and enabled online payments.

The Cryptographic Blueprint from the Internet's Past

Alex Bores, a Democrat running for Congress in Manhattan’s 12th District, made the case on a recent episode of Bloomberg’s Odd Lots podcast. A former data scientist and federal-civilian business lead at Palantir, Bores suggested that the architecture that made online banking trustworthy in the 1990s can be applied today to verify images, video, and audio.

He referenced the widespread adoption of HTTPS and digital certificates, which solved the internet's early security problem for financial transactions. "That was a solvable problem. That basically same technique works for images, video, and for audio," Bores stated, advocating for a similar "trust but verify" model to neutralize the threat of highly realistic deepfakes.

C2PA: The Proposed Digital Certificate for Content

Bores is backing an open technical standard from the Coalition for Content Provenance and Authenticity (C2PA). The standard acts as a tamper-evident digital credential attached to a file, and its metadata can record crucial information (a simplified sketch of the mechanism follows the list below):

  • Origin: Whether the content was captured by a physical camera or generated by an AI tool.
  • Editing Trail: A history of any modifications made to the file.
  • Creator Proof: Cryptographic evidence of who or what produced the media.
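
To make the mechanism concrete, here is a minimal, illustrative Python sketch of the tamper-evident idea behind C2PA-style provenance. It is not the real C2PA format (which uses standardized manifests and X.509 certificate chains), and the function names and fields here are hypothetical; what it demonstrates is the core technique the standard relies on: signing a manifest that binds metadata to a hash of the media bytes, so that any later edit breaks the proof.

    # Illustrative only: NOT the real C2PA format. Shows the core idea of
    # binding provenance metadata to the media's hash with a signature.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_manifest(media_bytes, origin, edit_trail, key):
        """Create a signed manifest binding metadata to the media's hash."""
        manifest = {
            "content_hash": hashlib.sha256(media_bytes).hexdigest(),
            "origin": origin,          # e.g. "camera" or "ai-generated"
            "edit_trail": edit_trail,  # history of modifications
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        return {"manifest": manifest, "signature": key.sign(payload).hex()}

    def verify_manifest(media_bytes, signed, public_key):
        """Return True only if the media and its manifest are both intact."""
        manifest = signed["manifest"]
        # Check 1: do the media bytes still match the recorded hash?
        if hashlib.sha256(media_bytes).hexdigest() != manifest["content_hash"]:
            return False
        # Check 2: was the manifest itself signed by the claimed creator?
        payload = json.dumps(manifest, sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(signed["signature"]), payload)
            return True
        except InvalidSignature:
            return False

    # Usage: a camera signs a photo at capture time; any edit is detectable.
    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    signed = sign_manifest(photo, "camera", [], key)
    assert verify_manifest(photo, signed, key.public_key())             # intact
    assert not verify_manifest(photo + b"!", signed, key.public_key())  # edited

In the real standard, the signing key would chain back to a trusted certificate authority, so a verifier learns not just that the manifest is intact but who vouches for the device or tool that produced it.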

According to Bores, the primary challenge is not the technology itself, but achieving universal adoption. For the system to be effective, attaching this cryptographic proof must become the default option for creators and platforms.

The Adoption Hurdle and a New Standard of Trust

"The challenge is the creator has to attach it and so you need to get to a place where that is the default option," Bores explained. The ultimate goal is to reach a point where if media lacks this verifiable proof, viewers automatically treat it with skepticism.

He drew a direct comparison to the current trust model for websites. "It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect, but you can still produce the images," he added. Just as consumers now instinctively avoid sites without a secure padlock, they should learn to distrust media without proper cryptographic provenance.
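
In code, that padlock-style default might look like the small client-side policy sketched below. It reuses the hypothetical verify_manifest helper from the earlier sketch; the labels are illustrative, not part of any platform's real API.

    from typing import Optional

    def trust_label(media_bytes: bytes, signed: Optional[dict], public_key) -> str:
        # Hypothetical viewer-side policy, analogous to the browser padlock:
        # missing proof is not proof of fakery, but it downgrades trust by default.
        if signed is None:
            return "unverified"  # like a banking site loading over plain HTTP
        if verify_manifest(media_bytes, signed, public_key):
            return "verified"    # hash and signature both check out
        return "tampered"        # proof present but broken: content was altered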

Bores' argument shifts the focus from the near-impossible task of training every internet user to be a deepfake detective, to building a technological infrastructure that bakes authenticity into digital content from its creation. The success of this proposal hinges on whether industry players can coalesce around standards like C2PA to create a new, verifiable layer of trust for the AI age.