Attempts at a Technological Solution To Disinformation Will Do More Harm Than Good
There is widespread concern today about the use of generative AI and deepfakes to create fake videos that can manipulate and deceive people. Many are asking: is there any way that technology can confidently establish whether an image or video has been altered? A number of techniques have been proposed. They include -- most prominently -- a system of "content authentication" backed by a number of big tech firms and discussed in the Bipartisan House Task Force Report on AI released this month. The ACLU doubts that these techniques will be effective and has serious concerns about their potential harmful effects.
There are a variety of techniques for detecting altered images, such as statistical analyses of discontinuities in the brightness, tone and other elements of pixels. The problem is that any tool smart enough to identify the telltale features of a fake video can probably also be used to erase those features and make a better fake. The result is an arms race between fakers and fake detectors. Some have predicted that efforts to identify AI-generated material by analyzing its content are doomed. This has led to a number of efforts to prove the authenticity of digital media another way: with cryptography. In particular, many of these proposals are built on a concept called "digital signatures."
Using Cryptography To Prove Authenticity
If you take a digital file -- a photograph, video, book or other piece of data -- and digitally process or "sign" it with a secret cryptographic "key," the output is a very large number that represents a digital signature. If you change a single bit in the file, the digital signature is invalidated. That lets you prove that two documents are identical -- or not -- down to every last 1 or 0, even in a file that has billions of bits, such as a video.
Under what is known as public key cryptography, the secret "signing key" used to sign the file has a mathematically linked "verification key" that the key's owner (such as a camera manufacturer) publishes. That verification key only matches signatures made with the corresponding signing key, so if the signature is valid, one knows with certainty that the file was signed with the camera manufacturer's signing key and that not a single bit has been changed.
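To make the idea concrete, here is a minimal sketch in Python using the widely used "cryptography" library. It illustrates the general sign-and-verify technique only; the file name is hypothetical, and this is not the scheme any particular camera vendor actually uses.

```python
# Minimal sketch: signing a file with a secret key and verifying it with the
# published verification key. Illustrative only; "clip.mp4" is hypothetical.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The signer keeps the signing key secret and publishes the verification key.
signing_key = ed25519.Ed25519PrivateKey.generate()
verification_key = signing_key.public_key()

video_bytes = open("clip.mp4", "rb").read()
signature = signing_key.sign(video_bytes)  # the "very large number"

# Anyone holding the published verification key can check the signature.
verification_key.verify(signature, video_bytes)  # no exception: file unchanged

# Flipping even a single bit invalidates the signature.
tampered = bytearray(video_bytes)
tampered[0] ^= 0x01
try:
    verification_key.verify(signature, bytes(tampered))
except InvalidSignature:
    print("Signature invalid: the file was altered after signing.")
```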
Many people have thought that if you can just digitally sign a photo or video when it's taken and store that digital signature somewhere where it can't be lost or erased, then later, you can prove that the imagery hasn't been tampered with. Proponents want to extend these systems to cover editing as well as cameras, so that if someone adjusts an image using a photo or video editor, the file's provenance is retained along with a record of whatever changes were made to the original, provided "secure" software was used to make those changes.
For example, suppose you are standing on a corner and you see a police officer using force against someone. You take out your camera and begin recording. When the video is complete, the file is digitally signed using the secret signing key embedded deep within your camera's chips by its manufacturer. Before posting it online, you use software to edit out a part of the video that identifies you. The manufacturer of the video editing software likewise has an embedded secret key that it uses to record the editing steps that you made, embed them in the file and digitally sign the new file. Later, someone who sees your video online can use the manufacturers' public verification keys to prove that your video came straight from the camera and wasn't altered in any way except for the editing steps you made. If the digital signatures were posted in a nonmodifiable place such as a blockchain, you might also be able to prove that the file was created at least as long ago as the signatures were placed in the public record.
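In rough outline, the chain of signatures proponents describe might look something like the following sketch. The file names, edit log and chaining format here are simplified assumptions for illustration, not the actual format used by any content-authentication standard such as C2PA.

```python
# Simplified sketch of a provenance chain: the camera signs the original
# capture, and the editing tool signs the edited file plus an edit record.
# Illustrative only; file names and the record format are hypothetical.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()   # embedded by camera maker
editor_key = ed25519.Ed25519PrivateKey.generate()   # embedded in editing tool

original = open("raw_clip.mp4", "rb").read()
edited = open("edited_clip.mp4", "rb").read()

# Step 1: the camera signs a hash of the raw capture at recording time.
original_hash = hashlib.sha256(original).digest()
camera_signature = camera_key.sign(original_hash)

# Step 2: the editing tool signs the edited file together with a record of
# the edits and the camera's signature, chaining the two together.
edit_record = json.dumps({
    "original_sha256": original_hash.hex(),
    "camera_signature": camera_signature.hex(),
    "edits": ["blur bystander at 00:12", "trim first 5 seconds"],  # hypothetical
}).encode()
editor_signature = editor_key.sign(edited + edit_record)

# Step 3: a viewer with the published verification keys checks both links,
# without ever needing to see the unedited original.
camera_key.public_key().verify(camera_signature, original_hash)
editor_key.public_key().verify(editor_signature, edited + edit_record)
print("Provenance chain verifies: original capture and edit record both signed.")
```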
Content Authentication Schemes Are Flawed
The ACLU is not convinced by these "content authentication" ideas. In fact, we're worried that such a system could have pernicious effects on freedom.
These schemes suffer from similar flaws. One is that they may amount to a technically enforced oligopoly over journalistic media. In a world where these technologies are standard and expected, any media lacking such a credential would be flagged as "untrusted." These schemes establish a set of cryptographic authorities that get to decide what is "trustworthy" or "authentic."
Imagine that you are a media consumer or newspaper editor in such a world. You receive a piece of media that has been digitally signed by an upstart image editing program that a creative kid wrote at home. How do you know whether you can trust that kid's signature -- that they'll only use it to sign authentic media, and that they'll keep their private signing key secret so that others can't digitally sign fake media with it?
The result is that you end up only trusting tightly controlled legacy platforms operated by the big vendors such as Adobe, Microsoft and Apple.
Furthermore, if "trusted" editing is only doable on cloud apps or devices under the full control of a group such as Adobe, what happens to the privacy of the photographer or editor? If you have a recording of police brutality, for example, you may want to ask the police for their story about what happened before you reveal your media, to determine whether the police will lie. But if you edit your media on a platform controlled by a company that regularly gives in to law enforcement requests, they might well get access to your media before you are willing to release it.
Locking down hardware and software chains may help authenticate some media, but it would not be good for freedom. It would pose a severe threat to who gets to easily share their stories and lived experiences. If you live in a developing country or a low-income neighborhood in the U.S., for example, and don't have or can't afford access to the latest authentication-enabled devices and editing tools, will you find that your video of the authorities carrying out abuses is dismissed as untrusted?
It's not even certain that these schemes would prevent an untrustworthy piece of media from being marked as "trusted." Even a locked-down technology chain can fail against a dedicated adversary. For example:
-- Sensors in the camera could be tricked, for example by spoofing GPS signals to make the "secure" hardware attest that the photography took place in a different location than it did.
-- Secret signing keys could be extracted from "secure" camera hardware. Once the keys are extracted, they can be used to create signatures over data that did not originate with that camera but can still be verified with the corresponding verification key.
-- Editing tools or cloud-based editing platforms could potentially be tricked into signing material that they didn't intend to sign, either by hacks on the services or infrastructure that support those tools or by exploitation of vulnerabilities in the tools themselves.
-- Synthetic data could be laundered through the "analog hole." For example, a malicious actor could generate a fake video, play it back on a high-resolution monitor, point an authentication-capable camera at the screen and hit "record." The video produced by the camera would then carry "authentic" provenance information, even though the scene itself never existed outside of the screen.
-- Cryptographic signature schemes have often proven to be far less secure than people think, often because of implementation problems or how humans interpret the signatures.
Another commonly proposed approach turns the problem around: instead of trying to prove that authentic content is unmodified, prove that modified content has been modified. To do this, these schemes would require every AI image-creation tool to register each "non-authentic" photo and video with a digital signature or a watermark. People could then check whether a photo was created by AI.
There are numerous problems here. People can strip digital signatures, defeat media-comparison checks or remove watermarks by altering parts of the media. They can create a fake photo manually in image editing software or with their own AI tools, something likely to become increasingly easy as the technology is democratized. It's also unclear how you could force every large corporate AI image generator to participate.
A Human Problem, Not a Technology Problem
Ultimately, no digital provenance mechanism will solve the problem of false and misleading content, disinformation or the fact that a certain proportion of the population is deceived by it. Even content that has been formally authenticated can be used to warp perceptions of reality. No such scheme will control how people decide what is filmed or photographed, what media is released, and how it is edited and framed. Choosing focus and framing to highlight the most important parts is the ancient essence of storytelling.
The believability of digital media will most likely continue to rely on the same factors that storytelling always has: the totality of the human circumstances surrounding something. Where did the media come from? Who posted it, or is otherwise presenting it, and when? What is their credibility, and do they have any incentive to falsify it? How fundamentally believable is the content? Is anybody disputing its authenticity? The fact that many people are bad at making such judgments is not a problem that technology can solve.
Photo-editing software has been with us for decades, yet newspapers still print photographs on their front page, and prosecutors and defense counsel still use them in trials, largely because people rely on social factors such as these. It is far from clear that the expansion of democratized software for making fakes from still photos to videos will fundamentally change this dynamic.
Voters hit with deepfakes for the first time (such as a fake President Joe Biden telling people not to vote in a Republican primary) may well fall for such a trick. But they will only encounter such a trick for the first time once. After that, they will begin to adjust. Much of the handwringing about deepfakes fails to account for the fact that people can and will consider the new reality when judging what they see and hear.
If many people continue to be deceived by such tricks in the future, then a better solution would be increased investments in public education and media literacy. No technological scheme will fix the age-old problem of some people falling for propaganda and disinformation or replace the human factors that are the real cure.
Jay Stanley is a senior policy analyst with the ACLU Speech, Privacy, and Technology Project. Daniel Kahn Gillmor is a senior staff technologist for ACLU's Speech, Privacy, and Technology Project. For more than 100 years, the ACLU has worked in courts, legislatures and communities to protect the constitutional rights of all people. With a nationwide network of offices and millions of members and supporters, the ACLU takes on the toughest civil liberties fights in pursuit of liberty and justice for all. To find out more about the ACLU and read features by other Creators Syndicate writers and cartoonists, visit the Creators website at www.creators.com.
Copyright 2024 Creators Syndicate Inc.