The immutability of documentation tech matters more in a world with AI.
The cameras used to document "news" will need to be watermarked, fingerprinted and authenticated, like what Canon and Nikon are already doing (and which AFP has already adopted).
It may have seemed gimmicky at first, but in a year or two, you'll probably only be able to trust visuals from companies that do this (wire agencies like AFP, AP and Reuters are heavily disincentivised to create fake news anyway but that's another topic).
At a certain level, I imagine social media apps will also encourage direct camera-to-post for documentation/videos of reality, since this will be the only end-to-end method to verify an image was created unaltered. I can imagine a world where, if you film a protest through the Instagram app, you'd get some kind of "this is real" badge on it, whereas if you upload a video, it gets treated as "could be AI" like 99% of all future content.
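The capture-time signing idea above can be sketched in a few lines. This is a hypothetical toy, not Canon's or Nikon's actual scheme: real systems (e.g. C2PA-style provenance) use asymmetric signatures from a key in the camera's secure element, but a keyed hash keeps the sketch self-contained.

```python
import hashlib
import hmac

# Hypothetical stand-in for a secret burned into the camera's secure element.
# Real schemes use per-device asymmetric keys so verifiers never hold the secret.
CAMERA_SECRET = b"per-device-secret"

def sign_capture(image_bytes: bytes) -> str:
    """Camera signs the raw sensor output at capture time."""
    return hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Platform checks the bytes were not altered since capture."""
    expected = hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"raw sensor data"
sig = sign_capture(photo)
print(verify_capture(photo, sig))             # True: bytes match the capture
print(verify_capture(photo + b"edit", sig))   # False: any alteration breaks it
```

The direct camera-to-post flow is attractive precisely because any edit, however small, invalidates the signature, so "unaltered since capture" is the only claim the badge needs to make.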
The problem with this approach is that it is easily bypassed. Simply point your camera at a high-quality monitor playing an AI-generated video, and there you go: an authenticated AI video. In the future, video evidence is going to be as convincing as it was for 99.9999...% of human history, which is to say, not at all. We survived without it in the past. We'll survive without it in the future.
I doubt it will be that easy to bypass. A fake would still have to withstand pixel-level analysis on the level of methods that already detect tampering in regular video. For one thing, that will have to be a very high quality monitor indeed to leave no detectable trace of e.g. moire patterns.
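The moiré point can be illustrated with a toy frequency-domain check. This is a simplified sketch, not a production forensic method: a re-photographed screen tends to add periodic pixel-grid components that show up as off-center peaks in the image's spectrum, while a natural smooth scene concentrates its energy at low frequencies.

```python
import numpy as np

def periodic_energy_ratio(img: np.ndarray) -> float:
    """Fraction of spectral energy outside a small low-frequency core.
    Periodic screen-grid/moire artifacts add strong off-center peaks,
    pushing this ratio up relative to a smooth natural image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    core = spectrum[cy - 8:cy + 8, cx - 8:cx + 8].sum()
    return 1.0 - core / spectrum.sum()

x = np.linspace(0, 1, 128)
natural = np.outer(x, x)                        # smooth gradient: low-frequency only
grid = 0.2 * np.sin(2 * np.pi * 30 * x)         # 30-cycle "pixel grid" pattern
rephotographed = natural + np.outer(np.ones(128), grid)

print(periodic_energy_ratio(natural))           # small
print(periodic_energy_ratio(rephotographed))    # noticeably larger
```

A real detector would be far more involved (demosaicing traces, noise fingerprints, rolling-shutter interactions), but this is the basic reason a monitor has to be very good indeed to leave no spectral trace.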
A fake doesn’t need to be perfect to be effective; it just needs to fool enough people. Most posts won’t (and couldn’t ever) be scrutinised at that level.
Even if a lot of people just lap up whatever tiktok feeds them, it still matters that people who care can get actual evidence from the real world that eventually filters out to public consciousness. It will have indirect influence, yes, but it'll still be a lot better than being fully post-truth.
We have been able to manipulate legal documents for 100s of years.
We have been able to manipulate images for over 100 years.
We have been able to manipulate images on any computer with a few hours of training for 30+ years.
We have been able to manipulate videos with training for 20+ years.
It is an order of magnitude easier now (likely as easy as documents have been to manipulate for 30-ish years). However, this is not a new problem; courts have always had to deal with manipulated evidence.
Not just an order of magnitude easier, many orders of magnitude. We're going from hours of painstaking work done by paid professionals to virtually instant fakes, as many as you want.
Interestingly, I think Apple has inadvertently positioned themselves very well to be able to authenticate various activity as being done by an actual human. What if anything they decide to do with that capability remains to be seen.
I think it’s already irrelevant: cryptographic proofs of video evidence are difficult to communicate to audiences, while watermarks will be learned as "trusted" by AI models and injected into AI videos anyway. Also, between the lens and your eyeball there is usually a pipeline of editing, so either every layer cryptographically signs its modifications plus the previous layer's output, or you stack watermarks. But ultimately the original problem remains: how do you communicate the validity of the cryptographic chain to the audience?
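The "every layer signs the modifications plus the previous layer" idea can be sketched as a hash chain. This is a hypothetical toy: each pipeline stage commits to the edit it applied, the resulting bytes, and the previous stage's signature, so tampering with any intermediate step breaks everything downstream. (A real provenance scheme would use per-stage asymmetric signatures.)

```python
import hashlib

def link(prev_sig: str, edit_desc: str, payload: bytes) -> str:
    """One chain link: commit to the previous signature, the edit
    description, and the bytes produced by this stage."""
    h = hashlib.sha256()
    h.update(prev_sig.encode())
    h.update(edit_desc.encode())
    h.update(payload)
    return h.hexdigest()

# Camera origin, then two hypothetical editing layers.
raw = b"raw sensor frame"
sig0 = link("", "capture", raw)
cropped = raw + b"|cropped"
sig1 = link(sig0, "crop:0,0,1280,720", cropped)
graded = cropped + b"|graded"
sig2 = link(sig1, "contrast:+10", graded)

def verify_chain(stages, expected_final: str) -> bool:
    """Replay every (edit, payload) pair and compare the final signature."""
    sig = ""
    for edit_desc, payload in stages:
        sig = link(sig, edit_desc, payload)
    return sig == expected_final

stages = [("capture", raw),
          ("crop:0,0,1280,720", cropped),
          ("contrast:+10", graded)]
print(verify_chain(stages, sig2))   # True: intact chain
# Altering any payload or edit description yields a different final signature.
```

The mechanism itself is straightforward; as the comment says, the hard part is getting an audience to care about, or even understand, what a valid chain proves.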
Most users don't care, but in theory a newspaper could use this tech to verify certain camera images and their readers could just trust that they've vetted things.
In practice, ordinary users don't care much about mainstream media anymore.
In theory this is where zero-knowledge proofs could come in. They would let you apply transforms to the video (crop, contrast, resize, etc.) and prove exactly which transform was applied. However, it's still computationally expensive.
To whom? I can imagine starting wars with fake videos.
It's hard to imagine someone kvetching about not being able to sideload apps to their phone reaching that point of significance. I don't mean to completely dismiss very real concerns about what people can and can't do with their purchases, but OTOH war involves actual people actually dying, and manipulating media is a fantastic way to start one.
Removing methods to circumvent the monitoring and control of information makes it easier for a bad actor to take advantage of these tools. Yes, it's nice for the good guys to be able to keep their code secure, but do you want a dictator to be able to do that?
By the time the video reaches the end user (i.e. on TikTok and the like), it will have been re-compressed, edited, meme-ed, and voiced over a dozen times. So I'm not sure how you preserve trust in that chain.
Also, one thing HNers get fundamentally wrong is the assumption that anybody cares about trust/authenticity. And I don't see what's so special about photo/video.
One of the most common forms of submissions on Reddit/Twitter is an image with text, or a screenshot of a tweet, or a screenshot of a headline that makes a claim, and everyone takes it dead seriously.
Almost nobody is going "hmm let me look this up first to see if it even exists or accurately represents the facts".
So if all you need is an image of text for people to believe it, what does it even matter if you have this sophisticated system where you require photos to be signed by camera hardware or whatever? You aren't even putting a dent in how bullshit spreads.
I imagine a new type of bluetick would emerge. There will always be those who can't distinguish between a tick emoji next to a username and the actual thing, but that's a UX problem. Something shot and verified on-app could get a special, clickable tick on it when it's shared.
This reduces the pool of possible bad actors to just one: the platform itself.
In any case, the audience will have to learn new ways to "trust", and tech alone won't be the solution. But I have less hope in people and more hope in new social contracts.
I think LIDAR sensors would be useful to verify depth information in an image, on a side note.
You don't; the only reliable source will be the one that has signed the content. It basically takes us back to the times when the only footage available was curated and broadcast by TV.
Yeah, but I don't think Reuters, AP or AFP are anywhere near the top 1000 most popular accounts on tiktok. So they can sign anything they want, won't affect the average tiktok user.
I don't think this would accomplish anything. For one thing, quite a bit of misinformation these days comes from official government sources that can just compel the manufacturers to turn over authentic signing keys. Remember that Trump just posted an AI-generated video of himself shilling medbeds; when it was pointed out as AI-generated, he deleted it. If Truth Social checked the cryptographic signature, he'd order his staff to sign it. They wouldn't dare say no.
The next flaw is that cameras are happy to record screens playing AI-generated videos and mark them as authentic. Perhaps you can tell today because the screen pixels aren't perfectly 1:1 mapped to the image sensor pixels, but as soon as elections depend on being able to do that, those screens will exist.
People are saying to add LIDAR to prevent this "record the screen" hack, but a mirror over the LIDAR sensor and me sitting at a desk motionless looks to LIDAR exactly like the world leader I'm deepfaking sitting motionless at a desk. People are not using AI to generate amazing action shots.
At the end of the day, people will have to take some personal responsibility. Migrants probably aren't killing and eating pets. Pets taste terrible and grocery stores that you can just walk into and steal whatever you want exist. There isn't a bed that can cure any disease. If someone says they do, even a world leader, test them out on something non-critical. Break off a fingernail and see if the magic bed can regrow it overnight. If not, maybe stick to traditional cancer treatments until there is some clearer evidence.
It’s already possible. See the Stagecraft studio they built for the production of TV series The Mandalorian.
> shooting the series on a stage surrounded by massive LED walls displaying dynamic digital sets, with the ability to react to and manipulate this digital content in real time during live production
> The StageCraft process involves shooting live-action actors and sets surrounded by large, very high-definition LED video walls. These walls display computer-generated imagery backdrops, once traditionally composited primarily in post-production after shooting with chroma key screens. These facilities are known as "volumes". When shooting, the production team is able to realign the background instantly based on moving camera positions. The entire CGI background can be manipulated in real-time.