First, do a left-right comparison on the link that Aurornis posted [1]. Notice the extra fat in the chin, the elongated ear, the enlarged mouth and nose, the frizzier hair, the lower shirt cut.
You hate it. You think, intellectually, that this shouldn't work, and that surely no one would have the gall to do something this brazen without fear of being caught and shamed. And then you think, well, once the truth is revealed there will be some introspection and self-reflection about being tricked, and maybe being tricked here means being tricked elsewhere.
Well someone, in an emotionless room, min-maxed the outcomes and computed that the expected value of such an action was positive.
And here we are.
https://apnews.com/article/fact-check-levy-armstrong-crying-...
There is no need to min-max. There is never large-scale introspection after a media correction. Most people will never see the correction and will still believe what they saw first years later, if not for the rest of their lives.
Or they do hear about it, maybe a few days or a week later, but they dismiss it because it's old news at that point and not worth thinking about.
Truth is, most people aren't really thinking most of the time. They're reacting in the moment and maybe forming a rationale for their action after the fact.
Can I opt out of using my taxes to create memes? If Trump wants to use his cryptocurrency to shill for Truth Social, I suppose I can't really complain. But why do I have to pay for the department of meme wars?
I think we're never going to have robust AI detection, and current models are as bad as they'll ever be. Instead, we really need cameras that can sign images, to show that these are the bits that came off this hardware unedited, in a way professional news outlets can verify.
But that's going to cost money, to make and market all these new cameras, and I just don't know how we incentivize or pay for that, so we're left unable to trust any images or video in the near future. I can only think of technical solutions, not the social changes that need to happen before the tech is wanted and adopted.
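To make that concrete, here is a minimal sketch of the sign-at-capture, verify-at-publication idea, using Ed25519 from the Python cryptography package. The key handling is a stand-in: a real camera would keep the private key in a secure element and publish the public key through a manufacturer certificate.

    # Minimal sketch of camera-side signing and outlet-side verification.
    # Uses the third-party 'cryptography' package; the in-memory keypair
    # below stands in for a key sealed inside the camera's secure element.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # provisioned at the factory
    device_pub = device_key.public_key()       # published by the manufacturer

    def sign_capture(raw_image: bytes) -> bytes:
        # Runs inside the camera at capture time: sign the sensor bits.
        return device_key.sign(raw_image)

    def verify_capture(raw_image: bytes, signature: bytes) -> bool:
        # Runs at the news outlet: accept only bits the device produced.
        try:
            device_pub.verify(signature, raw_image)
            return True
        except InvalidSignature:
            return False

    image = b"...raw sensor bytes..."
    sig = sign_capture(image)
    print(verify_capture(image, sig))         # True: untouched bits verify
    print(verify_capture(image + b"x", sig))  # False: any edit breaks it

Note that a scheme like this only attests the bits, not the scene in front of the lens, which is exactly the photo-of-a-photo gap raised downthread.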
https://authenticity.sony.net/camera/en-us/index.html
https://www.sony.eu/presscentre/sony-launches-camera-verify-...
Ideally it'd become an open standard supported by all manufacturers, which is what they're trying to do:
https://c2pa.org/
Thank you, this is fantastic to know! I think we have to normalize requiring this or similar standards for news; it will go a long way.
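For a rough sense of what these provenance standards attach to an image: a signed manifest binds a hash of the content to claims about its capture. The sketch below is illustrative only; real C2PA manifests are embedded JUMBF boxes carrying COSE signatures and certificate chains, not JSON, and the HMAC here merely stands in for asymmetric signing.

    # Illustrative shape of a C2PA-style provenance check, NOT the real
    # C2PA format. The HMAC stands in for a real asymmetric signature.
    import hashlib
    import hmac

    SIGNING_KEY = b"demo-key"  # stand-in for a device- or CA-backed key

    def make_manifest(image: bytes) -> dict:
        claim = {
            "claim_generator": "ExampleCam/1.0",  # hypothetical device
            "content_sha256": hashlib.sha256(image).hexdigest(),
            "assertion": "c2pa.actions: captured",
        }
        body = repr(sorted(claim.items())).encode()
        claim["signature"] = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
        return claim

    def verify(image: bytes, manifest: dict) -> bool:
        body = repr(sorted(
            (k, v) for k, v in manifest.items() if k != "signature"
        )).encode()
        sig_ok = hmac.compare_digest(
            manifest["signature"],
            hmac.new(SIGNING_KEY, body, "sha256").hexdigest(),
        )
        # An edited image no longer matches the hash the manifest signed.
        hash_ok = manifest["content_sha256"] == hashlib.sha256(image).hexdigest()
        return sig_ok and hash_ok

    img = b"raw image bytes"
    m = make_manifest(img)
    print(verify(img, m))         # True
    print(verify(img + b"!", m))  # False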
Ideally we would have a similar attestation from most people's cameras (on their smartphones), but that's a much harder problem, especially if third-party camera apps are to be supported.
OK, but then the conversation switches from "was this actually taken by a camera" to "is this a photo of a printout", and we're not really any further along in being able to establish trust in what we're seeing. My point is that the goalposts will always get moved: unless we see something in person these days, we can't really trust it.
Then you can have a signed picture of a screen showing an AI image. And the government will have a secret version of OpenAI that has a camera signature.
This sounds like a good idea on its face, but it will have the effect of both legitimizing altered photos and delegitimizing photos of actual events.
You will need camera DRM with a hardware security module all the way down to the image sensor, with the hardware in the hands of the attacker. Even when that chain is unbroken, you'll need to detect all kinds of tricks where the incoming photons themselves are altered. In the simplest case: a photo of a photo.
If HDCP has taught us anything, it's that vendors of consumer products cannot implement such a secure chain at all; it shipped with ridiculous security vulnerabilities for years. HDCP has effectively been abandoned and become mostly irrelevant, except perhaps for the criminal liability it places on 'breaking' it. Vendors are also pushed to rely on security by obscurity, which makes such vulnerabilities harder for researchers to find than for attackers.
If you have half of such a 'signed photos' system in place, it becomes easier to dismiss photos of actual events on the basis that they're unsigned. If a camera model, or a security chip shared by many models, turns out to be broken, or a new photo-of-a-photo trick becomes known, a huge number of photos produced before that become immediately suspect. And if you gatekeep (proper implementations of) these features to professional or expensive models, citizen journalism is disincentivized.
But even more importantly: if you rely on technical measures that are poorly understood by the general public (and that are likely to blow up in your face), you erode a social system of trust that is already in place: journalism. Although the rise of social media, illiteracy and fascism tends to suggest otherwise, the journalistic chain of custody for photographic records mostly works, but only if we keep maintaining and teaching that system.
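One way to see that dismissal risk: any verifier can only ever return three verdicts, and the damage comes from the policy layered on top of them. A small sketch of the trap (all names hypothetical, not any real verifier):

    # Sketch of the policy trap described above; every name is hypothetical.
    # A verifier can say the bits are intact, tampered, or unsigned. It can
    # never say "this scene really happened in front of the lens".
    from enum import Enum

    class Verdict(Enum):
        VALID = "signature checks out; bits unmodified since capture"
        INVALID = "signature present but broken"
        UNSIGNED = "no signature: old camera, stripped metadata, screenshot"

    def naive_policy(verdict: Verdict) -> str:
        # The erosion described above: most existing photos are UNSIGNED,
        # so this policy lets anyone dismiss records of real events.
        return "trust" if verdict is Verdict.VALID else "dismiss as fake"

    def careful_policy(verdict: Verdict) -> str:
        # UNSIGNED has to stay "unknown" and fall back on the social
        # system: provenance, witnesses, journalistic chain of custody.
        if verdict is Verdict.VALID:
            return "bits intact (scene still unproven: photo of a photo)"
        if verdict is Verdict.INVALID:
            return "treat as tampered"
        return "unknown; weigh source and chain of custody"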
But especially when a party has demonstrably altered photos, even for “memetic” reasons, they’ve poisoned their own reliability. As far as I’m concerned, the DOJ is no longer a reliable source of evidence until there is a serious purge of its leadership, given its intimate connection with the parties who posted this edited photo.
Don’t worry! According to the White House, it’s just a meme! Making up fake news is totally fine as long as you can say you’re memeing!
The WH using social media (X, Pravda Social) for official communication is highly deliberate: they get to declare post-hoc what is actually real communication and what is “just memes”. Of course it won’t make any difference to the people amplifying the content. If the WH had to stick to traditional outlets for news, they wouldn’t have this fig leaf to hide behind.
Aurornis | a day ago
The differences are not subtle
autoexec | 23 hours ago
000ooo000 | a day ago
https://news.ycombinator.com/item?id=46718485
matthewaveryusa | a day ago
xboxnolifes | 5 hours ago
xrd | a day ago
the_gipsy | a day ago
bdangubic | 23 hours ago
salawat | 22 hours ago
mattnewton | 23 hours ago
breve | 23 hours ago
mattnewton | 23 hours ago
2OEH8eoCRo0 | 23 hours ago
cmxch | 22 hours ago
ndsipa_pomu | 10 hours ago
93po | 23 hours ago
mattnewton | 22 hours ago
93po | 4 hours ago
direwolf20 | 23 hours ago
throwaway89201 | 23 hours ago
datsci_est_2015 | 6 hours ago
nneonneo | 23 hours ago
knowsuchagency | 20 hours ago
youngtaff | 13 hours ago