Minnesota activist releases arrest video after manipulated White House version

169 points by petethomas a day ago on hackernews | 24 comments

Aurornis | a day ago

The current link is basically devoid of information, but clicking through to this page shows the two pictures with a slider to move between them: https://apnews.com/article/fact-check-levy-armstrong-crying-...

The differences are not subtle

autoexec | 23 hours ago

Of course they darkened her skin color.

000ooo000 | a day ago

Good news Satya, we finally found a use for all that electricity you're burning!

https://news.ycombinator.com/item?id=46718485

matthewaveryusa | a day ago

realpolitik time folks:

First do a left-right on the link that Aurornis posted [1]. Notice the extra fat in the chin, the elongated ear, the enlarged mouth and nose, the frizzier hair, the lower shirt cut.

You hate it. You think, intellectually, that this shouldn't work, and that surely no one would have the gall to do this so brazenly without fear of being caught and shamed. And then you think that once the truth is revealed there will be some introspection and self-reflection about being tricked, and that maybe being tricked here means being tricked elsewhere.

Well someone, in an emotionless room, min-maxed the outcomes and computed that the expected value from such an action was positive.

And here we are.

[1] https://apnews.com/article/fact-check-levy-armstrong-crying-...

xboxnolifes | 5 hours ago

There is no need to min-max. There is never large-scale introspection after a media correction. Most people will never see the correction and will still believe what they saw first, years later, if not for the rest of their lives.

Or they do hear about it, maybe a few days or a week later, but they dismiss it because it's old news at that point and not worth thinking about.

Truth is, most people aren't really thinking most of the time. They're reacting in the moment and maybe forming a rationale for their actions after the fact.

xrd | a day ago

Can I opt out of my taxes being used to create memes? If Trump wants to use his cryptocurrency to shill for Truth Social, I suppose I can't really complain. But why do I have to pay for the department of meme wars?

the_gipsy | a day ago

I remember reading an article about how terrible AI could be in the hands of a regime like China's. What a time to be alive, I guess.

bdangubic | 23 hours ago

all this time we were “fighting China” and now we got China… except nothing gets done :)

salawat | 22 hours ago

Evil transcends all borders, mate, and it all looks/sounds the same ultimately.

mattnewton | 23 hours ago

I think we're never going to have robust AI detection, and current models are as bad as they'll ever be. Instead, we really need the ability to sign images on cameras, attesting that these are the bits that came off this hardware unedited, so that professional news outlets can verify them.

But it's going to cost money to make and market all these new cameras, and I just don't know how we incentivize or pay for that, so we're left unable to trust any image or video in the near future. I can only think of technical solutions, not the social changes that need to happen before the tech is wanted and adopted.
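To make the idea concrete, here's a minimal sketch in Python using the `cryptography` package: a device-held private key signs the raw bytes at capture, and anyone with the manufacturer-published public key can check that the bytes weren't touched afterward. Purely illustrative; in a real camera the key would live in tamper-resistant hardware, and the placeholder bytes here stand in for a sensor readout.

    # Sketch: sign image bytes at capture time, verify them later.
    # Assumes `pip install cryptography`; the key handling is the part
    # a real camera would do in secure hardware, not in Python.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key pair provisioned at manufacture; the manufacturer publishes
    # the public half so outlets can verify.
    camera_key = Ed25519PrivateKey.generate()
    public_key = camera_key.public_key()

    image_bytes = b"raw sensor readout stand-in"  # the bits off the sensor
    signature = camera_key.sign(image_bytes)      # produced on-device

    # A news outlet checking the file is untouched since capture:
    try:
        public_key.verify(signature, image_bytes)
        print("bits match what the camera signed")
    except InvalidSignature:
        print("image was modified after signing")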

breve | 23 hours ago

Sony cameras can sign still images and videos to vouch that they are not AI-generated:

https://authenticity.sony.net/camera/en-us/index.html

https://www.sony.eu/presscentre/sony-launches-camera-verify-...

Ideally it'd become an open standard supported by all manufacturers. Which is what they're trying to do:

https://c2pa.org/
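If you want to poke at what gets embedded, the C2PA project also publishes Python bindings for reading a file's provenance manifest. Rough sketch below; the bindings' API has shifted between releases, so take the exact call shape as an assumption:

    # Sketch: dump the C2PA provenance manifest embedded in a media file.
    # Assumes the c2pa-python bindings (pip install c2pa-python); this
    # read_file call shape matches older releases and may differ in
    # newer ones -- illustrative only.
    import c2pa

    try:
        # Returns the manifest store as JSON; embedded resources such as
        # thumbnails are written to the given directory.
        manifest_json = c2pa.read_file("signed_photo.jpg", "extracted_resources")
        print(manifest_json)
    except Exception as err:
        # Files with no C2PA data (or an invalid signature) end up here.
        print(f"no valid manifest: {err}")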

mattnewton | 23 hours ago

Thank you, this is fantastic to know! I think we have to normalize requiring this or similar standards for news; it will go a long way.

Ideally we would have similar attestation from most people's cameras (on their smartphones), but that's a much harder problem to support with third-party camera apps.

2OEH8eoCRo0 | 23 hours ago

More like I won't trust anything that doesn't come from a press photographer.

cmxch | 22 hours ago

And what will make them more trustworthy?

ndsipa_pomu | 10 hours ago

Their career prospects would vanish if they were caught doctoring images with AI. Unfortunately, the same can't be said of government employees.

It doesn't really matter if you can just take a photo of an AI image that's been printed out.

mattnewton | 22 hours ago

That will look like a photo of a printout, though. It seems easier to just hack the hardware to get it to sign arbitrary images instead.

OK, but then the conversation switches from "was this actually taken by a camera?" to "is this a photo of a printout?", and we're not really any further along in establishing trust in what we're seeing. My point is that the goalposts will always move: unless we see something in person these days, we can't really trust it.

direwolf20 | 23 hours ago

Then you can have a signed picture of a screen showing an AI image. And the government will have a secret version of OpenAI that has a camera signature.

throwaway89201 | 23 hours ago

This sounds like a good idea on its face, but it will have the effect of both legitimizing altered photos and delegitimizing photos of actual events.

You will need camera DRM with a hardware security module all the way down to the image sensor, where the hardware is in the hands of the attacker. Even when that chain is unbroken, you'll need to detect all kinds of tricks where the incoming photons themselves are altered. In the simplest case: a photo of a photo.

If HDCP has taught us anything, it's that vendors of consumer products cannot implement such a secure chain at all, shipping ridiculous security vulnerabilities for years. HDCP has been given up on and has become mostly irrelevant, except perhaps for the criminal liability it places on 'breaking' it. Vendors are also pushed to rely on security by obscurity, which makes such vulnerabilities harder for researchers to find than for attackers.

If you have half of such a 'signed photos' system in place, it becomes easier to dismiss photos of actual events on the basis that they're unsigned. If a camera model, or a security chip shared by many models, turns out to be broken, or a new photo-of-a-photo trick becomes known, a huge number of photos produced before that become immediately suspect. And if you gatekeep these features (or their proper implementations) to professional or expensive models, citizen journalism will be disincentivized.

But even more importantly: if you choose to rely on technical measures that are poorly understood by the general public (and that are likely to blow up in your face), you erode a social system of trust that is already in place: journalism. Although the rise of social media, illiteracy and fascism tends to suggest otherwise, the journalistic chain of custody for photographic records mostly works fine. But only if we keep maintaining and teaching that system.

datsci_est_2015 | 6 hours ago

But especially when a party has demonstrably altered photos, even for “memetic” reasons, they've poisoned their own reliability. As far as I'm concerned, the DOJ is no longer a reliable source of evidence until there is a serious purge of its leadership, given its intimate connection with the parties who posted this edited photo.

nneonneo | 23 hours ago

Don’t worry! According to the White House, it’s just a meme! Making up fake news is totally fine as long as you can say you’re memeing!

The WH using social media (X, Pravda Social) for official communication is highly deliberate: they get to declare post hoc what is actually real communication and what is “just memes”. Of course it won't make any difference to the people amplifying the content. If the WH had to stick to traditional outlets for news, they wouldn't have this fig leaf to hide behind.

knowsuchagency | 20 hours ago

Why was this flagged?

youngtaff | 13 hours ago

Because some HN readers flag anything to do with US politics. jgc, for example: https://news.ycombinator.com/item?id=46693887