This is why you need a phrase, never shared in a text or on social media, that your family can use to know it's really you, especially to protect them from scammers pretending to be you.
> The solution the world's leading experts have landed on is one your grandparents could have come up with: codewords. You, your family, business partners and anyone else you communicate with about important subjects need to come up with a secret phrase that no-one else knows you can use in an emergency to verify each other's identities. Think of it like a convoluted form of the multi-factor authentication we all use to login online.
> "My wife and I have a codeword that we use if we ever get an unusual call," Farid says. "We haven't needed to use it yet, but sometimes I ask just to test her to make sure we don't forget it."
Or just find a shared memory/moment not available on the internet when in doubt. I don't think people will be that eager to remember another passphrase.
I bet that a confident scammer is prepared to deal with things like that. They want to put you in a state where you are under time and emotional pressure and your "relative" will have a well practiced response why they can't answer your weird questions.
Imagine your crying grandson who caused a traffic accident in Mexico and the police planted drugs in his car and now he needs money to pay them off. He is in pain and probably has a concussion (an explanation for why he can't remember what you are asking), and the police are hassling him to get off the phone (time pressure, and an explanation for why the quality of the call is terrible). Will you get hung up on some code word he asked you to memorise years ago, which you can't even remember clearly anymore? And if you bring it up he just starts crying and tells you that you are his last chance to turn his life around. And you remember when he was a wee little kid and he fell and scraped his knee and you comforted him. Just the thought of pressing him on the code makes you feel like a terrible person. Or not. And then the scammer just finds someone more gullible. Theirs is a numbers game after all.
Being sufficiently paranoid, the second you use such a phrase you've shared it. Someone will be listening in, and it's only a question of time until criminals get their mitts on that data in the next breach.
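The replay worry is real if the phrase itself is spoken aloud. A challenge-response scheme avoids it: the secret never crosses the channel, only a one-time answer derived from it. A minimal Python sketch (the names and the secret are made up; no grandparent will compute HMACs over the phone, but this is the sort of thing a verification app could automate):

```python
import hashlib
import hmac
import secrets

# Hypothetical: the secret is agreed in person and never sent over any channel.
FAMILY_SECRET = b"the-phrase-we-agreed-on-at-dinner"

def make_challenge() -> str:
    # Fresh randomness every time, so an eavesdropper can't replay an old answer.
    return secrets.token_hex(8)

def respond(secret: bytes, challenge: str) -> str:
    # The response proves knowledge of the secret without revealing it.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(secret: bytes, challenge: str, response: str) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = make_challenge()
answer = respond(FAMILY_SECRET, challenge)
print(verify(FAMILY_SECRET, challenge, answer))   # True: right secret
print(verify(b"wrong-guess", challenge, answer))  # False: wrong secret
```

Even if a call is recorded, the answer is useless for the next call because the challenge changes.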
At this point "spotting AI" is IMO an irrelevant skill. It's something to be aware of, but a bunch of the time I can't tell even with an extended look at static images, and if I'm on a phone and scrolling then nothing really tweaks automatically - perceptually the flaws blend in exactly as you'd expect them to.
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up that it can fool our wetware now. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to make sure people were in the habit of comprehending digital signatures and authentication. It isn't though, and now something much more dangerous is out there.
Recently one of my friends got email hijacked and whatever entity it was seemingly used her past sent emails as a training corpus to construct some very convincing pleas for donations involving a dog rescue she's been operating for several years.
It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.
Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7, and you could then stand in front of them to prove your human existence...
This could be something that notaries around the world could offer as a service.
Or in general, a way to digitally sign a tamper-free video recording made with a camera from a reputable manufacturer. Maybe a regular iPhone already has enough integrity checks and security contexts to achieve this.
I'm almost certain that an iPhone camera can do that, the reason being that Apple controls the full stack. It's necessary but not sufficient, since it's missing the identity maintenance when media leaves the device. Apple would have to place a cryptographically signed digital watermark into a global blockchain so that the analog hole can be closed. All devices that present that media back to a human would need to verify the content's provenance chain back to the initial capture device.
There's nothing missing technology-wise to achieve this, but we, at this point, lack the collective will and the regulatory regime. I do foresee a future where this is the norm and anything you listen to or watch can be traced back to the device that captured the data.
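The "provenance chain back to the initial capture device" idea can be sketched as a hash chain: each media segment commits to everything before it, so an edit anywhere breaks every later link. A toy illustration under assumed segment data, not Apple's or any real scheme:

```python
import hashlib

def chain_segments(segments):
    """Link each media segment to all previous ones via a running hash."""
    links = []
    prev = b"\x00" * 32  # genesis value for the first segment
    for seg in segments:
        h = hashlib.sha256(prev + seg).digest()
        links.append(h)
        prev = h
    return links

original = [b"frame-1", b"frame-2", b"frame-3"]
tampered = [b"frame-1", b"frame-2-edited", b"frame-3"]

ok = chain_segments(original)
bad = chain_segments(tampered)

print(ok[0] == bad[0])  # True: the untouched prefix still matches
print(ok[1] == bad[1])  # False: the edit breaks its own link...
print(ok[2] == bad[2])  # False: ...and every link after it
```

A real system would additionally sign the final link with a device key, so the chain can be attributed to a specific camera.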
The options I have seen so far were a) using our digital IDs, which is very handy or b) having a bank verify my identity in person with my ID, which is also pretty good.
These options are not available to recent immigrants, people with foreign documents and people without a registered address. I spent a lot of time working around those limitations.
> At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."
There is a thing about many people. I don't remember the phenomenon's name, if it has one, but it goes like this:
Given enough time to reconsider options, people will be endlessly flip-flopping between them grabbing onto various features over and over in a loop.
There's also another phenomenon: whatever the latest idea is, it must be the best. Many people make this mistake and even convince themselves they're right this time because "they used to think like that" before.
So at each stage in the loop they are always super convinced of the position.
Even not being 100% confident, at some point people have to decide what to do.
Actions might include some continuous checks built into them, like the famous plan-do-check-act cycle.
Solipsism already tells us that the existence of anything beyond one's present self-experience is uncertain. So almost everything one has to take for granted to make any argument outside metaphysics requires an act of faith.
Dissonance between what you instinctively believe and what you think the other person wants you to say.
Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)
If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.
This is the basis of the virtual kidnapping scam/grandparent scam, or panic manipulation more generally. The manufactured urgency keeps them from doubting: the voice on the phone being off is just fear, or a bad connection, for example.
I have personally intervened in one of those when I heard someone reading off a 6 digit number.
Exactly, the scam works best if you get people to switch to their animal brain. "The snake is going to bite right now so I have to do something!"
That said, pig butchering scams have gotten popular, so manufactured urgency isn't the only way.
> Good way to push someone towards paranoia and psychosis.
Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.
> Given enough time to reconsider options, people will be endlessly flip-flopping between them grabbing onto various features over and over in a loop.
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
This phenomenon (or a closely related one?) is recognized and known as Kotov Syndrome in the context of chess.
A summary, courtesy of chess dot com:
> The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.
> According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.
> After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.
I have a systematic way of approaching this kind of situation, where you have to rapidly estimate a thing, commit to the estimate and are judged by the quality of your estimates in the long run; my approach is to first make a guess based off my gut, and then to pause and make a bet with myself, did I guess high or low? If my gut then says that my first gut instinct was too high or low, I adjust from there. I can't guess great the first time, but this two-stage guessing works a lot better for me.
I'm sure I'm not the first to use this technique, but I don't know what it's called.
AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable, footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.
I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down.
People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem as much as a technology implementation problem. And a political problem
People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it.
2FA has tried to solve exactly this. Not many attacked people will hand over their password AND their phone. Yes, I know, they might hand over one authentication code (and I know people who did exactly that)... We should also look into reducing the attack surface - if your Instagram gets hacked, your Facebook shouldn't get hacked as well. But the current big tech centralization leads us to that single point of failure, because they don't care about users' concerns, only market grab. So... what now? Do we get politics into this?
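For reference, the second factor in most authenticator apps is just an HMAC over a counter (HOTP, RFC 4226; TOTP swaps the counter for the current time step). A sketch, checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 appendix test vectors for the key "12345678901234567890"
key = b"12345678901234567890"
print(hotp(key, 0))  # 755224
print(hotp(key, 1))  # 287082
```

The code itself proves nothing about who is holding the phone, of course, which is exactly why handing one over to a caller defeats the scheme.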
The best thing I can think of is domain names. Domains are tied to addresses and billing, and sites are people or businesses with physical locations one can visit.
Maybe a good startup idea would be “local verify” , where you check locally for a client if the online destination is real.
I'm seeing a huge increase in companies requiring in-person interviews now. It seems there is a real possibility the internet as we know it will be destroyed.
LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI too. So you can't even trust content from people you know.
An identity service is not useful either, because the account might belong to a real person who is just a pipe to AI, like we see on LinkedIn.
Touching grass. Valuing in-person connections. Focusing on the community, meatspaces and actual people around you.
Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].
This sentiment is unpopular, but it's true. Prioritize true connections and experiences.
Partially agree.
However, this problem has existed with scam e-mails since the 90s.
For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it turns out to be AI generated, then we would lose trust in that person and wouldn't use their material anymore.
people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need central authority to solve this.
Same way security cameras prove that they are authentic camera recordings that have not been modified. If modified, the video will no longer match the signature that was generated with it.
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and/or payment systems can (and do) resolve these cases and have certain KYC requirements.
> damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
The grandparent post has the belief that human interaction is intrinsically better. Not sure I agree, but I can understand the POV.
However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.
You're not sure if human to human interaction is intrinsically more valuable than a human talking to a facsimile? That feels like a very dangerous position to hold for one's ethical calculations and general sanity. I'm clinging tightly to the value of the bond with other people, even the passing connection, but certainly with my family members as this article is about.
Human to human may be more valuable, but that may not have much to do with the truth in their statements. For example if your relatives are hooked up to a constant misinformation feed it gets to become problematic to communicate and deal with them.
Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.
Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.
>Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.
Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost.
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
Ok fine, let's put it in the context of business. Your competitor impersonates your customer, gives you bad instructions. After following the bad instructions, you lose the contract with your customer, and your competitor (the attacker) is free to try and replace you.
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
Or even better, open the on-prem AI portal and type something like "I just got a suspicious call from client X, but I am on a lunch break. Call him and use a fake video of me. Ask him if what he said is true..."
We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time.
You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
> Information found online will also no longer be trustable
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustworthy for a double-digit number of years now.
> we already experience misleading articles today
Again, this has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may be dealing with the problem of spam, but the problems have already been there.
All these are true, but just as it happened before the internet, it's accelerating even further. There are clear costs that cannot just be hand waved away.
I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while for an adequate defense to be adopted. We're still dealing with SQL injection in the OWASP top ten. What I think would indicate an acceleration is the most security-oriented organizations continuously failing to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble.
The acceleration is in the decrease of the cost to produce misinformation.
Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.
We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.
> The acceleration is in the decrease of the cost to produce misinformation.
So it's a spam issue. And normally, while annoying it's possible to fight spam, however on these topics we have built structures that disable the very mechanisms allowing us to fight spam. That's worrying.
The fact that someone can instruct their computer to astroturf their flight tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on tv, radio and press for decades and centuries. For a very long time the "traditional media" was aware that their ability to sell astroturfing capacity was hanging on their general trustworthiness. Then the internets rose to prominence, traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation that the internets might be spammed by astroturfers a bit too much, but the backup is broken already. Now that's truly frightening.
Welcome to the post-truth world, where objective references outside of your own village cannot exist.
Laws will be passed to make it "safer". Just like it is happening with the id verification systems. Every image or video gen will require a watermark. Something visible which cannot be removed easily or hidden which can be detected and blocked. Access to models which do not comply will be made harder through id verification checks or something.
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person - rich or a politician - fooled into doing something catastrophic due to a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
Verification needs to work the other way around, some kind of verifiable chain of trust for photos and videos from real cameras. Watermarking all generated media is impossible.
I don't really understand why this is so hard or why it wasn't just done from the get go.
Just have Apple and Google digitally sign videos and photos recorded from phones and then have Google and Meta, etc display that they are authentic when shown on their platforms.
You're talking about the metadata of the files, which can always be edited and someone will inevitably try to make software to do exactly that. Also, Adobe's proposal for handling generated content is exactly this and they're not able to get buy-in from other companies.
Edit the metadata in what way? It's a cryptographic hash.
If the bits that make up the video as was recorded by the camera don't match the hash anymore, then you know it was modified. That doesn't mean it's fake, it just means use skepticism when viewing. On the other hand the ones that have not been modified and still match can be trusted.
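What "matching the hash" buys you can be shown in a few lines: any change to the bytes produces a completely different digest. Real schemes additionally sign the digest with a device key; this sketch (with made-up placeholder bytes) shows only the tamper-evidence half:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256: any change to the input yields a completely different digest.
    return hashlib.sha256(data).hexdigest()

# Hypothetical capture: the camera would compute (and sign) this at record time.
recorded = b"raw video bytes straight off the sensor"
published_hash = fingerprint(recorded)

print(fingerprint(recorded) == published_hash)               # True: untouched bytes verify
print(fingerprint(recorded + b" edited") == published_hash)  # False: any edit breaks it
```

Note that this is exactly why editing is the hard part: a single color-grading pass invalidates the hash just as surely as a deepfake does.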
Essentially 0% of professional photography or videography uses "straight out of the camera" (SOOC) JPEGs or video. It's always raw photos or "log" video, then edited to look like what the photographer actually saw. The signal would be so noisy as to be useless.
It becomes a hard problem quickly when you introduce editing, and most photos and videos on social media are edited. I'm not sure how it would work. It seems more feasible than universal watermarks, though.
> Laws will be passed to make it "safer". Just like it is happening with the id verification systems. Every image or video gen will require a watermark. Something visible which cannot be removed easily or hidden which can be detected and blocked. Access to models which do not comply will be made harder through id verification checks or something.
I've thought about this off and on, and how to implement it. Not easily, was my general takeaway.
or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken
e.g. your certs gotta come from somewhere and stay protected, and how do you update and control them. key management for every single camera on every phone, etc.
The deeper problem here isn't that deepfakes are too good - it's that every "proof of humanity" test converges to the same bag of tricks. Shared secrets, liveness checks, biometric challenges. An attacker who studies the test can pass it. We keep building Voight-Kampff machines without asking whether the Voight-Kampff framing is the right one. The question isn't "can you tell this is real" - it's "what would you accept as proof, and can that proof be synthesized?"
This was a natural thing to try so I did and even Grok will simply obey instructions to say all those. You don't need one of those ablated open models.
I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting – the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.
I'm trying the same with my (vibe-coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetics, and it also feels like a good way of human connection in these times.
Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial and for streaming video/audio you can probably hash and sign packets or maybe at least keyframes or something?
True, I can only know that the owner of the private key signed but not how the document was created. But I suppose there is some trust involved that a person I know who signs doesn't sign some AI generated stuff.
To establish the initial link, I suppose we need something more mainstream/scalable than the old key signing parties I remember from CCC etc.
But at least for friends and family it should be possible to create some flow where every member has a key-combo and you trust them to only sign stuff they wrote etc. and have local mini-keysign parties.
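One mainstream-friendly version of the key-signing party is the short "safety words" fingerprint: both sides derive a few words from the public key they each see and read them aloud in person or over a trusted channel. A toy sketch (the wordlist and key bytes are made up; real systems like Signal's safety numbers or SSH key fingerprints use much larger encodings):

```python
import hashlib

# Tiny demo wordlist, purely for illustration.
WORDS = ["apple", "breeze", "candle", "dune", "ember", "fjord", "grove", "harbor"]

def safety_words(public_key: bytes, count: int = 4) -> list:
    """Derive a short word sequence both parties can read aloud and compare."""
    digest = hashlib.sha256(public_key).digest()
    return [WORDS[b % len(WORDS)] for b in digest[:count]]

# Both sides derive the words from the key they received; if an attacker
# swapped the key in transit, the words will (almost certainly) not match.
alice_sees = safety_words(b"alice-public-key-bytes")
bob_sees = safety_words(b"alice-public-key-bytes")
print(alice_sees == bob_sees)  # True: same key on both ends
```

The appeal is that the human step is just comparing a few words, which even non-technical family members can do.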
Do we need new key signing for friends/family? I can trust that all messages coming from a friend/family’s account originated from them, or else their account was compromised. I don’t see how a ‘non-ai’ key adds enough more trust to be worth it.
You have far too much faith in humanity. The majority of my extended family members are not smart enough to resist continuous attacks and would eventually not only sign, but give away the key in question.
Simply put I think we are stretching humanity farther than intellectual ability allows in a lot of people.
The author should have mentioned that this was partly an article to whitewash Netanyahu, but this coming from the BBC (and from the mainstream British media as a whole) that was to be expected.
AI slop detection requires some finely developed intuitions that come from decades-long exposure to both journalism/marketing slop and high-quality literature, because AI was aligned to hell and back by new low-level journalism graduates.
That's why it always falls back on the same tired formulaic clichés, like "Not this, but that", rampant baiting and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.
Man, I have nothing against liberal arts per se. On the contrary, I think that a tragedy of our time is that people disconnected from things like literature, history and art in the name of over-specialization and an excessively utilitarian approach towards education.
But I am very critical of what passes as the modern liberal arts academic establishment. To avoid a very long text, let's say that my view is heavily influenced by Ortega y Gasset.
I wonder what the CAPTCHA equivalent for AI bots is. Ask about taboo topics to rule out commercial models, and ask specific reasoning questions that trip AI up, like walking vs. driving to the car wash? Or your own set?
This is scary but also kind of hilarious. You should feel proud your aunt still judges first before believing anything online. I've heard so many stories from friends lately. These scams are getting crazy. Scammers are already using pictures of influential people and even jumping on video calls pretending to be them.
More than a year ago I suggested that our family adopt a sign/countersign type of authentication (I say "the migrating birds fly low over the sea", you say "shadeless windows admit no light" ;-). It was clear at that time that we were going to start seeing scams get more advanced and hard to tell from valid requests for money, for example.
I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope. <shrug>
Somewhat related: over the last few weeks at work we've started having people call our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in, and the support person they talked to recognized by voice that it wasn't the same person they'd talked to in the past. We've now had this happen to 3 different accounts: the first two times it was people with thick Indian accents; the most recent one was suspected of being an AI-generated voice.
The sign/countersign still works even if it's unilateral. You say "the migrating birds fly low over the sea", they say "I told you already, we're not doing this stupid thing", and now they are authenticated.
This is one area where the government needs to step in. Video-hosting websites should be made to flag videos as AI-generated. AI companies should be made to watermark generated content in a hard-to-remove way (i.e. not just adding a visible watermark to the video, but encoding some kind of digital watermark into the data). Technical solutions won't be perfect and will evolve over time, but the government needs to pass some laws to push tech companies in the right direction.
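For illustration, the "encode a digital watermark into the data" idea can be sketched as naive least-significant-bit embedding. This is a toy, not a production scheme (real watermarks must survive re-encoding, compression and cropping, which LSB embedding does not):

```python
def embed_watermark(samples, bits):
    """Overwrite the least significant bit of each sample with a watermark bit."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(samples, n_bits):
    """Read the first n_bits least significant bits back out."""
    return [s & 1 for s in samples[:n_bits]]

samples = [1000, 1001, 998, 1003, 1002, 997, 1005, 999]  # pretend audio samples
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_watermark(samples, mark)
assert extract_watermark(stamped, len(mark)) == mark
```

The perceptual change is below the noise floor, which is exactly why this kind of mark is also trivially destroyed by any lossy transcode; robust schemes spread the mark across perceptually significant coefficients instead.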
The only companies that'd follow the watermark are the good guys though, yeah?
The people you'd want to be wary of would be the ones that'd look legit.
e.g. "yes i guess i will send my son $400,000 in cash tonight because he's been kidnapped, and i know it's real because there's no AI watermark that all the nice US/EU companies use."
The author really tries to convince us that Netanyahu is "not dead, folks", implying that the video in question is real because it shows five fingers, while at the same time relaying the message from experts that one cannot prove that audio/video is not AI.
I thought we've long passed the Turing test, until I tried to implement a chat bot.
It's not even close.
It's easy to "pass the Turing test" for 5 minutes. It's extremely hard if you try to hold a longer, continuous conversation. Go past 10 minutes and the user will know it's not human. Some problems you'll encounter:
- The bot needs to handle all situations, especially the nonsensical ones. This is when the user types "EEEEEEEEEEEEE...", or curse words, repeatedly.
- Who would've thought that it's extremely hard to decide when to stop talking?
- No matter how well you build the "persona" for the bot, they'll eventually converge to the same one, which is that of the LLM itself.
- You'll notice that the bot is ignoring something obvious (e.g. it's not remembering past convo), so you give it some instructions to help with that. And then that'll be THE ONLY THING it does.
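The first failure mode, degenerate input like "EEEEEEEEEEEEE...", is at least cheap to screen for before the text ever reaches the model. A minimal heuristic (the 0.5 ratio threshold is an arbitrary choice for illustration):

```python
from collections import Counter

def looks_degenerate(text, max_char_ratio=0.5, min_len=6):
    """Flag input dominated by a single repeated character (e.g. "EEEEE...")."""
    stripped = text.replace(" ", "")
    if len(stripped) < min_len:
        return False
    top_count = Counter(stripped).most_common(1)[0][1]
    return top_count / len(stripped) > max_char_ratio

assert looks_degenerate("EEEEEEEEEEEEE")
assert not looks_degenerate("are you a real person?")
```

This only catches the crudest junk, of course; deciding how the bot should respond to flagged input (deflect? ignore? joke?) is the part that actually gives the persona away.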
taylodl | 8 hours ago
sam_lowry_ | 8 hours ago
theshrike79 | 8 hours ago
eesmith | 7 hours ago
bandrami | 7 hours ago
bandrami | 7 hours ago
kalaksi | 7 hours ago
krisoft | 6 hours ago
classified | 3 hours ago
XorNot | 8 hours ago
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up to the point that it can fool our wetware. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we made sure people were in the habit of understanding digital signatures and authentication. We never did, though, and now something much more dangerous is out there.
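As a sketch of what such a convention could look like at its simplest, here is shared-secret message authentication with Python's standard library. It is not the public-key signature infrastructure the comment calls for, just the verify-before-trusting habit in miniature (the secret shown is a made-up placeholder, agreed in person and never sent online):

```python
import hashlib
import hmac

SECRET = b"set-up-in-person-never-sent-online"  # hypothetical shared secret

def tag(message: str) -> str:
    """Sender attaches this tag to each message."""
    return hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, received_tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

msg = "Call me back on the usual number to confirm"
t = tag(msg)
assert verify(msg, t)
assert not verify("Wire the money right now", t)
```

The hard part was never the math; it's exactly the societal habit of checking the tag every time, which is the thing that never got built.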
drzaiusx11 | 6 hours ago
It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.
Tepix | 8 hours ago
Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7, so you can stand in front of one to prove your human existence...
This could be something that notaries around the world could offer as a service.
UqWBcuFx6NV4r | 8 hours ago
exitb | 8 hours ago
intrasight | 6 hours ago
There's nothing missing technology-wise to achieve this, but at this point we lack the collective will and the regulatory regime. I do foresee a future where this is the norm and anything you listen to or watch can be traced back to the device that captured it.
nicbou | 7 hours ago
Zinu | 7 hours ago
nicbou | 6 hours ago
FinnKuhn | 7 hours ago
The options I have seen so far were a) using our digital IDs, which is very handy or b) having a bank verify my identity in person with my ID, which is also pretty good.
nicbou | 6 hours ago
mrlnstk | 7 hours ago
jrjeksjd8d | 7 hours ago
tjpnz | 7 hours ago
monster_truck | 7 hours ago
mkl | 6 hours ago
DaanDL | 7 hours ago
forkerenok | 8 hours ago
There is a thing about many people. I don't remember the phenomenon's name, if it has one, but it goes like this:
Given enough time to reconsider options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.
Quekid5 | 8 hours ago
vasco | 8 hours ago
So at each stage in the loop they are always super convinced of the position.
psychoslave | 7 hours ago
Actions might include some continuous checks, like the famous plan-do-check-act cycle.
Solipsism already tells us that the existence of anything beyond one's current experience is uncertain. So almost everything one has to take for granted to make any argument outside metaphysics requires an act of faith.
https://en.wikipedia.org/wiki/Solipsism
BoppreH | 7 hours ago
sph | 7 hours ago
Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)
If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.
Kye | 7 hours ago
I have personally intervened in one of those when I heard someone reading off a 6 digit number.
pixl97 | 2 hours ago
That said, pig butchering scams have gotten popular, so manufactured urgency isn't the only way.
catlifeonmars | 7 hours ago
Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.
onion2k | 7 hours ago
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
This is the downside of being a human being.
V-2 | 7 hours ago
A summary, courtesy of chess dot com:
> The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.
> According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.
> After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.
forkerenok | 5 hours ago
mikkupikku | 5 hours ago
I'm sure I'm not the first to use this technique, but I don't know what it's called.
a2128 | 7 hours ago
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.
whateverboat | 7 hours ago
a2128 | 7 hours ago
nathanaldensr | 7 hours ago
intrasight | 6 hours ago
bigfishrunning | 6 hours ago
soco | 5 hours ago
bigfishrunning | 5 hours ago
Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws.
prox | 5 hours ago
Maybe a good startup idea would be "local verify", where a client checks locally whether the online destination is real.
Gigachad | 7 hours ago
rkomorn | 7 hours ago
More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).
Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.
dominotw | 7 hours ago
An identity service is not useful because that person might be a real person but just a pipe to an AI, like we see on LinkedIn.
adithyassekhar | 7 hours ago
jjulius | 4 hours ago
Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].
This sentiment is unpopular, but it's true. Prioritize true connections and experiences.
roflmaostc | 7 hours ago
For me the solution is signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I trust that it's really them.
Same for footage of wars, etc. The journalist taking it signs the video and vouches for its authenticity. If it turns out to be AI-generated, we would lose trust in that person and wouldn't use their material anymore.
Forgeties79 | 7 hours ago
TheOtherHobbes | 6 hours ago
Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.
All of those have their issues.
tenacious_tuna | 6 hours ago
bigfishrunning | 6 hours ago
Ajedi32 | 5 hours ago
pixl97 | 3 hours ago
MarsIronPI | 4 hours ago
olmo23 | 6 hours ago
SirMaster | 3 hours ago
mk89 | 6 hours ago
hansonkd | 5 hours ago
strogonoff | 4 hours ago
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
pixl97 | 3 hours ago
And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.
strogonoff | 3 hours ago
Forgeties79 | 7 hours ago
Or the opposite, where people attempt to get out of trouble by dismissing real evidence as "AI".
bigfishrunning | 6 hours ago
Forgeties79 | 5 hours ago
thunky | 6 hours ago
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
skydhash | 6 hours ago
Not GP, but there's a lot of damage that can be done with impersonation.
chii | 6 hours ago
However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.
collinmcnulty | 5 hours ago
pixl97 | 2 hours ago
rdevilla | 6 hours ago
Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.
pixl97 | 3 hours ago
Hmm, this guy may have been on to something
>Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.
--The Gutenberg Galaxy, 1962
rdevilla | 3 hours ago
esseph | 6 hours ago
We're in deep shit.
bigfishrunning | 6 hours ago
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
thunky | 6 hours ago
Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off.
contagiousflow | 5 hours ago
bigfishrunning | 5 hours ago
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
thunky | 5 hours ago
The communication channel is what you trust. So you would call the person using that trusted channel.
It's just like when you get a scam email or popup from "Microsoft" saying your laptop is compromised and you need to call their number ASAP.
Habgdnv | 5 hours ago
nslsm | 6 hours ago
bitmasher9 | 6 hours ago
chistev | 6 hours ago
You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
collinmcnulty | 5 hours ago
friendzis | 5 hours ago
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online hasn't been trustworthy for a double-digit number of years now.
> we already experience misleading articles today
Again, had been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may be dealing with the problem of spam, but the problems have already been there.
pstuart | 5 hours ago
ottah | 4 hours ago
ACS_Solver | 3 hours ago
Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.
We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.
friendzis | 2 hours ago
So it's a spam issue. And normally, while annoying it's possible to fight spam, however on these topics we have built structures that disable the very mechanisms allowing us to fight spam. That's worrying.
The fact that someone can instruct their computer to astroturf their flight tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on tv, radio and press for decades and centuries. For a very long time the "traditional media" was aware that their ability to sell astroturfing capacity was hanging on their general trustworthiness. Then the internets rose to prominence, traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation that the internets might be spammed by astroturfers a bit too much, but the backup is broken already. Now that's truly frightening.
Welcome to the post-truth world, where objective references outside of your own village cannot exist.
thisisit | 4 hours ago
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person - rich or a politician - fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
Miraste | 4 hours ago
SirMaster | 3 hours ago
Just have Apple and Google digitally sign videos and photos recorded from phones and then have Google and Meta, etc display that they are authentic when shown on their platforms.
alpha_squared | 3 hours ago
SirMaster | 3 hours ago
If the bits that make up the video as was recorded by the camera don't match the hash anymore, then you know it was modified. That doesn't mean it's fake, it just means use skepticism when viewing. On the other hand the ones that have not been modified and still match can be trusted.
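The check being described here is an ordinary content-hash comparison. A minimal sketch, assuming the original digest was signed and published through some trusted channel at capture time:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Digest of the recorded bytes; any modification changes it."""
    return hashlib.sha256(data).hexdigest()

original = b"\x00\x01fake-video-bytes-as-recorded\x02"
published = content_hash(original)  # imagine this signed at capture time

assert content_hash(original) == published            # untouched: can be trusted
assert content_hash(original + b"edit") != published  # modified: be skeptical
```

Note the asymmetry this gives you: a matching hash is strong evidence the bytes are unchanged, while a mismatch only tells you something was altered, not what or why.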
SAI_Peregrinus | 2 hours ago
Miraste | 3 hours ago
petesergeant | 3 hours ago
red-iron-pine | 3 hours ago
I've thought about this off and on, including how to implement it. Not easily, was my general takeaway.
Or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken.
E.g. your certs have to come from somewhere and stay protected, and how do you update and control them? Key management for every single camera on every phone, etc.
esafak | 4 hours ago
vaildegraff | 7 hours ago
hellcow | 4 hours ago
amelius | 6 hours ago
How was this solved, actually? More training data, or was there more to it?
hgo | 6 hours ago
octopoc | 6 hours ago
“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
“Oh it really is you Johnny!”
We’re all going to have to start communicating this way. Best of luck.
I offer consulting services on the side to help professionals hone these skills. $250 / hour.
slekker | 6 hours ago
wat10000 | 6 hours ago
ui301 | 6 hours ago
mikkupikku | 5 hours ago
readthenotes1 | an hour ago
sharperguy | 6 hours ago
anal_reactor | 5 hours ago
arjie | 4 hours ago
readthenotes1 | 59 minutes ago
ui301 | 6 hours ago
https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...
It feels good to connect with humans that way.
The same I am trying to do with my (vibe coded!) site "jetzt" (German for "now"), to which I photo blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.
https://jetzt.cx/
(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)
ui301 | 6 hours ago
https://ars.electronica.art/panic/de/view/reverse-turing-tes...
(I.e. trying to hide the fact that you're human, among a group of AIs)
hk1337 | 6 hours ago
kriro | 6 hours ago
bitmasher9 | 6 hours ago
kriro | 3 hours ago
bitmasher9 | 3 hours ago
pixl97 | 2 hours ago
paganel | 6 hours ago
mystraline | 6 hours ago
But about deepfakes: products like this exist to re-add a sixth finger. Wear one, and you can claim the video was generated.
https://www.etsy.com/listing/1667241073/realistic-silicone-s...
bluefirebrand | 5 hours ago
I truly believe that it is a crime against humanity
tom-blk | 5 hours ago
elzbardico | 5 hours ago
iamacyborg | 5 hours ago
Tell us more about this axe you appear to need to grind.
elzbardico | 5 hours ago
pdyc | 5 hours ago
Alen_P | 5 hours ago
linsomniac | 5 hours ago
card_zero | 4 hours ago
josefritzishere | 4 hours ago
spiritplumber | 4 hours ago
scotty79 | 4 hours ago
Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it were empty, without any care.
slibhb | 4 hours ago
inanutshellus | 4 hours ago
hirako2000 | 4 hours ago
krunck | an hour ago
Mexed Missaging.
vagab0nd | 46 minutes ago