I'm definitely not doing this. And it sounds like avoiding it isn't going to be too painful for my personal needs?
Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
There is no scenario where any of the Discords I'm in or would ever be in are worth sending my ID or face scan to a 3rd party company for.
I'm concerned it won't stop there. Who is responsible for determining when a server is 'adult' oriented, and what are the criteria? What are the consequences for mislabeling a server as not adult oriented if Discord later determines it is? Point being, I could see this going like Reddit or any other hierarchy, where the people at the very top dictate the terms and everyone underneath has to carry them out even when the way of carrying them out is nonsense. Maybe every Discord server owner just labels their server 'adult' to dodge onerous rules or consequences. Or maybe Discord will only retag a server as adult oriented when it finds one mislabeled, in which case no one is pressured to switch over, because the worst that can happen is what they would have had to do anyway. But I suspect Discord will have to do more than that, because otherwise it's an easy way to flood their system: spin up servers not tagged as 'adult', anyone can view them, and it's on Discord to tag them all.
How long until the criteria for being 'teen' friendly become no profanity, or certain levels of violent content, and so on? Think about movie or video game ratings, where someone decides that at 16 you can't play GTA or watch a certain movie. What is going to make Discord immune from the pressure to apply some arbitrary bullshit rules once they have an identity verification system?
I don't expect any servers that I'm part of would qualify under 'adult' content as they've laid out in this post, so I'll possibly keep using Discord, but I surely hope there's a decent alternative because I don't have any intention of letting them store my facial characteristics in their database somewhere or my ID where the information can get leaked out and be used for any other purposes.
Ditto. The thing I'm taking issue with here though is just how much this highlights that much of my social life is tied up in and highly dependent on a single company whose policy has long since stopped being something I find tolerable. Discord has made it clear to me that it is eager to step down the path of enshittification, yet many communities and friends of mine use it as their primary social space.
I don't want any single company having that much control over my social life, much less one so eager to trample on people's privacy or whose future looks so grim. I don't know what I'm going to do about it just yet, but it's abundantly clear that this is not sustainable.
Unfortunately they already rolled this out in some places and I was affected. There were a few image channels that were not necessarily for adults (image channels with memes) that I couldn't access anymore. Worst of all, most people in them went ahead with the verification, so my only options were to stop seeing the content or verify myself, and I refuse to do the latter.
Key privacy protections of Discord’s age-assurance approach include:
On-device processing: Video selfies for facial age estimation never leave a user’s device.
Quick deletion: Identity documents submitted to our vendor partners are deleted quickly—in most cases, immediately after age confirmation.
Straightforward verification: In most cases, users complete the process once and their Discord experience adapts to their verified age group. Users may be asked to use multiple methods only when more information is needed to assign an age group.
Private status: A user’s age verification status cannot be seen by other users.
Personally I have no interest in doing that myself regardless of what reassurances they provide. At present I don't think there's anything I'd be missing out on if I didn't, but I imagine people will find workarounds to using their own face before long.
Even so, Discord was the one to hire them and make the initial demands for the IDs. It doesn't matter that it was a third party that got breached. That third party only had that information because Discord hired them in the first place.
They say in the article they stopped using that vendor for age verification, but the damage is already done. Discord has already proven to have had incredibly faulty judgment once, and with something vital. A government-issued ID being leaked is so much worse than a credit card. Given Discord's size and popularity, and the sensitivity of the information, they should have vetted the vendor's security and data storage practices much more thoroughly.
It's like a museum hiring a third party security company to provide guards and security equipment for some special temporary exhibit, who then fail to detect and stop a guy breaking into the exhibit before running off with priceless art and artifacts. The owners of the stolen pieces don't say, "oh, that museum did nothing wrong, it was all the security company." The museum holds equal responsibility because ultimately, it was THEIR responsibility to keep the exhibit safe. THEY made that promise to the owners, not the security company.
Likewise, Discord promised its users their data would be safe and deleted right away. Who got hacked is irrelevant, because the fact the breach happened at all shows that Discord failed to ensure the data would be deleted like they promised.
So ultimately: yeah, people are going to blame Discord because they did fail at some stage.
That’s a separate thing. What they promised would automatically be deleted is the automated age verification, which as of now has never been hacked. Not that it’s existed for very long, but nonetheless, that commitment was kept.
Essentially, if you get denied by the automatic system, you can make a support ticket and say, hey, the machine was wrong, let me prove that to a human. And then the human will ask you to send over a picture of your ID.
No one ever promised that this would delete your ID. It’s a bespoke service, not an official pathway.
Not to mention it was Zendesk that was hacked. Using Zendesk isn’t a sign of poor judgement, it’s by far the most common IT ticketing software. If you’re going to avoid any company that uses Zendesk you may as well just avoid ever emailing customer support.
Do you have a source on Zendesk being the one breached? I'm not asking to be snarky or combative, I'm genuinely asking, because I'm finding sources (namely Discord itself) claiming it's a company called 5CA that was breached. I initially only found articles referencing 5CA (if they named the vendor at all), and after I saw your comments mentioning Zendesk I had to dig a bit to find some.
Based on this site, the leak was first attributed to Zendesk and then Discord publicly named 5CA as the vendor. Both 5CA and Zendesk naturally denied being the ones hacked. And... That's essentially all I can really confirm.
Honestly, it's a bit confusing trying to sort out where, exactly, the breach happened. A lot of the articles are from right when the breach was announced, or from after Discord named 5CA. And with today's announcement, there are even more articles to sift through. This is one of the few more in-depth writeups I can find from after October with a focus on 5CA, but I have no clue how accurate or reputable that particular site (or writer?) is. There was a failure at some point, and the fact that it's not clear where that failure point was doesn't sit well with me given how major a breach this is. If you have a source that goes more in-depth, I'd really appreciate it.
That aside, to address your first point, I'll just quote the relevant bit from the Discord blog post I shared:
Privacy-protecting process. Discord and k-ID do not permanently store personal identity documents or users’ video selfies. Images of a user's identity documents and ID match selfies are deleted directly after their age group is confirmed, and the video selfie used for facial age estimation never leaves their device.
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third party vendors. They also don't use the word "immediately" or any language to specify when it would be deleted. Which is arguably just as bad, because that bullet point in their statement implies an explicit promise that they'll secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Just... Discord really hasn't done anything to convince me (and others) that they've taken measures to ensure this never happens again. After that breach, I and many others just don't want to risk giving our government-issued IDs to any private companies over the internet. It really doesn't matter who was hacked, just that it happened to Discord users once and we don't want it to happen to us.
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third party vendors. They also don't use the word "immediately" or any language to specify when it would be deleted. Which is arguably just as bad, because that bullet point in their statement implies an explicit promise that they'll secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Notably, that wording also doesn't say that no data at all about the 'video selfies' is transmitted, just that the video itself doesn't leave the device. This leaves room for them to create data points about your face from the video, send those data points, and then reconstruct your face from those data points. I have seen wording in other articles that is supposedly more specific, saying nothing is transmitted other than the approximate age the on-device detection comes up with, but it takes real time and expertise to dive into all the various policies involved and determine what exactly is covered and what is left open to interpretation.
Even then, policies are one thing, actualities and consequences are another thing. They can say they won't transmit anything but that doesn't mean it's actually true, and as we've become incredibly familiar with in the past 10 years or so, the consequences for lying and harming others are basically so little as to be none at all.
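To make the "data points" worry concrete: a face-embedding pipeline reduces a video frame to a small numeric vector that is not a photo, yet still uniquely matches the face it came from. A minimal sketch, purely illustrative; the model, dimensions, and every name here are hypothetical, and nothing reflects Discord's or k-ID's actual implementation:

```python
import numpy as np

EMBEDDING_DIM = 128  # a typical size for face-recognition embeddings (assumed here)

def embed_face(frame: np.ndarray) -> np.ndarray:
    """Stand-in for an on-device model mapping a face image to a vector.

    A real model (a CNN) is deterministic per face; we fake that by seeding
    a generator from the frame contents, so the same 'face' always yields
    the same vector.
    """
    seed = int(frame.sum()) % (2**32)
    return np.random.default_rng(seed).standard_normal(EMBEDDING_DIM)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for captured frames: two captures of the "same face", one other.
face_a  = np.full((64, 64), 100)
face_a2 = np.full((64, 64), 100)
face_b  = np.full((64, 64), 37)

emb_a, emb_a2, emb_b = map(embed_face, (face_a, face_a2, face_b))

# The payload that would cross the network is tiny and is not a photo...
print(emb_a.nbytes)                              # 1024 bytes for 128 float64s
# ...but it matches the same face across captures, which is what makes it
# biometric data even though "the video never left the device".
print(cosine_similarity(emb_a, emb_a2) > 0.99)   # True: same face
print(cosine_similarity(emb_a, emb_b) < 0.5)     # True: unrelated face
```

There is published research on inverting face embeddings back into recognizable face images, which is the reconstruction step the comment above describes. Whether any such vector is actually transmitted is unknown; this only illustrates why the exact policy wording matters.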
This leaves room for them to create data points about your face from the video, send those data points, and then reconstruct your face from those data points.
Can you explain what this looks like? Any sufficiently detailed description sounds like a compressed photo to me, and they're explicitly not sending those.
Fully collecting everyone's personal data isn't something every company does. Discord isn't Facebook. What are the incentives to lie and deal with storing this data?
What are the incentives to lie and deal with storing this data?
Monetisation. AI training. Which is maybe the same thing. The fact that they could do anything and have it be easier to ask forgiveness than permission. Also history. How many times do we have to be lied to? I'm sure they'll be on the level /this/ time. As opposed to all the other times.
It's not just pornography that they're worried about. It's child predators. Social platforms are racing now to get ahead of the looming threat of an outright ban from governments worried about child predation. The public outrage when pedophiles groom children over the internet has reached a point that it is impossible to ignore. Something must be done; at least, that's the increasing sentiment.
That just sounds like right-wing propaganda used to justify ID/face capturing. How is age verification actually going to combat child predators? Adults can still communicate with children in either environment.
I work in the industry and this shit affects my livelihood. I almost lost my job because of this fear. I'm just describing the reality of the work that my colleagues have been putting in.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children. Random adult strangers will not be able to. This will be common across social platforms. This is my belief, anyway.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children.
Ok so the only way to "protect the children" is to literally verify everyone, children included, and keep them in their own pen.
What Discord is doing right now is technically useless and won't protect anyone. It looks like they're just slowly turning up the heat so that we don't notice once the water is boiling and we all have to submit our IDs.
It seems to me that possibly the greatest predators of children are billionaires and their toys. I can only imagine the scale of harm already done, and still to be done, by LLMs alone, let alone by social media and content algorithms, and that's before even getting to unrestrained communication between prototypical child sexual predators. With societies and cultures growing more distant and ever more dependent on tech billionaires' platforms, I can think of no greater threat to children and adults alike.
I'm not in disagreement with you, but whenever news of a child predator chatting with a child on a messaging/social platform hits, the general reaction is "how dare the platform not have caught this". The conversation isn't about the relatively low incident rate. It's not about educating parents to look after their children more. It's about blaming the platform. And again, I'm not saying platforms can't be more responsible. It's just that if the platform is confronted by this, the logical solution from their POV is to build age controls into their platforms.
Are you saying that LLMs are grooming children? Billionaires are bad, but there are only 3,279 billionaires globally which is not nearly enough people to explain the tens of thousands of minors abused every year.
I'm saying that LLMs are warping their minds and causing serious harm, up to and including death. Notably when I said billionaires and their toys being the greatest predators to children, I did not say sexual predators.
Just as an example and starting point of what I mean, and I think it's going to get much worse. And those are just the cases well-known enough to put on Wikipedia; there will obviously be more. Then expand that to social media platforms, and the harm becomes hard to quantify, because harm caused through social media often has another person on the other end, so you can say it's not the platform, it's the person. But the platforms are carefully constructed to create certain forms of contact between people for engagement, so I won't give them that leeway.
That's not to discount the harm caused by prototypical child sexual predators, but the scale of that harm is completely dwarfed by the reach of billionaires. Hundreds of millions, possibly billions of people are impacted by those 3,279 billionaires. Of course I don't actually exclude hundred-millionaires either, and some billionaires, like Bezos's ex-wife, aren't the prototypical psychopath billionaire, so I don't really care about the exact number; 'billionaire' is more of a stand-in for extremely rich assholes who use their wealth to exert power over others.
I read an interesting book on internet pornography and how addictive it can be for people, especially teens/young adults whose brains are actively developing. The book features a number of testimonials from people struggling with pornography addiction, but also included research that was current at the time. I didn't process that 2014 was over a decade ago until I started writing this comment, so it isn't up-to-date research being cited. The book is titled Your Brain on Porn.
I haven't looked into subsequent research, but I did find it an interesting read in framing how different internet pornography is from previous forms of pornography. It also got me thinking about how addictive social media platforms and the internet can be. While they're not preying on the "reproduce" portions of the brain, they still offer that low-effort novelty/dopamine hit that makes it so easy to just keep engaging with the platforms.
I'm not for Discord requiring IDs, since I don't trust them or other platforms with that information, and I'd prefer a privacy-first age attestation standard be adopted before platforms try to gate adult content behind an age check. But governments are definitely pushing for things like this, judging by the social media bans and adult-content age enforcement laws going into effect. That said, I thought I'd share this book, as there has been some research done on the negatives of internet pornography for people.
A lot of FOSS communities have switched to Matrix protocol chats - I host an unfederated server that also allows calls via a LiveKit service, and it works quite well. The mobile clients leave something to be desired though.
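For anyone curious what that kind of setup involves, here is a skeletal docker-compose sketch. Treat everything as an assumption to verify against current docs: image tags and ports change, production LiveKit needs API keys and TURN configuration, and wiring LiveKit calls into Matrix requires additional pieces this omits. The domain is a placeholder.

```yaml
services:
  synapse:                       # Matrix homeserver
    image: matrixdotorg/synapse:latest
    volumes:
      - ./synapse-data:/data     # homeserver.yaml, signing keys, database config
    ports:
      - "8008:8008"              # client/federation HTTP (put a TLS reverse proxy in front)
    environment:
      - SYNAPSE_SERVER_NAME=chat.example.org   # hypothetical domain
      - SYNAPSE_REPORT_STATS=no

  element:                       # web client
    image: vectorim/element-web:latest
    ports:
      - "8080:80"

  livekit:                       # SFU used for voice/video calls
    image: livekit/livekit-server:latest
    command: --dev               # dev mode only; production needs keys + TURN setup
    ports:
      - "7880:7880"
```

Running it unfederated, as the comment describes, mostly means not exposing the federation port publicly and keeping registration closed in `homeserver.yaml`.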
It depends on what features you'd want in an alternative. I've heard self hosting stoatchat (formerly Revolt, GitHub) is the "next closest thing". Never set it up/tried it out myself, though, can't verify firsthand.
It's way less mature than Discord and not very popular, but it's the closest to Discord that I know of. As far as I know, it's pretty secure and trusted and is open source.
I've been trying to get my own instance of Stoat up for hours today and haven't been able to get through it. I wish there was a self-hosted alternative that has a decent feature set, mobile apps, and isn't 14 different containers.
It literally just got hit with some flak for the maintainer dealing with AI commits (that have since been reverted). IDK if I'd consider that a full dealbreaker, but it's something to keep in mind for those who care.
I like IRC, but unless it changed since I last used it years ago, the way it works is too different from Discord to function as a viable replacement for most people. The messages are tied to the sessions. You can't see previous message history from before your session, logging off will typically wipe all the conversation you were present for, and people can't send you a message when you're offline. Some IRC clients can store the messages, but it still limits what you can see to things you were present for.
It's useful for conversation in real time, but many people use Discord to share information and updates. I'm on many fandom servers where people consult older conversations for writing reference, some servers for news on manga translations, a server with my high school friends to coordinate hangouts or share updates (one person's phone just never got texts from mine for some reason), and a couple servers used to coordinate workers on large-scale creative projects. IRC just doesn't work for those cases.
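The session-bound history described above is usually worked around client-side (or with a bouncer like ZNC, which stays connected and replays what you missed). A minimal sketch of the client-side part, using only the Python stdlib, with hypothetical names, just to show the shape of the protocol messages involved:

```python
import re
from datetime import datetime, timezone

# An IRC channel message on the wire looks like:
#   :nick!user@host PRIVMSG #channel :message text
PRIVMSG_RE = re.compile(r"^:(?P<nick>[^!\s]+)\S* PRIVMSG (?P<target>\S+) :(?P<text>.*)$")

def parse_privmsg(line: str):
    """Return (nick, target, text) for a PRIVMSG line, or None for anything else."""
    m = PRIVMSG_RE.match(line.strip())
    if not m:
        return None
    return m.group("nick"), m.group("target"), m.group("text")

def log_line(line: str, log: list) -> None:
    """Append a timestamped record of each channel message.

    A real client would persist this to disk so history survives across
    sessions -- the part bare IRC doesn't give you.
    """
    parsed = parse_privmsg(line)
    if parsed:
        nick, target, text = parsed
        log.append((datetime.now(timezone.utc).isoformat(), target, nick, text))

log = []
log_line(":alice!a@example.net PRIVMSG #fandom :chapter 12 is out", log)
print(log[0][1:])   # ('#fandom', 'alice', 'chapter 12 is out')
```

Even with this, you only capture messages sent while you (or your bouncer) were connected, which is the limitation the comment above points out.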
However, some users may not have to go through either form of age verification. Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.
My guess is that a good deal of us will be verified without identification.
I’d expect the opposite, honestly. I think that’s more of a fig leaf to try to reduce negative feedback, like when Reddit promised to explore custom CSS on new Reddit. Especially given that I expect people who choose not to share their activity with Discord are overrepresented here.
It might not change anything yet. But since Discord is showing no restraint on automatically adding account restrictions to existing accounts, they could easily do the same with servers. They could just make a sweeping change for any non-community server to be considered 18+, because those have less moderation.
Also sorry, I've commented a bit but I don't know my way around tagging a post.
Not a huge deal, we all just do our best, and @mycketforvirrad the silent hero will come and add/remove tags. Take a look at the topic log in the sidebar of this thread (or any thread) to get a feel for how the tags are commonly used and organized.
Probably the most important tags are those to do with things like US politics, so that people who don't want to have them in their feed can filter them out by tag.
I used Stoat for a while back when it was still Revolt.
Nutshell: Overall, it is pretty good, and very close to Discord in most ways.
Caveats and negatives: It is being developed and improved at a very slow pace ... it is often a little bit buggy ... it does occasionally go down, and/or get bogged down from user load (and I expect that issue will become much worse after Discord implements this) ... and finally, the various ways in which it is not quite as feature-rich as Discord are minor annoyances, but over time, they can become frustrating.
Technically, I still have a small "friends-and-family" server on the platform, but we all moved to a self-hosted instance of Matrix/Element over a year ago. Matrix is less like Discord and takes more of an adjustment, but overall we're happier on it now than we were on Stoat.
I'm wondering how adept these age verification tools are, and if AI is actually a good use-case for bypassing them without scanning your own face. Has anybody experimented with using AI generated faces to trick these tools?
I read that, in the past, some age verification tools have been tricked by things as simple as screenshots of posed faces in Garry's Mod. I assume that those simple tricks have been patched out, but I imagine AI generated faces would be much more difficult to detect.
This was a good reminder to cancel my Nitro subscription.
Not exactly sure where the communities I am in will migrate to, but it's definitely top-of-mind for everyone now. There's a fairly strong sentiment of not giving in to the requirements, so I don't necessarily see us sticking around on the platform.
I do think the dystopia will follow us eventually wherever we go, for the most part, but who knows
Yeah and often dystopia is bipartisan so I have zero hopes about it. Especially where I'm at (the US). The entire structure is a flawed-from-the-beginning dumpster fire built on a crumbling foundation of atrocity. With basically zero politicians that even remotely represent me or my interests. The main parties are both complicit in the horrors, historically and presently, internally and internationally, and the entire tech industry is ready to grovel at their feet at a moment's notice to do their bidding. I'm not saying electoral politics doesn't matter, but I am saying that it's completely f'd. Gonna take a lot more than a ballot box to fix this nation
I don’t see the “despite leaking 70,000 IDs” in the original title. If it was an editorial choice, ultimately I don’t think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
It’s a must that automated ID checking can fall back to customer support. But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
But it’s much better than to not have any recourse in case the systems perform poorly.
I don’t see the “despite leaking 70,000 IDs” in the original title. If it was an editorial choice, ultimately I don’t think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
If as a business you choose to partner with an organisation with leaky data practices, it's absolutely your responsibility. You don't get to wash your hands of your poor choices and throw your carefully selected partner under the bus, in an attempt to deflect your responsibility for handling your customer's data.
It was a Zendesk issue. Zendesk is big enough that I don’t really see it as an issue, or a moral failing, to use them for support tickets. Sometimes stuff happens. Even AWS has its outages.
Essentially, I don’t see a reason why that would not be a one-off. It’s not like they used shady software or had poor data practices. They used probably the biggest support ticket/IT software suite, which had an exploit at that time.
Presumably they’re still using Zendesk. But that’s different from what they’re talking about in the post.
The first step is that they use auth0 or whatever to verify your identity with automated systems. This is where the premise that your data is immediately deleted is.
If you get rejected, then you can file a support ticket, and as part of that you’d have to send a picture of your ID. No guarantee there, I mean if nothing else the bitmap will have to land on the support operator’s computer so they can look at it to begin with.
If you don’t trust Zendesk, just never file a ticket and it’ll never get to that step to begin with.
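The two-step flow described above, sketched in Python. All names are hypothetical; this models the commenter's description of the process, not Discord's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VerificationResult:
    age_group: Optional[str]   # e.g. "adult", "teen", or None if undetermined
    via: str                   # which step decided it

def verify_age(
    automated_check: Callable[[], Optional[str]],
    file_support_ticket: Callable[[], Optional[str]],
) -> VerificationResult:
    """Step 1: the automated check, covered by the stated deletion policy.
    Step 2: a human reviews an ID photo via a support ticket -- the step
    where the quick-deletion promise no longer applies. A real flow would
    prompt the user before escalating; here we just call the fallback."""
    age = automated_check()
    if age is not None:
        return VerificationResult(age, via="automated")
    # Only reached when the automated check fails and the user escalates;
    # by default nothing ever goes to a human.
    return VerificationResult(file_support_ticket(), via="support_ticket")

# Example: the automated check fails, the user escalates to a human.
result = verify_age(lambda: None, lambda: "adult")
print(result)   # VerificationResult(age_group='adult', via='support_ticket')
```

The structural point is that the privacy properties of the two branches differ, which is why the thread keeps distinguishing them.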
Essentially, I don’t see a reason why that would not be a one-off.
Didn't you just say the reason in the prior comment?
But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
So if humans are infamously bad at security, and they still have the same system that puts the infamously bad humans in a place to fuck up their security, then why wouldn't it happen again?
One, you have to make a proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
Two, yeah, humans are leaky. I don’t particularly assume any privacy with any support tickets I file. This is what it is. It could be age privacy, could be a payment issue. If I’m concerned about the information being out there, I wouldn’t involve humans in IT unless the need was great.
I think they are responsible for figuring out how to make sure it doesn’t happen again, which may or may not mean switching vendors, depending on what Zendesk does to make sure it never happens again.
They’re also responsible for implementing this new age verification system, including vetting any vendors they use. How much does the previous incident really reflect on that if they’re using different vendors?
I do think some skepticism about how a new, complicated system is implemented is warranted.
All of those protections are horse shit. This is the same company that got hacked in October.
The customer support agency that was receiving the emails of ID document scans got hacked, Discord itself wasn't. I imagine they've been fired.
People currently have little reason to trust Discord's promise that any photos, whether of our faces or IDs, will be deleted. They promised that it wouldn't be stored last time, but obviously that was false.
It's like a museum hiring a third party security company to provide guards and security equipment for some special temporary exhibit, who then fail to detect and stop a guy breaking into the exhibit before running off with priceless art and artifacts. The owners of the stolen pieces don't say, "oh, that museum did nothing wrong, it was all the security company." The museum holds equal responsibility because ultimately, it was THEIR responsibility to keep the exhibit safe. THEY made that promise to the owners, not the security company.
Likewise, Discord promised its users their data would be safe and deleted right away. Who got hacked is irrelevant, because the fact the breach happened at all shows that Discord failed to ensure the data would be deleted like they promised.
So ultimately: yeah, people are going to blame Discord because they did fail at some stage.
stu2b50 | a month ago
That’s a separate thing. What they promised would automatically be deleted is the automated age verification, which as of now has never been hacked. Not that it’s existed for very long, but nonetheless, that commitment was kept.
Essentially, if you get denied by the automatic system, you can make a support ticket and say, hey, the machine was wrong, let me prove that to a human. And then the human will ask you to send over a picture of your ID.
No one ever promised that this would delete your ID. It's a bespoke service, not an official pathway.
Not to mention it was Zendesk that was hacked. Using Zendesk isn’t a sign of poor judgement, it’s by far the most common IT ticketing software. If you’re going to avoid any company that uses Zendesk you may as well just avoid ever emailing customer support.
CannibalisticApple | a month ago
Do you have a source on Zendesk being the one breached? I'm not asking to be snarky or combative, I'm genuinely asking because I'm finding sources (namely Discord itself) claiming it's a company called 5CA that was breached. I initially only found articles referencing 5CA (if they named the vendor at all), and after I saw your comments mentioning Zendesk I had to dig a bit to find some.
Based on this site, the leak was first attributed to Zendesk and then Discord publicly named 5CA as the vendor. Both 5CA and Zendesk naturally denied being the ones hacked. And... That's essentially all I can really confirm.
Honestly, it's a bit confusing trying to sort out where, exactly, the breach happened. A lot of the articles are from right when the breach was announced, or from after Discord named 5CA. And with today's announcement, there are even more articles to sift through. This is one of the few more in-depth writeups I can find from after October with a focus on 5CA, but I have no clue how accurate or reputable that particular site (or writer?) is. There was a failure at some point, and the fact that it's not clear where that failure point was doesn't sit well with me given how major a breach this is. If you have a source that goes more in-depth, I'd really appreciate it.
That aside, to address your first point, I'll just quote the relevant bit from the Discord blog post I shared:
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third-party vendors. They also don't use the word "immediately" or any language specifying when it would be deleted. Which is arguably just as bad, because that bullet point reads as an explicit promise to secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Just... Discord really hasn't done anything to convince me (and others) that they've taken measures to ensure this never happens again. After that breach, I and many others just don't want to risk giving our government-issued IDs to any private companies over the internet. It really doesn't matter who was hacked, just that it happened to Discord users once and we don't want it to happen to us.
Grumble4681 | a month ago
Notably, that wording also doesn't specify that no data at all about the 'video selfies' is transmitted, only that the video itself doesn't leave the device. This leaves room for them to derive data points about your face from the video, send those data points, and then reconstruct your face from them. I have seen wording in other articles that is supposedly more specific, saying nothing will be transmitted other than the approximate age the on-device detection comes up with, but verifying that requires time and expertise to dive into all the various policies involved and determine what exactly is covered and what is left to interpretation.
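To make the concern concrete, here's a minimal toy sketch (not Discord's actual system; all function names are hypothetical, and real systems would use a learned face-embedding model rather than block averages). The point it illustrates: the raw frame can stay "on device" while a small derived summary is transmitted, and that summary alone is enough to rebuild a coarse likeness.

```python
# Toy illustration: the raw frame never "leaves the device",
# but a compact feature vector derived from it does, and a
# receiver can rebuild a coarse likeness from just that vector.

def extract_features(frame, block=4):
    """Downsample an HxW grayscale frame into block x block averages.
    Stands in for a real face-embedding model."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // block, w // block
    features = []
    for by in range(block):
        for bx in range(block):
            total = 0
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    total += frame[y][x]
            features.append(total / (bh * bw))
    return features  # only this small list is transmitted

def reconstruct(features, block=4, size=8):
    """Rebuild a coarse size x size image from the transmitted features."""
    scale = size // block
    return [[features[(y // scale) * block + (x // scale)]
             for x in range(size)]
            for y in range(size)]

# An 8x8 "frame": a bright square (stand-in for a face) on a dark background.
frame = [[200 if 2 <= y < 6 and 2 <= x < 6 else 10 for x in range(8)]
         for y in range(8)]

sent = extract_features(frame)   # 16 numbers cross the wire, not the frame
approx = reconstruct(sent)       # receiver rebuilds a coarse likeness
print(len(sent))                 # → 16: tiny payload, structure survives
```

A real embedding is higher-dimensional and lossier than this, but the same asymmetry holds: "the video stays on your device" is a statement about pixels, not about everything derivable from them.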
Even then, policies are one thing, actualities and consequences are another thing. They can say they won't transmit anything but that doesn't mean it's actually true, and as we've become incredibly familiar with in the past 10 years or so, the consequences for lying and harming others are basically so little as to be none at all.
Minori | a month ago
Can you explain what this looks like? Any sufficiently detailed description sounds like a compressed photo to me, and they're explicitly not sending those.
Fully collecting everyone's personal data isn't something every company does. Discord isn't Facebook. What are the incentives to lie and deal with storing this data?
trim | a month ago
Monetisation. AI training. Which is maybe the same thing. The fact that they could do anything and have it be easier to ask forgiveness than permission. Also history. How many times do we have to be lied to? I'm sure they'll be on the level /this/ time. As opposed to all the other times.
teaearlgraycold | a month ago
I don’t understand why seeing pornography is so harmful that we need our privacy invaded like this.
gary | a month ago
It's not just pornography that they're worried about. It's child predators. Social platforms are racing now to get ahead of the looming threat of an outright ban from governments worried about child predation. The public outrage when pedophiles groom children over the internet has reached a point that it is impossible to ignore. Something must be done; at least, that's the increasing sentiment.
derekiscool | a month ago
That just sounds like right-wing propaganda used to justify ID/face capturing. How is age verification actually going to combat child predators? Adults can still communicate with children in either environment.
gary | a month ago
I work in the industry and this shit affects my livelihood. I almost lost my job because of this fear. I'm just describing the reality of the work that my colleagues have been putting in.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children. Random adult strangers will not be able to. This will be common across social platforms. This is my belief, anyway.
0xSim | a month ago
Ok so the only way to "protect the children" is to literally verify everyone, children included, and keep them in their own pen.
What Discord is doing right now is technically useless and won't protect anyone. It looks like they're just slowly turning up the heat so that we don't notice once the water is boiling and we must all submit our IDs.
sparksbet | a month ago
That's because it is. But when that right wing propaganda leads to legislation being passed, companies do sort of have to comply.
Grumble4681 | a month ago
It seems to me that possibly the greatest predators of children are billionaires and their toys. I can only imagine the scale of harm already done, and still to come, from LLMs alone, to say nothing of social media and content algorithms, even setting aside unrestrained contact with prototypical child sexual predators. With societies and cultures growing more distant and ever more dependent on tech billionaires' platforms, I can think of no greater threat to children and adults alike.
gary | a month ago
I'm not in disagreement with you, but whenever news of a child predator chatting with a child on a messaging/social platform hits, the general reaction is "how dare the platform not have caught this". The conversation isn't about the relatively low incident rate. It's not about educating parents to look after their children more. It's about blaming the platform. And again, I'm not saying platforms can't be more responsible. It's just that if the platform is confronted by this, the logical solution from their POV is to build age controls into their platforms.
Minori | a month ago
Are you saying that LLMs are grooming children? Billionaires are bad, but there are only 3,279 billionaires globally which is not nearly enough people to explain the tens of thousands of minors abused every year.
Grumble4681 | a month ago
I'm saying that LLMs are warping their minds and causing serious harm, up to and including death. Notably when I said billionaires and their toys being the greatest predators to children, I did not say sexual predators.
https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots
Just as an example and starting point of what I mean, and I think it's going to get much worse. And those are just the cases well-known enough to put on Wikipedia; there will obviously be more. Then expand that to social media platforms, where the harm is harder to quantify because there's often another person on the other end, so you can say it's not the platform, it's the person. But the platforms are carefully constructed to create certain forms of contact between people for engagement, so I won't give them that leeway.
That's not to discount the harm caused by prototypical child sexual predators, but the scale of that harm is completely dwarfed by the reach of billionaires. Hundreds of millions, possibly billions, of people are impacted by those 3,279 billionaires. Of course I don't exclude hundred-millionaires either, and some billionaires, like Bezos's ex-wife, aren't the prototypical psychopath billionaire, so I don't really care about the exact number. "Billionaire" is more of a stand-in for extremely rich assholes who use their wealth to exert power over others.
pekt | a month ago
I read an interesting book on internet pornography and how addictive it can be for people, especially teens/young adults whose brains are actively developing. The book features a number of testimonials from people struggling with pornography addiction, but also included research that was current at the time. I didn't process that 2014 was over a decade ago until I started writing this comment, so it isn't up-to-date research being cited. The book is titled Your Brain on Porn.
I haven't looked into subsequent research, but I did find it an interesting read in framing how different internet pornography is from previous forms of pornography. It also got me thinking about how addictive social media platforms and the internet in general can be. While they're not preying on the "reproduce" portions of the brain, they still offer that low-effort novelty/dopamine hit that makes it so easy to just keep engaging with the platforms.
I'm not for Discord requiring IDs, since I don't trust them or other platforms with that information, and I'd prefer a privacy-first age attestation standard be adopted before platforms attempt to gate off any sort of adult content behind an age gate. But governments are definitely pushing for things like this, judging by the social media bans and adult-content age enforcement laws going into effect. That said, I thought I'd share this book, as there has been some research done on the negatives of internet pornography for people.
[OP] Tmbreen | a month ago
Does anyone have good alternatives? I am very happy with the discord I've built for my friends, but this is ridiculous.
Also sorry, I've commented a bit but I don't know my way around tagging a post.
Wulfsta | a month ago
A lot of FOSS communities have switched to Matrix protocol chats - I host an unfederated server that also allows calls via a LiveKit service, and it works quite well. The mobile clients leave something to be desired though.
goose | a month ago
It depends on what features you'd want in an alternative. I've heard self hosting stoatchat (formerly Revolt, GitHub) is the "next closest thing". Never set it up/tried it out myself, though, can't verify firsthand.
derekiscool | a month ago
There's Stoat (formerly called Revolt)
It's way less mature than Discord and not very popular, but it's the closest to Discord that I know of. As far as I know, it's pretty secure and trusted and is open source.
ShroudedScribe | a month ago
I've been trying to get my own instance of Stoat up for hours today and haven't been able to get through it. I wish there was a self-hosted alternative that has a decent feature set, mobile apps, and isn't 14 different containers.
derekiscool | a month ago
Yeah, sadly there aren't any good alternatives if you're looking for a discord-like experience.
Honestly, I wish a company like Proton would release a Discord alternative. I would happily pay for a privacy focused option.
WrathOfTheHydra | a month ago
It literally just got hit with some flak over the maintainer dealing with AI commits (which have since been reverted). IDK if I'd consider that a full dealbreaker, but it's something to keep in mind for those who care.
derekiscool | a month ago
I don't like that, but at least it's open source so people can see exactly what's being committed.
tachyon | a month ago
Internet Relay Chat. It's been around for decades and will outlive Discord.
CannibalisticApple | a month ago
I like IRC, but unless it's changed since I last used it years ago, the way it works is too different from Discord to function as a viable replacement for most people. Messages are tied to the session: you can't see message history from before you joined, logging off typically wipes the conversation you were present for, and people can't send you a message while you're offline. Some IRC clients can store messages, but that still limits what you can see to things you were present for.
It's useful for conversation in real time, but many people use Discord to share information and updates. I'm on many fandom servers where people consult older conversations for writing reference, some servers for news on manga translations, a server with my high school friends to coordinate hangouts or share updates (one person's phone just never got texts from mine for some reason), and a couple servers used to coordinate workers on large-scale creative projects. IRC just doesn't work for those cases.
goose | a month ago
Shout out to #Tildes on Libera! We're up to 12 users! 13 if you include ChanServ!
Raistlin | a month ago
IRC still exists? Christ, that's a trip.
trim | a month ago
My ISP does customer support on it.
goose | a month ago
What ISP, out of curiosity?
trim | a month ago
Ah I'd rather not say. It's kind of a niche ISP.
JCPhoenix | a month ago
My subreddit had an IRC channel -- dead by the end -- through like the late 2010s.
I can't remember why, but I have definitely been on an IRC server post-pandemic. Even if just briefly.
vord | a month ago
With web-based clients that the server can host as well, it's pretty easy to use too.
tomf | a month ago
my guess is that a good deal of us will be verified without identification.
Macha | a month ago
I’d expect the opposite honestly - I think that’s more a fig leaf to try to reduce negative feedback, like when reddit promised to explore custom CSS on new reddit. Especially given that I expect people who choose not to share their activity with Discord are overrepresented here.
tomf | a month ago
we’ll see how it rolls out. IRC servers wouldn’t ever do this. :)
stu2b50 | a month ago
Unless that discord server is marked as adult content (and why would you self report in that case), why would this change anything?
ShroudedScribe | a month ago
It might not change anything yet. But since Discord is showing no restraint on automatically adding account restrictions to existing accounts, they could easily do the same with servers. They could just make a sweeping change for any non-community server to be considered 18+, because those have less moderation.
Omnicrola | a month ago
#meta #offtopic
Not a huge deal, we all just do our best, and @mycketforvirrad the silent hero will come and add/remove tags. Take a look at the topic log in the sidebar of this thread (or any thread) to get a feel for how the tags are commonly used and organized.
Probably the most important tags are those to do with things like US politics, so that people who don't want to have them in their feed can filter them out by tag.
vord | a month ago
The natural conclusion of the internet being transformed into a digital shopping mall and DMV.
Can't wait for all the phishers to get copies of everybody's faces and IDs. Maybe I should register discird.com.....
Mikie | a month ago
Time to go back to old school Ventrilo or TeamSpeak or something ancient and self-hosted.
trim | a month ago
Paywall bypass: https://archive.is/PqusV
Eric_the_Cerise | a month ago
I used Stoat for awhile back when it was still Revolt.
Nutshell: Overall, it is pretty good, and very close to Discord in most ways.
Caveats and negatives: It is being developed and improved at a very slow pace ... it is often a little bit buggy ... it does occasionally go down, and/or get bogged down from user load (and I expect that issue will become much worse after Discord implements this) ... and finally, the various ways in which it is not quite as feature-rich as Discord are minor annoyances, but over time, they can become frustrating.
Technically, I still have a small "friends-and-family" server on the platform, but we all moved to a self-hosted instance of Matrix/Element over a year ago. Matrix is less like Discord and takes more of an adjustment, but overall we're happier on it now than we were on Stoat.
derekiscool | a month ago
I'm wondering how adept these age verification tools are, and if AI is actually a good use-case for bypassing them without scanning your own face. Has anybody experimented with using AI generated faces to trick these tools?
I read that, in the past, some age verification tools have been tricked by things as simple as screenshots of posed faces in Garry's Mod. I assume that those simple tricks have been patched out, but I imagine AI generated faces would be much more difficult to detect.
Minori | a month ago
If a human can't tell a difference, I'm not sure how any model would be able to either. We run into the same situation with LLM text.
Hobofarmer | a month ago
As owner of a medium sized gaming server... If I choose not to age verify like this will I be locked out of necessary features?
@asinine
Sheep | a month ago
If your server or channels in your server are marked as nsfw, you will not be able to view/enter them.
If any media is sent that discord's automated tools detect as NSFW, it will be filtered out (discord displays a warning about it).
You also can't speak in stage channels.
That's about it.
Hobofarmer | a month ago
Ok, thanks. Then it shouldn't impact me.
kingofsnake | a month ago
"Hobo farming" doesn't exactly sound workplace appropriate. 🤨
Hobofarmer | a month ago
Am I a hobo who farms or a farmer of hobos? It's like a rorschach test.
kingofsnake | a month ago
And I'm a psychopath apparently.
0x29A | a month ago
This was a good reminder to cancel my Nitro subscription.
Not exactly sure where the communities I am in will migrate to, but it's definitely top-of-mind for everyone now. There's a fairly strong sentiment of not giving in to the requirements, so I don't necessarily see us sticking around on the platform.
I do think the dystopia will follow us eventually wherever we go, for the most part, but who knows
stu2b50 | a month ago
In this case the dystopia is mostly because of legal regulations, so they'll only stop when they change at the ballot box.
0x29A | a month ago
Yeah and often dystopia is bipartisan so I have zero hopes about it. Especially where I'm at (the US). The entire structure is a flawed-from-the-beginning dumpster fire built on a crumbling foundation of atrocity. With basically zero politicians that even remotely represent me or my interests. The main parties are both complicit in the horrors, historically and presently, internally and internationally, and the entire tech industry is ready to grovel at their feet at a moment's notice to do their bidding. I'm not saying electoral politics doesn't matter, but I am saying that it's completely f'd. Gonna take a lot more than a ballot box to fix this nation
[OP] Tmbreen | a month ago
Yeah. Maybe this will be the event that launches some competition? I can only hope.
stu2b50 | a month ago
I don’t see the “despite leaking 70,000 IDs” bit in the original title. If it was an editorial choice, ultimately I don’t think that was their problem: the software they used for support tickets had a security breach because of poor credential hygiene.
It’s a must that automated ID checking can fallback to customer support. But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
But it’s much better than having no recourse at all in case the automated systems perform poorly.
trim | a month ago
If as a business you choose to partner with an organisation with leaky data practices, it's absolutely your responsibility. You don't get to wash your hands of your poor choices and throw your carefully selected partner under the bus in an attempt to deflect responsibility for handling your customers' data.
stu2b50 | a month ago
It was a zendesk issue. Zendesk is big enough that I don’t really see it as an issue, or a moral failing, to use them for support tickets. Sometimes stuff happens. Even AWS has its outages.
Essentially, I don’t see a reason why that would not be a one-off. It’s not like they used shady software or had poor data practices. They used probably the biggest support tickets/IT software suite which had an exploit at that time.
trim | a month ago
Phew, that's okay then. Who knows what is in use at the new place, and the new policy announced has weasel words in it too.
We'll delete your data, except where we won't. Kind of thing. Great.
stu2b50 | a month ago
Presumably they’re still using zendesk. But that’s different from what they’re talking about in the post.
The first step is that they use auth0 or whatever to verify your identity with automated systems. This is where the premise that your data is immediately deleted is.
If you get rejected, then you can file a support ticket, and as part of that you’d have to send a picture of your ID. No guarantee there, I mean if nothing else the bitmap will have to land on the support operator’s computer so they can look at it to begin with.
If you don’t trust zendesk, then if you never file a ticket it’ll never get to that step to begin with.
Grumble4681 | a month ago
Didn't you just say the reason in the prior comment?
So if humans are infamously bad at security, and they still have the same system that puts the infamously bad humans in a place to fuck up their security, then why wouldn't it happen again?
stu2b50 | a month ago
Two reasons:
One, you have to take proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
Two, yeah, humans are leaky. I don’t particularly assume any privacy with any support tickets I file. This is what it is. It could be age privacy, could be a payment issue. If I’m concerned about the information being out there, I wouldn’t involve humans in IT unless the need was great.
skybrian | a month ago
I think they are responsible for figuring out how to make sure it doesn’t happen again, which may or may not mean switching vendors, depending on what Zendesk does to make sure it never happens again.
They’re also responsible for implementing this new age verification system, including vetting any vendors they use. How much does the previous incident really reflect on that if they’re using different vendors?
I do think some skepticism about how a new, complicated system is implemented is warranted.