hamstergeddon | 6 hours ago
I'm definitely not doing this. And it sounds like avoiding it isn't going to be too painful for my personal needs?

> Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
There is no scenario where any of the Discords I'm in or would ever be in are worth sending my ID or face scan to a 3rd party company for.
Grumble4681 | 4 hours ago
I'm concerned it won't stop there. Who is responsible for determining when a server is 'adult' oriented, and what are the criteria? What are the consequences for labeling a server as not adult oriented if Discord later determines it is?

Point being, I could see this playing out like reddit or any other hierarchy situation, where the people at the very top dictate the terms and everyone underneath them has to carry them out even if the way of carrying them out is nonsense. Maybe every discord server owner just labels their server as 'adult' to avoid onerous rules or consequences. Or maybe Discord only tags a server as adult oriented when it finds one that isn't labeled but should be, in which case no one is pressured to switch their server over, because the worst that can happen is what they would have had to do anyhow. But I suspect Discord may have to do more than that, because otherwise it's an easy way to bombard their system: spin up servers not tagged as 'adult' content so anyone can view them, and leave it on Discord to tag them all.

How long until the criteria for being 'teen' friendly become a server without profanity or certain levels of violent content, etc.? Think about movie or video game ratings, where someone decides that if you're 16 you can't play GTA or watch a certain movie. What is going to make Discord immune from the pressure to apply some arbitrary bullshit rules once they have an identity verification system?
I don't expect any servers that I'm part of would qualify as 'adult' content as they've laid it out in this post, so I'll possibly keep using Discord. But I sure hope there's a decent alternative, because I have no intention of letting them store my facial characteristics in a database somewhere, or my ID, where the information can leak out and be used for other purposes.
LukeZaz | an hour ago
Ditto. The thing I'm taking issue with here though is just how much this highlights that much of my social life is tied up in and highly dependent on a single company whose policy has long since stopped being something I find tolerable. Discord has made it clear to me that it is eager to step down the path of enshittification, yet many communities and friends of mine use it as their primary social space.
I don't want any single company having that much control over my social life, much less one so eager to trample on people's privacy or whose future looks so grim. I don't know what I'm going to do about it just yet, but it's abundantly clear that this is not sustainable.
[OP] Tmbreen | 7 hours ago
Does anyone have good alternatives? I am very happy with the discord I've built for my friends, but this is ridiculous.
Also sorry, I've commented a bit but I don't know my way around tagging a post.
Wulfsta | 6 hours ago
A lot of FOSS communities have switched to Matrix protocol chats - I host an unfederated server that also allows calls via a LiveKit service, and it works quite well. The mobile clients leave something to be desired though.
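Roughly, a setup like that can be sketched with Docker Compose. This is only a sketch under assumptions: the image names are the commonly published ones, domains and credentials are placeholders, and the Element Call/LiveKit wiring varies by client version, so check the Synapse, LiveKit, and lk-jwt-service docs before using it.

```yaml
# Sketch only: an unfederated Synapse homeserver plus a LiveKit SFU for calls.
# Image names, ports, and paths are assumptions to verify against upstream docs.
services:
  synapse:
    image: matrixdotorg/synapse:latest
    volumes:
      - ./synapse-data:/data           # homeserver.yaml and media store
    ports:
      - "8008:8008"                    # client API only; the federation port
                                       # (8448) is never exposed, so the server
                                       # stays effectively unfederated

  livekit:
    image: livekit/livekit-server:latest
    command: --config /etc/livekit.yaml
    volumes:
      - ./livekit.yaml:/etc/livekit.yaml
    ports:
      - "7880:7880"                    # LiveKit signalling (WebSocket)
      - "50000-50100:50000-50100/udp"  # WebRTC media (range narrowed here)

  # Mints short-lived LiveKit tokens for Matrix clients (used by Element Call).
  # Its LiveKit URL/key/secret go in the environment, per its README.
  lk-jwt-service:
    image: ghcr.io/element-hq/lk-jwt-service:latest
    ports:
      - "8081:8080"
```

Clients typically discover the call backend through the homeserver's .well-known client file (an MSC4143 rtc_foci entry pointing at the token service), and because no federation port or DNS records are published, nothing federates in or out.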
goose | 6 hours ago
It depends on what features you'd want in an alternative. I've heard self-hosting stoatchat (formerly Revolt; it's on GitHub) is the "next closest thing". Never set it up/tried it out myself, though, so I can't verify firsthand.
derekiscool | 6 hours ago
There's Stoat (formerly called Revolt)
It's way less mature than Discord and not very popular, but it's the closest to Discord that I know of. As far as I know, it's pretty secure and trusted and is open source.
tachyon | 4 hours ago
Internet Relay Chat. It's been around for decades and will outlive Discord.
CannibalisticApple | 4 hours ago
I like IRC, but unless it changed since I last used it years ago, the way it works is too different from Discord to function as a viable replacement for most people. The messages are tied to the sessions. You can't see previous message history from before your session, logging off will typically wipe all the conversation you were present for, and people can't send you a message when you're offline. Some IRC clients can store the messages, but it still limits what you can see to things you were present for.
It's useful for conversation in real time, but many people use Discord to share information and updates. I'm on many fandom servers where people consult older conversations for writing reference, some servers for news on manga translations, a server with my high school friends to coordinate hangouts or share updates (one person's phone just never got texts from mine for some reason), and a couple servers used to coordinate workers on large-scale creative projects. IRC just doesn't work for those cases.
goose | 4 hours ago
Shout out to #Tildes on Libera! We're up to 12 users! 13 if you include ChanServ!
Raistlin | 3 hours ago
IRC still exists? Christ, that's a trip.
trim | 3 hours ago
My ISP does customer support on it.
goose | 2 hours ago
What ISP, out of curiosity?
trim | 2 hours ago
Ah I'd rather not say. It's kind of a niche ISP.
vord | 3 hours ago
With web-based clients that the server can host as well, it's pretty easy to use too.
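The Lounge is one such self-hostable web client: it stays connected on the server and keeps per-user history, which addresses the usual IRC complaint about losing everything when you log off. A minimal sketch, with the image tag, port, and data path assumed from its docs, so verify before use:

```yaml
# Sketch only: self-hosted web IRC client (The Lounge) that keeps per-user
# message history, so logs aren't lost when the browser closes.
# Image tag, port, and data path are assumptions to check against the docs.
services:
  thelounge:
    image: thelounge/thelounge:latest
    restart: always
    ports:
      - "9000:9000"                    # web UI
    volumes:
      - ./thelounge-data:/var/opt/thelounge   # user accounts, logs, config
```

Per its docs, users are created with thelounge add <name> inside the container, after which the client keeps its IRC connections and logs alive server-side even when nobody has the page open.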
JCPhoenix | 35 minutes ago
My subreddit had an IRC channel -- dead by the end -- through like the late 2010s.
I can't remember why, but I have definitely been on an IRC server post-pandemic. Even if just briefly.
Omnicrola | 3 hours ago
#meta #offtopic
Not a huge deal, we all just do our best, and @mycketforvirrad the silent hero will come and add/remove tags. Take a look at the topic log in the sidebar of this thread (or any thread) to get a feel for how the tags are commonly used and organized.
Probably the most important tags are those to do with things like US politics, so that people who don't want to have them in their feed can filter them out by tag.
stu2b50 | 6 hours ago
Unless that discord server is marked as adult content (and why would you self report in that case), why would this change anything?
ShroudedScribe | 5 hours ago
It might not change anything yet. But since Discord is showing no restraint on automatically adding account restrictions to existing accounts, they could easily do the same with servers. They could just make a sweeping change for any non-community server to be considered 18+, because those have less moderation.
tomf | 4 hours ago
> However, some users may not have to go through either form of age verification. Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.

my guess is that a good deal of us will be verified without identification.
Macha | 4 hours ago
I’d expect the opposite, honestly - I think that’s more a fig leaf to try to reduce negative feedback, like when reddit promised to explore custom css on new reddit. Especially given that I expect people who do not choose to share their activity with Discord are overrepresented here.
tomf | 4 hours ago
we’ll see how it rolls out. IRC servers wouldn’t ever do this. :)
Durinthal | 6 hours ago
Discord press release link for reference.

> Key privacy protections of Discord’s age-assurance approach include:
> On-device processing: Video selfies for facial age estimation never leave a user’s device.
> Quick deletion: Identity documents submitted to our vendor partners are deleted quickly — in most cases, immediately after age confirmation.
> Straightforward verification: In most cases, users complete the process once and their Discord experience adapts to their verified age group. Users may be asked to use multiple methods only when more information is needed to assign an age group.
> Private status: A user’s age verification status cannot be seen by other users.
Personally I have no interest in doing that myself regardless of what reassurances they provide. At present I don't think there's anything I'd be missing out on if I didn't, but I imagine people will find workarounds to using their own face before long.
tachyon | 4 hours ago
All of those protections are horse shit. This is the same company that got hacked in October.
Mikie | 6 hours ago
Time to go back to old-school Ventrilo or Teamspeak or something ancient and self-hosted.
Eric_the_Cerise | 5 hours ago
I used Stoat for awhile back when it was still Revolt.
Nutshell: Overall, it is pretty good, and very close to Discord in most ways.
Caveats and negatives: It is being developed and improved at a very slow pace ... it is often a little bit buggy ... it does occasionally go down, and/or get bogged down from user load (and I expect that issue will become much worse after Discord implements this) ... and finally, the various ways in which it is not quite as feature-rich as Discord are minor annoyances, but over time, they can become frustrating.
Technically, I still have a small "friends-and-family" server on the platform, but we all moved to a self-hosted instance of Matrix/Element over a year ago. Matrix is less like Discord and takes more of an adjustment, but overall we are happier on it now than we were on Stoat.
vord | 3 hours ago
The natural conclusion of the internet being transformed into a digital shopping mall and DMV.
Can't wait for all the phishers to get copies of everybody's faces and IDs. Maybe I should register discird.com.....
trim | 6 hours ago
Paywall bypass: https://archive.is/PqusV
stu2b50 | 6 hours ago
I don’t see the “despite leaking 70,000” ids in the original title. If it was an editorial choice, ultimately I don’t think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
It’s a must that automated ID checking can fall back to customer support. But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
But it’s much better than not having any recourse in case the systems perform poorly.
trim | 6 hours ago
If as a business you choose to partner with an organisation with leaky data practices, it's absolutely your responsibility. You don't get to wash your hands of your poor choices and throw your carefully selected partner under the bus in an attempt to deflect your responsibility for handling your customers' data.
stu2b50 | 6 hours ago
It was a zendesk issue. Zendesk is big enough that I don’t really see it as an issue, or a moral failing, to use them for support tickets. Sometimes stuff happens. Even AWS has its outages.
Essentially, I don’t see a reason why that would not be a one-off. It’s not like they used shady software or had poor data practices. They used probably the biggest support tickets/IT software suite which had an exploit at that time.
trim | 6 hours ago
Phew, that's okay then. Who knows what is in use at the new place, and the new policy announced has weasel words in it too.
We'll delete your data, except where we won't. Kind of thing. Great.
stu2b50 | 6 hours ago
Presumably they’re still using zendesk. But that’s different from what they’re talking about in the post.
The first step is that they use auth0 or whatever to verify your identity with automated systems. This is where the premise that your data is immediately deleted is.
If you get rejected, then you can file a support ticket, and as part of that you’d have to send a picture of your ID. No guarantee there, I mean if nothing else the bitmap will have to land on the support operator’s computer so they can look at it to begin with.
If you don’t trust zendesk, then if you never file a ticket it’ll never get to that step to begin with.
Grumble4681 | 4 hours ago
> Essentially, I don’t see a reason why that would not be a one-off.

Didn't you just say the reason in the prior comment?
> But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.

So if humans are infamously bad at security, and they still have the same system that puts the infamously bad humans in a place to fuck up their security, then why wouldn't it happen again?
stu2b50 | 4 hours ago
Two reasons:
One, you have to take proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
Two, yeah, humans are leaky. I don’t particularly assume any privacy with any support tickets I file. This is what it is. It could be age privacy, could be a payment issue. If I’m concerned about the information being out there, I wouldn’t involve humans in IT unless the need was great.
skybrian | 6 hours ago
I think they are responsible for figuring out how to make sure it doesn’t happen again, which may or may not mean switching vendors, depending on what ZenDesk does to make sure it never happens again.
They’re also responsible for implementing this new age verification system, including vetting any vendors they use. How much does the previous incident really reflect on that if they’re using different vendors?
I do think some skepticism about how a new, complicated system is implemented is warranted.
derekiscool | 2 hours ago
I'm wondering how adept these age verification tools are, and if AI is actually a good use-case for bypassing them without scanning your own face. Has anybody experimented with using AI generated faces to trick these tools?
I read that, in the past, some age verification tools have been tricked by things as simple as screenshots of posed faces in Garry's Mod. I assume that those simple tricks have been patched out, but I imagine AI generated faces would be much more difficult to detect.
Hobofarmer | 6 hours ago
As owner of a medium sized gaming server... If I choose not to age verify like this will I be locked out of necessary features?
@asinine
Sheep | 6 hours ago
If your server or channels in your server are marked as nsfw, you will not be able to view/enter them.
If any media is sent that discord's automated tools detect as NSFW, it will be filtered out (discord displays a warning about it).
You also can't speak in stage channels.
That's about it.
Hobofarmer | 3 hours ago
Ok thanks then it shouldn't impact me.
kingofsnake | an hour ago
"Hobo farming" doesn't exactly sound workplace appropriate. 🤨