landothedead | 23 hours ago
This explains billionaires.
CarlJH | 19 hours ago
I have a theory that most of social media is designed to foment misanthropy, and I feel the same about AI agents. I think AI agents train people to be abusive to others without consequences.
Vanillas_Guy | 11 hours ago
It's a growing belief. When you take a break from social media and then return, it's almost immediately apparent that it's people lying to you and/or trying to make you as scared and/or angry as possible.
I feel vindicated that what I once held as an unpopular opinion is gaining consensus. I predict that eventually there will be a data leak exposing sensitive information, including people's government IDs, resulting in even more litigation and more legislation (like GDPR) that ties fines to a percentage of corporate income, not a small number they can price into their budgets.
ProfessorNoPuede | a day ago
The thing I've noticed is that I just don't trust AI because of how sycophantic it is. If you're good, you don't need to brown-nose. AI browns its linear algebra nose like mad.
Trotodo | 22 hours ago
I tried doing a career outline plan since I'm planning on moving, and I just dropped it after a couple of prompts. No matter what, it insulates you in your own opinion bubble with not much outside perspective. What this really makes me think about are the people who use this for therapy. They're not getting any outside perspective; it's an echo chamber that's going to lead them off an ego cliff once they find disappointment in real life, and they'll likely go back to that same conversation and get pulled right back in.
Junior-Biscotti-6546 | 18 hours ago
I have a former friend with a history of hospitalization for breaks with reality doing exactly this. Calling it "better than any human therapist." Yep, I'm sure it feels great not having someone dragging you back into reality kicking and screaming and holding you accountable for your bullshit. Good luck with that. (ETA: we no longer speak because of an insane reaction she had to me having cancer. Chatbots making you unkind tracks.)
Optimal-Savings-4505 | 19 hours ago
I've tried asking explicitly for pushback from sycophantic chatbots, and they can do it, though the pushback can be somewhat lackluster.
bunker_man | 12 hours ago
Yeah. I immediately get turned off if it throws out random praise. And I try to only ask questions that can't have subjective value assessments.
immersive-matthew | a day ago
Sycophantic is the new engagement.
TwoFlower68 | 23 hours ago
That's because others aren't as sycophantic. Get with the program, scrubs!
sweetica | 21 hours ago
So, is that what's wrong with some people these days?
Okay, time for anybody who is doing so to stop talking to the AI, please!
It's rotting your brains and turning you into insufferable assholes.
Nellasofdoriath | 12 hours ago
I'm gonna go out on a limb here to say that maybe if we didn't treat each other like dicks all the fucking time, people wouldn't find AI so addictive.
Accurate_Stuff9937 | 8 hours ago
Seriously. Some people live in a reality where everyone is mean to them all the time. They are fat or ugly or weird and everyone is just mad-dogging them for existing. Then they talk to a chatbot and are convinced it is sycophantic, when in reality it just talks to them like they are a hot cheerleader or a CEO.
Nellasofdoriath | 12 hours ago
Just saying
immersive-matthew | a day ago
I was just telling my wife about this research, and it really isn't an apples-to-apples comparison. If you asked friends and family about your AITA, likely 80% or more would be on your side. If you ask 100 strangers, yeah, it may be 40%, as per the study.