> While the majority of Gen Zers (51%) still use the technology weekly, growth has slowed to a crawl, increasing only four percentage points over the past year. This stagnation in adoption is accompanied by a sharp decline in positive sentiment.
Here's hoping the next generation avoids it en masse, leaving only niche users like coders and executives pushing it down their employees' throats. That level of usage is not enough to justify ROI on data centers, eventually leading to bankruptcy due to debt and taking heavily invested Big Tech down with it. This is the way.
AI is for the "billionaires". The billionaires don't give AI to their own kids, just like they don't give them phones, social media, or a games console until they are old enough.
AI psychosis is real and the billionaires who own the AI chatbots know this.
Token costs for frontier LLMs have started increasing exponentially, and over the last half year the models have improved incredibly on coding tasks while staying behind on non-verifiable tasks.
The main social problem with automation in general has been that less intelligent people are left behind, as only boring physical tasks are left for them to do, and people generally don't want to go back to destroying their bodies once they've had the prospect of an office job.
At some point frontier AI will only be worthwhile for super highly intelligent and motivated AI researchers, which is a tiny part of the population.
May I also add that this isn't just (or at all) about intelligence.
I'm lucky enough to be at a company where I have a large budget in terms of what I can spend in tokens. This gives me an enormous advantage over someone who is just as intelligent as me and who has the same experience as me minus the interaction I have with LLMs.
In this case the crucial difference is not intelligence; it's that I found myself in the right place to be able to go up, whereas a lot of people who are otherwise like me didn't get that opportunity, through no fault of their own.
People tend to attribute their successes to their own merit and their failures to happenstance, but if we're honest with ourselves the real world has a lot of randomness in it.
You're totally right, I probably simplified the problem too much. At the same time, people don't just get randomly assigned to companies, and I know I would quickly switch if I were working at a company that didn't have this policy.
> ...while 31% of Gen Z now report feeling outright anger toward the technology...
31% seems remarkably high. Here we seem to be running up against the limitations of statistics. It is hard to interpret whether this is a scared-and-angry sort of angry or if there is something AI-related happening that is making them angry. I might have been lucky in my experiences, but generally if people get angry there is a reason other than "things are changing".
> generally if people get angry there is a reason other than "things are changing"
Silicon Valley’s leaders have been one-upping each other on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who see through that bullshit then appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
> Silicon Valley’s leaders have been one-upping each other on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who see through that bullshit then appear to merrily go along with the apparent suicide pact.
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
With Meta, at least, it's obvious they don't even understand the potential of AI, for good or ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
The thing is what is the alternative option? If you don't build it, then your rivals do. All of your electronic and military systems then become compromised because your rivals can exploit or negate them one way or another via AI. So you have to build it regardless. It's a suicide pact the same way nuclear weapons are a suicide pact but if you don't have them then you're at the mercy of anyone else who does.
Build it without the catastrophising PR. Try, you know, selling it as something that will do good.
Altman wanking on about how he’s going to end the species is great for attracting investors—this crap worked for Bankman-Fried, too [1]. If you are building world-ending kit, you’re a super-important dude who has to be listened to. On the other end of the spectrum, you have everyone quietly doing their work.
I think the fear narrative is a bit of a thought terminating cliche.
Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now cheaper quality and worse and they think I'm dumb enough not to notice" really isn't a selling point, but pretty much the universal western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the DnD movie which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become this scapegoat for a lot of things that it touches that are also larger unsolved problems with the economy, and it's even used that way, e.g. to motivate layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
The other recurring theme is a mantra along the lines of "ends justify the means" when it comes to building data centers and all the consequences of that in the present, for some promise that AI will somehow have a net benefit to all eventually while hand-waving the details.
> Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture.
I really love this comparison. Everyone bitches about Ikea, but at the end of the day unless you're rich as fuck then "buying new furniture" means either Ikea or some other shop that adopted exactly the same business model, because we all know that the price/quality ratio is unbeatable. Ikea furniture can easily outlive you as long as you pick the correct product for your use case. "I put my fat ass on a dining table that's explicitly marketed for light distributed load and it broke in half, boo-hoo Ikea bad" like no shit, if you need a table you can stand on then choose one with extra support beams, Ikea has these too. "But if you disassemble and reassemble Ikea it falls apart" okay cool but the cost of transporting old furniture to your new house is often higher than just buying new furniture anyway. Not to mention that the chances that your old furniture will match your new house are pretty much zero.
This translates to engineers not being able to grasp the concept of "good enough", where the end user doesn't care about quality improvements beyond a certain threshold. Cue the audiophiles remaining perplexed to this day as to why nobody uses 24-bit FLAC.
I feel this is written from the mindset of "all objects within a set must be obtained at once." Or, perhaps, "nothing lasts anyway, so there's no reason to bother."
Quality has its cost. A quality dining table can be sold to any room one might place a dining table in exactly once; IKEA might sell that same dining table to the same room every year. IKEA is destined for the landfill; quality can outlive a bloodline. Sales of quality must sustain all those employed in the process accordingly. Sure, some TLC is required, but IKEA can't even get wet.
Quality also provides additional benefits. A quality piece is not only a functional object, but a wealth of sentiment and memories for the home. It is also a symbol of the pride one takes in one's craft, and a silhouette of its creator's experience and deserved reputation. IKEA is for parties and showrooms/staging. Quality is for comfort and places of importance.
My heart aches that the notion of buying prefabricated trash to use in the interim of its journey is considered better than searching individually for items that will bring character and meaning—as well as functional superiority—over the course of a lifetime.
This equates to software bragging about how well its algorithm adjusted the color of a Submit button to improve deliverability on a website masquerading as a web app that could have been written in HTML and CSS without the button at all.
Cool, if you're rich you can do that. Mere mortals have to make a decision "do I buy a luxury dining table or do I send my child to college" and it's understandable that choosing between these two isn't exactly straightforward.
Thing is, quality furniture can be bought second hand, since it lasts so long. At least you used to be able to, back when they still made that stuff.
Going the IKEA route you'll end up re-buying the same furniture over and over in ever crappier quality. Once you add that to the equation, it suddenly isn't quite as cheap anymore.
I tried. It's a major PITA, and transport costs alone make the whole endeavor not economically viable, so I backed off.
A friend of mine bought a renovated old wardrobe, and I helped him move it from the hall to the bedroom. This resulted in me seeing him, for the first time ever, have a heated, emotional argument with his wife.
Finally, your pink bathroom from the '50s just isn't fashionable. Trust me on this one. Give your cabinets to someone who can skillfully paint them (which obviously costs money).
> Going the IKEA route you'll end up re-buying the same furniture over and over in ever crappier quality.
I've never had Ikea furniture randomly fail on me, and I haven't heard of this happening in my social circle. Again, don't sit on a god damn Linnmon table. It's for everyday use items only. If you need something sturdier, choose a different product.
Think about the anger toward Clippy. Now think about Clippy, but where feeding Clippy is a significant part of GDP, and there's a religious fervor around Clippy, especially among the older and wealthy.
That's my personal impression of the anger. It's not so much Luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
Nobody will sue you for that. In every age we have had people like you, wishing things would go back to "normal", and w.r.t. technology you lot never get your way, but neither do you cause problems for anybody else. All you're doing is pissing into the wind and getting yourself wet, as is your right.
Some of them are quite happy! Others are miserable; many are abused. It's a high-control group that raises its children to believe the outside world is a terrible, scary place, and that they are the only safe place to be.
Many people are happy in cults, or they couldn't function. But that doesn't mean that the cults are, overall, a positive thing.
I hear that attitudes toward AI are much more positive in China. So people like him, in aggregate, could potentially be a danger and cause the US to give up the lead for the rest of the century. It only takes one bad election.
People who reject AI are a danger? Wow. This just sounds like laying the narrative foundation for having the government bail out these AI companies when the bill finally comes due.
The narratives around the pressure to blindly accept AI are crazy. They try every angle, from "you are a communist" to "you are too stupid".
I speculate it has a lot to do with surveillance capitalism. It's the same type of tactics that were used for things like the banning of marijuana, or the supposed health merits of cigarettes: fear mongering and lying so a few robber barons can profiteer.
I think AI is useful. I also think it was rolled out haphazardly, similar to how people used to gargle radioactive isotopes or slather them on as aftershave so others could profit quickly. There are so many issues with the technology that the press won't even cover yet, because we all have to play stupid until trends emerge to report on; otherwise billionaire defense contractors will send their figurative (or possibly literal) hit squads after us. We have to wait for the tumors to grow and the jaws to fall off before society remembers: "maybe we shouldn't be slapping radioactive stuff all over ourselves so some wealthy white dude gets wealthier."
The future of AI is in small local models that people pay zero dollars to upgrade or use. Anything else is meritless exploitation and destruction. That's why the US will lose. Reality has a liberal bias. Tough pill for AI libertarians to swallow. So they fling mud.
You frame this as if all technology is inherently good and anyone who opposes it is just dumb and wasting their time. People used to think Segways were dumb. They used to think 3D TVs were dumb. They used to think lobotomies were dumb. They used to think Xray shoe sizing was dumb. They used to think uranium in household appliances and toys was dumb.
I think this time is different. I’m not Gen Z, yet once my kids are out of school, I’m planning to leave tech behind as much as possible.
When I started in tech, at the dawn of the internet, it was an exciting field full of hope and the promise to empower and enrich the lives of people. Tech now is largely the opposite.
Enshittification is making things progressively worse. Tech companies are creating systems and tools with dark patterns abound to ensure you no longer own anything and are under constant surveillance, while populations at large are manipulated through the magic of propaganda and illusory truth. Even the productivity gains are perversely used not to give people more time through fewer work days/hours, but to give them more work instead. People are losing their connection to others and the world around them.
Everyone tends to focus on Orwell’s 1984, but I find Fahrenheit 451 to be the more prescient book. I used to be annoyed by the book people’s choice to leave society and wait for it to collapse so they could help rebuild; in my mind, they should have been mounting a resistance. Fair to say I understand the book people’s perspective much better now.
Don't worry, it's a shiny tool at the moment. The electric screwdriver had its wow moment too.
I still haven't found a single person willing to go to the movies and watch an AI movie. If it wasn't made by a person, there is no 'personal'-ity to it. It's just bland.
Eventually things will slow and slide back to thoughtful first, crapload second.
AI is seeing some degree of growth on Spotify, IIRC.
I feel like a lot of the stuff my nieces listen to is AI music. It's like a hodgepodge of popular songs with little rhyme or reason. Very 'sloppy', but if they like it....
It's hard for me to confirm whether it really is AI or not. But I'm willing to bet that (random Roblox game they're interested in today) == heavily AI-made. Maybe there's some real human effort here or there, but I have heavy suspicions.
> I feel like a lot of the stuff my nieces listen to is AI music.
Didn't we all start as kids listening to music so formulaic that it could just as well be AI-generated? A subset of people iteratively refines their music taste, starts listening to everything from bebop to obscure Canadian hardcore bands, and comes to recognize quality in music.
But I am of the opinion that AI slop is displacing a lot of would-be beginner musicians and making it even harder for them to break out.
For better or worse, a lot of beginner artists were relying on people like my nieces and their classmates clicking on their music and sharing it for Spotify $$$.
I'm looking forward to being able to play an endless stream of (background) music that is generated on the fly with my preferences, never to be heard again by anyone unless I hit a button to capture what I last heard. How cool would that be? I'm tired of scouring through volumes of sounds I don't enjoy to find a rare nugget.
Have you seen any recent mainstream movie made by "a person"? "Human made" is not the quality brand most people are looking for today. If the authors are mentally ill and have shitty personalities, AI slop will be better.
For the past year, I think I've watched more AI-generated video content than movies in terms of hours spent. Some of it is quite good (e.g. Neuralwiz)! Granted, I watch very few movies, but still, I'd say this kind of counts.
3D movies were a huge shiny new tool for a while too. I hated them. They still exist, but they're not so in-your-face (pun!) like they were. Like, hey, we did a remake of The Godfather but in 3D!
I hope AI follows the same path and diminishes. Still available, but only where it makes sense.
I share a similar dislike of 3D movies (probably not hate, though), as adding 3D doesn't really add anything to the experience of watching a film. What interests me about films is the acting, the writing, good direction, and interesting ideas, and adding 3D doesn't make any difference to that. I'd go so far as to say that adding colour to films doesn't really add much either.
AI generated films are almost certainly going to have at best mediocre acting/writing/direction and will almost certainly just recycle ideas. I hope AI films flop so hard that studios end up shunning them.
I agree about the 3D not adding anything, but my hate comes from the fact they gave me headaches, so were unwatchable. And for a while it was hard to even find a non-3D showtime.
I think AI movies (or shorts, for that matter, since I am not aware of any feature-length movie) currently are not bland; they are simply of very low quality because they are pushing the limits of the current technology. However, by the time the technology catches up (which might not be as soon as many expect), nobody will care about them, because they will have no personality.
We are building general thinking machines with the aim of replacing all human labour, ... but humans won't be replaced, they will find other jobs, because when we introduced tractors they were able to find other jobs, ... totally the same scenario.
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
Technology has been replacing manual and mental labor for millennia, and especially in the last 150 years. A farmer or accountant from 1875 would be utterly shocked by how much we depend on machines and the social and industrial institutions they enable.
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kind of hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
> A farmer or accountant from 1875 would be utterly shocked by how much we depend on machines and the social and industrial institutions they enable.
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in his department to use a computer in the 1970s, and he wrote some programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid all the calculations done on the computer by hand to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
If you can't see the difference between prior technological jumps and this current jump, you are part of the problem.
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
The motorized tractor and other agricultural technologies aimed to, and did in fact, “displace most of us” once upon a time. And now, because I’m not a farmer, I get to spend much more time with my family, in recreational pursuits, sleeping, …
> And now, because I’m not a farmer, I get to spend much more time with my family, in recreational pursuits, sleeping, …
You'll have even more time with your family when you are no longer a SWE, for example.
When automation displaced farmer manual labour, it also led to new jobs opening up for that labour to flow into.
What new jobs/fields do you see developing out of AI tools and how they've been marketed so far?
Every step of automation across the history of humanity has led to a "concentration of power" in jobs/fields which required brainpower. AI is the technology coming for brainpower. Where do we go from there? Back to farming?
And when I say AI is coming for the brainpower, it's coming for it in two ways: directly where it takes our jobs and indirectly where a lot of people using it are seemingly getting dumber. Both are quite dangerous to our combined futures.
> Technology has been replacing manual and mental labor for millennia
The difference this time is that no one can articulate what these "new" jobs are that people will find. When agricultural jobs were being decimated, factories were opening up (whether they were better jobs or not is a different discussion); the point is that the technology opened up new opportunities while destroying the old ones. We do not see this with AI, and I have yet to read even any reasonable speculation about what these "new opportunities" might be. Sure, you could argue that the future is unknown, but we should be able to at least glimpse it. And yet, we can't. Because almost any "new job" you can come up with that doesn't exist today (which is already hard to imagine) could ostensibly also be replaced with AI.
So all we have is comments like yours: vague "it worked before so it'll work again" (let's ignore the fact that the circumstances are completely different), or even worse, "people will have time to focus on things that matter", with no explanation of how they'll pay the rent and buy food to survive.
> all the benefits ... raw economic terms ... quality of (family, community, recreational, commercial, ecological, medical) life
In what way is AI improving any of these? So far, it's making all of these worse. Productivity increases don't matter if they don't benefit more than just a few wealthy shareholders.
I mean, there is much more to life than work... so let's not pretend it's all about working.
Everyone in America is now fed and most children grow up spending a ton of time with both parents. This is because of automation greatly raising productivity and bringing costs down throughout the 20th century.
It's easy to think things are terrible, but they are actually insanely good. Just 100 years ago life was horrible for basically everyone by today's standards, now it's not.
AI will continue the trend, raise productivity and bring costs down. Now it's for white collar output, instead of manufacturing and agriculture.
The labor force disruption will be painful, as it always is, especially in a country without a strong social safety net, but things will be better on the other side because we just made a ton of work more efficient and can produce more with less.
We shouldn't throw the baby out with the bath water just because it affects us this time...
And remember if there aren't jobs, people probably won't just lay down and die.
It won't be Marvin saying, "Oh god, I'm so depressed, what's the point?" We'll just start killing each other in massive numbers, because, well, if you can't create anything and there isn't enough for everyone, what else is there to do but fight over what there is?
But that's the thing, and what's really different from how it's ever been before: there absolutely is enough for everyone.
It's being deliberately gatekept from us by the wealthy, and by those who believe that no one should be allowed to have anything they haven't "earned".
The tragic thing is, to the extent that you're right, people will probably mostly kill other people who have nothing, rather than turning their anger and violence where it truly deserves to go: the rich bastards who want to own everything and prevent the rest of us from having anything.
There isn't though. Our infrastructure cannot sustain the AI race. Food supplies are weakening. An ever increasing population is being encouraged by billionaires.
There could be! But there currently is not. Nor is there any plan for that to change.
Well, you're right that our infrastructure can't sustain the AI race, but (while it's true I didn't make that clear), that's not remotely what I was talking about.
Even with food supplies "weakening"—which is only happening due to the pointless Iran war, not due to any larger trends—we still have plenty of food to feed every human being on the planet.
And regardless of what billionaires might encourage, population growth is slowing. (To the extent that it might become a genuine economic problem in a few decades if we don't find a way to adjust our economic systems to stop depending on a constantly-increasing population.)
Tariffs wiped out a lot of farmers. Fertilizer shortages absolutely are a result of the Iran war.
Population growth can occur if billionaires lobby Republicans and make birth control illegal. Which is happening right now. Last week we were seeing some scary news about that.
There is a big difference between what we could be doing and what is happening. It's more profitable for the ultra-wealthy that we can barely survive. I know it sounds abrasive, but it's a fact, and there is a lot of evidence pointing to this. Googling "Elon Musk wants people to have more children" turns up a scary number of hits from different conversations. Bezos as well.
> We are building general thinking machines with the aim of replacing all human labour, ... but humans won't be replaced, they will find other jobs, because when we introduced tractors they were able to find other jobs, ... totally the same scenario.
Technically, there's no cognitive dissonance in the statement you made, at least with the way you worded it. Thinking machines can only do thinking labor (for now), so the bright future ahead is one where mental work is reserved for the elite, while everyone else does hard, physical work in places that are too messy for the machines to operate in at the moment.
> Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
Read some of the Culture novels by Iain M. Banks.
Two example scenarios described by Kurzweil in The Singularity Is Near: superintelligence augmenting human intelligence via direct brain interface (humans vs. AI goes back to intelligence vs. intelligence as usual), or we get to live like very, very pampered and worshipped cats.
> when we introduced tractors they were able to find other jobs
Coincidentally, I am reading The Grapes of Wrath. Chapter 5 is my favourite; it's about how the big banks tractor people off the land. The whole damn book is as relevant as ever, but this chapter just sticks with you.
For AI researchers, it means understanding what "intelligence" is, and the emergence of an autonomous system that surpasses all human capabilities and learns over time.
For most AI labs like OpenAI and their investors, it used to mean an intelligent system that surpasses all human capabilities at economically useful work; then it meant $100B in profits; now it is an IPO.
For Big Tech, it is digital employees and AI data-centers to "streamline" operations.
To everyone else, it is mass job displacement and unemployment.
For Gen Z, it is "permanent underclass".
So it depends on who you are talking to and varies. Therefore "AGI" at this point is meaningless.
This Gen Z resentment is manufactured, so there is yet another pool of people angry enough to deludedly back the next aggressive idiot "savior", justifying an attack on the general population and ensuring authoritarianism is viewed as the "only way forward."
Yes, because the current pro-AI leaders, CEOs, politicians etc. are anything but authoritarian, right? Think of the rights of the freedom-loving billionaires and their world-scale network of power! What you should really worry about is this imaginary authoritarian anti-AI figure I just came up with.
Incredibly sad how many people have no concept of collective achievement, or an understanding of what technological progress buys all of humanity. It always comes at a cost, and it's the reason we aren't dying of starvation and plague in a cold winter field at the age of 45.
Technology is not value-less. There is technology with good effects on society and technology with bad effects on society. I think very few people who are against, say, surveillance capitalism are against antibiotics.
It is a completely coherent position to like most technological progress, but at the same time be critical of some uses of ML/AI.
You are just making straw men here by suggesting that people that are critical of AI are critical of all technology.
AI is fundamentally automation of labor, and to be opposed to AI instead of preparing our systems for a post-labor world is dangerously misguided - especially with historical context on what automation of labor has done for humanity.
Well, yes, but if humans need to stay in the loop (as most previous automations of labor), it is also moving the means of production into the hands of a small number of tech companies. In 2010 or 2020, anyone with a laptop could create a startup. It might be the case that in 2030, you could only do so if the major frontier model providers allow you to do so and do not make it so expensive that it's only usable by entrenched players.
I am not fundamentally against AI, on the contrary, but I think the models should be in the hands of the wider population (i.e. open weight models), so that everyone has the means of production and can benefit from the automation. Also, it would only be fair, since the models are trained on the collective output of humanity. Of course, there are several barriers currently. There are pretty good open models, but running the near-frontier versions requires a lot of capital in the form of GPUs.
Out of curiosity: how can you explain to a Gen Z fresh graduate with $50k in student loans, $5 gasoline (and rising), no healthcare, housing prices at an all-time high, and competition with their entire age group for the honor of holding multiple minimum-wage jobs below a survivable wage, that they should feel a collective sense of achievement? UBI isn't coming, and we have multiple individuals who own measurable percentages of all the world's wealth. Those same people are investing heavily in automating all the work these young people could hope to provide, while waxing poetic about changing laws and owning media companies with cold hard cash.
It's not an anomalous sense of cynicism, hundreds of thousands of people are looking at their options and feeling hopeless. I'm glad I am not in that camp. The reason I'm not is because I was born sooner than they were. I don't blame them at all, it's looking a lot like the generation after them is cannon fodder if things trend the way they are now.
I would tell them this is the problem to fix. Taking your anger out on AI is the most shortsighted thing. When faced with a powerful new capability, disavowing the capability instead of enabling society to leverage it is absurd.
AI is fundamentally the automation of labor, and we can all see the incredible fruits we all reap from similar past leaps in capability.
Structure your society for a post-labor world. Don't halt the progress that has dramatically improved the human condition. To do so is a disservice to the species and all future humans - concretely, your own loved ones and especially your children.
You can't say a technology isn't improving life for humans, what, just 4 years into the introduction of the technology? That is not even the blink of an eye.
Does literally no one look at things from a historical perspective? The history of automation is right there on the Internet, for you to peruse at will.
Theoretically, if a technology came along that destroyed the human condition in a four year time frame, your "let's wait longer than four years and see" philosophy would kill us all.
Okay, but you have no basis to assume a technology that automates labor will do that given your priors (previous technologies that have automated general labor en masse). In other words, the FUD is not based in anything, whereas the optimism most certainly is.
I'm not trying to argue whether AI is good or bad. I'm only pointing out the flaw in your philosophy.
Like, there will one day be a technology invented that could indeed wipe us all out in 5 years. On a long enough timeline, it's a certainty that someone will come up with such a thing. And when it comes, there will be people such as yourself saying "no technology has ever wiped us out before, therefore this one won't either". And then it will wipe us out, and there will be no one left to say "well I guess this time was the exception".
I don't have anger against AI. I am disgusted by the companies rolling it out.
UBI also won't fix things. The post-AI world that the US tech CEOs want us to imagine is not a utopia. The US manufactures almost nothing on the world scale. Our biggest contributions to the world economy were things like farm goods (which are in peril), fuel (which most countries are trying to phase out for environmental and recent geopolitical reasons), and software, which will be commoditized through AI. Anything the US can manufacture, China can do better, cheaper, and faster. Manufacturing hasn't been in our culture for decades, and our infrastructure is shoddy, and will be shoddier once data centers spin up and more wealth is concentrated among people who do not pay any taxes.
Gen Z and those coming after have no chance at a sustainable life if the billionaires get what they are asking for. Also, in a capitalist society, asking them to sacrifice their lives for the good of others is hilarious, especially if there is no foreseeable good to come after.
Why? I'll gladly wait to be automated away, and when I eventually am I will embrace it. I certainly won't be whining about it. This is just the cost of progress. That cost doesn't magically stop applying in 2026.
This is actually a solid point, because ideally that would be the case. I think it's the opposite though. The problem with LLMs is that they are not marketed as a collective achievement. They are at their heart a tool which should belong to collective humanity. We should all be getting dividends from them and they should be collectively owned. But instead we're seeing them explicitly marketed as tools for capital centralization.
Of course no one sees it as a collective achievement when the announcements are aimed at either scaring people about how even the team behind them is worried about releasing it or for CEOs to replace workers.
Artemis II, at least in the states, was an example of people genuinely feeling collective achievement. There is absolutely no reason this AI moment couldn't be that. Instead though the companies involved have explicitly chosen fear and capital as their marketing tools. We should be seeing this as an incredible time but those involved do not want us to and plan to keep the spoils for themselves so we shouldn't.
Throughout history, automation has rarely been "marketed" as a collective achievement. That doesn't make it not one.
> But instead we're seeing them explicitly marketed as tools for capital centralization.
And labor automation, which is the single most valuable thing any technology can do. But if your answer is "kill the technology" instead of "structure society to live with it," of course you will experience pain.
Zyklon B was also fundamentally an automation of labor.
Technology is value neutral. What you call cynicism is actually people thinking more critically about technology. Technology is not a fundamental good. It can be detrimental and destructive. It can be used to oppress, to kill, to harm.
Approaching this following a "we can fix things later" stance is horribly reductive and misguided.
You spoke of History in other replies. History is littered with technological advancements that caused immeasurable harm to society.
Your mistake is that you think I am taking the generic stance of "technology is good" when in fact I am taking the stance that automating labor is good. AI is a general tool to automate labor.
Won't even engage with the comparison to Zyklon B.
In addition to the lack of value neutrality mentioned in other comments, one major issue is that no one is optimistic about AI. Structuring society to live with it is an option, but no one is offering even glimpses of what that structure might look like beyond shrugging and saying "maybe UBI?"
There have always been positive and negative sides to technology. The major reason the pessimism is so strong in this case is that even the technologists involved aren't optimistic. I'm not saying that propaganda is good, but with nuclear we had things like "My Friend the Atom." What we now call retrofuturism promised a wondrous future while the nuclear arms race ran away in the background, threatening total annihilation. Nuclear was also going to automate labor, as all power sources do, but many people were thrilled by the future being presented.
The only future people see today is something like Cyberpunk which is a pessimistic and cautionary look at the future. You cannot lament the lack of optimism around a technology when there is simply no optimistic future being presented. Maybe it's a result of deeply ingrained cultural cynicism but the fact that even those pushing the technology only have pessimistic future views make it incredibly challenging to be optimistic. If those involved cannot even offer optimism then I don't know why anyone would expect optimism from the common man.
> We should all be getting dividends from them and they should be collectively owned. But instead we're seeing them explicitly marketed as tools for capital centralization.
Well-said. You'll notice that people with jobs that are on paper most synergetic with LLM use, such as folks in the C-suite[1], are not worried at all. Why? Because they are part of the owner class and are not at risk of getting replaced. To them, it's all upside: a chance to put their feet up on the table a bit more often.
Would be nice if that also applied to the rest of us.
Interest on the national debt now costs over $1T/yr - 14% of the budget. Trump is talking about cutting social security. The reason the US economy is in a death spiral is because of moving jobs overseas, both physical and outsourcing. Wages that should have gone to US workers to be spent in the US economy are now being used to boost overseas economies like India instead.
"AI" is an achievement alright (so was designing a nuclear bomb), but if it is allowed to further gut the middle class, lowering wages, and hence spending (and tax receipts, to extent that matters any more) then it will only hasten the spiraling of the US economy down the toilet.
Companies are going to do what is best for their quarterly results and for the C-suite's bonuses. They are going to carry on outsourcing, damaging the US, if allowed to, and will replace workers with AI too, inflicting exactly the same type of harm, if that turns out to be possible. The only way to prevent either of these is with legislation.
There's a hidden unproven assertion that LLMs are net positive technological progress. Leaded gasoline was effective at improving engine performance, asbestos was an effective and low cost insulation, DDT was a very useful insecticide used in immense quantities, and thalidomide was an effective sedative. LLMs may yet join them.
> what technological progress buys all of humanity
I have yet to read a reasonable and logically sound explanation of how "AI" -- not the LLM technology itself, the tool, but rather its implementation as the promise to corporations that they can (eventually) replace most human labor -- is benefitting "all of humanity" (I'd even settle for "most of humanity").
It's a collective achievement. It was trained on humanity's creative output. The value, however, is being captured by a very small segment of the population, at the expense of everyone else.
These tools are very explicitly threatening working people, artists, and the independent web. A lot of the things we love are getting killed not as a side effect, but deliberately.
It confuses people. The same way calling what Tesla does full self driving. What people envision as "full self driving" or "artificial intelligence", and what they actually get are not the same thing. When I hear artificial intelligence, I picture Data from Star Trek, not a hopped up Eliza.
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
The way we phrase anger with AI doesn't convey the structural realities of what's going on in the knowledge-work chain. Gen Z aren't suddenly becoming anti-tech; they are acting financially rationally to protect their own economic future.
In the past there was an implicit contract for white-collar employment, based on the concept of earning experience through a period of apprenticeship-style grunt work. You enter your profession by performing uninteresting, low-paying tasks (such as writing boilerplate code or performing low-level quality assurance) while you build domain expertise and gain the perspective necessary to perform high-value work at a higher level.
LLMs are now exceptionally good at consuming that 20% of an employee's entry-level responsibilities.
What I see happening in the enterprise is that management is using AI to justify pulling the ladder up and closing the door behind them. When a senior engineer's or senior analyst's productivity has increased by 30% thanks to LLMs, the executive's response is typically not "great, we have more time to work on bigger projects" but "great, we can freeze junior hiring for 2 years."
Entry-level positions in the labor force are being automated, sharply limiting Gen Z's access to those roles. At the same time, most senior-level positions are not available to Gen Z workers, as they lack the skills and experience required to qualify.
The stagnation in AI adoption is a direct result of having no entry- or junior-level employees working underneath senior staff, which creates a bottleneck for seniors: employees generating raw output with AI have to check that output for accuracy before integrating it into work systems and processes, and there are no entry-level employees left to help.
Gen Z workers do not dislike the tool itself; they dislike how it is currently being implemented. Right now the rollout of AI is driven by labor cost cutting rather than by training and developing Gen Z's human capital for the future.
Pretty sure the article explicitly stated the resentment is due to their clearly stated concerns continually being explained away.
Was your intention to be an example of what earns the resentment? Or are you an AI model demonstrating why the resentment is deserved?
A voice is being demanded. Being louder and longer is exhausting to endure. Stop rewording and reworking the reasons into something with shape and direction, that only serves to strip the voice demanding being heard. It was written as it was meant. Slop is worse than a carbon copy, of a copy, of a copy.
I don't understand what you found to be so outrageous in the parent comment. The report addresses both emotional impressions of AI use and fears of it impacting the job market. As someone from gen Z, I don't like the average quality of slop that's being pumped into the internet and further diluting it, but my bigger concern is not living under a bridge in a few years time.
I don’t know about this. After some time sitting with it, I think that mid level and senior ICs - especially those slow to adapt - are going to be at risk of getting replaced by entry level “AI native” kids. Net on net it probably washes out to “normal” patterns of turnover and hiring once things settle.
Think "Smithers, we need to hire some of these kids who know computers!" Only fast forward about 30 years and str.replace("computers", "agents").
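For the literal-minded, the joke rendered as runnable Python (replace is a method on the string itself):

```python
# Mr. Burns, circa 1995, updated for the agent era
quote = "Smithers, we need to hire some of these kids who know computers!"
print(quote.replace("computers", "agents"))
# prints: Smithers, we need to hire some of these kids who know agents!
```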
That would only be true if AI-usage experience were equivalent to domain experience, especially since the former keeps getting easier to acquire. If anything, companies might want to hold onto their seniors and mid-levels, because they collectively decimated the pipeline for creating new ones by refusing to hire and train younger workers. If later down the line they need someone young and AI-experienced, they can just reach into the endless job market and scoop up as many as they like.
In some ways domain experience can be a hindrance, with ingrained pathways and practices shaped by constraints that no longer apply. My personal opinion is that you probably want a mix of domain experts who are enthusiastic about AI and some kids who are free of preexisting dogma and willing to challenge assumptions and try things that the old heads might chafe at.
An example from software engineering is that all production code should undergo meticulous human review. Saying “no” to this sounds crazy to an experienced SWE, but might not actually be that crazy.
I think the constraints will remain in some fields, especially where there is a high price to pay for mistakes and consequently additional regulation. You can't vibe review code that will run on medical equipment, aircraft systems or industrial machinery. It doesn't matter how few people work in these fields, the fact that they shut off the tap to making new domain experts, while everyone and their grandma is learning to use AI will mean that the experts will eventually be at a shortage after retirements, while the enthusiastic AI users will be very abundant and underpaid.
I worry and feel so much for people younger than me. As someone who entered the workforce during the GFC things were hard, but I always felt like you could make smart decisions, make decent money and build a decent life. Additionally there were plenty of interesting jobs out there around this time that required real skill and effort which you could become an expert in.
Software was really hard pre-2010. You actually had to study it because there was no AI, no stackoverflow, no NPM, etc, etc. You had to learn how to write code the hard way, typically from people who already knew how or text books, and more importantly, learn how to solve real problems often applying maths (i.e. you couldn't import a library to find the shortest path in a graph).
Similarly video editing, graphic design, 3d modelling, music production, were some other fields which were really hard. Again, there was no YouTube tutorials or AI and even the software itself was so limited compared to what we have today. You had to spend years learning the craft which meant the skill difference between those who had put years into their thing and those who had not was enormous.
I miss that world so much... I liked not being good at things and finding people who had what seemed like inhuman talent at things. I had a friend who was insanely good at graphic design and the stuff they'd send me would blow me away. The level of detail and precision didn't even seem possible to me. But now I can generate something almost just as good with AI.
Other examples would be how the work of people who spent years practising music is now indistinguishable from someone with AI, or how people who spent years learning Blender are producing models indistinguishable from someone with a Meshy subscription.
There's just no reason to dedicate yourself to anything anymore and even if you did you're probably not going to get a job anyway.
I am a hardcore AI doomer, but assuming the doom scenario isn't on the table and we simply see a concentration in wealth and mass white-collar job losses, I know I'd probably be fine or maybe even benefit from that because I grew up in a time where it was hard but very much possible to acquire a talent and use it to build wealth. Gen Z on the other hand stand no chance.
Today's job market feels corrupt and a product of pure luck. You either get extremely lucky and somehow land a good job, or you know someone who can get you through the door. In the last year I've interviewed some insanely talented people from the best universities, and we decided not to employ them because we just don't need to. It's honestly hard for me to comprehend being that motivated and working that hard only to struggle to find an entry-level job at the relatively mediocre company I work for...
We need to question whether more productivity is always good. It seems to me that the way productivity gains are distributed is essential. If it's largely just corporations benefitting from the gains, then we're creating a world that's not suitable for humans: productivity, and therefore wealth, will concentrate among fewer and fewer people, while the average person struggles to demonstrate their employability. If AI is creating a world that is much richer by some metrics but much poorer by most of the metrics the average person cares about, is it even a technology worth having? Why would Gen Z consent to this world we're building and not seek to overthrow (rightly, imo) those who created it? Technology is supposed to make our lives better, not make them harder and financially suppress us.
AI (even if pseudo-AI) is already a huge productivity multiplier and becoming better... but due to demand it is also becoming more expensive. And if things keep going this way, only corporations with deep pockets and the top 1% will be able to afford it upfront.
I wish Gen Z channeled their anger into making distributed AI instead of turning their backs on the problem or doing protests that will get nowhere since Boomers are still the biggest voting block.
small + local + distributed
Where is the Gen Z hacker movement? The very few into AI are all sellouts wishing they could join a big lab.
Make no mistake, this is not "resentment", this is a collapse of faith. Having recently experienced it myself, that is what happens when the expectation is that the frog should do the right thing™: climb out, stoke the fire (just a little bit), and climb back in the pot.
Unfortunately it seems like the winds are blowing in the opposite direction - the German government for example is trying to move us to a 48-hour-week instead.
But that is not going to happen. We would need tangible, meaningful productivity improvements for it to be possible. LLMs are moderately useful, but companies adopting them see no real productivity increase (they do see a cost increase, however).
In many ways it is a bullshit technology, being marketed way beyond its capabilities.
I don't think the 8-hour work day was accepted during the industrial revolution because job providers were convinced of productivity gains. It was won due to the leverage the labor unions and workers had over job providers.
The whole thing rests on whether workers can maintain their leverage in the AI era. That leverage is being eroded by the attempt by job providers to decouple humans from production and productivity (whether they will be successful in this iteration is, as you say, debatable). The other axis is the attempt to erode democratic norms in order to prevent the majority from checking the power of a wealthy minority.
Once those two points of leverage are completely eroded, the only leverage left is mob power. And as history has shown on multiple occasions, exercising that leverage is ugly and messy.
> I don't think the 8-hour work day was accepted during the industrial revolution because job providers were convinced of productivity gains. It was won due to the leverage the labor unions and workers had over job providers.
Good point. Count me in.
> The whole thing rests on whether workers can maintain their leverage in the AI era. That leverage is being eroded by the attempt by job providers to decouple humans from production and productivity
Being brutally honest, I have zero concerns about AI displacing jobs en masse. I honestly believe it to be a dead-end technology.
And I say this as someone that uses it daily.
I am a lot more concerned about the incoming economic downturn. Things have the potential to get very ugly pretty soon. This will have a huge detrimental impact on labor rights for the foreseeable future. AI is just smoke and mirrors, something asshole CEOs can gesture at to spin layoffs as optimistic and sound like visionaries while mismanaging the companies under their control.
It's not about productivity. Never was. If humanity doubled its productivity tomorrow, companies would either demand triple the output and/or lay off the majority of their workers. The length of work weeks is a question of worker rights, which at this point seems like a distant concept from a bygone era. It will never make sense for businesses to shorten work weeks voluntarily.
sure, but don't expect companies to pay you the same as they did for the 5-day work-week.
And there lies the rub: yes, AI can increase productivity. But the gains of that productivity are being, and will be, captured entirely by shareholders, not employees.
It seems so. I think there's general awareness that the I.T. productivity gains of the last 50 years have been entirely captured by capital. And I think there is general awareness that the same scenario is threatening to unfold now with AI.
> don't expect companies to pay you the same as they did for the 5-day work-week.
There's room here for argument and further evidence gathering. I've read of cases where the leap in productivity fully justifies holding pay steady. But admittedly these leaps may or may not hold up over the longer run.
Many people might (say) trade a 10% pay cut for a 50% jump in weekly days off. Others can't afford to do that because, well, that's the state of things. But work is becoming a scarce resource, and if you look at the numbers, more of it is desired (especially part-time) by pensioners, students, and others. If only health care were state-funded, so that it was not a deterrent to headcount.
There certainly is a problem of companies laying people off and not making new hires, but there is a bigger problem: this is the most dangerous technology in human history and could lead to our extinction in the next year or two.
kristianp | a day ago
Sell NVIDIA!!!
rvz | 23 hours ago
AI psychosis is real and the billionaires who own the AI chatbots know this.
xiphias2 | a day ago
The main social problem with automation in general has been that less intelligent people are left behind, as only boring physical tasks remain for them to do, and people generally don't want to go back to destroying their bodies once they've had the prospect of an office job.
At some point, frontier AI will only be worthwhile for super highly intelligent and motivated AI researchers, which is a tiny part of the population.
ekjhgkejhgk | a day ago
May I also add that this isn't just (or at all) about intelligence.
I'm lucky enough to be at a company where I have a large budget in terms of what I can spend in tokens. This gives me an enormous advantage over someone who is just as intelligent as me and who has the same experience as me minus the interaction I have with LLMs.
In this case the crucial difference is not intelligence; it's that I found myself in the right place to move up, whereas a lot of people who are otherwise like me didn't get that opportunity, through no fault of their own.
People tend to attribute their successes to their own merit and their failures to happenstance, but if we're honest with ourselves the real world has a lot of randomness in it.
roenxi | a day ago
31% seems remarkably high. Here we seem to be running up against the limitations of statistics. It is hard to interpret whether this is a scared-and-angry sort of angry or if there is something AI-related happening that is making them angry. I might have been lucky in my experiences, but generally if people get angry there is a reason other than "things are changing".
JumpCrisscross | a day ago
Silicon Valley’s leaders have been one upping themselves on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who read through that bullshit then appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
ben_w | a day ago
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
Meta at least it is obvious they don't even understand the potential of AI, neither for good nor ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
lostmsu | a day ago
Where did you get this notion? Did you hallucinate it?
JumpCrisscross | 23 hours ago
Thirty-one percent being smaller than half.
JumpCrisscross | 17 hours ago
Build it without the catastrophising PR. Try, you know, selling it as something that will do good.
Altman wanking on about how he’s going to end the species is great for attracting investors—this crap worked for Bankman-Fried, too [1]. If you are building world-ending kit, you’re a super-important dude who has to be listened to. On the other end of the spectrum, you have everyone quietly doing their work.
[1] https://conversationswithtyler.com/episodes/sam-bankman-frie...
marginalia_nu | a day ago
Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now cheaper quality and worse and they think I'm dumb enough not to notice" really isn't a selling point, but pretty much the universal western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the DnD movie which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become this scapegoat for a lot of things that it touches that are also larger unsolved problems with the economy, and it's even used that way, e.g. to motivate layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
anal_reactor | 22 hours ago
I really love this comparison. Everyone bitches about Ikea, but at the end of the day unless you're rich as fuck then "buying new furniture" means either Ikea or some other shop that adopted exactly the same business model, because we all know that the price/quality ratio is unbeatable. Ikea furniture can easily outlive you as long as you pick the correct product for your use case. "I put my fat ass on a dining table that's explicitly marketed for light distributed load and it broke in half, boo-hoo Ikea bad" like no shit, if you need a table you can stand on then choose one with extra support beams, Ikea has these too. "But if you disassemble and reassemble Ikea it falls apart" okay cool but the cost of transporting old furniture to your new house is often higher than just buying new furniture anyway. Not to mention that the chances that your old furniture will match your new house are pretty much zero.
This translates to engineers not being able to grasp the concept of "good enough" where end user doesn't care about quality improvements beyond certain threshold. Cue the audiophiles remaining perplexed to this day why nobody uses 24-bit FLAC.
tactlesscamel | 20 hours ago
Quality has its cost. A quality dining table can only ever be sold once for each room one might place a dining table in. IKEA might sell that same dining table to the same room every year. IKEA is destined for the landfill; quality can outlive a bloodline. Sales of quality must sustain all those employed in the process accordingly. Sure, some TLC is required, but IKEA can't even get wet.
Quality also provides additional benefits. A quality piece is not only a functional object, but a wealth of sentiment and memories for the home. It is also a symbol of the pride one takes in one's craft, and a silhouette of its creator's experience and deserved reputation. IKEA is for parties and showrooms/staging. Quality is for comfort and places of importance.
My heart aches that the notion of buying prefabricated trash to use in the interim of its journey to the landfill is considered better than searching individually for items that will bring character and meaning, as well as functional superiority, over the course of a lifetime.
This equates to software bragging about how well its algorithm adjusted the color of a Submit button to improve deliverability on a website masquerading as a web app that could have been written in HTML and CSS without the button at all.
marginalia_nu | 17 hours ago
Going the IKEA route you'll end up re-buying the same furniture over and over in ever crappier quality. Once you add that to the equation, it suddenly isn't quite as cheap anymore.
anal_reactor | 15 hours ago
I tried. It's a major PITA, and transport costs alone make the whole endeavor not economically viable, so I backed off.
A friend of mine bought a renovated old wardrobe, and I helped him move it from the hall to the bedroom; this resulted in me seeing him, for the first time ever, have a heated, emotional argument with his wife.
Finally, your pink bathroom from the '50s just isn't fashionable. Trust me on this one. Give your cabinets to someone who can skillfully paint them (which obviously costs money).
> Going the IKEA route you'll end up re-buying the same furniture over and over in ever crappier quality.
I've never had Ikea furniture randomly fail on me, and I haven't heard of this happening in my social circle. Again, don't sit on a god damn Linnmon table. It's for everyday use items only. If you need something sturdier, choose a different product.
derbOac | 23 hours ago
That's my personal impression of the anger. It's not so much Luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
aetherspawn | a day ago
Sue me, I have that right.
zer0tonin | 23 hours ago
So at least they are quite happy during winter.
danaris | 22 hours ago
Some of them are quite happy! Others are miserable; many are abused. It's a high-control group that raises its children to believe the outside world is a terrible, scary place, and that the group is the only safe place to be.
Many people are happy in cults, or they couldn't function. But that doesn't mean that the cults are, overall, a positive thing.
aworks | 18 hours ago
Originating from 17th-century persecution for their religious beliefs and practices. I can't speak to modern-day mistreatment of the group.
2ndorderthought | a day ago
I speculate it has a lot to do with surveillance capitalism. It's the same type of tactics that have been used for things like the banning of marijuana, or the supposed health merits of cigarettes: fear mongering and lying so a few robber barons can profiteer.
I think AI is useful. I think it was rolled out haphazardly, similar to how people used to gargle radioactive isotopes or slather them on as aftershave so others could profit quickly. There are so many issues with the technology that the press won't even cover yet, because we all have to play stupid until trends emerge to report on; otherwise billionaire defense contractors will send their figurative or possibly literal hit squads after us. We have to wait for the tumors to grow and the jaws to fall off before society remembers: "maybe we shouldn't be slapping radioactive stuff all over ourselves so some wealthy white dude gets wealthier."
The future of ai is in small local models people pay 0 dollars to upgrade or use. Anything else is meritless exploitation and destruction. That's why the US will lose. Reality has a liberal bias. Tough pill for ai libertarians to swallow. So they mud fling.
conartist6 | 23 hours ago
after all, they think that a) they have a right to my property and b) creativity and hard work are dead
kdheiwns | a day ago
And they were all right.
thepryz | 23 hours ago
When I started in tech, at the dawn of the internet, it was an exciting field full of hope and the promise to empower and enrich the lives of people. Tech now is largely the opposite.
Enshittification is making things progressively worse. Tech companies are creating systems and tools with dark patterns abounding to ensure you no longer own anything and are under constant surveillance, while populations at large are manipulated through the magic of propaganda and illusory truth. Even the productivity gains are perversely used not to give people more time through fewer work days/hours, but to instead give them more work. People are losing their connection to others and the world around them.
Everyone tends to focus on Orwell's 1984, but I find Fahrenheit 451 to be the more prescient book. I used to be annoyed by the book people's choice to leave society and wait for it to collapse so they could help rebuild. In my mind, they should have been mounting a resistance. Fair to say I understand the book people's perspective so much more now.
keyle | a day ago
I still haven't found a single person willing to go to the movies and watch an AI movie. If it wasn't made by a person, there is no 'personal'-ity to it. It's just bland.
Eventually things will slow and slide back to thoughtful first, crapload second.
dragontamer | a day ago
I feel like a lot of the stuff my nieces listen to is AI music. It's like a hodgepodge of popular songs with little rhyme or reason. Very 'sloppy', but if they like it....
It's hard for me to confirm if they really are AI or not. But I'm willing to bet that (random Roblox game they're interested in today) == heavily AI made. Maybe there's some real human effort here or there but I have heavy suspicions.
microtonal | a day ago
Didn't we all start as kids listening to music so formulaic that it could just as well be AI-generated? A subset of people iteratively refines their music taste, starts listening to everything from bebop to obscure Canadian hardcore bands, and comes to recognize quality in music.
dragontamer | 23 hours ago
But I am of the opinion that AI slop is displacing a lot of would-be beginner musicians and making it even harder for them to break out.
For better or worse, a lot of beginner artists were relying on kids (my nieces and their classmates) clicking on their music and sharing it for Spotify $$$.
blitzar | a day ago
The last 27 Marvel movies might as well have been written by AI; plenty of people have been to see those.
Eddy_Viscosity2 | 23 hours ago
I hope AI follows the same path and diminishes. Still available, but only where it makes sense.
ndsipa_pomu | 22 hours ago
AI generated films are almost certainly going to have at best mediocre acting/writing/direction and will almost certainly just recycle ideas. I hope AI films flop so hard that studios end up shunning them.
dauertewigkeit | a day ago
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
twoodfin | a day ago
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kind of hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
microtonal | 23 hours ago
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in the department to use a computer, in the 1970s, and wrote some programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid by hand all the calculations done on the computer, to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
SecretDreams | 23 hours ago
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
SecretDreams | 23 hours ago
You'll have even more time with your family when you are no longer a SWE, for example.
When automation displaced farmer manual labour, it also led to new jobs opening up for that labour to flow into.
What new jobs/fields do you see developing out of AI tools and how they've been marketed so far?
Every step of automation across the history of humanity has led to a "concentration of power" in jobs/fields which required brainpower. AI is the technology coming for brainpower. Where do we go from there? Back to farming?
And when I say AI is coming for the brainpower, it's coming for it in two ways: directly where it takes our jobs and indirectly where a lot of people using it are seemingly getting dumber. Both are quite dangerous to our combined futures.
insane_dreamer | 18 hours ago
The difference this time is that no one can articulate what these "new" jobs people will find are. When agricultural jobs were being decimated, factories were opening up (whether they were better jobs or not is a different discussion; the point is that the technology opened up new opportunities while destroying the old ones). We do not see this with AI, and I have yet to read even any reasonable speculation about what these "new opportunities" might be. Sure, you could argue that the future is unknown, but we should be able to at least glimpse it. And yet we can't, because almost any "new job" you can come up with that doesn't exist today (which is already hard to imagine) could ostensibly also be replaced with AI.
So all we have is comments like yours: vague "it worked before so it'll work again" (let's ignore the fact that the circumstances are completely different), or even worse, "people will have time to focus on things that matter", with no explanation of how they'll pay the rent and buy food to survive.
> all the benefits ... raw economic terms ... quality of (family, community, recreational, commercial, ecological, medical) life
In what way is AI improving any of these? So far, it's making all of these worse. Productivity increases don't matter if they don't benefit more than just a few wealthy shareholders.
spicyusername | 23 hours ago
Everyone in America is now fed and most children grow up spending a ton of time with both parents. This is because of automation greatly raising productivity and bringing costs down throughout the 20th century.
It's easy to think things are terrible, but they are actually insanely good. Just 100 years ago life was horrible for basically everyone by today's standards, now it's not.
AI will continue the trend, raise productivity and bring costs down. Now it's for white collar output, instead of manufacturing and agriculture.
The labor force disruption will be painful, as it always is, especially in a country without a strong social safety net, but things will be better on the other side because we just made a ton of work more efficient and can produce more with less.
We shouldn't throw the baby out with the bath water just because it affects us this time...
conartist6 | 22 hours ago
It won't be Marvin saying, "Oh god I'm so depressed, what's the point?" We'll just start killing each other in massive numbers cause, well, if you can't create anything and there isn't enough for everyone, what else is there to do but fight over what there is
danaris | 22 hours ago
It's being deliberately gatekept from us by the wealthy, and by those who believe that no one should be allowed to have anything they haven't "earned".
The tragic thing is, to the extent that you're right, people will probably mostly kill other people who have nothing, rather than turning their anger and violence where it truly deserves to go: the rich bastards who want to own everything and prevent the rest of us from having anything.
2ndorderthought | 21 hours ago
There could be! But there currently is not. Nor is there any plan for that to change.
danaris | 5 hours ago
Even with food supplies "weakening"—which is only happening due to the pointless Iran war, not due to any larger trends—we still have plenty of food to feed every human being on the planet.
And regardless of what billionaires might encourage, population growth is slowing. (To the extent that it might become a genuine economic problem in a few decades if we don't find a way to adjust our economic systems to stop depending on a constantly-increasing population.)
2ndorderthought | 35 minutes ago
Population growth can occur if billionaires lobby Republicans and make birth control illegal. Which is happening right now. Last week we were seeing some scary news about that.
There is a big difference between what we could be doing and what is happening. It's more profitable for the ultra-wealthy that we can barely survive. I know it sounds abrasive, but it's a fact, and there is a lot of evidence pointing to this. Googling "Elon Musk wants people to have more children" turns up a scary number of hits from different conversations. Bezos as well.
tavavex | 19 hours ago
Technically, there's no cognitive dissonance in the statement you made, at least with the way you worded it. Thinking machines can only do thinking labor (for now), so the bright future ahead is one where mental work is reserved for the elite, while everyone else does hard, physical work in places that are too messy for the machines to operate in at the moment.
munksbeer | 15 hours ago
Read some of the Culture novels by Iain M. Banks.
nicbou | 11 hours ago
Coincidentally, I am reading Grapes of Wrath. Chapter 5 is my favourite, and it's about how the big banks tractor people off the land. The whole damn book is as relevant as ever, but this chapter just sticks with you.
https://genius.com/John-steinbeck-chapter-5-the-grapes-of-wr...
rvz | 22 hours ago
For AI researchers, it is an understanding of what "intelligence" is and the emergence of an autonomous system that surpasses all human capabilities that learns over time.
For most AI labs like OpenAI and their investors, it used to mean an intelligent system that surpasses all human capabilities at economically useful work; then it meant $100B of profits; now it is an IPO.
For Big Tech, it is digital employees and AI data-centers to "streamline" operations.
To everyone else, it is mass job displacement and unemployment.
For GenZ, it is "permanent underclass".
So it depends on who you are talking to and varies. Therefore "AGI" at this point is meaningless.
solenoid0937 | a day ago
I guess cynicism is trendy.
microtonal | 23 hours ago
It is a completely coherent position to like most technological progress, but at the same time be critical of some uses of ML/AI.
You are just making straw men here by suggesting that people that are critical of AI are critical of all technology.
microtonal | 23 hours ago
Well, yes, but if humans need to stay in the loop (as most previous automations of labor), it is also moving the means of production into the hands of a small number of tech companies. In 2010 or 2020, anyone with a laptop could create a startup. It might be the case that in 2030, you could only do so if the major frontier model providers allow you to do so and do not make it so expensive that it's only usable by entrenched players.
I am not fundamentally against AI, on the contrary, but I think the models should be in the hands of the wider population (i.e. open weight models), so that everyone has the means of production and can benefit from the automation. Also, it would only be fair, since the models are trained on the collective output of humanity. Of course, there are several barriers currently. There are pretty good open models, but running the near-frontier versions requires a lot of capital in the form of GPUs.
2ndorderthought | 23 hours ago
It's not an anomalous sense of cynicism, hundreds of thousands of people are looking at their options and feeling hopeless. I'm glad I am not in that camp. The reason I'm not is because I was born sooner than they were. I don't blame them at all, it's looking a lot like the generation after them is cannon fodder if things trend the way they are now.
solenoid0937 | 23 hours ago
I would tell them this is the problem to fix. Taking your anger out on AI is the most shortsighted thing. When faced with a powerful new capability, disavowing the capability instead of enabling society to leverage it is absurd.
AI is fundamentally the automation of labor, and we can all see the incredible fruits we all reap from similar past leaps in capability.
Structure your society for a post-labor world. Don't halt the progress that has dramatically improved the human condition. To do so is a disservice to the species and all future humans - concretely, your own loved ones and especially your children.
arvid-lind | 23 hours ago
You clearly accept this as Progress, but isn't the core debate here that it doesn't improve life for humans?
solenoid0937 | 23 hours ago
Does literally no one look at things from a historical perspective? The history of automation is right there on the Internet, for you to peruse at will.
maplethorpe | 3 hours ago
Like, there will one day be a technology invented that could indeed wipe us all out in 5 years. On a long enough timeline, it's a certainty that someone will come up with such a thing. And when it comes, there will be people such as yourself saying "no technology has ever wiped us out before, therefore this one won't either". And then it will wipe us out, and there will be no one left to say "well I guess this time was the exception".
danaris | 22 hours ago
but if it makes things measurably worse for vastly more people,
can you really say that that technology is "improving life for humans"?
munksbeer | 14 hours ago
It isn't.
2ndorderthought | 23 hours ago
UBI also won't fix things. The post-AI world that the US tech CEOs want us to imagine is not a utopia. The US manufactures almost nothing on the world scale. Our biggest contributions to the world economy were things like farm goods (which are in peril), fuel (which most countries are trying to phase out for environmental and recent geopolitical reasons), and software, which will be commoditized through AI. Anything the US can manufacture, China can do better, cheaper, and faster. Manufacturing hasn't been in our culture for decades, and our infrastructure is shoddy, and will be shoddier once data centers spin up and more wealth is concentrated in people who do not pay any taxes.
GenZ and those coming after have no chance at a sustainable life if the billionaires get what they are asking for. Also in a capitalist society asking them to sacrifice their lives for the good of others is hilarious. Especially if there is no foreseeable good to come after.
roxolotl | 23 hours ago
Of course no one sees it as a collective achievement when the announcements are aimed either at scaring people with how worried even the team behind the model is about releasing it, or at pitching CEOs on replacing workers.
Artemis II, at least in the States, was an example of people genuinely feeling collective achievement. There is absolutely no reason this AI moment couldn't be that. Instead, though, the companies involved have explicitly chosen fear and capital as their marketing tools. We should be seeing this as an incredible time, but those involved do not want us to and plan to keep the spoils for themselves, so we shouldn't.
solenoid0937 | 23 hours ago
> But instead we're seeing them explicitly marketed as tools for capital centralization.
And labor automation, which is the single most valuable thing any technology can do. But if your answer is "kill the technology" instead of "structure society to live with it," of course you will experience pain.
surgical_fire | 22 hours ago
Technology is value neutral. What you call cynicism is actually people thinking more critically about technology. Technology is not a fundamental good. It can be detrimental and destructive. It can be used to oppress, to kill, to harm.
Approaching this following a "we can fix things later" stance is horribly reductive and misguided.
You spoke of History in other replies. History is littered with technological advancements that caused immeasurable harm to society.
solenoid0937 | 17 hours ago
Won't even engage with the comparison to Zyklon B.
roxolotl | 21 hours ago
There have always been positive and negative sides to technology. The major reason the pessimism is so strong in this case is that even the technologists involved aren't optimistic. I'm not saying that propaganda is good, but with nuclear we got things like "My Friend the Atom." What we now call retrofuturism promised a wondrous future while the nuclear arms race ran away in the background, threatening total annihilation. Nuclear was also going to automate labor (all power sources do), but many people were thrilled by the future being presented.
The only future people see today is something like Cyberpunk, which is a pessimistic and cautionary look at the future. You cannot lament the lack of optimism around a technology when there is simply no optimistic future being presented. Maybe it's a result of deeply ingrained cultural cynicism, but the fact that even those pushing the technology have only pessimistic views of the future makes it incredibly challenging to be optimistic. If those involved cannot even offer optimism, then I don't know why anyone would expect optimism from the common man.
archagon | 16 hours ago
Well said. You'll notice that people with jobs that are on paper most synergistic with LLM use, such as folks in the C-suite[1], are not worried at all. Why? Because they are part of the owner class and are not at risk of getting replaced. To them, it's all upside: a chance to put their feet up on the table a bit more often.
Would be nice if that also applied to the rest of us.
[1] "Mark Zuckerberg Is Building an AI Agent to Help Him Be CEO" https://news.ycombinator.com/item?id=47491355
HarHarVeryFunny | 23 hours ago
"AI" is an achievement alright (so was designing a nuclear bomb), but if it is allowed to further gut the middle class, lowering wages and hence spending (and tax receipts, to the extent that matters any more), then it will only hasten the spiraling of the US economy down the toilet.
insane_dreamer | 13 hours ago
I have yet to read a reasonable and logically sound explanation of how "AI" -- not the LLM technology itself, the tool, but rather its implementation as the promise to corporations that they can (eventually) replace most human labor -- is benefitting "all of humanity" (I'd even settle for "most of humanity").
You're welcome to try.
nicbou | 11 hours ago
These tools are a very explicit threat to working people, artists, and the independent web. A lot of the things we love are being killed not as a side effect, but deliberately.
kshahkshah | 23 hours ago
Interesting results regardless, when they compare the shift from 2025 to 2026.
anal_reactor | 23 hours ago
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
DarenWatson | 23 hours ago
In the past there was an implicit contract for white-collar employment, based on the concept of experience earned through a period of manufacturing-type work. You entered your profession by performing uninteresting, low-paying manufacturing tasks (such as writing boilerplate code or performing low-level quality assurance) while you gained domain expertise and the perspective necessary to perform high-value work at a higher level.
LLMs are now exceptionally good at consuming that entry-level 20% of an employee's responsibilities.
What I see happening in the enterprise is that management is using AI to justify pulling the ladder up and closing the door behind them. When a senior engineer's or senior analyst's productivity has increased by 30% thanks to LLMs, the executive's response is typically not "great, we have more time to work on bigger projects" but instead "great, we can freeze junior hiring for 2 years."
Entry-level positions in the labor force are being automated, seriously limiting the Gen Z workforce's access to those roles. On the other hand, most senior-level positions are not available to Gen Z workers, as they lack the skills and experience required to qualify for them.
Stagnation in the adoption of artificial intelligence (AI) technology is the direct result of having no entry- or junior-level employees to work underneath senior staff members, creating a bottleneck for seniors. Employees generating raw output with AI have to check the results for accuracy before integrating them into work systems and processes, as there are no entry-level employees to assist senior workers.
Gen Z workers do not dislike the tool (AI); rather, they do not like how the tool is currently being implemented and used. The current implementation of AI is driven by cost cutting in terms of labor rather than by providing training and developing Gen Z's human capital for future use.
jerrythegerbil | 22 hours ago
Was your intention to be an example for resentment? Or are you an AI model demonstrating the embodiment of deserving of the resentment?
A voice is being demanded. Being louder and longer is exhausting to endure. Stop rewording and reworking the reasons into something with shape and direction, that only serves to strip the voice demanding being heard. It was written as it was meant. Slop is worse than a carbon copy, of a copy, of a copy.
pfisherman | 21 hours ago
Think “Smithers, we need to hire some of these kids who know computers!” Only fast forward about 30 years and str.replace("computers", "agents").
pfisherman | 19 hours ago
An example from software engineering is that all production code should undergo meticulous human review. Saying “no” to this sounds crazy to an experienced SWE, but might not actually be that crazy.
kypro | 23 hours ago
Software was really hard pre-2010. You actually had to study it because there was no AI, no stackoverflow, no NPM, etc, etc. You had to learn how to write code the hard way, typically from people who already knew how or text books, and more importantly, learn how to solve real problems often applying maths (i.e. you couldn't import a library to find the shortest path in a graph).
Similarly, video editing, graphic design, 3D modelling, and music production were some other fields which were really hard. Again, there were no YouTube tutorials or AI, and even the software itself was so limited compared to what we have today. You had to spend years learning the craft, which meant the skill difference between those who had put years into their thing and those who had not was enormous.
I miss that world so much... I liked not being good at things and finding people who had what seemed like inhuman talent at things. I had a friend who was insanely good at graphic design and the stuff they'd send me would blow me away. The level of detail and precision didn't even seem possible to me. But now I can generate something almost just as good with AI.
Other examples would be how people who spent years practising music are now indistinguishable from someone with AI. Or how people who spent years learning blender are producing models which are indistinguishable from someone with a Meshy subscription.
There's just no reason to dedicate yourself to anything anymore and even if you did you're probably not going to get a job anyway.
I am a hardcore AI doomer, but assuming the doom scenario isn't on the table and we simply see a concentration in wealth and mass white-collar job losses, I know I'd probably be fine or maybe even benefit from that because I grew up in a time where it was hard but very much possible to acquire a talent and use it to build wealth. Gen Z on the other hand stand no chance.
Today's job market feels corrupt and a product of pure luck. You either get extremely lucky and somehow land a good job, or you know someone who can get you through the door. In the last year I've interviewed some insanely talented people from the best universities, and we have decided not to employ them because we just don't need to. It's honestly hard for me to comprehend being that motivated and working that hard only to struggle to find even an entry-level job at the relatively mediocre company I work for...
We need to question whether more productivity is always good. It seems to me that the way productivity is distributed is essential. If it's largely just corporations benefitting from the productivity gains, then we're creating a world that's not suitable for humans: a world in which productivity, and therefore wealth, will concentrate among fewer and fewer people, whilst the average person struggles to find ways to demonstrate their employability. If AI is creating a world that is much richer by some metrics, but much poorer by most of the metrics the average person cares about, then is it even a technology worth having? Why would Gen Z consent to this world we're building and not seek to overthrow (rightly, imo) those who have created it? Technology is supposed to make our lives better, not make them harder and financially suppress us.
alecco | 23 hours ago
I wish Gen Z channeled their anger into making distributed AI instead of turning their backs on the problem or holding protests that will get nowhere, since Boomers are still the biggest voting bloc.
small + local + distributed
Where is the Gen Z hacker movement? The very few into AI are all sellouts wishing they could join a big lab.
surgical_fire | 22 hours ago
But that is not going to happen. It would require tangible, meaningful productivity improvements. LLMs are moderately useful, but companies adopting them see no real productivity increase (they do see a cost increase, however).
In many ways it is a bullshit technology, being marketed way beyond its capabilities.
sillyfluke | 21 hours ago
I don't think the 8-hour work day was accepted during the industrial revolution because job providers were convinced of productivity gains. It was won due to the leverage the labor unions and workers had over job providers.
The whole thing rests on whether workers can maintain their leverage in the AI era. That leverage is being eroded by job providers' attempt to decouple humans from production and productivity (whether they will be successful in this iteration is, as you say, debatable). The other axis is the attempt to erode democratic norms in order to prevent the majority from checking the power of a wealthy minority.
Once those two points of leverage are completely eroded, the only leverage left is mob power. And as history has shown on multiple occasions, exercising that leverage is ugly and messy.
surgical_fire | 21 hours ago
Good point. Count me in.
> The whole thing rests on whether workers can maintain their leverage in the AI era. That leverage is being eroded by the attempt by job providers to decouple humans from production and productivity
Being brutally honest, I have zero concerns about AI displacing jobs en masse. I honestly believe it to be a deadend technology.
And I say this as someone that uses it daily.
I am a lot more concerned about the incoming economic downturn. Things have the potential to get very ugly pretty soon, and that will have a hugely detrimental impact on labor rights for the foreseeable future. AI is just smoke and mirrors, something asshole CEOs can gesture at to spin layoffs as optimistic and sound like visionaries while mismanaging the companies under their control.
insane_dreamer | 13 hours ago
sure, but don't expect companies to pay you the same as they did for the 5-day work week.
And there lies the rub: yes, AI can increase productivity. But the gains of that productivity are being, and will be, captured entirely by shareholders, not employees.
euroderf | 4 hours ago
There's room here for argument and further evidence gathering. I've read of cases where the leap in productivity fully justifies holding pay steady, but admittedly these leaps may or may not hold up over the longer run.
Many people might, say, trade a 10% pay cut for a 50% jump in weekly days off. Others can't afford to do that because, well, that's the state of things. But work is becoming a scarce resource, and if you look at the numbers, more of it is desired (especially part-time) by pensioners, students, and others. If only health care were state-funded, so that it was not a deterrent to adding headcount.
ChrisArchitect | 17 hours ago
Discussion then:
Study found that young adults have grown less hopeful and more angry about AI
https://news.ycombinator.com/item?id=47704443
and related:
The More Young People Use AI, the More They Hate It
https://news.ycombinator.com/item?id=47963163
silexia | 15 hours ago