x64k | 12 hours ago
I'm a reluctant but full believer in Winestock's theory of the eternal mainframe. It checks out:

"The desktop computer won't completely disappear. Instead, the outward form of the personal computer will be retained, but the function — and the design — will change to a terminal connected to the cloud (which is another word for server farm, which is another word for mainrack, which converges on mainframes, as previously prophesied). True standalone personal computers may return to their roots: toys for hobbyists." (emphasis mine)
I echo the author's disappointment with "appliances" to some degree. But I think it's important -- precisely for this reason! -- to cultivate a healthy separation between hobby and professional development.
True standalone personal computing, with character and magic, isn't commercially viable on a wide scale at this point, and hasn't been for a very long time. There are exceptions, but certainly not enough of them for most developers to be employed on it.
That doesn't mean it's dead. People are still doing homebrew games and experimenting with new designs, including in the 8-bit space (see e.g. Zeal or the Mega65). It's not all nostalgia either (see e.g. PineTime for something that isn't retrocomputing); I picked Zeal and the Mega65 just because the 8-bit era is mentioned in the story.
It's just that the large-scale commercial software arena is no longer where you should look for it.
facundoolano | 10 hours ago
This is a great take, thank you.
edsu | 7 hours ago
Agreed, and thank you for the reference! In a similar vein I was reminded of the permacomputing ethos/community when reading the original post.
maveonair | a day ago
mtsolitary | a day ago
I wonder how AI pessimism/optimism among tech workers varies by age cohort. Would be an interesting study.
vhodges | a day ago
I am 57 and I started at 13. I like agentic engineering for the most part.
The fun part for me has always been designing the system; realizing the system is the slog. I am probably pretty conservative in my usage, getting it to do the chore tasks and things like writing specific functions or modules (e.g. I am fairly precise in what I ask of it).
But it lets me focus on the design rather than the implementation details.
swaits | 7 hours ago
Anecdata: I started when I was 8. I’m almost 54 now.
Programming has been fun every day since I started. Before AI and with AI.
If I’m building stuff, and thus learning, I’m having the time of my life. Without exception.
cpurdy | 2 hours ago
I started when I was 8. I am 54 now. And your comment is spot on 😁
I could have retired at 30, but this programming thing is just way too much fun.
darkkindness | 23 hours ago
i mean, surely this has been done! this isn't exactly "AI sentiment by age for tech workers", but here is a 2023 preprint. it's primarily a cross-sectional study about AI usage across gender and age cohorts (n=1480), but they also had a separate analysis focusing only on people with "technology-related" education (n=836) which is a bit broader than tech workers.
obviously AI usage and AI sentiment are two different things, but on pages 8 and 10 they have a big table where they cluster respondents' reasons to use / not use AI, with examples, which might better align with what you had in mind
aloys | a day ago
Gosh, this is hard to read... It feels heavily written with AI.
To quote the author, "Cheaper. Faster. But hollowed out"
dozens | a day ago
Really? I didn't get that impression. What did you see that made you think that?
aloys | 19 hours ago
Maybe I'm mistaken and becoming paranoid. I'll take the author's word for it.
toastal | 12 hours ago
I thought it was hard to read with 150+ characters on a line.
reezer | 8 hours ago
I disagree, because this isn't new compared to the many, many people who copy most of their code from Stack Overflow. You don't need the details, but you still need a basic grasp and understanding.
For now, this is true with LLMs too.
We had these things before. Not just SO, but also good, intricate autocomplete systems.
We have frameworks that evidently make people incapable of vanilla JavaScript.
We have cloud systems that hide servers behind insane amounts of abstraction, yet people still fail because they don't get that behind all of it are real servers, regardless of the product being called "serverless".
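To make that concrete, here's a minimal sketch (the handler shape is illustrative, not any particular vendor's API) of a "serverless" function reporting on the very real machine it runs on:

```python
# A hypothetical "serverless" handler that peeks at the machine it
# actually runs on; every field below comes from a real box somewhere.
import os
import platform
import socket

def handler(event=None, context=None):
    # The event/context signature mimics common FaaS conventions,
    # but this runs anywhere Python does.
    return {
        "hostname": socket.gethostname(),  # the real host has a name
        "kernel": platform.release(),      # and a kernel version
        "cpus": os.cpu_count(),            # and a CPU count
    }

if __name__ == "__main__":
    print(handler())
```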
It feels like we're on the brink of getting beyond that, but right now AI feels more like a replacement for SO and library hunting. That's great and all, but right now it isn't more than that, and no matter what people think and expect, we don't know if it will be anything more than that anytime soon.
I think it would be great, but given how many huge companies have been breaking their teeth on it with essentially unlimited resources available, we're still not there. Maybe it needs just a bit longer. Maybe basic LLM usage will just find new ways of using the same technology, like with agents. I think much can come out of that, but that big change isn't there. The biggest effect right now is that you don't have to reinvent the wheel, because you can just strip the copyright off almost everything in existence, which is of course very much changing the landscape. But programming so far is still programming, now with another tool that provides a great interface for automatically integrating SO answers; that's great, but it's also not more than that.
I think the jump is big, but then so was the jump from offline to the Internet of Things, always online, etc.
edsu | 8 hours ago
I agree with much of what you wrote. But the big difference with copy-pasting from Stack Overflow is that it required searching, reading, and often evaluating one answer among many. Maybe this is shifting to prompt jujitsu, where you ask the LLM if it can come up with a better solution, or if it has considered this and that. But that is a very different interaction, and one that feels significantly shallower than the critical and social engagement that reading on Stack Overflow cultivated.
reezer | 23 minutes ago
Reading on Stack Overflow cultivated social engagement?
I think SO famously got rid of that. Maybe that even made us happier with the stuff that LLMs tend to return.
I do indeed think that prompt jujitsu is replacing Google jujitsu (or the same thing when pasting errors). The need to evaluate the answer also seems to be on a similar level.
I do agree that this leads to less, and duller, interaction.
Another related worry is that it kind of leads to "knowledge" or "data" (think Internet content) getting stuck in time. We also see LLMs taking an extreme liking to e.g. Python, as well as to certain writing styles. You may think of these things what you want, but I think they, just like present-day morals and trends, will get stuck. Especially with new content being created with the aid of LLMs.
Even before that, this trend had already started because of social networks, somewhat centralized human interaction, and even traditional media mostly copying and at best translating and putting things into their own words.
This already seemed to dry up more independent culture. The recent decade has seen a shift to a quite limited "mainstream culture" (including the accompanying "counter cultures").
I think this will get worse on both the social and the technological side, which has already been suffering for a while from extreme monocultures despite a lot more people being involved. See how most just try to copy FAANG, and how websites, interfaces, and phones are essentially all the same.
While often declared such, they are by no means objectively better; they are more like fashion trends and hypes than clear advancements. I fear that even those could stagnate, because "AIs" recycle what is already there, minds adapt, and people use whatever the outcome might be.
Given that even before this, new products were often hyped as technological innovations and huge leaps forward, I wonder if this will even be noticeable. Maybe we'll even engrave these cycles into models to give the perception of innovation, just like with classical fashion and things like toothbrush designs.
That's what I very much worry about: humankind stagnating and reducing itself both technologically and socially.
vonneudeck | 7 hours ago
Maybe I’m not getting something important, but to me the big promise of — and the big threat posed by — computers was always that they did exactly what you told them. Not necessarily what you wanted, even less likely something useful, but definitely what you told them. Computers obeyed. If it didn’t work, it meant your order was bad or your whole technical approach was futile.
LLMs don’t follow orders like that; they obey more the way most humans do. Somewhat, but not quite exactly: usually trying to cut corners, sometimes lying a little, needing to be cajoled or intimidated or reminded. One can argue that this also just means your order was bad, but for lack of a better term I’d say it feels more like your leadership was bad. Even though clearly LLMs have neither consciousness nor character and do not orient themselves around any leadership at all.
Interestingly enough, to a lot of people this feels more accessible, not less, and it arguably is. But the people who were able to get something useful and maybe even some fun out of the old stuff are likely preselected in a very special way.
The thing I’m most reminded of is the tantrum some seasoned C developers throw when you tell them that they aren’t writing for an actual machine, but for a stochastic probability engine simulating one specifically for them, with a lot of branch prediction and even darker magic happening.
I hope this is insightful or at least amusing for some. This is my first comment on lobsters and I still need to get a feeling for the tone/vibe here
thequux | 5 hours ago
I think you've hit the nail on the head, but perhaps not quite squarely. I grew up with computers (I wrote my first, admittedly not very complex, program when I was 3), and the computer would say yes or no with no prevarication. I either understood the rules of the game or I didn't, either translated my intent into instructions correctly or didn't, and thought through all the edge cases or did not. Either way, if I failed, it was because I didn't understand what I was doing enough or didn't express myself clearly, and either way, it was fully within my power to fix the issue and get what I wanted. I've spent the last 35 years of my life getting better at reliably getting it right, and now the "market" is saying that that has no value. That smarts a bit.
But there's a side effect of all the effort of getting good at telling a computer what to do: I learned to care about the details obsessively. When writing a program, I can't not think about how data will be laid out in memory, what the access patterns will be, whether my hot loop will invoke the GC or call out to a machine on the other side of an ocean. I can't even stop thinking about those things when using a program that somebody else writes, and AI explicitly tells you not to care about those things, that those are details you should not even know about. As a result, I end up with a pile of absolute dreck that I feel shame at attaching my name to. Thinking about claiming the output of my own work reminds me of a cross between blatant plagiarism and some of the managers I've had in the past claiming credit for my work. My professional pride screams no, but the hellscape that is the job market right now asks "do you really have a choice?".
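To give a flavor of the kind of detail I mean, here's a minimal sketch (timings will vary by machine and interpreter; the point is the shape, not the numbers) of the same sum done with and against the data's layout:

```python
# Summing the same 2000x2000 structure two ways: walking each row list
# in order, versus hopping across row objects for every column index.
# The second traversal pays for indexing overhead and poor locality.
import time

N = 2000
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    total = 0
    for row in m:        # iterate each row's elements in order
        for x in row:
            total += x
    return total

def sum_cols(m):
    total = 0
    for j in range(N):   # for each column, touch all N row objects
        for i in range(N):
            total += m[i][j]
    return total

for f in (sum_rows, sum_cols):
    start = time.perf_counter()
    f(matrix)
    print(f"{f.__name__}: {time.perf_counter() - start:.3f}s")
```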
I am not generally inclined towards hate, but I hate this. Everything I value about the craft of software engineering and software development is being dismissed as irrelevant. I see the future that will result and it fills me with despair, but like a modern-day Cassandra, my warnings go unheeded. In the end, I just couldn't do it; I left pure technical work 8 months ago and went into management. If I can't escape it, at least I can give my staff a bubble where they don't have to worry about having KPIs about their AI use and they can focus on doing the right thing for the long term rather than getting the feature shipped now. But I still have that despair; how long will I be able to provide that shield?
"This is my first comment on lobsters and I still need to get a feeling for the tone/vibe here"
IMHO, you nailed it :-D
xyproto | 23 hours ago
I recognize the feeling, but just like people can have horses as a hobby even after they were supplanted by cars, developers can have flow-inducing programming that requires brain power as a hobby too.
I can also recommend the game of Go. It feels like it tickles some of the same parts of the brain.
IcePic | 13 hours ago
I think this covers my experience too. The "no pixel on the screen unless I put it there" really resonated with me. All the way from modal popups to ads in the Windows Start menu and so on.
edsu | 8 hours ago
I enjoyed this article a lot, probably because I'm of the same vintage (but not a CTO). One thing that I've found striking in recent writing about AI in the software industry is that there has been a notable shift from talking about Building AI Into Everything to Building Everything With AI. Maybe it has just been a shift in my reading habits, but I mostly try to avoid it TBH. Perhaps this heralds the coming bubble-pop--but either way I'm grateful for the change, even as I resist using these tools in my daily work as much as possible.
In my best of all possible worlds, the "fallow period" he speaks of will be the realization that, when automation is trivially accomplished, it becomes less important. We will write less code, not more. We will automate fewer things. The current trajectory we are on doesn't seem to support this at all, but in a world where code is automatically generated and reviewed, perhaps the need for it will vanish in a puff of smoke? Is a world with less code a better world? Maybe our job will be to remove it, like plumbers, or Harry/Harriet Tuttles?
maveonair | a day ago
amw-zero | 6 hours ago
I didn't start programming until college. But so far I've witnessed:
Each of these was a gigantic industry-wide change. And I truly feel that the industry is still in its infancy. It's going to keep happening.
Garbi | 2 hours ago
This guy misses HIMEM?
stig | an hour ago
There was a paragraph from the post that struck a massively discordant note with me. He was waxing lyrical about being kept up all night by the puzzle. That happened to me on Thursday last week. I couldn’t get a refactoring to work, and I struggled to understand why.
I left my office late, had dinner, played guitar, and went to bed. I lay awake until about 6:30 am the next morning, as I couldn’t stop thinking about it. I slept about 90 minutes in the end, then got up and went to work. I restarted the refactor and got it over the line.
I felt really terrible, and the smart thing might have been to take the day off, but my fear was that I wouldn’t be able to stop thinking about it and it would completely ruin my weekend too. I slept 9+ hours the next two nights, and was still tired during the day.
If a programming problem never again keeps me awake all night, it will be too soon. Good bloody riddance! I don’t miss the same things he does. 🙃
rseymour | 21 hours ago
I'm ~46 with a similar experience, but I really think we're just getting started. The wonder for me is how it took this long for autocomplete to get this good, and when it will be 100% local, with far more control over what training data to source from, what code to read, etc. It's getting there, but yeah, the wonder of how TCP Winsock worked, or how LILO loaded Linux, is nothing compared to where we're going.