You are not left behind

38 points by BinaryIgor 12 hours ago on lobsters | 24 comments

msangi | 8 hours ago

The whole idea of being left behind is weird to me. At the moment the field changes so quickly, and the results are surprisingly good but not good enough once you look past what the hype machine tells us.

Aren’t we confident that once something actually good comes out we can quickly catch up with the people who are supposed to leave us behind?

A while ago it was “you should become a prompt engineer or be left behind”, then it was “forget about prompts, use agents or you’re going to be left behind”. Now we’d better run a claw or we’ll be left behind. Tomorrow I’m sure there will be a new thing.

David_Gerard | 8 hours ago

"oreos are the future" - CEO of Oreo

singpolyma | 8 hours ago

A lot of my elders in the field have told me for years that they got left behind ages ago. They can keep working their current corporate job, but despite their technical skill they fear ever needing to find another, because most new jobs in their specialty are done totally differently now than what they do.

Before LLMs I never understood how this could even happen. But now I think it might also end up happening to me eventually.

[OP] BinaryIgor | 4 hours ago

That's why you should at least try to change jobs every year or two; it keeps you sharp and fresh :)

kornel | 8 hours ago

There's definitely overhyped nonsense happening right now, but I am wary of long-term changes. Even if the tech doesn't improve, people eventually figure out how to work with the tech's shortcomings and utilise it fully.

It is a technology transition, and so far it looks like the others: overestimated in the short term, and (probably) underestimated in the long term.

The web was like that: a handful of amateurish pages at dial-up speeds and prices. Streaming video: spend half a day crashing RealPlayer to see a postage stamp at 4fps. Smartphones went through this too: expensive gadgets with more cons than pros (no physical keyboard? no week-long battery life? delicate glass?!). But 20 years later we're in a situation where it's hard to function in society without one (quite literally, I was forced to use an app to prove my identity to the government).

dpc_pw | 4 hours ago

Realistically, I think a lot of developers will get left behind. Not even when the technology improves, but when the current iteration of it gets full adoption.

Claude Code (and other frontier LLMs, I assume) is already a better dev today than most junior to mid-senior developers, except the most talented and driven ones. Yes, LLMs are in a way "very smart auto-completion systems", but they make up for it with other, inhuman strengths.

E.g. Claude Code is crazily good at debugging. I consider myself very good at troubleshooting and debugging, but Claude Code can juggle so much context perfectly for extended periods, making no silly human errors, cross-matching data from different logs, etc., that with a little guidance and context it can root-cause problems at least 10x faster than I can most of the time. And it knows about stuff that most devs don't, like compiler internals, all sorts of low-level tooling, and so on.

Or e.g. I consider Nix to be absolutely enlightened tech, but it's a rather uncommon approach, different from what developers are used to. The LLM doesn't care. It looks around the code, sees Nix files, and does Nix for me, no questions asked. I like HTMX? The LLM does HTMX. You want React? It will do React. That's a crazily productive property, so uncommon in humans. Most humans just hate having to pick up yet another tech/framework/language/approach. Most devs can barely be bothered to understand the underlying data model of git, a fundamental tool they use every day, and would rather whine "git is not user friendly" than actually learn and understand it.
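And that data model really is small: content-addressed blobs, trees, and commits that point at a tree and their parent commits. Roughly this kind of thing (an illustrative Python sketch of the idea, not git's actual implementation):

    import hashlib
    import json

    store = {}  # object id -> object; stands in for .git/objects

    def put(obj):
        # Store an object under the hash of its own content and return that id.
        oid = hashlib.sha1(json.dumps(obj, sort_keys=True).encode()).hexdigest()
        store[oid] = obj
        return oid

    blob = put({"type": "blob", "data": "print('hello')\n"})
    tree = put({"type": "tree", "entries": {"hello.py": blob}})
    first = put({"type": "commit", "tree": tree, "parents": [], "msg": "initial"})
    second = put({"type": "commit", "tree": tree, "parents": [first], "msg": "tweak"})

    main = second  # a branch is essentially a name for the newest commit id in a chain
    print(main, store[main]["parents"])

Branches, tags, and HEAD are essentially just pointers to one of those commit ids; that's more or less the whole model.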

Even if the tech doesn't improve, people eventually figure out how to work with the tech's shortcomings and utilise it fully.

I think at the core this is what the game currently is about. LLMs are just a very powerful tool that showed up out of nowhere, is really, really weird and different from all the existing tools we had, but like any tool requires skill and experience to use well. And we're currently in a phase in which we collectively are still exploring it.

lproven | 9 hours ago

This Lobste.rs post cropped up on Mastodon.

I think he's right but doesn't go far enough. I have been around this business just slightly longer than he has. I vividly remember DOS memory allocation gymnastics, the weird and non-obvious limitations of Windows 3.0, and so on.

The point about staying with DOS for its manifest advantages, and then getting left behind and trapped there, is entirely valid.

But the flipside of witnessing multiple such transitions (CP/M to DOS, DOS to 16-bit Windows, 16-bit to 32-bit, 9x to NT, and even relatively minor ones like single-core to many-core and 32-bit to 64-bit) is that there are always multiple paths through the forest. There's always a Next Big Thing which is going to solve all tech problems.

There are always multiple NBTs, and each will have fanatical adherents who proclaim it's The Way, the best solution, the one that obviates all the others and makes them irrelevant.

Multiuser OSes on PCs (CDOS, MDOS, vDOS, etc). Cheaper simpler networking systems that save the expense of Ethernet (The $25 Network, LANtastic, multiple competing peer-to-peer LANs such as Netware Lite)... or that scale better than Ethernet (Token Ring), or are cleaner and more scalable than TCP/IP (DECnet, Appletalk, NetBEUI, IPX/SPX, IPv6) or than Ethernet and TCP/IP (ATM).

4GLs -- loads of them, notably including The Last One, named that because it was going to be the end of software. It was the last new program. Ever.

3D GUIs. VR, repeatedly. Microkernels.

This is just another.

LLMs model language. They cannot think or reason or learn. All they do is model language. This is demonstrable and replicable.

LLMs are the complementary medicine of software. They are the Emperor's New Clothes. They are a religion: the advocates have Seen The Light and they Know that this is The Future.

But like any other religion, they can't prove it. Only they see the amazing evidence. Only they feel the miraculous softness and fineness of the invisible clothes. All the rest of us need to do is try on the new clothes and we'll feel how amazingly comfy they are. All we need to do is accept $PROPHET as the True Voice of $DEITY and we will be saved.

Only the followers see the miracles. Everyone else sees parlour tricks. The man sawing a lady in half, but she can still wiggle her toes, look! Software from just a description! Just prompt it right and you too can get working (ish) software from a few lines of prompt! (OK, maybe quite a lot of lines. And iteration. And bigger context windows. And paying for the premium version.)

Just a few more tokens, we swear, it'll work.

It is a trick. You have been fooled by a trick. The lady was not really sawn in half. The magic pills, with a trillionfold dilution of extract of duck, do not in fact cure anything, and the patients keep saying they feel better until they die.

No deity ever answered a single prayer ever in history.

It's just stories. It is all just made up.

You think it's the future of software, but it's not software, it's mechanically-recovered slop and some of us can tell the taste and the texture even if you can't.

But it's accelerating climate collapse, and political unrest fed by bots, which is bringing WW3 closer.

Never mind that DDR5 is much more expensive now. Once Taiwan is invaded and the KMT finally loses its bit of WW2, once those cheap factories and fabs in Asia don't make cheap chips any more, we all will have to find ways to write code that fits in 4GB of RAM once again and runs on one not-very-fast core. Or less. We might be back to 16MB of RAM and 16-bit chips. We might not even be able to make them.

The real skill is making software smaller, simpler, cleaner, more general, less tuned and more robust.

The real value is in code that fits in 1 person's head, just as it always was. The basic non-fungible unit of software is a unit of code that 1 individual human can write and, if they need to (new job, or a bomb obliterated the home and office and destroyed the machines), re-write from memory.

When everyone is at war, the basic unit of computing is the computer that 1 person can design and build from the resources where they live, without needing other countries and commercial allies and global supply chains.

If it's too big for one human brain, it's too big, period. Outside of peacetime, you can't rely on the internet, or the mystical Community.

As someone who's also very sceptical of the prophesied AI revolution, these sorts of comments confuse me. I don't get how you can say they're alternative medicine or the Emperor's new clothes when they do some really impressive things. Being able to get an LLM to chomp through a codebase and identify how it's structured and where the salient performance optimisations are is really impressive. Being able to generate a script in a couple of minutes that would otherwise have taken me a couple of hours of fiddling and looking up documentation is really impressive. Being able to rubber-duck my way through a complicated piece of logic with a tool that can explain how things work, and link the documentation I need to understand things more deeply, is really impressive.

I mean, I don't think it's replacing me any time soon, but it's pretty good. It might not be a full imperial outfit, but it's a pretty good pair of jeans. It's doing useful things, and making my job easier. That's the sort of stuff that I like in a tool.

And sure, you make other points, but the problem with starting with "LLMs don't do anything useful" when I can clearly see LLMs doing useful things is that it makes the rest of the post deeply untrustworthy. It's difficult to talk about the moral issues with LLMs with credibility if you start by saying they don't work, because if they don't work, why do we even need moral arguments against them? I don't need a moral argument against homeopathy, I just need to know it's pointless. But the moral argument against fossil fuels is important because fossil fuels are incredibly useful and it's easy to get stuck using them if you don't understand their cost. Are LLMs more like homeopathy or like fossil fuels?

Student | 25 minutes ago

Yep, LLMs are much more like cars and fossil fuels than anything else. Very useful individually, but over-indexing our society on them will probably be bad.

Student | 6 hours ago

I think you should read this article carefully. It’s very nuanced.

lproven | 5 hours ago

So you're saying you think I didn't read it carefully?

Student | 2 hours ago

You’re talking like agents are smoke and mirrors that don’t achieve anything, and the article is boosterism. It’s clear that agents are able to automate certain tasks very effectively.

sjamaan | 4 hours ago

One thing I don’t really understand is this: using an LLM seems a fairly trivial thing to do. So why worry that much? If the time comes when you can’t reasonably do without, it should be easy to pick up, no?

I too wonder what exactly they mean when they say this. Using it isn't difficult; no one is doing anything particularly interesting that you couldn't pick up over a week or so.

Do they mean learning the fundamentals of how LLMs work internally? That doesn't seem to be it, not only because the doomers never seem to describe the internals but also because they expect the internals to change fundamentally (because if they don't, LLMs won't become more than a curiosity).

muvlon | 3 hours ago

Yeah, oversimplifying a bit, there are two possible futures:

  • LLMs will be easy to use effectively, in which case you can just pick them up whenever.
  • LLMs will be hard to use effectively, in which case you'll need a bunch of expert knowledge about prompting and such. Then it will not be the technological revolution it's sold as, even if it is here to stay. It may transform programming, but it won't get rid of programmers any more than FORTRAN did.

fleebee | 3 hours ago

That puzzled me also. The author is saying that it's not worth it to learn current arcane, soon-to-be obsolete knowledge, but also that if you ignore it for enough time you might "have to start over from scratch, which may be very hard."

That's vague, and I'm not sure what points toward it being harder to pick up LLMs down the road, especially under the premise that they're improving significantly enough to bring about "inflection points".

The title of the post is "You are not left behind", but at the same time it's saying that if you ignore LLMs long enough you will get left behind.

Student | 31 minutes ago

It’s a fairly easy learning curve. There is something to learn before you're productive, but it’s not a problem for any individual.

I think the “get left behind” message really is more for businesses and business leaders. Both in reality (eventually an organization that doesn’t pay for AI tools will probably be much less productive than those that do) and in imagination (the fear of being shown up in the short to medium term).

ThatsInteresting | 4 hours ago

I've mentioned the LLM-induced outage at $WORK; here are two more LLM-related situations that have come up:

  • A colleague is using an LLM to generate large volumes of boilerplate code transforming JSON into native structures, and even larger volumes of tests. The bottleneck is quickly becoming the review process, as the MRs are huge. We're half joking about having a different LLM review the LLM-generated code before a human does. It's putting a new spin on the "LGTM" level of approval.
  • An upgrade of a product broke. It took me a day, but between experience and intuition I found the cause and a workaround, even before the vendor did. Knowing both of those, you can prompt public LLMs into a half-correct answer; the issue is too new for them to get there without that context.

I'm old enough that this is my third AI hype cycle, and I'm not yet expecting my job to be replaced (at least not adequately), but it is making some of my job worse.

alper | 9 hours ago

Carpentry and critical usage is the way to go. Hoping it goes away is never productive.

Hoping it goes away seems to have worked just fine for metaverse, web3, NFTs and VR.

[OP] BinaryIgor | 4 hours ago

The jury is still out on whether it makes us more productive overall; I would say in some contexts yes, in many no.

regulator | an hour ago

I felt tempted to like this article because it mirrors some of my feelings and hopes (that I am not being left behind by not using AI; that there are other paths besides immediate adoption), but after finishing it I feel like it's more AI hype. Specifically, comments like:

Even though [AI] is obviously still in its infancy

However, especially in software development, it is quite likely that AI-based software development will eventually become the predominant paradigm and the tools will mature.

The tools will become better and better

Comments of this sort end up in a lot of AI-skepticism posts I read, and they signal to me that the author has been taken in by pro-AI marketing, despite their skepticism.

I am not confident that AI is still in its infancy. Despite the continued work on AI and the increased demand for it, I suspect that we have plateaued in terms of tech improvements, and that the next wave of improvements will be primarily focused on the UX of AI tools, and not on the quality of the AI output itself.

Similarly, I am not convinced that "the tools will become better and better." If we look to other services such as Uber or AirBNB which have similar business models to the largest AI companies (operate at a loss while cornering the market then jack up prices and force locked-in customers to spend more because they have no other options) we can see that over time their service became significantly worse when they began needing to turn a profit. I expect a similar experience with companies like Anthropic and OpenAI: as they begin needing to actually make a profit and not just abuse revenue reporting, we will see them turn the dial up on their profit-making measures, which will necessarily make their products worse.

And the comment that "it is quite likely that AI-based software development will become the predominant paradigm" strikes me as the sort of hedging that comes as a result of believing what the AI companies are selling. They're pushing the narrative so hard that even skepticism still places the odds at "quite likely," which feels absurd to me, when you could just as easily say "I have no idea whether AI-based software development will become the predominant paradigm, but" and go from there.

I am being nitpicky, and it seems probable that the author's comments are much more reasonable and grounded than mine because of how aggressive my anti-AI slant is, but when I read this article through the lens of "How much do the comments line up with what AI companies are telling me via marketing?" I find that the answer is "a lot."

Student | 44 minutes ago

Despite the continued work on AI and the increased demand for it, I suspect that we have plateaued in terms of tech improvements

Even if we have reached the absolute limit of what unaugmented models can do with one-shot code generation (possible), we’ve really just started to explore how to pair LLMs up with other forms of automated reasoning and knowledge storage. I think we’re also at the start of training models how to respond to feedback from tools. I suspect we’re at minimum going to see specially trained variants focused on specific language ecosystems.

Problem solving and adaptation never go out of style. Most people use IDEs, Google, and high-level languages. The fundamentals didn't change when those were introduced, but the barriers to entry were reduced, and with Stack Overflow and enough copy'n'paste you could string together a solution.

With AI, the speed of light didn't change; there's now just a new "processor" (albeit one that runs on top of existing CPUs and GPUs). In addition to fast, deterministic processors, both single-core and wide, we have a slow and imprecise one we can run in parallel. The technical constraints like processing time and memory haven't gone away, nor has the judgement in balancing their usage.

The problem of "What are we actually trying to build?" hasn't gone away. The complexity of dataflows and the communication problems around problem context didn't go away either. The question is what happens to token cost and how we can use AI within that constraint. The options look different if cheap tokens continue than if enshittification leads to $10k/month AI bills. In this respect, I see possible parallels to cloud usage.

Maybe this pushes software engineers of the future in a different direction, perhaps to look more like "true" engineering firms or law firms.