I don't know if my job will still exist in ten years

22 points by lalitm a day ago on lobsters | 14 comments

hyperpape | a day ago

It’s unseemly to grieve too much over it, for two reasons. First, the whole point of being a good software engineer in the 2010s was that code provided enough leverage to automate away other jobs.

The rare member of the "leopards eating people's faces" party who understands their own face is edible. No snark directed towards the author--I think the consistency is healthy.

And as uncomfortable as it is for us, I don't think automating away jobs is a bad thing--my leopards references don't imply a moral judgment towards automation.

mtset | a day ago

In my first job, I automated a process that had previously required hours of labor by nurses per patient. Because this is very expensive, and because the job was not part of emergent patient care nor required by law or insurance, it simply didn't get done most of the time. With our automation, it got done more often, and many clinics we worked with actually hired more nurses to handle patients identified as high risk by our software.

It is possible to deploy automation in a way that both provides a lot of value and doesn't disrupt lives. It's capitalism - private ownership of the automation and the value it creates, basically - that leads to these bad outcomes.

All this to say, I don't think working in software inherently makes you a leopards eating people's faces supporter.

einacio | a day ago

I don't know. I have had to read the output for a project, and it kept doing stupid stuff. If the LLMs are so good at fixing bugs, why do they also introduce them? As long as the tools don't understand what they are doing, they're doing nothing more than wasting time and electricity. Understanding still seems quite far away, at least with the current algorithms. And they are only cheaper than dev time because the prices themselves are lies.

simonw | a day ago

> If the LLMs are so good at fixing bugs, why do they also introduce them?

How good are the automated tests?

einacio | 22 hours ago

And who writes the automated tests?

If it's the LLM, it's still random output that doesn't understand what it's testing.

I have seen pages of tests for trivial stuff (like whether addition works) that barely touch business logic, or that had business-logic bugs and would not work with real data.
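As a sketch of the pattern described here (all names and the discount rule are hypothetical, just for illustration): the first test below re-verifies the language's arithmetic and tells you nothing about your code, while the second pair actually exercises a business rule, including its boundary.

```python
# A low-value test: it checks Python's arithmetic, not our code.
def test_addition_works():
    assert 1 + 1 == 2

# A hypothetical business rule worth testing: orders over 100 get a 10% discount.
def apply_discount(total: float) -> float:
    """Return the total after a 10% discount on orders strictly over 100."""
    return total * 0.9 if total > 100 else total

# Tests that exercise the rule, including the boundary case an LLM might miss.
def test_discount_applies_above_threshold():
    assert round(apply_discount(200), 2) == 180.0

def test_no_discount_at_exact_threshold():
    assert apply_discount(100) == 100
```

Tests like the last one (the exact threshold) are the kind of edge case that only matters if the test author understands the rule being tested.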

simonw | 19 hours ago

The LLM writes them, you review them to make sure it didn't do anything stupid. If it did you tell it what to change.

This is one of those things which the November-onwards frontier models (Opus 4.5+, GPT 5.1+) are significantly less likely to screw up than previous models. I'm finding they have much better taste in tests now.

spc476 | 18 hours ago

If I have to micromanage the output from the LLM, I might as well write it myself.

JaDogg | 13 hours ago

I take a hybrid approach

Write tests using x as a reference for y. Only write the test cases below:

  1. When foo is null, bar should return boo
  2. ...

After that in another prompt

Reduce the line count of test cases by using parameterized tests. Extract common patterns to utility functions.
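The second prompt above — collapsing repetitive cases into parameterized tests — might produce something like this pytest sketch. The `bar` function and its behavior are hypothetical, loosely following the "when foo is null, bar should return boo" case from the first prompt:

```python
import pytest

# Hypothetical function under test, matching the "null foo returns boo" case.
def bar(foo):
    if foo is None:
        return "boo"
    return str(foo).upper()

# One parameterized test replaces a page of near-identical test functions.
@pytest.mark.parametrize(
    "foo, expected",
    [
        (None, "boo"),   # the null case from the prompt
        ("abc", "ABC"),  # hypothetical additional cases
        (42, "42"),
    ],
)
def test_bar(foo, expected):
    assert bar(foo) == expected
```

Each tuple in the list becomes its own test case in pytest's output, so a failure still points at the exact input that broke.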

spc476 | 23 hours ago

A previous manager of mine trusted the tests more than the code. I constantly bitched at him that the tests themselves might not be good, since they reflected the understanding of the person writing them (me, mostly) rather than being valid tests (communication at The Enterprise was not that great).

marginalia | 23 hours ago

Oddly enough, I think current cutting-edge models are both good at introducing bugs and good at identifying and fixing bugs once they learn of them (even, frankly, some impressively obscure ones), but they are not as good at discovering that the bugs exist in the first place. Even if you get them to implement tests, they fairly often miss important edge cases.

Loup-Vaillant | 22 hours ago

The way I understand LLMs, they’re imitators more than anything. As such, I can see them doing an average job, though perhaps not yet.

Thing is, "average" these days is terrible. The state of our industry is so abysmal that the last thing I want is to automate the same kind of crap code I’ve been seeing on the job for the past 20 years. We first need to get our act together and pick up the various ways to build actually good software — that just works, performs reasonably well, and isn’t an unmaintainable mess under the hood. Then we apply machine learning. Maybe.

Oh, and there’s the reliability, trust, and liability thing. If the AI comes up with a machine checked proof that the code it generated is correct, perfect: I’ll learn to write formal specs. Until then though, quite a few niches will still need humans at the helm.
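As a toy illustration of what "machine-checked proof" means here (nothing AI-specific — just a property the compiler itself verifies): a Lean 4 theorem stating that reversing a list preserves its length. If this compiles, the property holds for every list, not just the cases a test suite happened to try.

```lean
-- A machine-checked spec: reversing a list never changes its length.
-- `simp` discharges the goal using the core lemma List.length_reverse.
theorem reverse_preserves_length (l : List Nat) :
    l.reverse.length = l.length := by
  simp
```

The vision in the comment above is an AI producing proofs like this for nontrivial specs, with the human's job shifting to writing the spec.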

The more I use LLMs and the more time passes with them being available, the less I'm worried about losing my job to one. My main concern all along has been the anti-democratic method that these tools have taken in supposedly "reshaping the nature of work", but mostly as an excuse to lay people off.

I don't have a problem with the technology, I don't even think copyright is a useful concept, but I don't like it when people's livelihoods are put on the kill line because it would raise someone's stock price by 2% this quarter and net 15 other guys like a billion bucks. That's not even AI-specific, people did that back in the day with automatic looms and rightfully had their factory smashed.

But rich people haven't learned that lesson yet. We can cooperate to make society better, it doesn't have to be this Lord of the Flies bull-ish. We have a massive corruption problem in the US because everyone handles rich people with kid gloves. We need accountability up in here!

sanxiyn | 18 hours ago

> They'd just have to get better and more reliable at doing the things they can already do.

So far, my impression is that they got better, but they didn't get more reliable. So I wouldn't use "just" here.

giann | 11 hours ago

I think that if we want to continue to be software engineers in the coming years, we'll have to be better ones. Most of our job right now is implementing and combining solutions that already exist: you take state-of-the-art software, APIs, and frameworks, and combine them to solve the problems your company is facing.

AI is good at doing just that, and it's getting better at it. What AI is not good at, and in my opinion may never be good at, is coming up with original ideas. For instance, I recently saw a video of a guy making popular AI models rewrite Minecraft in a day. It's all really impressive. But I don't think AI could have come up with the idea of Minecraft if some version of it didn't already exist. Of course, we also don't come up with ideas in a vacuum, but we're far better at it than AI will ever dream to be.

So yes, being a software engineer is going to become something entirely different. It's a big shift, and I don't think all of us will be able to keep up with it.