You Are Here

31 points by tsg 16 hours ago on lobsters | 18 comments

WilhelmVonWeiner | 8 hours ago

I wrote a backtracking implementation that worked perfectly fine and every LLM I threw the code into would tell me it was broken because of my use of Python iterators. They were stumped by a weird use of a marginally complex Python feature. The future of code, if it's all LLM-generated, will be incredibly boring.

The future of code, if it's all LLM-generated, will be incredibly boring.

i have a project idea that i want to do in go, with LLMs writing most/all of the code

i expect it to be the most dull code imaginable, with that combo

FreeFull | 5 hours ago

Was the feature "for else"?
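(For readers who haven't met the feature FreeFull is guessing at: Python's for...else runs its else block only when the loop completes without hitting break, which occasionally shows up in hand-rolled search code. A toy sketch, purely illustrative and not WilhelmVonWeiner's actual implementation:)

```python
# Python's for...else: the else block runs only when the loop finishes
# without hitting `break`, i.e. when the iterator was exhausted. Handy
# in search code for the "no candidate worked" case.
def first_divisor(n, candidates):
    """Return the first divisor of n found in `candidates`, else None."""
    for d in candidates:
        if n % d == 0:
            break          # found one; the else block is skipped
    else:
        return None        # iterator ran out without a break
    return d

print(first_divisor(15, iter(range(2, 10))))  # 3
print(first_divisor(7, iter(range(2, 7))))    # None
```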

txxnano | 3 hours ago

it was broken because of my use of Python iterators

I'm not really aware of your implementation, but I'm curious what other programmers will think of this logic: "a weird" use of a "marginally complex" feature is always a red flag to me in a code review. We write code to be understood by the computer, but mostly by our peers, so trying to outsmart everyone is not usually a wise move.

WilhelmVonWeiner | 3 hours ago

I write code to be understood by myself and other people, in a way that the computer can run efficiently.

If the feature is too "weird" or "complex" then why was it added in the first place? I think Rob Pike's quote: "[Googlers] are not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt" has seemingly been taken to heart by a lot of programmers.

sluongng | 12 hours ago

It's kinda chilling to read Marc writing about this. Especially knowing that he is an influential figure at AWS.

Personally I am still in awe at everything in the coding agent space. I still see Anthropic and OpenAI hiring, which I take as a good sign that human engineers have yet to be fully replaced. But the nature of the job certainly has been evolving rapidly.

I think the most interesting part of this second act is where in the stack the problems are shifting. We are kinda short-circuiting all the problems on the software side, yet there are still many problems, if not a lot more, at the two ends: hardware and energy on one side, human and socio-economic on the other. And we need more problem solvers to tackle them.

michiel | 2 hours ago

A lot of influential figures are influential because they shifted their attention to a problem domain that was commercially interesting at the right moment. Additionally, many of them are in positions where they can be reasonably expected to parrot the position of their employer.

So in a lot of cases, I'm not sure if these people are declaring the arrival of the LLM coding future because they're trying to stay relevant (or sell books / themselves), or if they honestly believe it has arrived.

And if they believe the latter, what are they basing their opinion on? A lot of people claim spectacular results, but their evaluation methods are less than rigorous. There are studies with rigorous evaluation, but their results are less than spectacular.

sluongng | an hour ago

What would it take to convince you in this case?

I say this without meaning to sound like the whining skeptic, because I do use these tools (not that much)... I feel like these tools really don't handle growth of an existing system very well. So if you have a running system and then want to change it (and not merely append to it), it gets tougher to navigate.

More precisely, I've struggled to get coding agents to work cleanly beyond their first iteration. They can get most of the way through a tricky change, but if you want to shift the strategy a bit, in my experience they struggle to rework their changes.

It's a bit tricky for me to say "near-zero" when almost all of my day is working on existing systems that have to stay running.

Of course this could entirely be a skill issue thing on my part.

The author also just acknowledges the "real" thing, which is that there are still looooooads of non-zero-cost parts to the whole endeavor.

In other words, my best prediction is that the next two decades look like the last four. Hardly a prediction worthy of an oracle.

Two decades is a long time. The arc of the BlackBerry was only 10 years, and that felt valuable enough!

I won't be losing too much sleep just yet.

adamshaylor | 10 hours ago

Note the description for the ai tag: “Tag AI usage only with vibecoding.”

gerikson | 10 hours ago

This is my fault, I suggested a change to the title and forgot to fix the tags. After a successful user edit, the submission is locked for further suggestions...

[OP] tsg | 10 hours ago

ah, thanks, I was looking for the edit button.

[OP] tsg | 10 hours ago

Looking at the state of coding agents right now, it feels to me like a human in the loop is pretty much still needed for any significant project, and that human needs to be a software engineer. The combined agent+human is faster in most (or almost all) cases, but it's still bottlenecked by the human's capacity to reason about a complex project.

I know Anthropic and Cursor tried to have complex projects implemented only by agents, and the results are impressive but still far from being comparable to what humans produced in the past.

The question is whether AI advancements will soon get to the point where the agent works better by itself, without human help, even on large projects. Until that happens, we still need knowledgeable software engineers; they just don't write the code directly.

sluongng | 10 hours ago

it feels to me like a human in the loop is pretty much still needed for any significant project, and that human needs to be a software engineer

Absolutely! People like to post hype for a post-human coding future and build toward that future, but that is not the reality today.

Today, you can see the OpenAI Codex team is hiring aggressively. Anthropic is also hiring, Cursor is too, and so is Cognition. Pretty much all the agent labs are hiring because their agents cannot operate without a human in the loop, at least not yet.

I think if you start seeing these labs pause on hiring and start downsizing their teams, that would be a big tell.

That is telling to me: if LLMs are so good at increasing performance, then why are the companies that make LLMs hiring so many people? Shouldn't they be using their own LLMs to help improve their LLMs? That tells me that LLMs aren't that great at their purported advantage.

Corbin | 4 hours ago

Let's grant that business logic, glue code, and (local) systems logic have all gone to zero cost. What's left in computing? Only (roughly chronologically) distributed systems, cryptography, numerical analysis, algorithms, information theory, queueing theory, computability theory, data structures, cryptanalysis, virtualization, parser theory, recursion schemes, complexity theory, capability theory, and big computers. We have a long way left to go and we have only barely made progress on the fundamentals. Business tasks like summation of a list of numbers or tallying of present workers, both standard business needs dating to the dawn of history, are still open problems which can only be efficiently solved in specific limited circumstances.
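(To make the summation quip concrete: naive left-to-right floating-point summation silently drops low-order bits, and getting a correct answer in general takes compensated summation, or Python's exactly-rounded math.fsum. A minimal sketch of Neumaier's variant of Kahan summation; a standard illustration, not anything from the thread itself:)

```python
import math

def neumaier_sum(xs):
    """Compensated summation: track the rounding error of each addition
    and fold it back in at the end (Neumaier's variant of Kahan)."""
    total = 0.0
    comp = 0.0  # accumulated low-order bits lost to rounding
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x   # low-order bits of x were lost
        else:
            comp += (x - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

xs = [1e16, 1.0, -1e16]
print(sum(xs))           # 0.0  (the 1.0 vanishes in the naive sum)
print(neumaier_sum(xs))  # 1.0
print(math.fsum(xs))     # 1.0  (stdlib exactly-rounded sum)
```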

In hardware, where we have had the longest to apply our techniques, EDA dominates: some of the glue and systems logic for laying out a chip is automated and not worth doing by hand. The cost of business logic has not gone to zero despite some seventy years of EDA, though. This suggests that we shouldn't grant the author's claims without further evidence.

agent281 | 5 hours ago

I'm still waiting to see how the economics shake out and where we are at the end of the hype cycle. That said, if this LLM coding thing pans out I think the economic advantage will shift back to companies that make things and away from companies that organize things. I imagine that means that there will be fewer pure software companies.