On Making

34 points by mysticmode a day ago on lobsters | 6 comments

zetashift | 11 hours ago

It's cool to read these kinds of perspectives. This one resonates with me personally as well. I have very little (emotional) connection to LLM-written code, nor do I really care for software that overuses it, and as a consequence the LLM-written blogposts that advertise it.

Navigating the impact of these types of changes has been hard, and I feel a certain disillusionment about the software dev industry in general, but perspectives like these help me feel less alone.

ocramz | 9 hours ago

Is it because AIs seem human?

The longer you ponder this stuff, the closer to metaphysics it gets.

You, we all, like making stuff. Developing the craft etc. What these "AI-skeptical" pieces seem to miss is that you can actively engage with generated output too. Study it, modify it.

nickmonad | 4 hours ago

I don't think it's a lack of understanding that you can study or modify LLM generated code. I think the issue is it creates a situation that reverses the "direction" of thought.

If you start a project and build it up piece by piece, you get to a sweet spot where your technical and conceptual understanding of the code is so dialed in, that making changes feels easy, natural, and fun. Obviously this isn't always the case, perhaps with something really pushing the boundaries of your knowledge, or something you wrote 6 months ago now breaking loudly in production, and you're under pressure to fix it. But with LLM generated code, especially large amounts of it, you didn't really get to that sweet spot, and now it becomes more of a distrusting investigation than a relationship. You have to ask way more fundamental questions that in practice would have been determined long before you got to the amount of code you're currently looking at.

I don't know, maybe I'm overthinking it, but that's the feeling I have when staring down a mountain of LLM code.

sunflowerseastar | 6 hours ago

What if I had an AI-powered hammer and asked it to hit a nail for me? What's the difference?

Three people are using AI-powered hammers. They are making rocking chairs. Halfway through the work-to-be-done, we take away the AI-powered hammers and give them regular hammers. Person A knows how to hammer, so they finish the work just fine. Person B doesn't know how to hammer and they are unmotivated, so they give up, leaving the work unfinished. Person C has never regular-hammered before, but they were watching the AI-powered hammer carefully, and trying their best to understand it. Now that it's gone, Person C takes the regular hammer and struggles through the rest of the job. But they do manage to get it done, although a bit badly (and after hitting their own fingers a couple times, which is quite painful).

We've spotted a difference. The difference of the assistance:

  • Assistance A: unneeded from a completion and quality/standards standpoint. It's a nice-to-have, a boon, an automated speed-up of sorts. In fact, it might help someone produce even more high-quality work than they were before.
  • Assistance B: completely required, and agency-sapping in its own way. It does successfully transform some human time into a product/deliverable. But it also prevents some people (like our Person B here) from developing skill. "Teach a person to fish," etc -- it's the opposite of that. It can simply give Person B a fish as long as it's there and operational. Is it net positive (made some chairs) or net negative (could've spent time elsewhere and actually learned something) for Person B in the long run?
  • Assistance C: now here's an interesting one, right? This person was always scared to hammer. If there were no AI-powered hammer, perhaps they would've never embarked on their maker's journey in the first place. But due to cosmic bitflips, they started with an AI hammer, then it was no longer available, and then they rose to the occasion and learned how to hammer on their own. This person goes on to make lovely rocking chairs (using a regular hammer) for their local nursing home's porch. If it weren't for the AI hammer, would that never have happened, and would our old folks still be sitting inside during spring migration?

Now let's loop back to Person A. Person A actually made a rocking chair prior to our A/B/C experiment using nothing but a regular hammer. Let's call it Chair 1. They made another chair using nothing but the AI-powered hammer -- Chair 2. They made Chair 3 in our experiment (first half with AI, second half without). They then made a couple more chairs in various combinations of AI and not-AI.

You carefully inspect Chairs 1, 2, 3, 4, and 5. After you and your friends have moved them around and sat in them and enjoyed them for a long time, you have a discussion about which one's which, and you all realize that none of you can tell which one's which at all. None of you even have any guesses about which one might be Chair 1 or Chair 2, for any reason whatsoever. They all seem completely identical in every way. A newcomer arrives and asks which one was made without AI, and you all truly agree that you really don't know which one that is.

What's the difference between these chairs? Is Chair 2 kinda like blood diamonds in a way? Is Chair 1 morally superior? Is it ethically unfair to assert that the final product (which is indistinguishable from an end-user perspective) be judged on its own as-a-chair merits (as opposed to history-of-its-production merits)? What sort of ethics would demand that we must contemplate the history-of-its-production for all things we consume and use, and what does its practical calculus for day-to-day living look like?

> None of you even have any guesses about which one might be Chair 1 or Chair 2, for any reason whatsoever.

I mean, this is kind of the crux of the matter, isn't it? As a general rule, physical objects built with automation tend not to look or behave quite like physical objects built by hand. When we automate something, we typically need to restrict the inputs down to make them easier to process - only certain types of wood can be used, and only in straight planks, and only if the knots aren't too large, etc. Likewise, we typically also need to restrict the outputs - we can automate straight lines, but curves aren't viable, so all our chairs will need to be precisely rectangular.

So the question becomes: are LLMs something more like a power tool, where they make things quicker, but the wielder is primarily the one doing the work? Or are they something more like automation, where new software can be churned out at an unimaginable rate, but only if what you want fits into a predetermined form?

I think they can do both, but I fear a lot of people will only use them for the latter.

reivilibre | an hour ago

Thank you for this food for thought.

If you want to get more on the 'power tool' side of LLM programming, there are of course the 'next edit prediction' models (e.g. https://blog.sweep.dev/posts/oss-next-edit, a 1.5B model you can apparently run locally; I've yet to try it myself). I think I would feel pretty happy using these on a daily basis; it seems to nullify all(?) my reservations about LLMs: it runs locally (not depending on a cloud or suffering the privacy loss of sending my code to one), I feel like I stay in control, I feel like I understand what I've done, I feel like I 'own' what I've done(!).

I also have become fairly positive-minded about LLM code (pre)-review. (I did vibecode a local tool to do this; one I can't say I understand that well, and therefore one I don't feel I 'own' or that 'I' made, as in the article.) I think it's fair to still believe that you made something, even if you asked 'someone'/something to review it for you.