AI made coding more enjoyable

83 points by domysee 5 hours ago on hackernews | 82 comments

chrisjj | 5 hours ago

> AI made coding more enjoyable

...just not for users.

sergiomattei | 5 hours ago

I feel the same way.

> That includes code outside of the happy path, like error handling and input validation. But also other typing exercises like processing an entity with 10 different types, where each type must be handled separately. Or propagating one property through the system on 5 different types in multiple layers.

With AI, I feel I'm less caught up in the minutia of programming and have more cognitive space for the fun parts: engineering systems, designing interfaces and improving parts of a codebase.

I don't mind this new world. I was never too attached to my ability to pump out boilerplate at a rapid pace. What I like is engineering and this new AI world allows me to explore new approaches and connect ideas faster than I've ever been able to before.

awepofiwaop | 5 hours ago

Are you not concerned that this world is deeply tied to you having an internet connection to one of a couple companies' servers? They can jack up the price, cut you off, etc.

sergiomattei | 4 hours ago

Seeing how things are moving, I'm expecting compute requirements to go down over a longer time horizon, as they do for most technologies.

I'd rather spend my time preparing for this new world now.

fogzen | 5 hours ago

Not going to last long though, at least not professionally. AI will do the spec and architecture too. The LLM will do the entire pipeline from customer or market research to deployment. This is already possible with bug fixes, pretty much. And many features too, depending on the business.

sarchertech | 4 hours ago

If AI gets to that level generally, there won’t be a customer, a market research department, or a software company at all.

But if AI is capable of that it’s not a big step to being capable of doing any white collar job, and we’ll either reorganize our economy completely or collapse.

sergiomattei | 4 hours ago

I don't know. LLMs are great at writing code; but you have to have the right ideas to get decent output.

I spend tons of time handholding LLMs--they're not a replacement for thinking. If you give them a closed-loop problem where it's easy to experiment and check for correctness, then sure. But many problems are open-loop where there's no clear benchmark.

LLMs are powerful if you have the right ideas. Input = output. Otherwise you get slop that breaks often and barely gets the job done, full of hallucinations and incorrect reasoning. Because they can't think for you.

perrygeo | 5 hours ago

> explore new approaches and connect ideas faster

This is the hidden superpower of LLMs - prototyping without attachment to the outcome.

Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then a few more weeks making it happen. Then if it didn't work out, it felt like failure and everyone got frustrated.

Now it's assumed you can make it work fast - so do it four different ways and test it empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and high emotional attachment that used to dominate the decision process.

sodapopcan | 4 hours ago

That's only because no one understood agile or XP and they've become a "no one actually does that stuff" joke to many. I have first-hand experience with prototyping full features in a day or two and throwing the result away. It comes with the added benefit of getting your hands dirty and being able to make more informed decisions when doing the actual implementation. It has always been possible; it's just that most people didn't want to do it.

empath75 | 4 hours ago

I basically just _accidentally_ added a major new feature to one of my projects this week.

In the sense that, I was trying to explain what I wanted to do to a coworker and my manager, and we kept going back and forth trying to understand the shape of it and what value it would add and how much time it would be worth spending and what priority we should put on it.

And I was like -- let me just spend like an hour putting together a partially working prototype for you, and Claude got _so close_ to just completely one-shotting the entire feature in my first prompt that I ended up spending 3 hours just putting the finishing touches on it and we shipped it before we even wrote a user story. We did all that work after it was already done. Claude even mocked up a fully interactive UI for our UI designer to work from.

It's literally easier and faster to just tell Claude to do something than to explain why you want to do it to a coworker.

danielvaughn | 5 hours ago

The way I like to think about it is to split work into two broad categories - creative work and toil. Creative work is the type of work we want to continue doing. Toil is the work we want to reduce.

edit - an interesting facet of AI progress is that the split between these two types of work gets more and more granular. It has led me to actively be aware of what I'm doing as I work, and to critically examine whether certain mechanics are inherently toilistic or creative. I realized that a LOT of what I do feels creative but isn't - the manner in which I type, the way I shape and format code. It's more in the manner of catharsis than creation.

matthewkayin | 4 hours ago

You cannot remove the toil without removing the creative work.

Just like how, in writing a story, a writer must also toil over each sentence: should this be an em-dash or a comma? Should I break the paragraph here or there? All this minutia is just as important to the final product as grand ideas and architecture are.

If you don't care about those little details, then fine. But you sacrifice some authorship of the program when you outsource those things to an agent. (And I would say, you sacrifice some quality as well).

danielvaughn | 4 hours ago

It all depends on how you split the difference. I wouldn't call the emdash vs comma problem toil. It's fine-grained and there are technical aspects to the decision, but it's also fundamentally part of the output.

CuriouslyC | 4 hours ago

You can remove a lot of toil from the writing process without taking away a writer's ability to do line edits. There's a lot of outlining, organization, bookkeeping and continuity work AI automates in the early draft/structural editing process.

Most writers can't even get a first draft of anything done, and labor under the mistaken assumption that a first draft is just a few minor edits away from being the final book. The reality is that a first draft might be 10% of the total time of the book, and you will do many rounds of rereading and major structural revision, then several rounds of line editing. AI is bad at line editing (though it's ok at finding things to nitpick), so even if your first draft and rough structural changes are 100% AI, you have basically a 0% chance of getting published unless you completely re-write it as part of the editing process.

antonvs | 4 hours ago

I suspect there are different definitions of "toil" being used here.

Google defined "toil" as, very roughly, all the non-coding work that goes into building, deploying, managing a system: https://sre.google/workbook/eliminating-toil/ , https://sre.google/sre-book/eliminating-toil/ .

Quote: "Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."

Variations of this definition are widely used.

If we map that onto your writing example, "toil" would be related to tasks like getting the work published, not the writing process itself.

With this definition of toil, you can certainly remove the toil without removing the creative work.

data-ottawa | 5 hours ago

I quite agree with this.

I’m working on library code in Zig, and it’s very nice to have AI write the FFI interface with Python. That’s not technically difficult or high risk, but it is tedious and boring.
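
For anyone curious, that glue is mechanical but fiddly - roughly this shape on the Python side (illustrative only; library and function names are made up), assuming the Zig function is exported with the C ABI and built as a shared library with something like `zig build-lib mylib.zig -dynamic`:

    # Illustrative ctypes wrapper for a hypothetical Zig export:
    #   export fn add(a: i32, b: i32) i32 { return a + b; }
    import ctypes

    lib = ctypes.CDLL("./libmylib.so")

    # Declare the C-ABI signature so ctypes converts arguments correctly.
    lib.add.argtypes = [ctypes.c_int32, ctypes.c_int32]
    lib.add.restype = ctypes.c_int32

    print(lib.add(2, 3))  # -> 5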

Realistically having a helper to get me over slumps like that has been amazing for my personal productivity.

furyofantares | 5 hours ago

I love everything about coding. I love architecting a system, and I love tending all the little details. I love to look at the system as a whole or a block of code in isolation and find nothing I want to change, and take pride in all of it. I also love making products.

LLM-agents have made making products, especially small ones, a lot easier, but sacrifice much of the crafting of details and, if the project is small enough, the architecture. I've certainly enjoyed using them a lot over the last year and a half, but I've come to really miss fully wrapping my head around a problem, having intimate knowledge of the details of the system, and taking pride in every little detail.

xaviervn | 4 hours ago

Me too, and I'm glad to see that this point keeps being brought up. I noticed that what shapes my satisfaction (or dissatisfaction) about working with AI depends on whether I have an understanding of what's being built or not.

For a prototype, it's pretty amazing to generate a working app with one or two prompts. But when I get serious about it, it becomes such a chore. The little papercuts start adding up, I lose speed as I deal with them, and the inner workings of the app become a foreign entity to me.

It's counterintuitive, but what's helping me enjoy coding is actually going slower with AI. I found out that my productivity gains are not in building faster, but in learning faster and in a very targeted way.

jeandejean | 5 hours ago

That's it? That post shouldn't be on HackerNews tbh...

I used to share this sentiment but the more I used AI for programming, the less I enjoyed it. Even writing "boring" code (like tests or summaries) by hand increased my understanding of what I wrote and how it integrates into the rest of the codebase, which I think is fun.

Letting a robot write code for me, however tedious it would be to write manually, made me feel like I was working in someone else's codebase. It reminds me of launching a videogame and letting someone else play through the boring parts. I might as well not be playing. Why bother at all?

I understand this behaviour if you're working for a company on some miserable product, but not for personal projects.

some-guy | 5 hours ago

I generally agree with you. As a recent father with a toddler, in a household where both parents have full-time jobs, I’ve found that the only way I can make time for those personal side projects is to use AI to do most of the bootstrapping, and then do the final tweaks on my own. Most of this is around home automation, managing my Linux ISO server, among other things. But it certainly would be more fun and rewarding if I did it all myself.

nkassis | 4 hours ago

This feels like the same moment for me as when I realized I couldn't keep using Gentoo and needed to move on to a Linux distribution that was ready to go without lots of manual effort. I have a family and kids; I need those hours. I had the same feeling as OP of losing a fun learning activity: no longer progressing in Linux knowledge, just maintaining. Granted, my knowledge was at a good enough level to move on, but it's still a loss.

I do the same as you with AI now; it's allowing me to build simple things quickly and revise later. Sometimes I never have to. I feel similarly that I'm no longer progressing as a dev, just maintaining what I know. That might change; I might adapt how I approach work and find the balance, but for now it's a new activity entirely.

I've talked to many people over the years who saw coding as a get-shit-done activity. Stop when it's good enough. They never really approached it as a hobby and a learning experience. It wasn't about self-progression to them. Mentioning that I read computer books resulted in a disgusted face: "You can just google what you need when you need it".

Always felt odd to me; software development was my hobby, something I loved, not just a job. Now I think they will thrive in this world. It's pure results. No need to know a breadth of things or what's out there to start on the right foot. AI has it all somewhere in its matrix. Hopefully they develop enough taste to figure out what's good from bad when it's something that matters.

munk-a | 4 hours ago

I gain comprehension through the authoring process. I've always been weaker on the side of reviewing, and I only really gain an understanding of new tooling added by coworkers when I get to dig in and try to use it. This is absolutely a learning-style thing; I have ADHD and have known since high school that I am more engaged in the practical and have trouble with dry, lecture-style teaching. I have even excelled in pretty abstract and theoretical fields, but it takes trying to work through problem solving, even if those problems are abstract and hard to mechanically represent.

So I am in the same boat. AI can write some good skeleton code for different purposes so I can get running faster, but with anything complex and established it serves very little benefit. I'll end up spending more time trying to understand why and how it is doing something than I'd spend just doing it myself. When AI is a magical fix button that's awesome, but even in those circumstances I'm just buying LLM-debt - if I never need to touch that code again it's fine, but if I need to revise the code then I'll need to invest more time into understanding it and cleaning it up than I initially saved.

I'm not certain how much other folks are feeling this or if it's just me and the way my brain works, but I struggle to see the great savings outside of dead simple tasks.

empath75 | 4 hours ago

I think you just have to give up ownership of the _code_ and focus on ownership of the _product_.

contagiousflow | 4 hours ago

Do you think the product is not the code?

icedchai | 4 hours ago

From a user perspective, yes. The product is what it does, not how it does it.

jimbokun | 4 hours ago

But for the person building the product it very much is how it does it.

icedchai | 2 hours ago

I think we (developers) need to get over that. Code was always the means to an end, which is providing a product to solve a problem, not the end itself.

jimbokun | an hour ago

The code is still the means to the end. AIs still write code, that is compiled and deployed and operated in some manner.

munk-a | 4 hours ago

It isn't; no one is buying code on its own - but it's a component of the product. I dislike the phrasing above since it assumes the two are distinct things.

jayd16 | 4 hours ago

Honestly, it never was. Personal projects notwithstanding, if there is a product, that should always be the focus. Code is only a means to that.

I don't care about the product as much as I care about the code.

bitwize | 4 hours ago

Well, that's the thing. You have to shut your programmer brain off and turn on your business brain. The code never really was that important. Delivering value to end users is the important thing, at least to the people that count.

Tim Bryce, one of the foremost experts on software methodology, hated programmers and considered them deeply sad individuals who had to compensate for their mediocre intelligence and narrow thinking by gatekeeping technology and holding the rest of the company hostage to them. And, he said upper management in corporate America agreed with him.

If you place a lot of value in being a good programmer, then to the real movers and shakers in society you are at best a tool they can use to get even richer. A tool that will soon be replaced by a machine. The time has come for programmers to level up their soft skills and social standing, and focus their intelligence on the business rather than the code. It sucks but that's the reality of the AI era.

LtWorf | an hour ago

you're a tool to them even if you are miserable instead of enjoying what you do.

jimbokun | 4 hours ago

If the product is software the code is the product.

kypro | 4 hours ago

My favourite code to write used to be small clean methods perhaps containing ~20 lines of logic. And I agree, it's fun coming up with ways to test that logic and seeing it pass those tests.

I'm not sure I'll ever write this kind of code again now. For months now all I've done is think about the higher level architectural decisions and prompt agents to write the actual code, which I find enjoyable, but architectural decisions are less clean and therefore for me less enjoyable. There's often a very clear good and bad way to write a method, but how you organise things at a higher level is much less binary. I rarely ever get that "yeah, I've done a really good job there" feeling when making higher level decisions, but more of "eh, I think this is probably a good solution/compromise, given the requirements".

co_king_5 | 4 hours ago

> Even writing "boring" code (like tests or summaries) by hand increased my understanding of what I wrote and how it integrates into the rest of the codebase, which I think is fun.

Yes! Learning is fun!

onion2k | 4 hours ago

> Why bother at all?

AI stops coding being about the journey, and makes it about the destination. That is the polar opposite of most people's coding experience as a professional. Most developers are not about the destination, and often don't really care about the 'product', preferring to care about the code itself. They derive satisfaction from how they got to the end product instead of the end product itself.

For those developers who just want to build a thing to drive business value, or because they want a tool that they need, or because they think the end result will be fun to have, AI coding is great. It enables them to skip over (parts of) the tedious coding bit and get straight to the result bit.

If you're coding because you love coding then obviously skipping the coding bit is going to be a bad time.

lelanthran | an hour ago

> For those developers who just want to build a thing to drive business value, or because they want a tool that they need, or because they think the end result will be fun to have, AI coding is great. It enables them to skip over (parts of) the tedious coding bit and get straight to the result bit.

Then they aren't programmers anymore, are they? We don't call people using no-code platforms "programmers" and we wouldn't trust them one bit to review actual code.

AI is simply the new no-code platform, except that the scope of what it can do is much larger while the reliability of what it produces is much lower.

Not to be that curmudgeon (who am I kidding), but it's made reviewing code very much less enjoyable, and I review more changes than I write. Engineers merrily sending fixes they barely understand (or, worse, don't think they need to understand) for the rest of us to handle, and somehow lines-of-code has become a positive metric again. How convenient!

munk-a | 4 hours ago

It has always been my opinion (and borne out by our statistics internally, when counting self-review in the form of manual testing and automated test writing) that reviewing code (to the level of catching defects) often takes more time than actually building the solution. So I have a pretty big concern that the majority of AI code generation ends up adding more time to tasks than it saves, because it's optimizing the cheap tasks at the expense of the costly tasks.

pjm331 | 4 hours ago

as much as you or I may be against it, AI coding will inevitably move away from human review and toward more automated means of measuring program correctness

this was already happening even before AI - human review is limited, linting is limited, type checking is limited, automated testing is limited

if all of these things were perfect at catching errors then we would not need tracing and observability of production systems - but they are imperfect and you need that entire spectrum of things from testing to observability to really maintain a system

so if you said - hey I'm going to remove this biased, error prone, imperfect quality control step and just replace it with better monitoring... not that unreasonable!

munk-a | 4 hours ago

I'm actually all for automated measures of program correctness, and I think that manual testing is the last resort of tight budgets outside of highly complex integration issues. Adding more automated test cases that are built into the CI pipeline, from the unit level to the highest levels (as long as they're not useless fluff), usually ensures a much lower level of defects. AI can help with that process, but only if we're diligent in checking that it isn't just building pages and pages of fluffy, ineffective tests - so we still end up needing to check the code and the tests that AI has written, and I am still concerned that that ends up being more expensive in the long run.

skeeter2020 | 3 hours ago

But who will police the police? Does the Coast Guard* review my QA setup and tests?

* https://www.youtube.com/watch?v=Tk4yyqXi8Xc

sowbug | 4 hours ago

As the old saying goes, it's easier to write code than to read it.

skeeter2020 | 3 hours ago

Absolutely! When you review code you need to understand the problem space, the thought process that created the code and the concrete implementation. The second step has always been hard and AI makes it a magnitude harder IMO. Writing code was never the hard part.

Izkata | 3 hours ago

It also screws up code smells, turning what used to be a "this looks weird, better investigate more in-depth" structure into something easily overlooked. So you have to be on guard all the time instead of being able to rely on your experience to know what parts to spend the extra effort on.

charcircuit | 4 hours ago

Have you considered using an AI agent to review the code?

tqlpasj | 5 hours ago

Yeah, yeah, lighthouseapp.io offers AI summaries and more AI is planned:

https://lighthouseapp.io/blog/introducing-lighthouse

It looks like a vibe coded website.

lbrito | 5 hours ago

>Outside the happy path

Uh, no. The happy path is the easy part with little to no thinking required. Edge cases and error handling are where we have to think hardest and learn the most.

righthand | 5 hours ago

Coding with AI is for those kids who were supposed to “stop playing games and clean their room right this minute”, but instead they shove all the crap in their closet and go back to playing games.

bitwize | 3 hours ago

Literally me as an adult, I feel seen

At what point do LLMs enable bad engineering practices, if instead of working to abstract or encapsulate toilsome programming tasks we point an expensive slot machine at them and generate a bunch of verbose code and carry on? I'm not sure where the tradeoff leads if there's no longer a pain signal for things that need to be re-thought or re-architected. And when anyone does create a new framework or abstraction, it doesn't have enough prior art for an LLM to adeptly generate, and fails to gain traction.

chrisweekly | 4 hours ago

Great Q, and your framing "there's no longer a pain signal for things that need to be re-thought or re-architected" perfectly encapsulates a concern I hadn't yet articulated so cleanly. Thanks for that!

michaelrpeskin | 4 hours ago

How much of "good engineering practices" exist because we're trying to make it easy for humans to work with the code?

Pick your favorite GoF design pattern. Is that the best way to do it for the computer, or the best way to do it for the developer?

I'm just making this up now, maybe it's not the greatest example; but, let's consider the "visitor" pattern.

There's some framework that does a big loop and calls the visit() function on an object. If you want to add a new type, you inherit from that interface, implement visit() on your class, and all is well. From a "good" engineering practice standpoint, this makes sense to a developer: you don't have to touch much code and your stuff lives in its own little area. That all feels right to us as developers because we don't have a big context window.

But what if your code was all generated code, and if you want to add a new type to do something that would have been done in visit(). You tell the LLM "add this new functionality to the loop for this type of object". Maybe it does a case statement and puts the stuff right in the loop. That "feels" bad if there's a human in the loop, but does it matter to the computer?

Yes, we're early; LLMs aren't deterministic, and verification may be hard now. But that may change.

In the context of a higher-level language, y=x/3 and y=x/4 look the same, but I bet the generated assembly does a shift on the latter and a multiply-by-a-constant on the former. While the "developer interface", the source code, looks similar (like writing to a visitor pattern), the generated assembly will look different. Do we care?
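
To make that concrete, here's a rough sketch of the two shapes I'm comparing (Python, names made up): the visitor-style organization where each type owns its logic and the loop never changes, versus the flat branch-per-type a code generator could just as happily regenerate:

    import math
    from dataclasses import dataclass

    # Visitor-ish: each type owns its logic; adding a type means adding a
    # class, and the framework loop below never changes.
    @dataclass
    class Circle:
        radius: float
        def visit(self) -> float:
            return math.pi * self.radius ** 2

    @dataclass
    class Square:
        side: float
        def visit(self) -> float:
            return self.side ** 2

    def total_area(shapes) -> float:
        return sum(s.visit() for s in shapes)

    # Flat alternative: all the logic sits in one branch inside the loop.
    # "Feels" worse to a human maintainer, but does the computer care?
    def total_area_flat(shapes) -> float:
        total = 0.0
        for s in shapes:
            if isinstance(s, Circle):
                total += math.pi * s.radius ** 2
            elif isinstance(s, Square):
                total += s.side ** 2
        return total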

LLMs have limited working memory, like humans, and most of the practices that increase human programming effectiveness increase LLM effectiveness too. In fact more so, because LLMs are goldfish that retain no mental model between runs, so the docs had better be good, abstractions tight, and coding practices consistent such that code makes sense locally and globally.

safetytrick | 3 hours ago

It's actually been useful for me to explain certain best practices now that I can show that the LLM cares.

Why is this name bad? Because an LLM will get confused by it and do the wrong thing half the time.

skydhash | 4 hours ago

Code is a design tool, just like lines on an engineering drawing. Most times you do not care if it was with a pen or a pencil, or if it was printed out. But you do care about the cross section of the thing depicted. The only time you care about whether it’s pen or pencil is for preservation.

So I don’t care about assembly because it does not matter usually in any metric. I design using code because that’s how I communicate intent.

If you learn how to draw, very quickly you find that no one talks about lines (which is mostly all you do); you hear about shapes, texture, edges, values, balance… It’s in these higher abstractions that intent resides.

Same with coding. No one thinks in keywords, brackets, or lines of code. Instead, you quickly build higher abstractions, and that’s where you live. The upside is that those concepts have no ambiguity.

imzadi | 4 hours ago

When I tried using a coding agent it felt like the AI was stealing my dopamine hits.

butterisgood | 4 hours ago

It made coding way different for me. I'm able to get a proof-of-concept for an idea up pretty quick, and then I have to go back and decide if I like the style it produced.

I feel more like a software producer or director than an engineer though.

xyzsparetimexyz | 4 hours ago

Oink oink, time for your daily slop babe

cranberryturkey | 4 hours ago

The creative vs toil split resonates, but I think there's a third category everyone misses: the connective tissue. The glue code, the error handling, the edge cases that aren't creative but teach you how things actually break.

I run 17 products as an indie maker. AI absolutely helps me ship faster — I can prototype in hours what used to take days. But the understanding gap is real. I've caught myself debugging AI-generated code where I didn't fully grok the failure mode because I didn't write the happy path.

My compromise: I let AI handle the first pass on boilerplate, but I manually write anything that touches money, auth, or data integrity. Those are the places where understanding isn't optional.

slibhb | 4 hours ago

AI led to me writing code outside of work for the first time in years. I completed a small project that would've taken me months in a couple weeks.

I'm excited to work on more things that I've been curious about for a long time but didn't have the time/energy to focus on.

munk-a | 4 hours ago

AI also led me to experiment a bit more. In my case it helped remove the barrier to getting that initial bare-bones skeleton of code in a new environment, by helping set up libraries and a compile chain I was unfamiliar with and then giving me a baseline to build off of. Did you find that AI helped you evenly all the way through the experience or was it more helpful earlier or later on?

slibhb | 4 hours ago

It was probably most helpful early on when there was lots of code to write and stuff to configure. Context is an issue as time passes. But it's still quite helpful for tweaking things/adding features later on, as long as I provide it with the necessary context/point it to the right files to read.

MarkusQ | 4 hours ago

I suppose, in exactly the same way instant / frozen food makes cooking more enjoyable. If it was just a chore that you had to do, and now it's faster, sure, grab that cup-o-noodles. Knock yourself out.

Just don't expect to run a successful restaurant based on it.

Izkata | 3 hours ago

A decade or two ago I remember an experiment where canned food was presented in a restaurant setting and people couldn't tell it apart from hand-cooked. The presentation was what mattered; as long as it didn't look like it was canned/frozen, they thought it tasted like restaurant quality.

MarkusQ | 2 hours ago

In many cases canned food is a lot closer to fresh than frozen or instant would be.

In any case, those are ingredients (analogous to...libraries I guess?) and not to the whole application. If you served someone a canned sandwich or canned sushi or some such, they'd notice.

IncreasePosts | 4 hours ago

AI made coding really enjoyable for me, for a subset of projects: projects that I want but don't really care about the design/implementation of, or projects that have a lot of fiddly one-off configurations where it doesn't make sense to dig in and learn all about the system if it is mostly set-it-and-forget-it. A lot of my home automation/home systems are now fully implemented by AI, because I don't really care how performant it is or how it integrates all the various components, and it is very straightforward to tell if it works or if it doesn't work.

daveguy | 4 hours ago

> The only thing where I don’t trust it yet is when code must be copy pasted. I can’t trace if it actually cuts and pastes code, or if the LLM brain is in between. In the latter case there may be tiny errors that I’d never find, so I’m not doing that. But maybe I’m paranoid.

imo, this isn't paranoid at all, and it very likely filters through the LLM, unless you provide a tool/skill and explicit instructions. Even then you're rolling the dice, and the diff will have to be checked.

wy1981 | 4 hours ago

This level of detail isn't really helpful. I am working with AI and genuinely interested in learning more, but this offers very little.

More concrete examples to illustrate the core points would have been helpful. As-is the article doesn't offer much - sorry.

For one, I am not sure what kind of code he writes. How does he write tests? Are these unit tests, property-based tests? How does he quantify success? It leaves a lot to be desired.

lpeancovschi | 4 hours ago

Yes, more enjoyable and more human-replaceable!

HeavyStorm | 4 hours ago

Says someone who must not have enjoyed coding.

ertucetin | 4 hours ago

It is enjoyable if you are working on boring/enterprise software; it is not enjoyable when you are working on your own things.

inglor_cz | 4 hours ago

I enjoy one specific fact about programming with Claude.

My work often entails tweaking, fixing, extending of some fairly complex products and libraries, and AI will explain various internal mechanisms and logic of those products to me while producing the necessary artifacts.

Sure my resulting understanding is shallow, but shallow precedes deep, and without an AI "tutor", the exploration would be a lot more frustrating and hit-and-miss.

the_harpia_io | 4 hours ago

the copy paste concern is the most interesting bit honestly - even when it's not literally copy-pasting, AI error handling often looks correct but silently eats exceptions or returns wrong defaults. it gets the structure but misses what the code actually needs to do.
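
a made-up example of the shape i mean (illustrative only, not from the article) - the structure looks like diligent error handling, but the broad except hides the real failure and hands back a default the caller can't tell apart from real data:

    import json
    import logging

    logger = logging.getLogger(__name__)

    # Hypothetical: a missing file, bad JSON, or permission error all
    # collapse into the same silent empty default.
    def load_config(path: str) -> dict:
        try:
            with open(path) as f:
                return json.load(f)
        except Exception:
            logger.debug("config load failed, using defaults")  # effectively invisible
            return {}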

the boilerplate stuff is spot on though. the 10-type dispatch pattern is exactly where I gave up doing it manually

bitwize | 4 hours ago

I'm glad you enjoy it. I fucking hate it. Working directly with code is part of how I approach solving software problems.

What's worse, the more I rely on the bot, the less my internal model of the code base is reinforced. Every problem the bot solves, no matter how small, doesn't feel like a problem I solved and understanding I'd gained, it feels like I used a cheat code to skip the level. And passively reviewing the bot's output is no substitute for actively engaging with the code yourself. I can feel the brainrot set in bit by bit. It's like I'm Bastian making wishes on AURYN and losing a memory with every wish. I might get a raw-numbers productivity boost now, but at what cost later?

I get the feeling that the people who go on about how much fun AI coding is either don't actually enjoy programming or are engaging in pick-me behavior for companies with AI-use KPIs.

atonse | 4 hours ago

Not just coding. Everything else I do.

I hate writing proposals. It's the most mind numbing and repetitive work which also requires scrutinizing a lot of details.

But now I've built a full proposal pipeline, skills, etc. that goes from "I want to create a proposal" to a finished document: it collects all the info I need, creates a folder in Google Drive, I add all the supporting docs, and it generates a React page, uses code to calculate numbers in tables, and builds an absolutely beautiful react-to-pdf PDF file.

I have a comprehensive document outlining all the work our company's ever done, made from analyzing all past proposals and past work in Google Drive, and the model references that when weaving in our past performance/clients.

It is wonderful. I can now just say things like "remove this module from the total cost" and, without having to edit various parts of the document (like with hand-editing code), Claude (or anything else) will just update the "code" for the proposal (which is a JSON file) and the new proposal is ready, with perfect formatting, perfect numbers, perfect tables, everything.

So I can stay high level thinking about "analyze this module again, how much dev time would we need?" etc. and it just updates things.

If you'd like me to do something like this with your company, get in touch :) I'm starting to think (as of this week) others will benefit from this too and can be a good consulting engagement.

yomismoaqui | 3 hours ago

I agree with the author, but maybe it's bad to lose the pain you get on things like "propagating one property through the system on 5 different types in multiple layers".

These kinds of pain points usually indicate too much architecture, or the wrong architecture. Being able to feel these kinds of things when the clanker does the work is something we must think about.

skeeter2020 | 3 hours ago

You know what's worse than writing boilerplate, or logging code, or unit tests, or other generic, typically low value code? Reviewing it. All the supporting comments here suggest they let AI write it and you're done. This is significantly more dangerous than writing it yourself and not having a second review step; at least you had human eyes on it once.

enduser | an hour ago

I've worked as both an IC and EM over the course of 25 years. The best part of being an IC is crafting solutions to single-human-sized problems—without needing to deal much with people. The best part of being an EM is directing the creation of larger solutions, sometimes vastly larger solutions, at the cost of dealing with people stuff.

AI takes the craft out of being an IC. IMO less enjoyable.

AI takes the human management out of being an EM. IMO way more enjoyable.

Now I can direct large-scope endeavors and 100% of my time is spent on product vision and making executive decisions. No sob stories. No performance reviews. Just pure creative execution.

cadamsdotcom | 28 minutes ago

> write the first test so the AI knows how they should be written, and which cases should be tested. Then I tell the AI each test case and it writes them for me.

This is too low level. You’d be better off describing the things that need testing and asking for it to do red/green test-driven development (TDD). Then you’ll know all the tests are needed, and it’ll decide what tests to write without your intervention, and make them pass while you sip coffee :)

> I don’t trust it yet is when code must be copy pasted.

Ask it to perform the copy-paste using code - have it write and execute a quick script. You can review the script before it runs and that will make sure it can’t alter details on the way through.
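
A sketch of the kind of throwaway script I mean (illustrative only - the file paths and function name are made up), which copies the text byte-for-byte so nothing passes through the model on the way:

    # Hypothetical one-off: copy a top-level function verbatim between files
    # so the LLM never "retypes" it.
    import re
    from pathlib import Path

    SRC = Path("billing/old_invoices.py")   # made-up source file
    DST = Path("billing/new_invoices.py")   # made-up destination file

    src_text = SRC.read_text()
    # Capture the exact text of `def compute_tax(...)` up to the next top-level def.
    match = re.search(r"^def compute_tax\(.*?(?=^def |\Z)", src_text, re.S | re.M)
    if match is None:
        raise SystemExit("compute_tax not found; nothing copied")

    DST.write_text(DST.read_text() + "\n\n" + match.group(0))
    print(f"copied {len(match.group(0))} bytes verbatim from {SRC} to {DST}")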