I think a lot of people relate with this but kind of sit with this silently for reasons the author mentioned:
“Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a ‘difficult’ coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?”
I'm asking myself the same question for a different reason: nobody will even interview me. I've been out of work for a while. Savings are running out. I apparently don't even know how to look for a job anymore.
My response is probably controversial, but I genuinely think it’s generally helpful advice. Of course, I don’t have any information about this person beyond their comment.
I decline to believe you actually do not understand how "as opposed to what, do nothing?" is freaking obtuse. I credit you with more basic functional intelligence than that. That unfortunately removes an excuse. When an idiot says something idiotic, well, you don't really hold it against them.
I have no advice to offer, I only wish you good luck. I am still lucky enough to be employed, but when this whole parade ends, I have no idea what comes next - my only skill is programming and related knowledge work. I think the only path forward is to try to jump ship to another white or blue collar industry…
I thought along those lines as well. The only thing I could come up with that would be semi-viable was medical school, and I'm not sure I'd survive residency. I definitely would never be able to pay back the debt, if I had to take any.
The era of anyone interested in programming for fun being able to make upper 10% incomes is drawing to a close. You'll unfortunately have to join the rest of us who work for money and program for fun. I suggest engineering (the real kind, not software 'engineering')
Unfortunately, I have a visual-spatial processing disability. You don't want me near anything mechanical, and I can't do visualization-based tasks because I literally can't visualize. That eliminates most engineering jobs.
There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.
Electrical engineering doesn't need much in the way of mechanical aptitude, has a substantial overlap with what you already know depending on specialization, and might not have as much new schooling required as you would think.
Something like industrial controls engineering might be right up your alley.
Yeah. Got word I was being laid off in November. Officially because of restructuring, but after having had some conversations it's clear I've been replaced by a junior with a Claude subscription.
20 years coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software engineering again because it's my preference) -- and now I can't even get an interview with a human.
Due to unfortunate life events my savings are now all but gone and I don't even know how if I will be able to keep a roof over our heads. It's messed up.
If anyone is hiring, send me a message. I'm an EU citizen but have residency in and work out of Mexico.
I see this as a temporary phase driven by AI hype.
In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever.
Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.
AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.
So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.
The worst part so far has been some people having Claude write tickets and not checking what the very detailed piece-of-crap ticket says. Just tell me the few pieces of true knowledge you have, rather than a full page of AI slop with multiple errors in it that causes me to waste hours trying to figure out what’s true.
No comment on the ethics; however, I think when people's instincts to survive kick in, many of these larger goals get sidelined.
There's a growing belief that it's now or never as far as accumulating wealth, securing a house, etc. go because people think once AGI comes their chances of having the lives they want will diminish. The bay area has only gotten more expensive to live in, and that's where all of the AI folks are, so no surprise.
I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.
Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.
You can just... not live in California. Most other places are doing just fine and experiencing the usual moderate economic instability that happens every decade or two along with the rest of the world.
If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.
Their efforts are better rewarded moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers while others stay in it while decoupling their passions from their paycheck.
I actually considered that, myself. The thing is, California is where the jobs are for me. If I move out of California, I may never be able to come back. That could cost me a lot.
Who cares about California? If you don't have family there, just head to Europe as fast as you can, one-way ticket, and don't ever pay the IRS to come back.
I don't think the now or never thinking is healthy, but I certainly understand the motivation. I myself have never really fit into a career path climbing the corporate ladder, and entrepreneurship is a skill that takes time to develop. When you're oscillating between stability and bleeding money, it's natural to want to go all in on an opportunity when it presents itself.
I genuinely enjoy software development, but if I could provide for my family, I’d also enjoy selling croissants at a local bakery or filling up shelves at the supermarket.
Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.
I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.
Can definitely relate. It is no more complicated than this: I really enjoyed designing and writing code by hand, and I get very little joy out of agentic processes. I use the tools and see the velocity increase, but it has just become… bland work. I completely get others’ excitement around the tools and the newfound “super powers”, but it hasn’t much resonated with me.
That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.
The way AI is being used feels like it is proving that, in many orgs, what has always mattered has been the appearance of work, not results of work. Will we wake up in a few years and find out we’ve fired all the doers and are now overloaded with the fakers?
I find that to be a very defeatist take. It always mattered how much value you provide to the business. Writing pretty code or arguing about some implementation detail never really mattered. If you are good at coming up with solutions to problems AI is just one additional tool in your toolbox and personally it allows me to do much more than before.
There were fakers before, and there will be fakers after.
Are you willing to wake up at 3 AM when that "valuable" AI-written code pages on-call?
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
I think the last ten-plus years have taught us that massive security breaches are more of an insurance-claim problem, settled with some $4/mo credit-monitoring payouts.
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from junior-level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Excuse me? Amazon lost more money in one day than most companies have in revenue, from dropped orders. I would say that matters. Believe it or not, the systems we work on do things that matter in the real world.
Seems to be an instance of the prevention paradox: security (in general) is taken seriously enough that major incidents are rare enough that people think security does not matter that much.
The quality of our work is too subordinated to business leadership, who see the forms of technical insurance we build into software development processes as fat, and who are fundamentally opposed to doing things right. Besides solidarity, this is the major reason for tech workers to unionize. We won't, because we don't have any sense.
> Writing pretty code or arguing about some implementation detail never really mattered.
True, in the same sense that sharpening your tools as a tradesman doesn't matter to your customers: what matters is that the job you deliver is good.
Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.
Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.
Furthermore, when done right, these discussions keep a trace for understanding bugs and for code archeology when in the future you're trying to understand how decisions were made and the tradeoffs considered, which could massively help refactors, rewrites and decisions to drop certain parts of the code base.
Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending with a better, cleaner, more maintainable and understandable product if you do practice those steps than if you skip them altogether.
Actually I think we will see a faker takeover and then a doer reconquest. All those leaving now take the recipe with them and are capable of cooking it elsewhere - elsewhere being a place without AI management.
Imagine that you're given a business problem to solve. Represent the process of writing the code as a graph: each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directed edges, and each edge has a "cost". If you are at commit A and you want to go to commit B, you have to pay the cost from A to B. Your goal is to find a relatively short path from the empty git commit to any vertex which contains code that has some specific observable business properties.
You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate a "speed" with each person. The higher the speed, the lower the costs paid when traversing the graph. I'll leave the specifics vaguely undefined.
Since part of the task is to discover information about the graph, we also need to specify that every person has some kind of heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions that are closer to ground truth, while stupid people are more biased towards random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.
This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.
Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.
This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".
What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.
From business perspective though, by having more idiots write more slop with more AI we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.
And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
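The model above can be sketched as a toy simulation: a greedy best-first search where the heuristic's accuracy and the traversal "speed" are the two dials. Everything here (the graph, costs, and heuristic values) is made up for illustration; it is not any real algorithm from the thread, just one way to make the argument concrete.

```python
import heapq

def best_first_search(graph, start, goal, heuristic, speed=1.0):
    """Greedy best-first search: always expand the node the heuristic
    likes best. `speed` divides the cost actually paid per edge, which
    is how the comment models AI assistance. Returns (paid_cost, path)."""
    frontier = [(heuristic(start), start, [start], 0.0)]
    seen = set()
    while frontier:
        _, node, path, paid = heapq.heappop(frontier)
        if node == goal:
            return paid, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(
                    frontier, (heuristic(nbr), nbr, path + [nbr], paid + cost / speed)
                )
    return float("inf"), []

# A tiny "commit graph": the short path is start -> good -> goal,
# the scenic route is start -> cheap -> detour -> goal.
graph = {
    "start": [("good", 2), ("cheap", 1)],
    "good": [("goal", 2)],
    "cheap": [("detour", 1)],
    "detour": [("goal", 5)],
}
h_true = {"start": 4, "good": 2, "cheap": 6, "detour": 5, "goal": 0}  # accurate
h_bad = {"start": 0, "good": 10, "cheap": 0, "detour": 0, "goal": 0}  # noisy

# The specialist: accurate heuristic, no AI speed-up.
expert_cost, expert_path = best_first_search(
    graph, "start", "goal", h_true.__getitem__, speed=1.0
)
# The "idiot with ChatGPT": bad heuristic, but 3x speed.
novice_cost, novice_path = best_first_search(
    graph, "start", "goal", h_bad.__getitem__, speed=3.0
)
print(expert_path, expert_cost)  # short path
print(novice_path, novice_cost)  # scenic route, yet cheaper in paid cost
```

Both searches reach the goal, but the novice's paid cost (7/3) undercuts the expert's (4.0) purely because of the speed multiplier, even though the path is longer: exactly the "scenic route, but it doesn't matter" outcome the comment describes.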
Your graph model lacks the aspect of increasing complexity. As you traverse the graph, every available node gets increasingly more distant - in some areas of the graph less so than in others. A good heuristic function not only identifies a single shortest path, but also dense areas of possible value in the graph.
The question is whether blind speed scales quicker than distances grow.
That's true, and I guess the reason why we're building so many datacenters is to answer the question how far exactly will blind speed take us, assuming that we fail to make substantial improvements to AI architecture.
There has been a shift to software mass production over the last decade(s). AI is now speeding this process up extremely. Most software will be produced with AI and "cog coders", similar to a production line in manufacturing.
Some few (good ones) will find niches and "hand craft" software, similar to today when you still can buy hand forged axes etc. Obviously the market for these products will be much smaller but it will exist.
If you love programming, you should try to get into the second category. Be a master craftsman.
This happened once with open sores; now this behavior has been turned up to 11. People take dependencies without even knowing what's in them, full of incorrect code and vulns, intentional or not, delegating everything and taking no responsibility.
Obviously the author's experience is a nightmare but what was this place like pre-AI? I have a hard time believing people who are this willing to hand over all of their thinking to LLMs were doing anything productive beforehand.
I think you must be right to _some_ degree. The article illustrates that this org doesn’t know why they are doing certain things.
But there's something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.
> The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be.
I feel like this is a very profound insight.
Of course, processes like this can be reduced to their immediate utility. Reviewing then becomes just checking work so it can be merged and used.
But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.
> AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.
And it's not just AI, the removal of friction seems to be pursued mindlessly in all areas, with not even an attempt to understand what value it might be providing.
It’s almost as though some people fall over themselves trying to achieve maximum possible speed without giving any thought to where they want to be heading.
It shows that previously he likely worked only at companies which catered to him, honestly.
That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.
Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling as devs understand it, but multiplying headcount - because surely throwing n times as many devs at the same software will multiply the velocity by the same factor, amiright?
The biggest value you can get is by having a very small team of extremely capable people (with extremely high bus factor) being fully in control of everything they do.
Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.
You may notice that some employers do, however.
You're just unlikely to get a job there, because their team is already established.
I'm on round 3 of arguing with my boss's LLM about a terrible PR he refuses to review manually. I can tell from the PR that it was 100% generated by Claude code because I've seen identical suggestions in PRs from juniors. But this man is my boss. He won't listen.
Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) i would already have quit. I've been applying places but it's a slaughterhouse out there. I got ghosted after a fourth round interview at a non-tech company over the winter.
Shit sucks.
I'm immensely jealous of the author; i have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell and the last year of applying for jobs has not been better.
I'm beginning to think that the only reason code reviews still exist is that "all changes are reviewed before going into production" is probably a checkbox on some security certification checklist.
It’s for accountability. They still need a human to blame when it fails. Meanwhile, the message from management is: with AI, we expect you to 2x-3x your speed of delivery now.
I work at MSFT and I feel burnt out too and am in a similar situation where I feel like resigning would be better for my mental health but AI isn’t a big contributing factor. I do have some arguments against speculative uses of AI though.
Experimenting with speculative uses is fine, technological breakthroughs require lot of iterations and some would naturally never make it but with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don’t see that happening now, I could be wrong.
1. To me, speculative uses of AI like meeting-notes summarisers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. Add to this, when someone just casually pastes the meeting notes from an AI summary and asks the meeting organiser to “pls check for correctness”, my blood just boils. Are we spending billions of dollars of capex for this?
2. Every team builds their own “agent” for diagnosing incidents which is announced to huge fanfare but people rarely end up using it irl.
3. Devs and PMs chasing “volume” of work. You prompt GPT about an issue and it is bound to give you pages of text you can use to show how much output you can churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all this was accepted because “Hey, I used AI for this, so it must be good”.
There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and bottom line, and so far it doesn’t add up.
A lot of these uses are driven by fear, by repeated exhortations from upper management to shove AI into every nook and cranny when they are just as clueless as us. People’s mortgages, their children’s education and their retirement - in short, their whole livelihoods - are at stake, even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if any.
I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.
CEO observes performative work, and his inference will be that means more people need to be fired. Let only the AI native, customer obsessed 10x engineers(/ AI swarm managers) remain.
"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally, but YMMV) is more indicative of the company's culture than of the tool itself. From my experience, the most frontier companies seem to have the best AI-use culture.
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether a feature is possible is first explored by an agent, but then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
Odoo suffers from other issues, though.
Not sure if this is still the case, but the mix of inline Python 2 Flask + XML was basically tech debt-as-a-service.
Also the very ugly death they gave OpenERP/Odoo on-premise.
It's Python 3, no Flask (but werkzeug) and XML templates.
It works for hundreds of thousands of clients, and you can install Odoo on premise as you like. I'm 90% dedicated to that.
So... explain the "tech debt" thing, as I don't get it. You don't need Rust or microservices for every use case. Don't be fooled by marketing-style "old technology" bias; set up an account. PostgreSQL with synchronous workers works perfectly for most people.
Another problem the author may face: if they decide to get back into the tech market and find a new job, it may be difficult with tech still moving forward - not in a meaningful way, as computers still compute as before, but enough that lack of experience with a new tool or framework will make them unattractive compared to other candidates.
Otherwise, if they decide to go into another field, they will be starting from scratch; it will pay only a small fraction, and whatever lifestyle they were used to will have to change.
Probably. I hate the AI boom too and see no need to get all political, or even outright blame the politicians. What'd you expect, politicians with a master's degree in every field there is? Not gonna happen.
If we're putting the blame on anything, it's on us hacker types for going where the money flows and not fighting the corporate overlords tooth and nail.
I feel like all this hype around generated code overlooks a distinct opportunity for enterprising focus on excellent, clean, maintainable, curated code - baked by humans, for other humans.
What would that look like? In my experience, real production codebases tend to have lots of bugs. Most of them never get prioritized, because features matter more than fixing obscure bugs.
Indeed - one of my biggest pet peeves is when organizations chronically avoid budgeting the time and resources to deal with their technical debt. Or when they lack leadership that is confident and bold enough to make the hard decisions to do so (which requires experience and reputation), or suffer a culture that doesn't tolerate some degree of risk-taking, with contingencies (particularly in schedule and blast radius containment) to safely deal with occasional failure on the road to improvement.
I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.
I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!
We also haven't really seen how large volumes of generated sourcecode will stand up over time (like, decades) in terms of maintainability. My prediction is you'll encounter a lot more disposable software. That's fine for making general code more of a commodity (cheap and accessible), but where you get commodities you eventually find demand for more premium flavors of product. Those tend to derive from taste and opinion (attributes which, for example, were major success factors of the iPhone at its peak design).
The act of software development formalizes paradigms, surfaces unknowns and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. 3 rewrites - until you land at the kind of high-caliber product that stems from really understanding the problem space and having worked in it extensively enough to have a good mental model, uncovered the edge cases, and hammered out an optimal solution.
While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.
While I certainly relate to some of your points, and I'm not an AI maximalist by any means, a few thoughts:
> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.
> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
Why would that happen in the Slack channel? Presumably you'd be googling it or reading documentation to do this, not posting in the channel.
> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.
> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1 on 1s with your manager. Not "I want to discourage use of AI" but "copying and pasting AI responses shows a lack of respect for others' time" and "lack of due diligence," show a horror story of an AI deleting someone's PROD database, etc. it's a useful but imperfect tool, not a replacement for thought.
This report lists failures of some AI systems. They look consequential - but the company does not seem to care. This is very strange - how can it be? I really like AI products; they help me all the time - but I know I need to take their failure modes into account and be careful. Lots of organisations don't seem to do that calculation. Will competition root them out? I don't know - I am enthusiastic about AI - but ever since the LangChain situation I can see that what gets adopted always has a lot of flaws. The more careful developers who notice the flaws and try to find true workarounds fail, because it takes time to do the design well. It is not a new thing - there were Betamax mourners for decades - but it seems the hype machine is now more and more powerful.
What I meant was how LangChain dominated the LLM frameworks scene because it was loaded with VC money. It was just at the beginning - things have normalised now - but I believe it did a lot of damage at that early stage by sucking up all the oxygen.
I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video you can watch later? Amazing. I can catch up later and find people async if I need more info; the videos are discoverable/shareable, and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. They generate vague overviews and tend to miss small but key details.
I'd rather (and often do) take notes manually than turn on the notetaker. What I actually want from a notetaker is chapter-style timestamps, something like:
- 0:00 - Introductions
- 3:30 - Joe gives a summary of the problem and shows diagrams
- 7:52 - Kim asks clarification questions and introduces relevant infrastructural concepts
- 10:25 - People waffling about unrelated stuff
...
* Put the video and the transcript on the same GUI, where I can shuffle through the timeline, choose chapters or click the transcript to be taken to the relevant part of the video.
* Bonus points if it highlights the relevant part of the summary as the video is playing.
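A minimal sketch of that idea, using only the standard library: render a chapter list like the one above as links that seek an HTML5 video element. The chapter data, element id, and filename are all made up for illustration.

```python
# Toy sketch: turn chapter timestamps (seconds, title) into a clickable
# HTML transcript that seeks an HTML5 <video>. All names are hypothetical.

CHAPTERS = [
    (0, "Introductions"),
    (210, "Joe gives a summary of the problem and shows diagrams"),
    (472, "Kim asks clarification questions"),
]

def to_clock(seconds: int) -> str:
    """Format seconds as M:SS for display."""
    return f"{seconds // 60}:{seconds % 60:02d}"

def render_page(video_src: str, chapters) -> str:
    """Build a page with a video element and a list of seek links."""
    links = "\n".join(
        f'<li><a href="#" onclick="document.getElementById(\'mtg\').currentTime={s};return false">'
        f"{to_clock(s)} - {title}</a></li>"
        for s, title in chapters
    )
    return (
        f'<video id="mtg" src="{video_src}" controls></video>\n'
        f"<ul>\n{links}\n</ul>"
    )

print(render_page("meeting.mp4", CHAPTERS))
```

Highlighting the current chapter as the video plays would just be the reverse mapping: listen to the video's `timeupdate` event and mark whichever chapter's timestamp range contains `currentTime`.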
Hey OP, I quit my job and said "screw it" at the start of the Year for very similar reasons.
I had a "good" job; it was extremely stable and in the public sector, and the work hypothetically mattered... I was miserable because it didn't actually matter. If I had died in my study, the system would have happily churned on accomplishing nothing without me. There were so many obstacles to accomplishing anything, too. Like, I'm all about "perfect shouldn't be the enemy of good" - but hypothetically we should do something. I went on vacation in November, and when I got back, the latest ServiceNow update had nuked a bunch of the changes I had spent months trying to get done.
I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.
Don't work for the man. Your job cannot love you back, in fact, it actively hates you.
I have some of my old contacts from my prior life flying airplanes for a living. I started there because I know the field extremely well. These are my first customers so far.
The first thing was just some really simple stuff a bush airline I used to work for needed. Their software runs through a DB managed by another company, and they wanted a status board customers could view. That shouldn't be a huge lift, but the company that runs the enterprise software doesn't have the time to build it.
I sent a series of emails, got permission to hit the API, and was able to connect things so now this little bush airline has a customer-facing schedule app and people don't call the office 30 times an hour to see if the flight is late or on time or early. Even in the middle of nowhere, if they have WiFi they can check the flight schedule on their phone. That has spread to "hey, do you think you could use this data to auto-populate flight and duty logs?" Yup, not a huge deal. Then onto the next one. Every month it seems I take on a new project for them, the scope of their tooling keeps growing, and the recurring costs I charge to maintain things are low enough that I'm worth it. There's a dashboard of data science stuff, then a compliance auditing tool, and the list of bespoke features that are critical to them continues to grow, and they continue to pay me. It's pretty cool.
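The customer-visible core of a status board like that really can be tiny: fetch the schedule from the API, render a table. A minimal sketch (the field names and flight numbers here are hypothetical, not from any real scheduling API):

```python
import html

def render_status_board(flights):
    """Render a list of flight dicts (shaped like a hypothetical
    scheduling-API response) as a customer-facing HTML table."""
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(f["flight"]), html.escape(f["etd"]), html.escape(f["status"])
        )
        for f in flights
    )
    return ("<table><tr><th>Flight</th><th>ETD</th><th>Status</th></tr>"
            + rows + "</table>")

# Sample payload standing in for whatever the real API returns
sample = [
    {"flight": "KTN-101", "etd": "08:15", "status": "On time"},
    {"flight": "KTN-102", "etd": "10:40", "status": "Delayed"},
]
print(render_status_board(sample))
```

The hard parts in practice are getting permissioned API access and caching the responses, not the rendering.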
This has led to another customer pinging me who wants me to work on an app for their factory floor to help their technicians. Nothing crazy, just a kind of wrapper over a USB tool they have and a CRUD app. 99% of the real work is going to be testing out like 30 different layouts and making sure that it works properly in practice, but a big company would never bother to do this. I will go down to their factory this week, set up a computer, and talk with their technicians while I vibe code it out with Codex and draw process diagrams and think. 90% of it is really just thinking about what's a prudent choice.
The SaaS the first company is paying for is incredibly necessary to run their business; those guys will probably have their hooks into that operation for many more years because of the inertia to change, but there is tons of room to fix the small annoyances that not having bespoke custom software creates. Also, the software they are kind of locked into is 10s of thousands of dollars a month. I reckon in the long run I'll end up trying to build a replacement for it entirely, then charging way less to give them exactly what they need.
Then there's the existential angst of vibe coding this stuff. The truth is, I could write all this code myself. It's mostly Python and JS, but it would take me a month to do what I can do in a week and I'd be working myself to the bone. Instead, this is more like an extremely fun part-time job that's growing in scope and pay but not growing in time required of me. Seriously, these tools are cool! It's like I have a team of idiot savants/interns working for me, but the entire company so far is literally just me and my wife (and she isn't really involved in the technical stuff at all). Codex is dumb and does not understand the use case at all, but good lord does it churn out boilerplate code that solves real engineering problems for customers. My job is largely playing "software plumber foreman" and making sure all the lego pieces fit together nicely and that they're good architectural choices.
For example, I was skimming the code base last week and noticed a ton of just unused code from an early iteration. I spent a bunch of time pruning that as a human, then also having codex refactor code smells I didn't like. "This file is ridiculous, it's like a monolith of 30 different concepts hammered into one place - refactor all this stuff and spread it out, move function X to a separate file, use a functional style" etc. Stuff like that is kind of mandatory, otherwise your codebase will give you a stroke and you can grow it to an extraordinary size that will hurt your ability to iterate because you'll be running into context length issues. But the robot doesn't do too horrible of a job.
I could write all of the code, but the customers don't care if it's written by a human or not? They just want it to work. So I spend a lot of the time coming up with test-cases, then interactively evaluating what the robot is building? Kind of like a really slow REPL? But I'm definitely less of an engineer and more of an architect now. That pains me a bit? But all things must come to an end.
One thing I'd say is important if you're going to do this... use the dumbest possible solution you can. You'll need to specify that to these tools otherwise they'll build you a cathedral? You probably do not need some monster system with 80 layers of abstraction. KISS is important.
Thank you for the detailed response! My background is Ruby on Rails web development, but lately I've started vibe engineering add-ons to my wife's dental practice software. Tools to reconcile payments and open invoices, reading straight from the DBF database files of the ancient Windows desktop app. Windows tray apps cross-compiled in Go from my Mac. Things that would have taken me weeks of learning boilerplate previously. Only possible since about December. Wild times.
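For anyone curious, reading those DBF files is less exotic than it sounds: the header layout is fixed and well documented. A minimal Python sketch of parsing a dBASE III header and its field descriptors (the demo bytes and the INVOICE field are made up for illustration; a real tool would also decode the records and handle codepages):

```python
import struct

def read_dbf_header(data: bytes):
    """Parse the fixed 32-byte DBF header (dBASE III layout), then the
    32-byte field descriptors that follow it, terminated by 0x0D."""
    n_records, header_len, record_len = struct.unpack_from("<IHH", data, 4)
    fields = []
    pos = 32
    while data[pos] != 0x0D:  # 0x0D ends the field descriptor array
        name = data[pos:pos + 11].split(b"\x00")[0].decode("ascii")
        ftype = chr(data[pos + 11])   # C=character, N=numeric, D=date, ...
        length = data[pos + 16]
        fields.append((name, ftype, length))
        pos += 32
    return n_records, record_len, fields

# Synthetic single-field DBF header for demonstration
demo = bytearray(32)
demo[0] = 0x03                                 # dBASE III, no memo file
struct.pack_into("<IHH", demo, 4, 2, 65, 11)   # 2 records, 65-byte header, 11-byte records
field = bytearray(32)
field[0:7] = b"INVOICE"                        # field name, NUL-padded
field[11] = ord("C")                           # character type
field[16] = 10                                 # field length
demo += field + b"\x0d"

print(read_dbf_header(bytes(demo)))  # → (2, 11, [('INVOICE', 'C', 10)])
```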
I really don't see this getting better, at least judging from all the headlines at the moment. The spending taking place at these big tech companies is alarming, not only because it is so heavily concentrated in a single category. We still don't have a clear picture of the landscape for tech yet; yes, there is some great tech innovation taking place in the US.
Being cut off from China, a market that is advancing in the same sectors as the US, and not allowing that competition to enter the West, is a recipe for disaster in the future. The current government is not focused on growth, despite what is being said publicly. Where this will take the US is a place where stagnation is okay, so to make up for it there is a surge of investment in the current AI craze. Feedback is required in order to grow; that goes for companies too, not just the junior-varsity wrestler at your local high school. Taking an abundance of data and using a summarization tool to auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: it's a search bar that, as you type, shows what the database has to offer with either a weighted or an indexed response depending on the settings, along with images and information related to the query. All that needed to happen was for something to compute this mess of abundant data and project a response from it, not just a search result. Marvelous, you might say, but the idea has been around for a while now; it just needed an actor to execute it. The firings alone tell you about the health and implications of the actions taking place. There were promises behind these investments that this war is interrupting, or severing the deals even post-conflict.
The DotCom bubble was a push on society to use the web and to digitize parts of our lives, and the few companies that survived the DotCom era are what's driving the push to the next era of tech. It seems the AI idea was born without a guardian or an owner, leaving the courage to act on it open to any takers. The overwhelming spillover of data had to go somewhere. Searching for useless trivia like "how fast does a 2001 Porsche 911 go?" had become tiresome.
The education system has already fallen apart in the US, and this only makes things worse. Where is education heading with the adoption of AI all around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this at the moment, by any means. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.
Automation seems like a very surface-level reading of this article.
Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias in the most vehement AI advocates, where the one time a tool worked really well for them makes it worth the dozen times it blows up in their face and becomes someone else's problem. The gain is romanticized and the losses set aside, without checking the balance or how badly the losses wear on morale.
I’m not part of the owner class, so a tech job is, and always will be, a paycheck. Why should I be excited about automating myself into homelessness?
I want to focus on the "colleagues submit thousands of AI generated lines of code for review" comment.
Humanity developed code and programming languages for people. They are supposed to provide sufficient expressiveness so that we humans can understand what is happening, and zero ambiguity, so that the machine can perform its instructions.
But computer code has also been a way for us to communicate our intentions to each other (what we intend the machine to do). Otherwise, we would still be writing in assembler.
Now, though, computers are generating code, A LOT of code. So much that it's becoming more and more difficult to stay on top of it with our verbose languages.
We will need to develop a better way for computers to a) produce the instructions to perform the tasks we give them, and b) produce reports, or some other accessible form, that lets us humans understand and share what those instructions are doing.
A lot of this is about knowledge debt if I’m reading it correctly (people not knowing things that they should know, or knowing the wrong things). In my last few jobs, I’ve maintained an Anki deck about facts relevant to my job (who certain people are, how certain systems work, details of the programming languages we use, etc.)
I’ve started kind of a funny rule, which is that when I make a change now, I can use Claude or not. But if I use Claude, some cards have to go into the deck. Both about how the implementation works, and also about anything within the implementation I didn’t know about. It does force you to double-check things before committing them to memory.
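The mechanics of this rule can be as light as exporting the new cards to plain tab-separated text, which Anki's importer accepts directly. A sketch (the card contents below are invented examples, not from a real deck):

```python
import csv
import io

def cards_to_anki_tsv(cards):
    """Write (front, back) pairs as tab-separated lines, the plain-text
    format Anki's File > Import dialog accepts for basic cards."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    for front, back in cards:
        writer.writerow([front, back])
    return buf.getvalue()

# Hypothetical cards captured after a Claude-assisted change, per the rule above
cards = [
    ("What does our session middleware sign cookies with?",
     "HMAC-SHA256 using SECRET_KEY"),
    ("Which module owns retry logic for the billing client?",
     "billing/retries.py"),
]
print(cards_to_anki_tsv(cards), end="")
```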
Before LLMs, a friend of mine lamented that all the juniors at his gig were really fast at producing buggy code. The greater lament was that his bosses loved it. And as a dev, you're getting paid to do what your bosses want.
LLMs can really help you get what your bosses want a lot faster.
As an older dev, myself, I'd already been bitching about the state of software quality before all of this. Companies just didn't give a shit. Sure, people within them did, but as a whole companies will do the bare minimum to not lose your business (because that's what's best for the bottom line). Can't really fault them for their nature.[1]
And then I step back and look at something like Linux or GNU. Perfect and bug-free? Certainly not. But they're damn fine pieces of software. Many open-source projects have historically been damn fine pieces of software. Because they don't care if they lose your "business". They just want to build something cool that they can be proud of.[2]
It's why so many of us agonize over the details of the things we produce and give away for free. It might not even net us another user, but we have pride in our craft and want to do the best we possibly can.
But that way of thinking is a money loser, at least in the short-term. And companies live in the short-term.
So what's going to stop software from just collapsing into a massive pile of crap?
I don't know. Maybe it just has to get so bad that people start going to the marginally-better competition. Isn't exactly a great consolation to me, that.
[1] Small companies are often idealistic and try to do the Right Thing, admittedly. But big ones who tend to be market leaders tend to not.
[2] Insert the entire GNU philosophy here because I just glossed over it completely and I don't want to get called out on it. :)
I'm going through something similar. All the symptoms described in the article are present in the company where I work. But I don't blame AI. AI is just a tool. I blame the company culture, because it's the source of those problems.
Exactly, although it increasingly is the default culture at too many companies. As you noted, AI is not the problem here. Clueless VCs encourage clueless bosses to encourage clueless engineers to produce buggy garbage that lives long enough to pass the buck to the next clueless VC. No human in this chain cares about the virtues of good engineering. It is a grift all around, and grift ultimately results in systemic doom.
coinfused | a day ago
“ Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care? “
brewcejener | a day ago
Paul-Craft | a day ago
baxtr | a day ago
lazyasciiart | a day ago
baxtr | a day ago
Brian_K_White | a day ago
baxtr | a day ago
lazyasciiart | a day ago
baxtr | 20 hours ago
Brian_K_White | 19 hours ago
Look up the work by Seligman et al. on resilience.
oldmanhorton | a day ago
Paul-Craft | a day ago
idiotsecant | a day ago
Paul-Craft | a day ago
There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.
idiotsecant | 18 hours ago
Something like industrial controls engineering might be right up your alley.
nso | a day ago
20 years coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software engineering again because it's my preference) -- and now I can't even get an interview with a human.
Due to unfortunate life events my savings are now all but gone and I don't even know if I will be able to keep a roof over our heads. It's messed up.
If anyone is hiring, send me a message. I'm an EU citizen but have residency in and work out of Mexico.
anal_reactor | a day ago
baCist | a day ago
In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever. Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.
AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.
So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.
coffeebeqn | a day ago
dfee | a day ago
cbreynoldson | a day ago
I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.
Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.
sublinear | a day ago
If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.
Their efforts are better rewarded by moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers, while others stay in theirs while decoupling their passions from their paycheck.
Paul-Craft | a day ago
lordkrandel | a day ago
MikeNotThePope | a day ago
serial_dev | a day ago
imiric | a day ago
Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.
I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.
Good luck!
LVB | a day ago
That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.
erentz | a day ago
dewey | a day ago
There were fakers before, and there will be fakers after.
Paul-Craft | a day ago
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
dd8601fn | a day ago
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from junior-level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Paul-Craft | a day ago
dd8601fn | a day ago
bulbar | a day ago
ngcazz | 4 hours ago
dewey | a day ago
cassianoleal | a day ago
True, in the same sense that sharpening your tools if you're a tradesperson doesn't matter to your customers: what matters is that the job you deliver is good.
Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.
Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.
Furthermore, when done right, these discussions keep a trace for understanding bugs and for code archeology when in the future you're trying to understand how decisions were made and the tradeoffs considered, which could massively help refactors, rewrites and decisions to drop certain parts of the code base.
Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending up with a better, cleaner, more maintainable and understandable product if you practice those steps than if you skip them altogether.
cineticdaffodil | a day ago
anal_reactor | a day ago
Imagine that you're given a business problem to solve. You represent the process of writing the code with a graph - each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directional edges, and each edge has a value "cost". If you are in commit A and you want to go to commit B, you have to pay the cost from A to B. Your goal is to find a relatively short path from empty git commit to any vertex which contains code that has some specific observable business properties.
You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate "speed" with each person. The higher the speed, the lower the paid costs when traversing the graph. I'll leave the specifics vaguely undefined.
Since part of the task is to discover information about the graph, we also need to specify that every person has some heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions that are closer to ground truth, while stupid people are biased more towards random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.
This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.
Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.
This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".
What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.
From business perspective though, by having more idiots write more slop with more AI we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.
And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
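This model is essentially weighted-graph search, so it's easy to make the "speed vs. scenic route" point concrete with a toy sketch (the graph, edge costs, and speed values below are invented for illustration):

```python
import heapq

def search_cost(graph, start, goal, speed=1.0, heuristic=None):
    """Total cost paid to reach `goal`, with edge costs divided by the
    searcher's `speed`. With a zero heuristic this is plain Dijkstra;
    a noisy heuristic would model a weaker sense of direction."""
    h = heuristic or (lambda n: 0.0)
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        _, paid, node = heapq.heappop(frontier)
        if node == goal:
            return paid
        for nxt, cost in graph.get(node, []):
            new = paid + cost / speed
            if new < best.get(nxt, float("inf")):
                best[nxt] = new
                heapq.heappush(frontier, (new + h(nxt), new, nxt))
    return float("inf")

# Toy commit graph: one short expensive edge, one long cheap "scenic route"
graph = {
    "empty": [("good", 10.0), ("slop1", 2.0)],
    "slop1": [("slop2", 2.0)],
    "slop2": [("good", 2.0)],
}
print(search_cost(graph, "empty", "good", speed=1.0))  # → 6.0 (scenic route wins)
print(search_cost(graph, "empty", "good", speed=2.0))  # → 3.0 (same route, doubled "AI speed")
```

The scenic route through low-quality commits is still the cheapest path, and raising speed just makes it arrive faster, which is exactly the business's view of supercharged idiots.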
CuriousSkeptic | a day ago
The question is whether blind speed scales quicker than distances grow.
anal_reactor | a day ago
biglyburrito | a day ago
fileeditview | a day ago
Some few (good ones) will find niches and "hand craft" software, similar to today when you still can buy hand forged axes etc. Obviously the market for these products will be much smaller but it will exist.
If you love programming you should try to get into the second category. Be a master craftsman.
casey2 | a day ago
somesortofthing | a day ago
dgb23 | a day ago
But there's something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.
dgb23 | a day ago
I feel like this is a very profound insight.
Of course, processes like this can become about only the immediate utility. Reviewing then becomes checking work so it can be merged and used.
But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.
nunez | a day ago
Griffinsauce | a day ago
AI is all about removing every possible bit of friction, while severely underestimating the value that friction brings.
palmotea | a day ago
And it's not just AI, the removal of friction seems to be pursued mindlessly in all areas, with not even an attempt to understand what value it might be providing.
tempodox | 21 hours ago
Henchman21 | 18 hours ago
People in tech seem to almost NEVER consider "Should we?"
fuzzythinker | 7 hours ago
ffsm8 | a day ago
That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.
Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling how devs understand it, but instead multiplying headcount - because surely throwing x-n devs at the same software will multiply the velocity by the same factor amiright?
The biggest value you can get is by having a very small team of extremely capable people (with extremely high bus factor) being fully in control of everything they do.
Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.
You may notice that some employers do, however.
You're just unlikely to get a job there, because their team is already established.
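That MBA notion of "scaling" runs into the coordination costs Brooks described; a toy model (the 0.06 per-pair tax is an arbitrary illustrative number) shows effective output flattening and then falling as headcount grows:

```python
def effective_velocity(n, per_dev=1.0, comm_cost=0.06):
    """Toy Brooks's-law model: each pair of devs pays a fixed coordination
    tax, so total output grows sublinearly and eventually falls."""
    pairs = n * (n - 1) / 2          # communication paths between n people
    return max(0.0, n * per_dev - comm_cost * pairs)

for n in (1, 3, 5, 10, 17, 25, 35):
    print(n, round(effective_velocity(n), 2))
```

With these made-up numbers, output peaks around 17 developers and hits zero around 35: multiplying headcount does not multiply velocity.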
Henchman21 | 18 hours ago
Brooks, Frederick P., Jr., 1931-2022. The Mythical Man-Month : Essays on Software Engineering. Reading, Mass. :Addison-Wesley Pub. Co., 1982.
Those who ignore history are doomed to repeat it?
queenkjuul | a day ago
Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) I would already have quit. I've been applying places but it's a slaughterhouse out there. I got ghosted after a fourth-round interview at a non-tech company over the winter.
Shit sucks.
I'm immensely jealous of the author; i have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell and the last year of applying for jobs has not been better.
teeray | 13 hours ago
beef234 | 7 hours ago
ngcazz | 4 hours ago
AbbeFaria | a day ago
Experimenting with speculative uses is fine; technological breakthroughs require a lot of iterations, and some would naturally never make it. But with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don't see that happening now. I could be wrong.
1. To me, speculative uses of AI like meeting-notes summarizers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. Add to this, when someone just casually pastes the meeting notes from an AI summary and asks the meeting organizer to “pls check for correctness”, my blood just boils. Are we spending billions of dollars of capex for this?
2. Every team builds their own “agent” for diagnosing incidents which is announced to huge fanfare but people rarely end up using it irl.
3. Devs and PMs chasing “volume” of work. You prompt GPT about an issue and it is bound to give you pages of text that you can use to show how much output you can churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all this was accepted because “Hey, I used AI for this and it must be good”.
There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and bottom line, and so far it doesn't add up.
A lot of these uses are driven by fear, by repeated exhortations from upper management to shove AI into every nook and cranny when they are just as clueless as us. People's mortgages, their children's education and their retirement, in short their whole livelihoods, are at stake, even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if any.
I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.
jimmydoe | a day ago
Paul-Craft | a day ago
trenchgun | 7 hours ago
ej88 | a day ago
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether features are possible is first explored by an agent, but then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
maplethorpe | a day ago
lordkrandel | a day ago
manytimesaway | 22 hours ago
Also the very ugly death they gave OpenERP/Odoo on-premise.
lordkrandel | 15 hours ago
ej88 | 17 hours ago
- mandatory ai usage
- ai usage tied to kpis or performance reviews
- trainings on how to use claude code
- restrictions on what tools you can use
- layoffs
- engineers still typing every line of code by hand
AlexeyBelov | 4 hours ago
spaqin | a day ago
Otherwise, if they decide to go into another field, they will be starting from scratch: it will pay only a small fraction, and whatever lifestyle they were used to will have to change.
bad_username | a day ago
People caught up in this line of beliefs generally tend to be more neurotic and unhappy about most things.
tosti | a day ago
If we're putting the blame on anything, it's on us hacker types for going where the money flows and not fighting the corporate overlords tooth and nail.
AlexeyBelov | 4 hours ago
ngcazz | 3 hours ago
rkagerer | a day ago
Paul-Craft | a day ago
rkagerer | a day ago
I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.
I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!
rkagerer | a day ago
The act of software development formalizes paradigms, surfaces unknowns and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. 3 rewrites - until you land at the kind of high-caliber product that stems from really understanding the problem space and having worked in it extensively enough to have a good mental model, uncovered the edge cases, and hammered out an optimal solution.
While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.
arcfour | a day ago
> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.
> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
Why would that happen in the Slack channel? Presumably you'd be googling it or reading documentation to do this, not posting in the channel.
> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.
> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1-on-1s with your manager. Not "I want to discourage use of AI" but "copying and pasting AI responses shows a lack of respect for others' time" and "a lack of due diligence"; show a horror story of an AI deleting someone's PROD database, etc. It's a useful but imperfect tool, not a replacement for thought.
nunez | a day ago
I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video that you can watch later? Amazing. I can catch up later and find people async if I need more info; the videos are discoverable/shareable and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. These generate vague overviews and tend to miss small but key details.
I'd rather (and often do) take notes manually than turn on the notetaker.
cassianoleal | a day ago
* Cut the video down into chapters, e.g.
* Put the video and the transcript on the same GUI, where I can shuffle through the timeline, choose chapters or click the transcript to be taken to the relevant part of the video.
* Bonus points if it highlights the relevant part of the summary as the video is playing.
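The transcript-to-timeline linking wished for above can be sketched roughly like this - the segment data and the bisect lookup are illustrative assumptions, not any notetaker's actual API:

```python
import bisect

# Hypothetical transcript: each segment carries its start time in seconds.
SEGMENTS = [
    (0.0, "Welcome, agenda overview"),
    (95.5, "Q3 metrics discussion"),
    (310.0, "Action items and owners"),
]

STARTS = [t for t, _ in SEGMENTS]

def segment_at(playback_seconds):
    """Return the transcript segment covering a playback position, so a
    click on the transcript can seek the video (and vice versa)."""
    i = bisect.bisect_right(STARTS, playback_seconds) - 1
    return SEGMENTS[max(i, 0)]
```

The same index works in both directions: clicking a transcript line seeks the player to that segment's start time, and the player's current time highlights the matching transcript line.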
piloto_ciego | a day ago
I had a "good" job - extremely stable, public sector, work that hypothetically mattered... I was miserable because it didn't matter. If I had died in my study, the system would have happily churned on accomplishing nothing without me. There were so many obstacles to accomplishing anything, too. I'm all about "perfect shouldn't be the enemy of good" - but hypothetically we should do something. I went on vacation in November, and when I got back the latest ServiceNow update had nuked a bunch of the changes I had worked for months trying to get done.
I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.
Don't work for the man. Your job cannot love you back, in fact, it actively hates you.
lorenzk | 8 hours ago
Sounds interesting. Care to elaborate?
piloto_ciego | 5 hours ago
The first thing was just some really simple stuff a bush airline I used to work for needed too, like, their software is through a DB run by this other company, they wanted a status board customers could view. That shouldn't be a huge lift, but the company that runs the enterprise software doesn't have the time to build it.
I sent a series of emails, got permission to hit the API, and was able to connect things so that now this little bush airline has a customer-facing schedule app and people don't call the office 30 times an hour to ask if the flight is late or on time or early. Even in the middle of nowhere, if they have WiFi they can check the flight schedule on their phone. That has spread to "hey, do you think you could use this data to auto-populate flight and duty logs?" Yup, not a huge deal. Then onto the next one. Every month it seems I take on a new project for them; the scope of their tooling keeps growing, and the recurring costs I charge to maintain things are low enough that I'm worth it. There's a dashboard of data science stuff, then a compliance auditing tool, and the list of bespoke features that are critical to them continues to grow, and they continue to pay me. It's pretty cool.
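A status board like that can stay very small. Here's a rough sketch of its core - the vendor endpoint, field names, and delay thresholds are hypothetical stand-ins, not the actual system described:

```python
import json
from datetime import datetime
from urllib.request import urlopen

# Hypothetical vendor endpoint; the real API, auth, and schema will differ.
API_URL = "https://example-vendor.invalid/api/flights/today"

def fetch_schedule(url=API_URL):
    """Pull today's flight records from the vendor API."""
    with urlopen(url) as resp:
        return json.load(resp)

def flight_status(scheduled_iso, estimated_iso):
    """Classify a flight for the customer-facing board by comparing the
    scheduled and estimated times (thresholds here are made up)."""
    scheduled = datetime.fromisoformat(scheduled_iso)
    estimated = datetime.fromisoformat(estimated_iso)
    delay_min = (estimated - scheduled).total_seconds() / 60
    if delay_min <= -5:
        return "Early"
    if delay_min < 15:
        return "On time"
    return f"Delayed {delay_min:.0f} min"
```

The point is that the whole customer-facing value is one API call plus a little display logic - exactly the kind of lift the enterprise vendor never has time for.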
This has led to another customer pinging me who wants me to work on an app for their factory floor to help their technicians. Nothing crazy, just a kind of wrapper over a USB tool they have and a CRUD app. 99% of the real work is going to be testing out like 30 different layouts and making sure it works properly in practice, but a big company would never bother to do this. I will go down to their factory this week, set up a computer, and talk with their technicians while I vibe code it out with Codex and draw process diagrams and think. 90% of it is really just thinking about what's a prudent choice.
The SaaS the first company is paying for is absolutely necessary to run their business; those guys will probably have their hooks into that operation for many more years because of the inertia to change, but there is tons of room to fix the small annoyances that not having bespoke custom software creates. Also, the software they are kind of locked into is tens of thousands of dollars a month. I reckon in the long run I'll end up trying to build a replacement for it entirely, then charging way less to give them exactly what they need.
Then there's the existential angst of vibe coding this stuff. The truth is, I could write all this code myself. It's mostly Python and JS, but it would take me a month to do what I can do in a week, and I'd be working myself to the bone. Instead, this is more like an extremely fun part-time job that's growing in scope and pay but not in the time it requires of me. Seriously, these tools are cool! It's like I have a team of idiot savants/interns working for me, but the entire company so far is literally just me and my wife (and she isn't really involved in the technical stuff at all). Codex is dumb and does not understand the use case at all, but good lord does it churn out boilerplate code that solves real engineering problems for customers. My job is largely playing "software plumber foreman" and making sure all the lego pieces fit together nicely and that they're good architectural choices.
For example, I was skimming the codebase last week and noticed a ton of unused code from an early iteration. I spent a bunch of time pruning that as a human, then also had Codex refactor code smells I didn't like: "This file is ridiculous, it's like a monolith of 30 different concepts hammered into one place - refactor all this stuff and spread it out, move function X to a separate file, use a functional style," etc. Stuff like that is kind of mandatory, otherwise your codebase will give you a stroke, and it can grow to an extraordinary size that hurts your ability to iterate because you'll run into context-length issues. But the robot doesn't do too horrible a job.
I could write all of the code, but the customers don't care if it's written by a human or not? They just want it to work. So I spend a lot of the time coming up with test-cases, then interactively evaluating what the robot is building? Kind of like a really slow REPL? But I'm definitely less of an engineer and more of an architect now. That pains me a bit? But all things must come to an end.
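That test-cases-first loop can be as lightweight as writing the assertions by hand and letting the agent produce code that satisfies them. `parse_duty_log` below is a made-up stand-in feature, not anything from the actual projects described:

```python
# Tests written by the human first; the AI-generated implementation has to
# pass them before it ships. parse_duty_log is a hypothetical example.

def parse_duty_log(line):
    """Placeholder implementation a coding agent would be asked to produce:
    'SMITH,2024-06-01,5.5' -> {'pilot': ..., 'date': ..., 'hours': ...}"""
    pilot, date, hours = line.strip().split(",")
    return {"pilot": pilot, "date": date, "hours": float(hours)}

def test_parses_well_formed_line():
    row = parse_duty_log("SMITH,2024-06-01,5.5\n")
    assert row == {"pilot": "SMITH", "date": "2024-06-01", "hours": 5.5}

def test_rejects_garbage():
    try:
        parse_duty_log("not a log line")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The human owns the assertions; the robot's output only survives if it makes them pass - which is the slow-REPL rhythm described above.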
One thing I'd say is important if you're going to do this... use the dumbest possible solution you can. You'll need to specify that to these tools otherwise they'll build you a cathedral? You probably do not need some monster system with 80 layers of abstraction. KISS is important.
lorenzk | 4 hours ago
Good luck!
mahirsaid | a day ago
Being cut off from China - a market that is also advancing in the same sectors as the US - and not allowing competition to enter the West is a recipe for disaster in the future. The current government is not focused on growth, despite what's being said publicly. Where this will take the US is a place where stagnation is okay, so to make up for it there is a surge of investment in the AI craze at the moment. Feedback is required in order to grow - that goes for companies too, not just the junior-varsity wrestler at your local high school. Taking an abundance of data and using a summarization tool to auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: a search bar that, as you type, shows what the database has to offer, with either a weighted or an indexed response depending on settings, surfacing images and information related to the query. All that needed to happen was for something to compute this mess of abundant data and project a response from it, not just a search result. Marvelous, you might say, but it has been around for a while now. The idea was there; it just needed an actor to execute it. The firings alone tell you the health and implications of these actions. There were promises behind these investments that this war is interrupting, severing deals even post-conflict.
The DotCom bubble was a push on society to use the web and digitize parts of our lives; the few companies that survived the DotCom era are what's driving the push to the next era of tech. The AI idea seems to have been born without a guardian or an owner, leaving the courage to act on it open to any takers. The overwhelming spillover of data had to go somewhere. Useless queries like "how fast does a 2001 Porsche 911 go?" had become tiresome to search for.
The education system has already fallen apart in the US, and this only makes things worse. Where is education heading with the adoption of AI all around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this, by any means. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.
optimism is clearer without fog.
barrkel | a day ago
This is what tech has always been. A never (yet) ending race to automate. Our job will be done when there's nothing left to automate.
_cenw | a day ago
Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias in the most vehement AI advocates, where the one time the tool worked really well for them outweighs the dozen times it blew up in their face and became someone else's problem. The gain is romanticized and the losses set aside, without checking the balance or how badly the losses wear on morale.
SanjayMehta | a day ago
What does Trump have to do with AI?
xtracto | a day ago
Humanity developed code and programming languages for people. They are supposed to provide sufficient expressiveness so that we people can understand what is happening, and zero ambiguity, so that the machine can perform its instructions.
But computer code has also been a way for us people to communicate our intentions to each other (what we intend the machine to do). Otherwise, we would still be writing assembler.
But now, computers are generating code - A LOT of code. So much that it's becoming more and more difficult to stay on top of it with our verbose languages.
We will need to develop a better way for the computers to a) produce the instructions to perform the tasks we give them, and b) produce reports, or some accessible form, so that we people can understand and share what those instructions are doing.
msteffen | 23 hours ago
I’ve started kind of a funny rule, which is that when I make a change now, I can use Claude or not. But if I use Claude, some cards have to go into the deck - both about how the implementation works, and about anything within the implementation I didn’t know about. It does force you to double-check things before committing them to memory.
beej71 | 21 hours ago
LLMs can really help you get what your bosses want a lot faster.
As an older dev, myself, I'd already been bitching about the state of software quality before all of this. Companies just didn't give a shit. Sure, people within them did, but as a whole companies will do the bare minimum to not lose your business (because that's what's best for the bottom line). Can't really fault them for their nature.[1]
And then I step back and look at something like Linux or GNU. Perfect and bug-free? Certainly not. But they're damn fine pieces of software. Many open-source projects have historically been damn fine pieces of software. Because they don't care if they lose your "business". They just want to build something cool that they can be proud of.[2]
It's why so many of us agonize over the details of the things we produce and give away for free. It might not even net us another user, but we have pride in our craft and want to do the best we possibly can.
But that way of thinking is a money loser, at least in the short-term. And companies live in the short-term.
So what's going to stop software from just collapsing into a massive pile of crap?
I don't know. Maybe it just has to get so bad that people start going to the marginally-better competition. Isn't exactly a great consolation to me, that.
[1] Small companies are often idealistic and try to do the Right Thing, admittedly. But big ones who tend to be market leaders tend to not.
[2] Insert the entire GNU philosophy here because I just glossed over it completely and I don't want to get called out on it. :)