Thanks for this, going to steal a lot of this. I would install your plugin, but I worry about being able to delete it later. I also think that each one of these is better served customized to a developer. That said, I'm still going to grab some of these, thanks!
I've been using Agent Skills on a new side project and I'm really impressed so far! It holds my hand a lot of the way and lets me focus on developing a product instead of figuring out how to build it. I get to focus much more energy on high-level architecture and product design.
Very grateful for this repository and everyone who contributed to it!
I would love to know how many people are actually using superpowers.
I showed up on the agentic dev scene prior to superpowers, and I am getting concerned that >50% of my self-rolled processes are now covered by superpowers.
I no longer trust gh stars, can anyone chime in? Is superpowers now truly adopted?
If it is truly valuable, why hasn't Boris integrated the concepts yet?
I just removed superpowers from my own setup. In my opinion, given the quality of the planning modes in both Claude Code and Codex, superpowers was really just slowing things down and burning more tokens than vanilla.
I used superpowers - but it burns way more tokens for basically the same outcome as a single line that states
"Please do planning and ask any required questions before implementing.
[my prompt]"
On the latest models and with a decent harness, the planning modes are quite good, and the single sentence telling it to ask you questions lets the model pick the right thing to ask about, instead of wasting a bunch of time/tokens on predefined skills that try to force basically the same result.
It does introduce a second set of required interactions, but you can have another agent be your "questions answerer" if you need it (result quality goes down a bit vs answering myself, but still quite good, especially if you spend a bit of time on the answerer prompt)
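For what it's worth, a rough sketch of what such an answerer prompt might look like (wording invented, adjust to taste):

```
You answer a coding agent's clarifying questions on my behalf.
Project context: [short summary of the project and its constraints].
Prefer the simplest option that satisfies the stated requirements.
If a question cannot be answered from the context above, reply exactly:
"escalate to the human".
```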
Basically - things are moving fast enough I'm not convinced buying into superpowers/agentskills/[daily prompt magic beans]/etc tooling really makes sense.
I'd stick to the defaults in the harness for most cases, and then work on being clear with the ask.
I adopted superpowers, but then adapted it. I've changed some things, added some things. I suspect that my set of agent skills is probably overlapping with OP's by quite a lot now.
I also found that I have different skills for different tasks; at work security is a huge concern and I over-emphasise security in the skills. At play I'm less bothered about security and so the skills I've written to help me build stupid one-shot exploratory websites are less about security and more about refactoring and exploring concepts.
I've used it off and on over the last month or so. For more complicated tasks (30+ minutes) it works well, and seems to replace a lot of prompting that I'd normally need to do (e.g. asking questions about requirements, creating specs and implementation plans, staying on task). For simple tasks, it tries to do too much and gets in the way.
> I would love to know how many people are actually using superpowers.
I use them on and off. Also Get Shit Done and Compound Engineering. I got the best results with Compound Engineering, but it burns tokens like crazy, especially in the review phase, where it does reviews with 5-12 agents in parallel - and I like to do a lot of reviews of both the plan and the documentation and code.
For some lighter tasks, built-in Claude Code skills like plan mode are enough.
A lover of Superpowers here; I've been using it for over two months now.
It allows you to explore the problem space upfront, it questions your assumptions, asks more probing questions to confirm what it’s found in the code, and by the time you’re ready to implement, it knows exactly what needs to happen.
This kind of "overprompting" is one technique that even the best skills/agents use to compensate for under-invocation, which happens when more demure advisory language tends to be rationalized away by LLMs.
It shouldn't be your default, but should absolutely be tried when your skill/agent test suite displays evidence that it's not being reliably invoked without it.
> This isn’t a coincidence. It’s the same SDLC every functioning engineering organisation runs, just in different vocabulary. [...] Amazon calls it the working-backwards memo and the bar raiser. Every healthy team has some version of this loop.
This (sdlc == working backwards & bar raiser) is so horribly wrong that I hope it was an LLM hallucination.
In general, I'm starting to see these agent scaffolding systems as an anti-pattern: people obsess over systems for guiding agents and construct elaborate Rube Goldberg machines, and then others cargo-cult them wholesale, in an effort to optimize and control a random process and minimize human involvement.
The problem is it's so rarely A/B tested, definitely not at scale. An engineer who writes all these my-workflow-but-for-agents skills gets a good outcome, while also seeing affirmations that the agent followed the prescribed processes - and that is considered a victory. In reality the outcome could've been just as good if they fed Claude a spec + acceptance criteria, or even a basic prompt for the simpler tasks.
But I don't expect anyone to ever use my stuff. It's complicated as hell. But it's for me, and it works without me having to remotely think about the complexity.
This is similar to how we collectively approach Taylorism, isn't it? And the world favors capitalism, for which Taylorism makes a handy scaffolding.
All of these articles about setting up the perfect agent environments with skills, plugins, MCP servers, markdown files, etc. etc. reminds me so much of the culture around setting up the perfect "productivity stack". You need the perfect note-taking app, ticketing app, calendar integrations, yada yada before you can really do anything meaningful. The reality is that you're going to get beat by someone with a few things written down on a piece of paper who is just getting stuff done.
I'm notorious for taking poetic license with naming—that's how we end up with `class Escutcheon`, or variables `recto` and `verso` where applicable in eg PDF generation.
But as much pleasure as I derive from novelty and specificity, my colleagues have oft expressed perplexity—whereas the terms which LLMs produce hew closer to the manifold (by definition!) and raise fewer eyebrows.
The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.
If the LLM fails, either you didn't describe your outcome sufficiently, or it misinterpreted what you said, or it couldn't do it (rare).
Common errors should be encoded as context for future similar tasks, don't bloat skills with stuff that isn't shown to be necessary.
I agree that many skills are overblown and unnecessary. But there's a lot of value in giving AI the right process. See how much more effective Claude can be for moderate or large changes when using the superpowers skill.
A skill is just reusable/shareable context. It's just text, really. It's useful for things like documentation on how to use an API (this works better than MCP in my opinion), or a non-consensus way of doing something. For example, you can use remotion to generate video. There are useful remotion skills that allow you to reliably generate specific types of videos - captions of a certain style, for example.
If there is anything we have learned in decades of software engineering, it's that "a clear outcome" is not easy to describe. In many cases, it's impossible unless people from 4 different domains collaborate. That's why process matters. It allows software to be built in a "semi-standardized" way, so that iterations can get us closer to the expected outcome, which might emerge over time.
Yes, not everything I use LLMs for is going to have the same level of ambiguity or complex requirements. Optimizing by choosing to skip over parts of the process is exactly what Addy is talking about in this article.
> The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.
This is not true for anything complex. They’re instruction followers, of which task completion is just one facet.
They’re also extremely eager to complete tasks without enough information, and do it wrongly. In the case of just describing task completion, despite your best efforts, there are always some oversights or things you didn’t even realize were underspecified.
So it helps a lot to add some process around it, eg “look up relevant project conventions and information. think through how to complete the task. ask me clarifying questions to resolve ambiguities. blah blah”. This type of prompt will also help with the new Opus 4.7 adaptive thinking to ensure it thinks through the task properly.
Agreed, and further, I'd argue the OP's division of LLM instructions into either process or outcome specification is a false dichotomy. My agentic process specification is about automatically specifying the outcomes that I would otherwise repeatedly have to tell the LLM to consider, like making sure test coverage is maintained, or that decisions are documented on the original GitHub issue. Or it's about correcting common failure modes, like when the agent spends an enormous amount of time running repo-wide tests while debugging a focused change, because the agent doesn't consistently optimize around time-to-implement as an outcome. Arguably part of addressing those failure modes boils down to pure process in the sense that I specify a logical order for achieving the outcomes, e.g. creating a plan before implementing. But that is mostly to organize approval gates for my convenience, rather than structuring the agent's work per se.
That seems a bit reductive. Even with humans, there’s a range of interpretations and ways that something can be built or a task completed. Engineers remember stuff so you don’t have to keep repeating yourself. Skills are a way to describe your outcome without similar repetition.
This seems like common sense but it does not work in practice.
Prompting is just the first part. To get the outcome, you need other systems to steer the agent as it gets things wrong. Proper deterministic tests work. But there is also stuff that needs to happen during the LLM execution, like cycle detection etc. All of this adds up.
You cannot just prompt an LLM and hope for a good outcome. It might work in small isolated scenarios but it just does not work consistently enough to call it reliable.
Without further guardrails enforced by the process or the harness, LLMs do not have sufficient capabilities to complete a task up to a certain standard.
I have written zero skills, so I'm not sure what's normal. I counted the words in a couple of them and they seem to be around the 2k range. So 5 skills would be around 10k. Even at a small LLM context of 128k, that's still around 10%. And for a 1M context window like the big ones, it barely registers.
I quickly skimmed and it looks like at least a few of them are intended to be more like system prompts for a tightly scoped sub-agent than a skill as such. I agree, I wouldn't want to use a lot of these in a longer-running work session.
I have been successful with short and focused skills so far. I treat them as a reusable snippet of context, but small ones. For example a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually provide the agent instructions, they merely contain useful contextual information that the agent can choose to pull in if needed.
Even having too many skills can be an issue because the list of skill names and their descriptions all end up in the context at some point.
> it would only take a couple of these to really fill the context a lot.
Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without 1000s of skills.
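To make that concrete, here is a minimal sketch of a skill file (name and wording invented for illustration): only the front-matter between the `---` markers is preloaded; the body below it is only read when the skill is actually invoked.

```
---
name: run-tests
description: How to run and interpret this project's unit tests. Use when asked to verify or test a change.
---

Run `pytest -q` from the repo root. On failure, re-run only the failing
tests with `pytest -q -k <test_name>` before re-running the full suite.
```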
The reason they are long is that these skills are produced mostly by Claude Code and Opus, and no sensible human will read these files, let alone build a mental model around them. There are just layers of assumptions that this works - when in reality it doesn't, and it is wasteful.
Here is a fun experiment.
Ask any LLM to write something vaguely familiar. For example, ask it to "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means to write an unimportant lie.
So there is compression. You can express an outcome in just 3 vague tokens without going into detail about what exactly a Fibonacci sequence is.
That should be enough to understand that the length of the prompt does not matter. What matters is the right words, frequency, and order. You can write a two-page prompt or a two-sentence prompt and both can have the same outcome.
I just tried it with Gemini Pro. I think this answer is about as good as you can expect for such an ambiguous question.
Write a fib
Since "fib" can mean a couple of different things, I've got you covered for both!
1. A Little Lie (A Fib)
"I'm actually typing this to you from a sunny beach in the Bahamas, sipping a piña colada." (Since I'm an AI, that is definitely a fib!)
2. The Fibonacci Sequence
If you meant the classic programming exercise, here is a Python function...
I stand to be corrected. Though I tried again just now and this is what Gemini Pro produced:
> I'm assuming you mean a Fibonacci sequence generator! I'll write a Python script that includes both an iterative and a recursive way to generate Fibonacci numbers.
In my opinion, if a tool that's designed to be an answer machine had to give exactly one response to "write a fib", the correct choice is the Fibonacci sequence. You're probably underestimating how many programming students might type a query like that in.
If you want a lie, then the normal grammar in English is to say "tell a fib". I bet every LLM you test that on will respond by telling you a small lie, or at least note the ambiguity and then say it's going to revert to Fibonacci because that's more in line with what it's designed to do.
I'm working on reading through Agent Skills; it seems we've converged on a lot of the same points, and I'd never seen it before, so I'm trying to get an understanding of it.
Edit 1: I don't like all the commands. I just rely on a single router to automatically decide what I want, and that feels like the most reasonable way for me to communicate with it.
I don't want to remember things. And that's the way for me to scale the number of skills and activities. I don't have to think about them.
I personally wouldn't call theirs an intelligent router. They are dancing between a few different skills. We have extremely different setups there.
But of course, I'm using way more context to get it done. I'm even sending it out to Haiku to build the route choices.
I choose to use tokens to make things better for myself, not everyone would make the same choice, so I certainly see why they are using a few skills, and composing them.
Edit 3: This is much easier for a user to wrap their head around because there's much less of it.
I am only focused on the best improvements I can make that show value for my use cases. This is straightforward to reason about.
This seems like a nice way to get the best concepts for people trying to understand them. I commend them for a clean, simple approach.
Edit 4: Yeah, I think there are some things I can learn from them which is always good.
I especially like simple decisions like collapsing the install details for each harness in the readme.
I'm going to read over the entire thing and look for opportunities to improve my stuff.
We are all working together, learning, testing, building, trying to find the best way to implement things.
Everyone who writes this kind of stuff skips the boring parts: science and engineering.
Yep, benchmarks, comparisons of with/without, samples of generated code with/without. This kind of stuff matters, and you may be making your agent stupider or getting worse results without real analysis.
Also this prose reads like the author has drunk the Google kool-aid and not much else.
The trick is to not burn too much time worrying about the perfect skills and this and that. I see a lot of people filling skills with LLM junk, or overdoing rules that start confusing the LLM. Just try vanilla; see something you don't like? Then you make a skill and funnel the LLM to use it for the style of task it's working on. E.g. database work is a mixed bag with LLMs; they tend to do it in totally different styles if you leave them unconstrained.
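As a rough illustration of that kind of funneling (contents invented for the example), such a database skill can be little more than a handful of hard constraints:

```
---
name: db-changes
description: Conventions for schema changes and queries in this repo. Use for any database work.
---

- Every schema change is a new migration file; never edit an applied migration.
- Each migration ships with a matching rollback.
- No raw SQL in application code; go through the repository layer.
```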
Agents are unbelievably useful at helping take over and refactor messy codebases though. I just started taking over this monstrous nightmare of a codebase - truly ancient code, the bulk of it written over 10+ years ago in PHP. With the use of Claude / Codex I was able to port over the vast majority of the existing legacy storefront and laid the groundwork for centralizing the 10-20k LOC mega-controller logic into reusable repo/service patterns.
Just shit that would've taken years previously is achievable in under a month.
Everything needs an element of human touch; mostly I would just run things vanilla. But if, let's say, I'm creating backup scripts, I meticulously outline the plan.
I treat it like Minecraft automation - it's just for funsies and to pass the time, haha.
I don't think agentic workflows are there yet, but implementing skills to manually call and use while working side by side with an AI is definitely nice - our company is focused a lot on sandboxing right now and on having safe skills.
I don't think we've gotten feature development right yet, but the review skills + Grafana skills they wrote have been pretty solid.
I'm a bit curious about these takes. Arguing in good faith - is the general assumption that people who use AI/agents/harnesses don't ship features? We've been all-in on Claude Code since ~Septemberish, and have been able to successfully track the boost - as in, the features that we ship that get used in production. Both on the infrastructure side and in business logic implementations, frontend and backend.
I don't think people are wasting too much time. Although I do agree most of these posts are just bs, including this one. But AI development has been a thing across a lot of companies in the world.
Ignore the people who haven't found out how to use AI yet or don't want to.
AI is a powerful tool. Depending on what I need, I use ChatGPT, in-IDE agents, or a platform like Devin.ai.
I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark and I scale back and have it do a specific piece and I'll do the rest.
Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.
I've solved customer issues in seconds or minutes with it vs hours.
I worked on a banking app with deeply domain-specific data issues. AI was not very helpful on that team. My current work on consumer web apps means my problems are more mundane and AI is a big accelerant.
Being an engineer also means solving problems with the right tools and the right tradeoffs. It's why I use an IDE vs Notepad, why I use ChatGPT for one-off scripts and "chat", and why I use agentic workflows for big, repetitive, or "boring" low-stakes tasks.
Not the same person, but it really depends on projects. E.g. I have some projects that involve working to large specification sets where we can measure rate of delivery against the spec. If your spec is fuzzy and incomplete, then it gets hard, but then you have little insight into human productivity for those projects either.
For my team, it has been easy. We deal with infrastructure for the entire org, so tickets get created for every request. We also have our own backlog for internal projects, so we can see burn rate, etc. The team hasn't changed, and a lot of similar/same tasks that used to take half a day have been completely automated, to the point where we just do PR review after an initial ticket is created by other teams.
There are a lot of little things we've tracked, and it's just faster to implement things now. To be fair, everyone on my team has a decade+ of professional experience (many more non-professional), and we understand the limitations of AI fairly well.
What is your definition of faster to implement? Is it producing a plausible implementation, or is it faster at producing a correct and high quality implementation? Are you including time spent refactoring and fixing bugs in your metrics? If not, I think you are tracking a gut feeling rather than cold hard facts. I’m not saying this is easy to track, just saying that it’s hard to know for sure that you are really more productive with AI.
> to be fair, everyone on my team has a decade+ of professional experience (many more non-professional), and we understand the limitations of AI fairly well.
I see this appear quite often in discussions on productivity, to the point that one might conclude it's central to the productivity gains.
I can take on a slightly weaker form in good faith: professionally it’s a non-starter until private, open source inference can be self-hosted and the ROI is clear enough to invest in that.
And on the ROI side, trying things out regularly, I haven't seen the positive ROI in the limited time I've dedicated to exploring the tools. I've restricted experimenting to 4 hours per month, because spending more than 2.5% of the month chasing productivity improvements that realistically seem to be 10-20% will quickly eat into those gains. After accounting for token costs, it ends up being a wash.
The poster provided numbers and thresholds they used to evaluate the utility of a business product.
With infinite time anything is possible, but since we live within constraints, discussing practical, real world thresholds or evaluation methods is a worthwhile use of our time.
I think I should also clarify: I work on the training of encoder-decoder transformer models. Before the ChatGPT era I worked on encoder-only transformer models. I'm not unfamiliar with the literature and general discourse. I just do not use LLMs for programming.
I suspect some devs don't want AI to succeed - and it's understandable, as it will fundamentally change the way they work, and possibly put them out of a job as we need fewer developers.
So they convince themselves AI can't work because they don't want it to.
I can understand skepticism to a degree, and even fundamentally believing that AI is bad for all sorts of reasons, but I am becoming more and more perplexed at the certainty behind statements like this one. How are you so certain that AI development is this doomed? It just hasn't matched my experience at all, and I wonder what your experience is that has driven you to this level of certainty about the certain doom of AI coding?
Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things and feel confident that you have explored the space enough to come to such a strong conclusion?
I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments that have changed the way I do what I do numerous times. The more experience and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested with real world, production, workloads.
You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.
I don't know any serious engineers that are doing real work with AI agents. I know some that are building features for web applications and just punching a clock, but I don't think that constitutes real work or provides much value to the world.
I like thinking, solving problems, and typing out code myself. I'm going to keep putting tons of care into my craft, and I promise I'll have more impact than the guy running 3 agents to build the 500th version of some web concept.
Rolex has a much bigger impact on the world than white label mass manufacturers in China.
It is real work; it's just that 90% of it is either net negative for society or provides neutral value. Most web applications that are piling on features now because they have agents are piling on features that we never needed in the first place, hence why they weren't prioritized previously. Junk junk junk.
Initial drop, as people learn to use the tools and while they keep babysitting their harnesses. Then a significant boost once people get used to running the agents in the background, especially once they start running multiple sessions in parallel. I'd say you need a ~6 month push of getting people trained if they are not used to this way of working, and of customising setups etc. for your organisation, and then you start seeing significant payoff.
I've tried these larger agent skillsets in the past and felt it was a waste of time because they were just doing too much. Just like with vim, it's often better to pick and choose from the community instead of installing skills like they are an IDE. Skills are way too personal because every dev and dev team is different. So it's better to treat these as a reference for your own config rather than bulk-installing someone else's config.
Same for MCPs and system instructions: there are a lot of people that just install everything without understanding it, cluttering their context, wasting >50k tokens on tools they don't need, and then complaining that they have to pay >$100 per month because they reach their limits too fast.
Because we've been automating large parts of our former jobs for decades. Otherwise we'd all be trying to build things in the least efficient way possible to maximize how long the job takes, which IMO isn't a great idea.
Humans have been minimizing how much work is needed to get a certain level of output for as long as we can track. It's civilization. Should we go back to farming by hand with hoes, to maximize labor used? Go back to streetlights that are individually lit? The society that falls behind on automation becomes poorer, and eventually just dies, as even the people born there tend to choose to leave for higher-productivity places. It happened to Eastern Europe, it happens to the Amish. To any poor society which gets emigration. Doing more with less has always been exciting.
Because usually the people who lose their jobs are people who do not adapt to the market.
Right now it's not clear in which direction everything is evolving, and that's why people experiment with handing all their data to random agents, figuring out how to store and access context, re-using prompts, and other attempts to harness this tech. Most of these will maybe be useless in a year, as they might be deeply integrated into the next wave of models, but staying on top of the development has always been part of the fun of working in this field.
People are building bots to do the most legible thing possible, which is a feature in X amount of time. But it doesn't matter if the bottleneck is the human thinking time required to output quality code rather than the X amount of code written.
I am so much faster with the bots. If you're not faster with the bots then either you write very very little code, or you're doing it very wrong. Tactically they outsmart me 10-100x if you account for the write speed. Even if you just consider the knowledge of languages, libraries, patterns they clearly outperform me. Strategically I do not trust them at all, poor things suck at it, mainly because they always try to take the shortest possible path to the current destination.
And if you think that your personal protest against the automation will in any way affect the direction in which the industry goes then you're delusional. You would have to start something like a political party and collect way more people.
Wake me up when LLMs help me write better code and let me understand the codebase, and not before. Not faster, not more productive, but a more comprehensible codebase that I can reason in my own head.
Otherwise, if they write so much better code, then it's pointless to have a human in the loop.
You will develop quite a lot of illnesses sleeping for this long, but your choice I suppose. Who knows, maybe it happens as soon as next year. I would strongly suggest living a life, any life really, instead of waiting like that.
I don't understand this thinking as a computer programmer. My whole life has been about getting a computer to do work so humans don't have to anymore. Every single piece of software written is supposed to take away work from someone.
Do you feel this way about every automation you create? I do know some old school sys admins who felt this way about a lot of infrastructure automation advancements, and didn't like that we were creating scripts and systems to do the work that used to be done by hand. My team created an automated patching system at a job that would automatically run patching across our 30,000 servers, taking systems in and out of production autonomously, allowing the entire process to be hands free. We used to have a team whose full time job was running that process manually. Did we take their jobs by automating it?
Sure, in a sense. But there was other work that needed to be done, and now they could do it.
The whole reason I like programming and computers and technology is precisely because it does things for us so we don't have to do it. My utopia is robots doing all the hard work so humans can do whatever we want. AI is bringing us one step closer to that, and I would rather focus on trying to figure out how we can make sure the whole world can benefit from robots taking our jobs (and not just the rich owners), rather than focus on trying to make sure we leave enough work for humans to stay busy doing shit they don't actually want to do.
The problem is that people are holding AI wrong. They're using AI as the engine of their solution without realising the true solution:
Use AI to create the engine. After that running the engine itself costs as much as keeping the computer running it online. No API costs for 3rd party LLM providers needed.
It's likely the people that were not good developers, who suddenly got accelerated "to the top", that seem the most enthusiastic for it. All of the good devs I know have been a bit more cautious on the uptake.
I think it's more subtle than that. There are a lot of measures of what a 'good developer' is, and one of them is 'shipping things'. AI is specifically accelerating that part of the industry - it's much easier to ship code faster now. If you're in a domain that doesn't need quality (easy horizontal scaling, bugs rarely have a critical impact, customers are relatively loyal) then AI is proving that shipping features is more important than code quality.
If you're in a part of the software industry that needs well-optimized and bug-free code then it's less useful. The problem for devs is that those parts of the industry are much smaller.
Funny, I know quite a few extremely talented programmers who cautiously approached the topic, and found that, with proper use, they've found LLMs to be extremely useful. Just a matter of understanding where the boundaries are, and using them responsibly. It's not a magic genie, it augments their existing skill.
Survival instincts. If everyone and everything around you (your job included) is shouting "use AI" it's difficult to take any stand or introduce caution. I think it's less about being excited, more about hoping to not miss the wave and get "left behind."
I think both groups (pro vs anti) will be a bit surprised when the long-term data shows productivity gains were modest on average and producing quality software still needs care/human attention, even with the support of advanced, frontier models. Same job as before, now we just have a power drill instead of a screwdriver. Some people build houses that stand for hundreds of years, others less so.
We've been automating stuff for 60 years, and it only leads to more automation.
At the end of the day, the more automation, the more people you need making sure things work.
There's always going to be a minimal bottleneck for how much an engineer can oversee if they need to do zero implementation.
We're not as far from that point as people think.
Most languages most things are developed in are 10x more expressive than languages of yore.
Rust has a bad reputation for being hard, but it is actually quite expressive.
Less than 50% of what engineers do is code.
IBM was famous in the early 2000s for the average dev writing one line of code per day.
We're just going to move to a world where the average dev spends <10% of their time coding, but there's likely to be x times more work, so it mostly evens out.
There are so many ways, many redundant, to set up agents for software development that, beyond personal/team/org needs and tastes, I need to look into setting up some benchmarks to evaluate which setup is optimal, or whether the differences are even worth it.
Recently I got (enterprise) access to the latest ChatGPT version with the ability to write skills to automate repeatable tasks. Without any prior knowledge I just started tinkering, and now, after creating and testing multiple skills in a real business environment, I can confidently say that writing a good skill is a skill in itself. As the author mentioned, it's not an essay but a specific set of instructions, organised in steps and in a concise manner.
I wish this fucking meme of "post the prompt" would die. Very little work is one-shotted, very little has a singular "the prompt", most is iterated until it's close to the vision of what the author actually set out to write.
Snake oil. Good to read for sure. Seems all plausible too. But snake oil nevertheless.
Here's why: the slot machine can drop any hard requirement that you specify in your AGENTS.md, memory.md, or your dozens of skill markdowns. Pretty much guaranteed.
These harness approaches pretend that LLMs are strict and perfect rule followers, and that the only problem is not being able to specify enough rules clearly enough. That's a fundamental misunderstanding of how LLMs operate.
That leaves only one option, not reliable but more reliable nevertheless: human review and oversight. Possibly two of them, one after the other.
Everything else is snake oil, but at that point you also realize that the promised productivity gains are snake oil too, because reading code and building a mental model is way harder than having a mental model and writing it into code.
Everything you say is entirely possible, and in theory I agree with you.
However, I have been using spec-kit (which is basically this style of AI usage) for the last few months and it has been AMAZING in practice. I am building really great things and have not run into any of the issues you are talking about as hypotheticals. Could they eventually happen? Sure, maybe. I am still cautious.
But at some point once you have personally used it in practice for long enough, I can't just dismiss it as snake oil. I have been a computer programmer for over 30 years, and I feel like I have a good read on what works and what doesn't in practice.
We can build all the scaffolding we want around them, but I assure you that LLMs not being perfect rule-following machines is the fundamental problem here, and that will remain.
Give it a few more months and I'm sure you'll see some of what I see if not all.
I'm saying all the above having tried and tested all sorts of systems with AI, which leads me to say what I said.
I have been doing this for 6 months or so now, and I am not sure that even if you have a lot more experience than me that it would make your assessment more accurate, since that just means you have more experience with prior generations of the models. What I have experienced is that the AI has been getting better and better, and is making fewer and fewer mistakes.
Now, part of that is my advancements as well, as I learn how to specify my instructions to the AI and how to see in advance where the AI might have issues, but the advancements are also happening in the models themselves. They are just getting better, and rapidly.
The combination of getting better at steering the AI along with the AI itself getting better is leading me to the opposite conclusion from yours. I have production systems that I wrote using spec-kit, that have been running in production for months, and have been doing spectacularly. I have been able to consistently add the new features that I need to, without losing any cohesion or adherence to the principles I have defined. Now, are there mistakes? Of course, but nothing that can't be caught and fixed, and not at a higher rate than traditional programming.
I think it depends on your goals and also your preferences/expectations how your experience with LLMs goes. I don't mind if they hallucinate. Even if I have a mental model of the code, I won't write it perfectly myself either.
The only downside I see is getting out of practice, which is why I don't use it for my passion projects. Work is just work, and pressing 1 or 2 and having "good enough" can be a fine way to get through the day. (Lucky me, I don't write production code ;D... goals...)
Humans also regularly drop hard requirements you specify, and similarly require review. Nevertheless we manage to increase the reliability of human output through processes and reviews, and most of the methods we use for harnesses are taken from experience with how to reduce reliability issues in humans, who are notoriously difficult to get to deliver reliably.
The primary way to increase reliability is to automate. Instead of humans producing some output manually, humans produce machines which produce that output.
I've seen a disturbing trend where a process that could've been a script or a requirement that could've been enforced deterministically is in fact "automated" through a set of instructions for an LLM.
Sure, when that is possible. However, there are lots of processes we don't know how to automate in a deterministic way. Hence the vast amount of investment in building organisations of people with mechanisms to make people's output more reliable through structure, reviews, and so on.
Large parts of human civilization rests on our ability to make something unreliable less unreliable through organisational structure and processes.
We resolve that through liability, penalties, trust, responsibility, review and oversight.
At the end of the day, if I am spending X$s for automation, I want to be able to sleep at night knowing my factory will not build a WMD or delete itself.
If its simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
Liability, penalties, trust, and responsibility are means we use to try to influence the application of the processes that do. They do not directly affect reliability. They can be applied just as much to a team using AI as one that does not.
Review and oversight does address reliability directly, and hence why we make use of those in processes to improve the reliability of mechanical processes as well, and why they are core elements of AI harnesses.
> If its simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
You can ask the same thing about all the supporting staff around the experts in your team.
> There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
Only teams without mature processes are not doing that for AI today.
Most of the deployments of AI I work on are the outcome of comparing it to alternatives, and often are part of initiatives to increase the reliability of human teams just as much as increasing raw productivity, because they are often one and the same.
> Liability, penalties, trust, and responsibility are means we use to try to influence the application of the processes that do. They do not directly affect reliability. They can be applied just as much to a team using AI as one that does not.
Yes and no. see next point.
> You can ask the same thing about all the supporting staff around the experts in your team.
I have a good idea of the shape of errors for a human-based process, the costing, and the type of QA/QC team that has to be formed for it.
We have decades, if not centuries of experience working with humans, which LLMs are promising to be the equivalents/superiors of.
I think you and me, would both agree with the statement "use the right tool for the job".
However, the current hype cycle has created expectations of reliability from LLMs that drive 'Automated Intelligence' styled workflows.
On the other hand:
> part of initiatives to increase reliability of human teams
is a significantly more defensible use of LLMs.
For me, most deployments die on the altar of error rates. The only people who are using them to any effect are people who have an answer to "what happens when it blows up" and "what is the cost if something goes wrong".
(There is no singular thread behind my comment. I think we probably have more in agreement than not, and it's more a question of finding the precise words to declare the shapes we perceive.)
> (There is no singular thread behind my comment. I think we probably have more in agreement than not, and it's more a question of finding the precise words to declare the shapes we perceive.)
I moved this up top, because I agree, despite the length of the below:
> However, the current hype cycle has created expectations of reliability from LLMs that drive 'Automated Intelligence' styled workflows.
Because for a lot of things it works. Today. I have a setup doing mostly autonomous software development. I set direction. I don't even write specs. It's not foolproof yet by any means - that is on the edge of what is doable today. Dial it back just a little bit, and I have projects in production that are mostly AI written, that have passed through rigorous reviews from human developers.
The key thing is that you can't "vibecode" that. I'm sure we agree there.
There needs to be a rigorous process behind it, and I think we'll agree on that too.
Those processes are largely the same as the processes required for human developers. Only for human developers we leave a lot of that process "squishy" and under-specified.
We trust our human developers to mostly do the right thing, even though many don't, and to not need written checklists and controls, even though many do.
What is coming out of this is the start of systems that codify processes that are very much feels-based with human teams. Partly because we still need to codify them for AI, but also because we can - most people wouldn't want to work in the kind of regimented environment we can enforce on AI.
Sure, there is a lot of hype from people who just want to throw random prompts at an LLM and get finished software out. That is idiocy. Even a super-intelligent future AI can't read minds.
But there are a lot of people building harnesses to wrap these LLMs in process and rigor to squeeze as much reliability as possible from them, and it turns out you can leverage human organisational knowledge to get surprisingly far in that respect.
> Because for a lot of things it works. Today. I have a setup
> There needs to be a rigorous process behind it, and I think we'll agree on that too.
I would simplify it to: “I have a setup” is the part that is doing the actual heavy lifting.
From my very unscientific survey / extensive pestering of my network, the only people getting lift out of AI are people with both domain expertise/experience and familiarity with the tooling.
The types of automation I see people wanting, though, are fully automated customer support systems, fully automated document review - essentially white-collar dark factories. (Hey, that's a good term.) The need is for a process that is stable and behaves the same way every time.
It seems actual AI use cases are more like sketching - if you have enough skill, you can make out that the rough sketch is unbalanced and won't resolve into a good final piece. Non-experts spend far more time exploring dead ends because they don't have the experience.
In my opinion, it’s a force multiplier for experts or stable processes, and it’s presented as Intelligence.
I feel your examples fit within these boundaries as well as the ones you have described.
It's strange to see software engineers using skills - i.e. human descriptions of small scripts - instead of scripting things directly. Often there have been CLIs / tools / libraries to do what a skill does, for many years. Maybe it's a culture issue: people who enjoy automation / devops / predictability will naturally help themselves, but other people just want to "delegate" and be done without trying.
When people do that they are using skills wrong. The best way to use a skill is as a means to give targeted instructions on how to make use of cli / tools/ libraries, with the skill just covering the "squishy bits" that aren't easily encoded into something deterministic.
I can see why this would seem to be “snake oil” logically. However, this approach does work in reality. Your comment just shows that you seem inexperienced with using generative AI.
I hope the only reason people are pretending these markdown suggestions are a "workflow" is fear that a more structured approach will be obsolete by the time it's polished. I can't imagine the pace of innovation with the underlying models will stay like this forever.
I hope to see harnesses that will demand instead of ask. Kill an agent that was asked to be in plan mode but did not play the prescribed planning game. Even if it's not perfect, it'd have to be better than the current regime when combined with a human in the loop.
Don't let the perfect be the enemy of the good. Of course we know the AGENTS.md and skills aren't 100% effective. But no, it doesn't mean that they're 0% effective.
Snake oil may be a bit strong, because snake oil never works (except maybe as a placebo?) whereas anything with an LLM, even though stochastic, has a pretty high chance of working.
> ... you also realize that promised productivity gains are also snake oil because reading code and building a mental model is way harder than having a mental model and writing it into code.
Not really, though it depends on the code; reading code is a skill that gets easier with practice, like any other. This is common any time you're ever in a situation where you're reading much more code than writing it (e.g. any time you have to work with a large, sprawling codebase that has existed long before you touched it.)
What makes it even easier, though, is if you're armed with an existing mental model of the code, either gleaned through documentation, or past experience with the code, or poking your colleagues.
And you can do this with agents too! I usually already have a good mental model of the code before I prompt the AI. It requires decomposing the tasks a bit carefully, but because I have a good idea of what the code should look like, reviewing the generated code is a breeze. It's like reading a book I've read before. Or, much more rarely, there's something wrong and it jumps out at me right away, so I catch most issues early. Either way the speed up is significant.
Indeed, and it is a complicated problem to solve. A GUI or CLI can hide footguns or make them less likely to be misused. But an AI agent is perfectly happy to use a wrecking ball to drive a nail, without a second thought or confirmation.
Even then. I don’t have an example off the top of my head but even perfectly clear sentences can lead the agent to strange places. Even between humans, miscommunication is easy, but then anyone sensible would ask for confirmation if their interpretation is weird. But the LLM very rarely questions the user.
I don’t think it’s fair to blame the user here. The tool must be operated by normal users.
for MVPs, mock-ups, prototypes, or in the hands of an expert coder. You can't let them go unsupervised. The promise of automated intelligence falls far short of the reality.
Not only "has a high chance of working", but you can pay more to make it more reliable. It really is striking trying to run a harness openClaw thing on a smaller or quantised model, really makes you realise how much we take for granted from SOTA models that was totally impossible just a year ago, in terms of complex, generally reliable tool use.
I think the placebo effect might be a decent comparison. It works most of the time, and you don't worry about it as long as you fully believe in its efficacy. However, once the illusion is shattered, the positive effects are diminished, and you can never fully trust the solution again.
All this said, I quite like the mental model of documenting a simple process, and I suspect our future ai overlords will find it useful that I have a series of md files that outline my preferences and processes for certain tasks.
I am not however going to share any of this with work colleagues and make myself redundant.
> The slot machine can drop any hard requirement that you specify in your AGENTS.md, memory.md or your dozens of skill markdowns. Pretty much guaranteed.
Indeed. That said, I’ve had some success with agent skills, but I use them to make the LLM aware of things it can do using specific external tools. I think it is a really bad idea to use this mechanism to enforce safety rules. We need good sandboxing for this, and promises from a model prone to getting off the rails is not a good substitute.
But I have taught my coding agent to use some ad hoc tools to gather statistics from a directory containing experimental data, and things like that. Nobody is going to fine-tune an LLM specifically for my field (condensed matter physics), but using skills I can still make it do useful work. Like monitoring simulations where some runs can fail for various reasons, and each time we must choose whether to run another iteration or restart from a previous point, based on eyeballing the results ("the energy is very strange, we should restart properly and flag for review if it is still weird", this sort of thing). I don't give too many rules to the agent; I just give it ways of solving specific problems that may arise.
It helps if you hand it both to the original agent as strong guidance and then to an adversarial agent as a quality reviewer. The adversarial agent is more likely to loop the work back if it fails the validation criteria.
I do find that just asking the same agent to do and check its own work is not particularly reliable.
This is like saying a +5 sword is useless because you still miss on a one. We've got to think about expected outcomes. Because if she's merging five solid PRs to your three, it hardly matters that she's loudly complaining about the one she saw was rubbish and threw away.
What makes this better/different than spec-kit? It seems to have a very similar philosophy. I wonder if they could work together? Or would they just be duplicative?
Well, to be fair, in e.g. Codex you can invoke a skill directly, with $my-skill, and this WILL lead to the skill being injected into the context. At that point, the LLM follows the skill as well as it follows any other part of the prompt, instructions, or context.
Skills are often invoked imperatively by the user. In cases where they are intended to be used directly by the LLM, a reference would be included somewhere else in the context, e.g.:
```
After implementing the feature, read the testing skill for instructions on how to test.
```
How do you guarantee that the LLM follows an instruction given imperatively by the user? It probably will, but this is not guaranteed behavior. Likewise, _how_ it follows that instruction is non-deterministic.
I have used superpowers for several months now and it really does help. But the 90/10 rule still applies: 10% of the time it will produce a stupid decision. So always check the spec.
> Workflows are agent-actionable; essays are not. The same is true for human teams. If your team handbook is 200 pages, no one reads it under time pressure.
Agents do read that. And actually remember it. Because it's tiny compared with the other things you are cramming into their context.
> It’s people accepting plausible-sounding justifications for skipping the parts they don’t feel like doing.
WTF? Almost always this was "skipping the parts because the deadline was 2 weeks ago". The "I don't feel like it" rationalizations are maybe 20%? Unless deadlines are rationalizations too?
Am I the only one who looks at guys like Addy Osmani and Steve Yegge - who had good reputations before LLMs - and gets the feeling they are cashing that reputation in to ride the LLM hype cycle? Or is it just a matter of professional tech talking heads moving from writing books and giving conference talks about good engineering practices to talking about the new hot topic that sells books and conference tickets?
The fundamental problem with agent skills is that they don't have a hook for one-time installation. An agent can't just be a prompt; it also has to have some way to do initial setup work.
If I have an agent skill to look up stock prices, maybe I need to set up some tools and authentication first. There's no way to express this!
What skills, mate? These are simply text files attempting to narrow down the specs, hoping that this will help the "AI" make fewer mistakes. But it is still crap, because, <drum-rolls>, it still depends on how this fits into the overall statistical model, which changes with every prompt, etc... Please stop peddling this bullshit, it does not work!
Another example of agent skills that give AI agents access to bitdrift's mobile observability platform for full-fidelity agentic investigations -- https://bitdrift.ai/
A far better approach is being precise with your prompts and, if you find a new model has any bad habits, addressing them specifically in your AGENTS.md and going on your way. If you want to throw in slop prompts, go ahead and add the massive AGENTS.md your employer gave you.
People waste too much time on this stuff. The next version could totally change how the model processes your agents.md.
Get good at prompting, use AGENTS.md as a minimal model-annoyance fixer, and reset it often (every major release).
Thing is, we have an enormous number of skill frameworks (this, GSD, spec-kit, superpowers, Compound Engineering, etc.) claiming to help with agentic coding.
And agents now have better built-in skills than they used to.
> It produces code, declares victory, and moves on.
Not when I'm in charge. It proposes changes based on my detailed instructions, I review the proposed changes, only then do I have it implement code, and then I review it again. I understand my AI agent would prefer a quicker way but for the meantime, I'm still the one in charge.
Agent skills are ways of turning over our means of (software) production to our employers while making ourselves obsolete at the same time. In the recent past, an employer had to keep a (software) professional continuously employed to maintain access to their professional skills. By transferring our skills into AI agent skills, we are basically giving away that privilege. In the near future, our employers might feel they don't need our skills anymore because they have already been captured by the AI agents. Somebody with a better grasp of economic history should be able to explain this using the analogy of what happened during the industrial revolution and how the workers got screwed over.
If Addy reads this, how do you pitch this vs. Superpowers? https://github.com/obra/superpowers
To give back as much as I can, I use the two built-in CC review processes when appropriate. But, those only do "is this PR good code?"
Far too late did I finally roll my own custom review skill that tests: "does this PR accomplish what the specs required?"
If I could ask for one more vanilla CC skill, it might be that. However, maybe rolling your own repo-aware skill via prompt is better?
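For anyone who wants to roll their own, a spec-compliance review skill can be quite small. A minimal sketch (the layout follows the common SKILL.md convention; the field names and steps are illustrative, not taken from any published skill):

```markdown
---
name: spec-compliance-review
description: Review a PR against the spec it claims to implement, not just for code quality.
---

1. Locate the spec or issue the PR references (specs/, the issue tracker, or the PR description).
2. Extract each requirement as a checklist item.
3. For every item, cite the diff hunk that satisfies it, or mark it MISSING.
4. Flag changes in the diff that no requirement asked for.
5. End with a verdict: SATISFIED / PARTIAL / NOT SATISFIED, plus the checklist.
```

The useful trick is step 3: forcing the agent to cite evidence per requirement, rather than emit a general "looks good".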
horsawlarway | a day ago
I used superpowers - but it burns waay more tokens for basically the same outcome as a single line that states
"Please do planning and ask any required questions before implementing.
[my prompt]"
On the latest models and with a decent harness, the planning modes are quite good, and the single sentence telling it to ask you questions lets the model pick the right thing to ask about, instead of wasting a bunch of time/tokens on predefined skills that try to force basically the same result.
It does introduce a second set of required interactions, but you can have another agent be your "questions answerer" if you need it (result quality goes down a bit vs answering myself, but still quite good, especially if you spend a bit of time on the answerer prompt)
Basically - things are moving fast enough I'm not convinced buying into superpowers/agentskills/[daily prompt magic beans]/etc tooling really makes sense.
I'd stick to the defaults in the harness for most cases, and then work on being clear with the ask.
ramoz | a day ago
marcus_holmes | a day ago
I also found that I have different skills for different tasks; at work security is a huge concern and I over-emphasise security in the skills. At play I'm less bothered about security and so the skills I've written to help me build stupid one-shot exploratory websites are less about security and more about refactoring and exploring concepts.
RideOnTime22 | a day ago
People were hyping up Oh My Opencode. When they realized it didn't lead to any significant gains in performance, they hopped on the next thing.
And when the same thing happens to Superpowers, it'll be something else they cling to, because "this time it's different".
supermdguy | a day ago
DeathArrow | 23 hours ago
I use them on and off. Also Get Shit Done and Compound Engineering. The best results I got with Compound Engineering but it burns tokens like crazy, especially in the review phase where it does reviews with 5 - 12 agents in parallel - and I like to do a lot of reviews for both the plan and documentation and code.
For some lighter tasks, built-in Claude Code skills like plan mode are enough.
alfiedotwtf | 18 hours ago
It allows you to explore the problem space upfront, it questions your assumptions, asks more probing questions to confirm what it’s found in the code, and by the time you’re ready to implement, it knows exactly what needs to happen.
Jesse should have called it the Socratic Method
esafak | a day ago
ricardobeat | a day ago
CharlesW | a day ago
It shouldn't be your default, but should absolutely be tried when your skill/agent test suite displays evidence that it's not being reliably invoked without it.
ssgodderidge | a day ago
gosukiwi | a day ago
senko | a day ago
This (sdlc == working backwards & bar raiser) is so horribly wrong that I hope it was an LLM hallucination.
In general, I'm starting to see these agent scaffolding systems as an anti-pattern: people obsess over systems for guiding agents, construct elaborate Rube Goldberg machines, and then others cargo-cult them wholesale, in an effort to optimize and control a random process and minimize human involvement.
yks | a day ago
AndyNemmity | a day ago
But I don't expect anyone to ever use my stuff. It's complicated as hell. But it's for me, and it works without me having to remotely think about the complexity.
I love that.
[OP] BOOSTERHIDROGEN | a day ago
PleasureBot | 23 hours ago
gavmor | a day ago
That being said, this post is full of reasonable assertions, so I'm looking forward to experimenting with this... whatever it is.
fragmede | a day ago
kigiri | a day ago
gavmor | 20 hours ago
But as much pleasure as I derive from novelty and specificity, my colleagues have oft expressed perplexity—whereas the terms which LLMs produce hew closer to the manifold (by definition!) and raise fewer eyebrows.
So, it has its turn.
turlockmike | a day ago
If the LLM fails, either you didn't describe your outcome sufficiently, or it misinterpreted what you said, or it couldn't do it (rare).
Common errors should be encoded as context for future similar tasks; don't bloat skills with stuff that isn't shown to be necessary.
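As a concrete illustration of encoding common errors as context (the entries here are invented examples, not from any real project):

```markdown
# AGENTS.md: recurring mistakes observed in past sessions
- Tests must run via `make test`, not `pytest` directly; the Makefile sets required env vars.
- The ORM models are generated; edit `schema/` and regenerate instead of editing `models/` by hand.
- Do not add retry loops around payment calls; idempotency is handled at the gateway.
```

Each line earns its place by having actually gone wrong before, which keeps the file from bloating.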
alexjurkiewicz | a day ago
peab | a day ago
tecoholic | a day ago
Yes, not everything I use LLMs for is going to have the same level of ambiguity or complexity of requirements. Optimizing by choosing to skip over parts of the process is exactly what Addy is talking about in this article.
stingraycharles | a day ago
This is not true for anything complex. They’re instruction followers, of which task completion is just one facet.
They’re also extremely eager to complete tasks without enough information, and do it wrongly. In the case of just describing task completion, despite your best efforts, there are always some oversights or things you didn’t even realize were underspecified.
So it helps a lot to add some process around it, eg “look up relevant project conventions and information. think through how to complete the task. ask me clarifying questions to resolve ambiguities. blah blah”. This type of prompt will also help with the new Opus 4.7 adaptive thinking to ensure it thinks through the task properly.
stult | a day ago
markbao | a day ago
tmaly | a day ago
I prefer the start small and iterate approach to arrive at a result.
Then I ask it to summarize. Sometimes after that I ask it to generalize.
_pdp_ | a day ago
Prompting is just the first part. To get the outcome, you need other systems to steer the agent as it gets things wrong. Proper deterministic tests work. But there is also stuff that needs to happen during LLM execution, like cycle detection, etc. All of this adds up.
You cannot just prompt an LLM and hope for a good outcome. It might work in small isolated scenarios, but it does not work consistently enough to call it reliable.
Without further guardrails enforced by the process or the harness, LLMs do not have sufficient capabilities to complete a task up to a certain standard.
zmmmmm | a day ago
Curious how normal that is - it would only take a couple of these to really fill the context a lot.
tecoholic | a day ago
sergiotapia | a day ago
mohamedkoubaa | a day ago
gwerbin | a day ago
I have been successful with short and focused skills so far. I treat them as reusable snippets of context, but small ones. For example, a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually give the agent instructions; they merely contain useful contextual information that the agent can choose to pull in if needed.
Even having too many skills can be an issue because the list of skill names and their descriptions all end up in the context at some point.
umeshunni | a day ago
Only skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without thousands of skills.
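Putting those two comments together: an "info" skill is cheap precisely because only its front-matter sits in context until the agent decides the description matches the task. A minimal sketch (structure per the SKILL.md convention; the content is a made-up example):

```markdown
---
name: payments-domain-notes
description: Background on our payment flow. Read before touching billing code.
---

- Charges are created in `billing/gateway.py`; webhooks reconcile state asynchronously.
- Amounts are integer cents everywhere; never floats.
- Sandbox API keys live in `.env.example`; production keys are injected by CI.
```

Until invoked, only the name and description above occupy context.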
_pdp_ | a day ago
Here is a fun experiment.
Ask any LLM to write something vaguely familiar. For example, ask it "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means to write an unimportant lie.
So there is compression. You can express an outcome in just 3 vague tokens without going into detail about what exactly a Fibonacci sequence is.
That should be enough to understand that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt and both can have the same outcome.
esperent | 23 hours ago
Write a fib
Since "fib" can mean a couple of different things, I've got you covered for both!
1. A Little Lie (A Fib) "I'm actually typing this to you from a sunny beach in the Bahamas, sipping a piña colada." (Since I'm an AI, that is definitely a fib!)
2. The Fibonacci Sequence If you meant the classic programming exercise, here is a Python function...
_pdp_ | 23 hours ago
> I'm assuming you mean a Fibonacci sequence generator! I'll write a Python script that includes both an iterative and a recursive way to generate Fibonacci numbers.
... and then wrote some python code.
esperent | 21 hours ago
If you want a lie, then the normal grammar in English is to say "tell a fib". I bet every LLM you test that on will respond by telling you a small lie, or at least note the ambiguity and then say it's going to revert to Fibonacci because that's more in line with what it's designed to do.
AndyNemmity | a day ago
I only make it for me, so it's a bit complex and targeted towards me, and what I do, but it's pretty easy to adjust things.
https://github.com/notque/vexjoy-agent
Working on reading through Agent Skills now; it seems we've converged on a lot of the same points. I'd never seen it before, so I'm trying to get an understanding of it.
Edit 1: I don't like all the commands. I just rely on a single router to automatically decide what I want, and that feels like the most reasonable way to me to communicate with it.
I don't want to remember things. And that's the way for me to scale the number of skills and activities. I don't have to think about them.
Edit 2: We have very different routers.
https://github.com/addyosmani/agent-skills/blob/f504276d8e07...
vs
https://github.com/notque/vexjoy-agent/blob/main/skills/do/S...
I personally wouldn't call theirs an intelligent router. They are dancing between a few different skills. We have extremely different setups there.
But of course, I'm using way more context to get it done. I'm even sending it out to Haiku to build the route choices.
I choose to use tokens to make things better for myself, not everyone would make the same choice, so I certainly see why they are using a few skills, and composing them.
Edit 3: This is much easier for a user to wrap their head around because there's much less.
I am only focused on the best improvements I can make that show value for my use cases. This is straightforward to reason about.
This seems like a nice way to get the best concepts for people trying to understand them. I commend them for a clean, simple approach.
Edit 4: Yeah, I think there are some things I can learn from them which is always good.
I especially like simple decisions like collapsing the install details for each harness in the readme.
I'm going to read over the entire thing and look for opportunities to improve my stuff.
We are all working together, learning, testing, building, trying to find the best way to implement things.
codemog | a day ago
Yep, benchmarks, comparisons of with/without, samples of generated code with/without. This kind of stuff matters, and you may be making your agent stupider or getting worse results without real analysis.
Also this prose reads like the author has drunk the Google kool-aid and not much else.
ai_fry_ur_brain | a day ago
nothinkjustai | a day ago
footy | a day ago
IncRnd | a day ago
footy | a day ago
bot403 | a day ago
Or maybe the only people left opposing AI are so hardcore against it they form their identity (username) around it
nothinkjustai | a day ago
wahnfrieden | a day ago
_sharp | a day ago
pantheragmb | a day ago
0000000000100 | a day ago
Agents are unbelievably useful at helping take over and refactor messy codebases though. I just started taking over this monstrous nightmare of a codebase, truly ancient code, the bulk of it written over 10+ years ago in PHP. With the use of Claude / Codex I was able to port over the vast majority of the existing legacy storefront and laid the groundwork for centralizing the 10-20k LOC mega-controller logic into reusable repo/service patterns.
Just shit that would've taken years previously is achievable in under a month.
[OP] BOOSTERHIDROGEN | a day ago
Everything needs an element of human touch, so I mostly just run vanilla things. But if, let's say, I'm creating backup scripts, I meticulously outline the plan.
c0rruptbytes | a day ago
I don't think agentic workflows are there yet, but implementing skills to manually call and use while working side by side with an AI is definitely nice - our company is focused a lot on sandboxing right now and having safe skills.
I don't think we've gotten feature development right yet, but the review skills + Grafana skills they wrote have been pretty solid.
slopinthebag | a day ago
tokioyoyo | a day ago
I don’t think people are wasting too much time. Although I do agree most of these posts are just BS, including this one. But AI development has been a thing across a lot of companies in the world.
bot403 | a day ago
AI is a powerful tool. Depending on what I need I use chatgpt, in-ide agents, or a platform like Devin.ai.
I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark and I scale back and have it do a specific piece and I'll do the rest.
Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.
I've solved customer issues in seconds and minutes with it vs hours.
I worked on a banking app with deeply domain specific data issues. AI was not very helpful on that team. My current work on consumer web apps mean my problems are more mundane and AI is a big accelerant.
Being an engineer means solving problems with the right tools and the right tradeoffs as well. It's why I use an IDE vs Notepad, why I use chatgpt for one-off scripts and "chat", and why I use agentic workflows for big, repetitive, or "boring" low-stakes tasks.
ai_fry_ur_brain | 16 hours ago
swyx | a day ago
let's get nitty gritty on this - can you say how you did this? because a lot of people think this is an unsolved problem
vidarh | a day ago
tokioyoyo | a day ago
There are a lot of little things we’ve tracked, and it’s just faster to implement things now. To be fair, everyone on my team has decade+ professional experience (many more non-professional), and we understand the limitations of AI fairly well.
djhn | a day ago
rubendev | a day ago
intended | 23 hours ago
> To be fair, everyone on my team has decade+ professional experience (many more non-professional), and we understand the limitations of AI fairly well.
I see this appear quite often in discussions on productivity, to the point that a conclusion may be made regarding its centrality for productivity gains.
raincole | a day ago
> Arguing in good faith
will be futile, unfortunately.
djhn | a day ago
And on the ROI side, trying things out regularly, I haven’t seen the positive ROI in the limited time I’ve dedicated to exploring the tools. I’ve restricted experimenting to 4 hours per month, because spending more than 2.5% of the month chasing productivity improvements that realistically seem to be 10-20%, will quickly eat into those gains. After accounting for token costs, it ends up being a wash.
theshrike79 | a day ago
You can't learn how to use _anything_ by experimenting 4 hours a month.
intended | 23 hours ago
With infinite time anything is possible, but since we live within constraints, discussing practical, real world thresholds or evaluation methods is a worthwhile use of our time.
djhn | 18 hours ago
eloisant | 18 hours ago
So they convince themselves AI can't work because they don't want it to.
ai_fry_ur_brain | 17 hours ago
wg0 | a day ago
cortesoft | a day ago
Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things and feel confident that you have explored the space enough to come to such a strong conclusion?
I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments that have changed the way I do what I do numerous times. The more experience and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested with real world, production, workloads.
You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.
ai_fry_ur_brain | 17 hours ago
I like thinking, solving problems, and typing out code myself. I'm going to keep putting tons of care into my craft, and I promise I'll have more impact than the guy running 3 agents to build the 500th version of some web concept.
Rolex has a much bigger impact on the world than white-label mass manufacturers in China.
ninininino | 17 hours ago
ai_fry_ur_brain | 16 hours ago
vidarh | a day ago
zbentley | a day ago
vidarh | a day ago
adyavanapalli | 22 hours ago
vidarh | 21 hours ago
ai_fry_ur_brain | 17 hours ago
__alexs | a day ago
lukewarm707 | 21 hours ago
i used to:
- open the browser
- google "john repo"
- find the website
- copy the repo name
- open the terminal
- cd
- git clone
- try to find the file i want
- read the whole file to find the answer
= answer
i now do:
- "john repo question" = answer
fortyseven | 19 hours ago
alfiedotwtf | 18 hours ago
Maybe the productivity we were trying to achieve was the friends we made along the way
dmix | a day ago
sunaookami | a day ago
thatmf | a day ago
Not that these or any "skills" will do that, but just in principle. This is like alienation from labor at scale.
clapthewind | a day ago
hibikir | a day ago
Humans have been minimizing how much work is needed to get a certain level of output for as long as we can track. It's civilization. Should we go back to farming by hand with hoes, to maximize labor used? Go back to streetlights that are individually lit? The society that falls behind on automation becomes poorer, and eventually just dies, as even the people born there tend to choose to leave for higher-productivity places. It happened to Eastern Europe, it happens to the Amish, to any poor society that gets emigration. Doing more with less has always been exciting.
dewey | a day ago
Right now it's not clear in which direction everything is evolving, and that's why people experiment with handing all their data to random agents, figuring out how to store and access context, re-using prompts, and other attempts to harness this tech. Most of these will maybe be useless in a year as they might be deeply integrated into the next wave of models, but staying on top of the development has always been part of the fun of working in this field.
kiba | a day ago
H8crilA | 22 hours ago
And if you think that your personal protest against the automation will in any way affect the direction in which the industry goes then you're delusional. You would have to start something like a political party and collect way more people.
kiba | 22 hours ago
Otherwise, if they write so much better code, then it's pointless to have a human in the loop.
H8crilA | 16 hours ago
cuteboy19 | a day ago
A worker is just the sum total of all work-related context. To collate, verify, and organize this context is just asking to be replaced.
yieldcrv | a day ago
cortesoft | a day ago
Do you feel this way about every automation you create? I do know some old school sys admins who felt this way about a lot of infrastructure automation advancements, and didn't like that we were creating scripts and systems to do the work that used to be done by hand. My team created an automated patching system at a job that would automatically run patching across our 30,000 servers, taking systems in and out of production autonomously, allowing the entire process to be hands free. We used to have a team whose full time job was running that process manually. Did we take their jobs by automating it?
Sure, in a sense. But there was other work that needed to be done, and now they could do it.
The whole reason I like programming and computers and technology is precisely because it does things for us so we don't have to do it. My utopia is robots doing all the hard work so humans can do whatever we want. AI is bringing us one step closer to that, and I would rather focus on trying to figure out how we can make sure the whole world can benefit from robots taking our jobs (and not just the rich owners), rather than focus on trying to make sure we leave enough work for humans to stay busy doing shit they don't actually want to do.
theshrike79 | 16 hours ago
Use AI to create the engine. After that, running the engine itself costs as much as keeping the computer running it online. No API costs for 3rd-party LLM providers needed.
dawnerd | a day ago
onion2k | a day ago
If you're in a part of the software industry that needs well-optimized and bug-free code then it's less useful. The problem for devs is that those parts of the industry are much smaller.
fortyseven | 19 hours ago
rglover | 22 hours ago
I think both groups (pro vs anti) will be a bit surprised when the long-term data shows productivity gains were modest on average and producing quality software still needs care/human attention, even with the support of advanced, frontier models. Same job as before, now we just have a power drill instead of a screwdriver. Some people build houses that stand for hundreds of years, others less so.
onlyrealcuzzo | 19 hours ago
At the end of the day, the more automation, the more people you need making sure things work.
There's always going to be a minimal bottleneck for how much an engineer can oversee if they need to do zero implementation.
We're not as far from that point as people think.
Most languages most things are developed in are 10x more expressive than languages of yore.
Rust has a bad reputation for being hard, but it is actually quite expressive.
Less than 50% of what engineers do is code.
IBM was famous, in the early 2000s, for the average dev writing one line of code per day.
We're just going to move to a world where the average dev spends <10% of their time coding, but there's likely to be x times more work, so it mostly evens out.
konaraddi | a day ago
SudheerTammini | a day ago
theahura | a day ago
petesergeant | a day ago
rossant | a day ago
wg0 | a day ago
Here's why: the slot machine can drop any hard requirement that you specify in your AGENTS.md, memory.md, or your dozens of skill markdowns. Pretty much guaranteed.
These harness approaches pretend that LLMs are strict and perfect rule followers, and that the only problem is not being able to specify enough rules clearly enough. That's a fundamental misreading of how LLMs operate.
That leaves only one option, not reliable but more reliable nevertheless: human review and oversight. Possibly two of them, one after the other.
Everything else is snake oil, but at that point you also realize that the promised productivity gains are snake oil too, because reading code and building a mental model is way harder than having a mental model and writing it into code.
cortesoft | a day ago
However, I have been using spec-kit (which is basically this style of AI usage) for the last few months and it has been AMAZING in practice. I am building really great things and have not run into any of the issues you are talking about as hypotheticals. Could they eventually happen? Sure, maybe. I am still cautious.
But at some point once you have personally used it in practice for long enough, I can't just dismiss it as snake oil. I have been a computer programmer for over 30 years, and I feel like I have a good read on what works and what doesn't in practice.
wg0 | a day ago
Give it a few more months and I'm sure you'll see some of what I see, if not all.
I'm saying all of the above having tried and tested all sorts of systems with AI; that's what leads me to say what I said.
cortesoft | a day ago
Now, part of that is my advancements as well, as I learn how to specify my instructions to the AI and how to see in advance where the AI might have issues, but the advancements are also happening in the models themselves. They are just getting better, and rapidly.
The combination of getting better at steering the AI along with the AI itself getting better is leading me to the opposite conclusion you have. I have production systems that I wrote using spec-kit, that have been running in production for months, and have been doing spectacularly. I have been able to consistently add the new features that I need to, without losing any cohesion or adherence to the principles I have defined. Now, are there mistakes? Of course, but nothing that can't be caught and fixed, and not at a higher rate than traditional programming.
Quarrel | a day ago
I kind of get what you're saying, but let us not pretend that SW engineers are perfect rule followers either.
Having a framework to work within, whether you are an LLM or a human, can be helpful.
saidnooneever | a day ago
The only downside I see is getting out of practice, which is why I don't use it for my passion projects. Work is just work, and pressing 1 or 2 and having "good enough" can be a fine way to get through the day. (Lucky me, I don't write production code ;D... goals...)
albedoa | 22 hours ago
By that time, they will have realized immense value before seeing some of what you see. Sounds like an endorsement of spec-kit.
vidarh | a day ago
kaashif | a day ago
I've seen a disturbing trend where a process that could've been a script or a requirement that could've been enforced deterministically is in fact "automated" through a set of instructions for an LLM.
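For the concrete contrast: anything that can be checked mechanically belongs in a script, not in a prompt. A minimal sketch, where the house rule (no direct `requests` imports under src/) is an invented stand-in for whatever the real invariant is:

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: enforce a rule deterministically instead of
asking an LLM to please remember it. The rule itself is illustrative."""
import pathlib
import re
import sys

BANNED = re.compile(r"^\s*(import|from)\s+requests\b")

def main() -> int:
    # Collect every offending file:line under src/
    hits = [
        f"{path}:{lineno}"
        for path in pathlib.Path("src").rglob("*.py")
        for lineno, line in enumerate(path.read_text().splitlines(), 1)
        if BANNED.match(line)
    ]
    for hit in hits:
        print(f"direct 'requests' import forbidden: {hit}")
    return 1 if hits else 0  # nonzero exit fails the build, every time

if __name__ == "__main__":
    sys.exit(main())
```

Unlike an instruction in a skill file, this fails the build 100% of the time the rule is broken.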
vidarh | a day ago
Large parts of human civilization rest on our ability to make something unreliable less unreliable through organisational structure and processes.
j45 | a day ago
So many applications of LLMs start out expecting a deterministic brain while using a non-deterministic LLM, and then wonder why it's not working.
intended | 23 hours ago
At the end of the day, if I am spending X$s for automation, I want to be able to sleep at night knowing my factory will not build a WMD or delete itself.
If its simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
vidarh | 23 hours ago
Review and oversight do address reliability directly, which is why we use them in processes to improve the reliability of mechanical processes as well, and why they are core elements of AI harnesses.
> If its simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
You can ask the same thing about all the supporting staff around the experts in your team.
> There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
Only teams without mature processes are not doing that for AI today.
Most of the deployments of AI I work on are the outcome of comparing it to alternatives, and often are part of initiatives to increase the reliability of human teams just as much as increasing raw productivity, because they are often one and the same.
intended | 20 hours ago
Yes and no. See next point.
> You can ask the same thing about all the supporting staff around the experts in your team.
I have a good idea of the shape of errors for a human-based process, its costing, and the type of QA/QC team that has to be formed for it.
We have decades, if not centuries of experience working with humans, which LLMs are promising to be the equivalents/superiors of.
I think you and me, would both agree with the statement "use the right tool for the job".
However, the current hype cycle has created expectations of reliability from LLMs that drive 'Automated Intelligence' styled workflows.
On the other hand:
> part of initiatives to increase reliability of human teams
is a significantly more defensible uses of LLMs.
For me, most deployments die on the altar of error rates. The only people who are using them to any effect are people who have an answer to "what happens when it blows up" and "what is the cost if something goes wrong".
(There is no singular thread behind my comment. I think we probably have more in agreement than not, and it's more a question of finding the precise words to declare the shapes we perceive.)
vidarh | 17 hours ago
I moved this up top, because I agree, despite the length of the below:
> However, the current hype cycle has created expectations of reliability from LLMs that drive 'Automated Intelligence' styled workflows.
Because for a lot of things it works. Today. I have a setup doing mostly autonomous software development. I set direction. I don't even write specs. It's not foolproof yet by any means - that is on the edge of what is doable today. Dial it back just a little bit, and I have projects in production that are mostly AI written, that have passed through rigorous reviews from human developers.
The key thing is that you can't "vibecode" that. I'm sure we agree there.
There needs to be a rigorous process behind it, and I think we'll agree on that too.
Those processes are largely the same as the processes required for human developers. Only for human developers we leave a lot of that process "squishy" and under-specified.
We trust our human developers to mostly do the right thing, even though many don't, and to not need written checklists and controls, even though many do.
What is coming out of this is the start of systems that codify processes that are very much feels-based with human teams. Partly because we still need to codify them for AI, but also because we can - most people wouldn't want to work in the kind of regimented environment we can enforce on AI.
Sure, there is a lot of hype from people who just want to throw random prompts at an LLM and get finished software out. That is idiocy. Even a super-intelligent future AI can't read minds.
But there are a lot of people building harnesses to wrap these LLMs in process and rigor to squeeze as much reliability as possible from them, and it turns out you can leverage human organisational knowledge to get surprisingly far in that respect.
intended | 15 hours ago
> There needs to be a rigorous process behind it, and I think we'll agree on that too.
I would simplify it to: “I have a setup” is the part that is doing the actual heavy lifting.
From my very unscientific survey / extensive pestering of my network, the only people getting lift out of AI are people with both domain expertise/experience and familiarity with the tooling.
The types of automation I see people wanting, though, are fully automated customer support systems, fully automated document review - essentially white-collar dark factories. (Hey, that's a good term.) The need is for a process that is stable and behaves the same way every time.
It seems actual AI use cases are more like sketching - if you have enough skill you can make out the rough sketch is unbalanced and won’t resolve into a good final piece. Non experts spend far more time exploring dead ends because they don’t have the experience.
In my opinion, it’s a force multiplier for experts or stable processes, and it’s presented as Intelligence.
I feel your examples fit within these boundaries as well as the ones you have described.
jnpnj | 23 hours ago
vidarh | 21 hours ago
chaostheory | a day ago
kajman | a day ago
I hope to see harnesses that will demand instead of ask. Kill an agent that was asked to be in plan mode but did not play the prescribed planning game. Even if it's not perfect, it'd have to be better than the current regime when combined with a human in the loop.
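A sketch of what "demand instead of ask" could look like at the harness level, assuming a hypothetical `agent` CLI and a house convention that plan mode must write PLAN.md before any code edits (every name here is illustrative):

```python
"""Sketch of a 'demand, don't ask' gate around a hypothetical agent CLI."""
import pathlib
import subprocess

def run_with_plan(prompt: str, retries: int = 2) -> None:
    for attempt in range(1, retries + 2):
        # `agent --plan-mode` is a stand-in for whatever harness you drive.
        subprocess.run(["agent", "--plan-mode", prompt], check=True)
        if pathlib.Path("PLAN.md").exists():
            return  # the agent played the planning game; let it proceed
        print(f"attempt {attempt}: no PLAN.md produced, discarding the run")
        subprocess.run(["git", "checkout", "--", "."], check=True)  # drop stray edits
    raise RuntimeError("agent never produced a plan; giving up")
```

The enforcement is deterministic (a file either exists or it doesn't), even though the agent itself isn't.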
raincole | a day ago
keeda | a day ago
> ... you also realize that promised productivity gains are also snake oil because reading code and building a mental model is way harder than having a mental model and writing it into code.
Not really, though it depends on the code; reading code is a skill that gets easier with practice, like any other. This is common any time you're ever in a situation where you're reading much more code than writing it (e.g. any time you have to work with a large, sprawling codebase that has existed long before you touched it.)
What makes it even easier, though, is if you're armed with an existing mental model of the code, either gleaned through documentation, or past experience with the code, or poking your colleagues.
And you can do this with agents too! I usually already have a good mental model of the code before I prompt the AI. It requires decomposing the tasks a bit carefully, but because I have a good idea of what the code should look like, reviewing the generated code is a breeze. It's like reading a book I've read before. Or, much more rarely, there's something wrong and it jumps out at me right away, so I catch most issues early. Either way the speed up is significant.
j45 | a day ago
kergonath | 22 hours ago
j45 | 20 hours ago
When it receives a generic, vague input, it is free to interpret it according to how its corpus fires, like in any human interaction.
Learning to articulate better is like writing a sentence that will stand the test of model updates.
kergonath | 20 hours ago
I don’t think it’s fair to blame the user here. The tool must be operated by normal users.
intended | a day ago
For MVPs, mock-ups, prototypes, or in the hands of an expert coder. You can't let them go unsupervised. The promise of automated intelligence falls far short of the reality.
crimsoneer | 22 hours ago
jazzypants | 18 hours ago
blitzar | a day ago
I am not however going to share any of this with work colleagues and make myself redundant.
kergonath | 22 hours ago
Indeed. That said, I’ve had some success with agent skills, but I use them to make the LLM aware of things it can do using specific external tools. I think it is a really bad idea to use this mechanism to enforce safety rules. We need good sandboxing for this, and promises from a model prone to getting off the rails is not a good substitute.
But I have taught my coding agent to use some ad hoc tools to gather statistics from a directory containing experimental data, and things like that. Nobody is going to fine-tune an LLM specifically for my field (condensed matter physics), but using skills I can still make it do useful work. Like monitoring simulations where some runs can fail for various reasons, and each time we must choose whether to run another iteration or restart from a previous point, based on eyeballing the results ("the energy is very strange, we should restart properly and flag for review if it is still weird", that sort of thing). I don't give too many rules to the agent; I just give it ways of solving specific problems that may arise.
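For concreteness, the "eyeball the energy" rule above is the kind of check a skill can point the agent at via a small ad hoc tool. A sketch, assuming a hypothetical runs/<name>/energy.dat layout and a made-up sanity threshold:

```python
"""Sketch: scan simulation runs and flag suspicious energies.
The file layout and the threshold are hypothetical."""
import pathlib

THRESHOLD = 5.0  # |energy| beyond this counts as "very strange" (illustrative)

def check_runs(root: str = "runs") -> None:
    for run in sorted(pathlib.Path(root).iterdir()):
        energy_file = run / "energy.dat"
        values = energy_file.read_text().split() if energy_file.exists() else []
        if not values:
            print(f"{run.name}: no energy data -> run failed, flag for review")
        elif abs(float(values[-1])) > THRESHOLD:
            print(f"{run.name}: energy {values[-1]} looks wrong -> restart from checkpoint")
        else:
            print(f"{run.name}: ok -> continue with the next iteration")

if __name__ == "__main__":
    check_runs()
```

The skill then only needs to say when to run it and what to do with each verdict.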
selimthegrim | 14 hours ago
j16sdiz | 22 hours ago
A slot machine gives you rewards when the stars align; snake oil never does :)
SubiculumCode | 20 hours ago
Chris2048 | 20 hours ago
Couldn't non-manual oversight also help, e.g. sandboxes?
peterbell_nyc | 16 hours ago
I do find that just asking the same agent to do and check its own work is not particularly reliable.
moomin | 15 hours ago
koliber | a day ago
Good test cases.
Clear and concise documentation.
CI/CD.
Best practices and onboarding docs.
Managing LLMs is becoming more and more similar to managing teams of people.
tempoponet | a day ago
theshrike79 | 16 hours ago
theshrike79 | 16 hours ago
There are so many bad analogies I could use to describe it, but they're all bad so I won't try.
cortesoft | a day ago
https://github.com/github/spec-kit
rTX5CMRXIfFG | 22 hours ago
stellalo | a day ago
When the LLM decides that the situation calls for it
> It is a workflow: a sequence of steps the agent follows, with checkpoints that produce evidence, ending in a defined exit criterion.
A sequence of steps the LLM can decide to follow
lionkor | a day ago
sharperguy | a day ago
```
After implementing the feature, read the testing skill for instructions on how to test.
```
forlorn_mammoth | 19 hours ago
it's turtles all the way down.
xboxnolifes | 18 hours ago
tariky | a day ago
I've used superpowers for several months now and it really does help. But the 90/10 rule still applies: 10% of the time it will produce a stupid decision. So always check the spec.
ColinEberhardt | a day ago
And Open Design (HN front page yesterday) is supported by “Six load-bearing ideas”
The similarities in the way these prompt libraries are documented doesn’t feel coincidental.
scotty79 | a day ago
Agents do read that. And they actually remember it. Because it's tiny compared with the other things you are cramming into their context.
Lio | a day ago
Agent Skills is Addy’s attempt to kill that job too. Cheers Addy. :P
rafaelmn | a day ago
WTF ? Almost always this was "skipping the parts because the deadline was 2 weeks ago". The "I don't feel like it" rationalizations are maybe 20% ? Unless deadlines are rationalizations too ?
shruubi | a day ago
Am I the only one who looks at guys like Addy Osmani and Steve Yegge, who had good reputations before LLMs, and gets the feeling they are cashing that reputation in to ride the LLM hype cycle? Or is it just a matter of professional tech talking heads moving from writing books and giving conference talks about good engineering practices to talking about the new hot topic that sells books and conference tickets?
simianwords | a day ago
The fundamental problem with agent skills is that there is no hook for one-time installation. An agent can't just be a prompt; it also needs some way to do initial setup work.
If I have an agent skill to look up stock prices, maybe I need to set up some tools and authentication first. There's no way to express this!
hansmayer | a day ago
What skills, mate? These are simply text files attempting to narrow down the spec, hoping that this will help the "AI" make fewer mistakes. But it is still crap, because, <drum-rolls> - it still depends on how this fits into the overall statistical model, which changes with every prompt, etc... Please stop peddling this bullshit, it does not work!
hansmayer | a day ago
Trusteando | a day ago
Another example of agent skills that give AI agents access to bitdrift's mobile observability platform for full-fidelity agentic investigations -- https://bitdrift.ai/
jedisct1 | a day ago
karinakarina3 | a day ago
robeym | 23 hours ago
A far better approach is being precise with your prompts, and if you find a new model has any bad habits, address it specifically in your AGENTS.md and go on your way. If you want to throw in slop prompts, go ahead and add the massive AGENTS.md your employer gave you.
People waste too much time on this stuff. The next version could totally change how the model processes your AGENTS.md.
Get good at prompting, use AGENTS.md as a minimal model-annoyance fixer, and reset it often (every major release).
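In that spirit, a "minimal model-annoyance fixer" AGENTS.md might be nothing more than this (entries invented for illustration; the point is that each one patches an observed bad habit):

```markdown
# AGENTS.md
- Run the test suite before declaring a task done.
- Don't rewrite files wholesale when a 3-line diff suffices.
- This model keeps adding defensive try/except blocks; don't, unless asked.
```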
DeathArrow | 22 hours ago
Thing is, we have an enormous number of skill frameworks (this, GSD, spec-kit, superpowers, Compound Engineering, etc.) claiming to help with agentic coding.
And agents now have better built-in skills than they used to.
Who will have the time to A/B test them all?
m3kw9 | 20 hours ago
m3kw9 | 20 hours ago
standardUser | 19 hours ago
> It produces code, declares victory, and moves on.
Not when I'm in charge. It proposes changes based on my detailed instructions, I review the proposed changes, only then do I have it implement the code, and then I review it again. I understand my AI agent would prefer a quicker way, but for the time being, I'm still the one in charge.
onlyrealcuzzo | 19 hours ago
The point is, their default behavior is to ship crap fast.
You have a process to handle that.
So does OP.
alfiedotwtf | 18 hours ago
bsoles | 12 hours ago
Agent skills are a way of turning over our means of (software) production to our employers, while making ourselves obsolete at the same time. Until recently, an employer had to keep a (software) professional continuously employed to retain access to their professional skills. By transferring those skills into AI agent skills, we are giving that leverage away. In the near future, our employers may feel they don't need our skills anymore, because they have already been captured by the AI agents. Somebody with a better grasp of economic history should be able to explain this using the analogy of what happened to workers during the industrial revolution.