AI=true is an Anti-Pattern

38 points by hongminhee a day ago on lobsters | 9 comments

threkk | a day ago

I have seen this pattern at work. It is really heartbreaking that we are willing to write much-needed docs for machines, but not for fellow humans.

For me it's not so much heartbreaking as it is confirmation of almost all of my intuitions about people.

oger | a day ago

Yup, it's funny/weird how SW departments delayed writing documentation and automated tests for decades (which would have been really useful for SW developers), but now that AI tools need these, docs and tests have suddenly become important.

On the one hand I am angry that machines get better support than humans do. On the other hand I'll be glad if these long-overdue tasks are finally done; and maybe even as a non-AI user I can exploit this attitude to get some improvements greenlit that help me.

viraptor | 10 hours ago

It's just a side effect of not being able to figure out LLM memory right now. We can say something to a new hire and expect that they'll remember it for as long as they work on the project. It may be occasionally inefficient to rediscover something after a long absence, but it's not terrible overall. With LLMs, on the other hand, we need a way to communicate the same thing over and over again very efficiently.

It's silly but at the same time it makes perfect sense why documentation was ignored by humans and why tribal knowledge is a common thing. It's https://xkcd.com/1205/ but in time/token cost.

> we can say something to a new hire and expect that they'll remember it as long as they work with the project.

This is something I strongly disagree with. Some reasons:

  • In my past jobs it happened quite a lot that I was loaned out to another project for a while; so I kept accumulating knowledge from different projects, and people kind of expected me to know the details of all of them. That's mentally quite demanding.
  • And not only is this mentally demanding, the value of this knowledge is very volatile: the moment I quit a job, all of the company-specific knowledge became worthless to me. Only the knowledge about "standard stuff" (C++ development, Linux etc.) remained valuable. I can understand that companies a) don't care whether this is mentally demanding for me and b) especially don't care whether I spend my memory on knowledge that has no long-term value to me. But for me this is important; and I can only encourage SW devs to keep this in mind when learning on the job.
  • Anyway, the next hire for the job needs to be trained as well. In my experience few people are capable of absorbing years of experience during a six-month hand-over period, so lots of knowledge is effectively lost in the transfer; written docs could reduce that loss.
  • Also, realistically people just don't remember things very well. In my experience after a few months the subtle details of some design are gone from most developers' heads. The rediscovery sometimes amounts to a rewrite.

hyperpape | a day ago

I think there is a lot of overlap, but I'm not convinced there isn't a difference between a human getting overwhelmed by a long README and an AI agent overflowing its context window. Humans are still more flexible and more resilient; agents need more scaffolding to find the relevant material.

This isn't to say that humans don't benefit from curation, but I suspect that the curation that benefits agents is more rigid.

timthelion | 7 hours ago

I experienced this many times pre-AI: the build scripts in CI worked, but the ones in the README didn't. So maybe this just goes to show that things that get run automatically within the developer's flow tend to be tested better...

rsanheim | 17 hours ago

Yes. Good UX for humans is good UX for agents. This is why a well-made CLI is preferable to an MCP every time.

You don't need to worry about context spam with your tools if the agent just calls `foo-cli --help` when it needs it. And then you as a human can even use it too!

mrexodia | 32 minutes ago

While it is true that many improvements to CLI tool output apply to both LLMs and humans, there are some areas where the design is fundamentally different.

A test runner might print all the succeeding tests as they come along. A human sees this as an indication of progress; for an LLM it is pure noise. Similar things apply to build systems.

I started experimenting with detecting if the session is interactive and assuming it’s a human watching in that case. This is probably a better idea than an AI=true flag, but time will tell…
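The experiment described above can be sketched with a standard TTY check (the reporter function is an illustrative assumption, not the commenter's actual code): when stdout is an interactive terminal, assume a human is watching and print per-test progress; when it is a pipe or a capture buffer, as with an agent or CI, emit only failures.

```python
import sys

def report_test_result(name: str, passed: bool) -> None:
    """Print per-test progress only when a human is likely watching.

    Heuristic: sys.stdout.isatty() is True for an interactive terminal
    and False for pipes, CI logs, and agents capturing output.
    """
    interactive = sys.stdout.isatty()
    if passed:
        if interactive:
            print(f"PASS {name}")  # progress signal for the human
        # non-interactive: passing tests are noise, stay silent
    else:
        print(f"FAIL {name}")  # failures matter to everyone

# Hypothetical test names, purely for demonstration.
for name, ok in [("test_parse", True), ("test_render", False)]:
    report_test_result(name, ok)
```

Run in a terminal this prints both lines; piped into a file or read by an agent, only the failure appears.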