The cognitive dark forest

19 points by hungariantoast 15 hours ago on tildes | 18 comments

indirection | 13 hours ago

The platform will know your idea is pregnant far before you will.

I think this alludes to the Target pregnancy scandal, which was debunked.

People have been saying "AI can predict human behavior" since that article (from 2012). Unprofitable disasters like the Metaverse, unappealing advertisements, and lack of strong evidence make me skeptical.

Before LLMs, a company couldn’t just absorb your idea and ship it. Ideas needed programmers, and programmers worked in meat-space-and-time, i.e. they were a limited resource, expensive and slow.

Before LLMs, big companies absorbed and shipped ideas from smaller companies after launch: this famously happened in 2002, when Apple's Sherlock 3 for Mac OS replicated Karelia's Watson. Another example is Trader Joe's ripping off smaller brands by faking interest in stocking their products to get samples. Nowadays, I most frequently see big companies take smaller companies' ideas via acqui-hire (e.g. OpenClaw and Moltbook).

I think it's unlikely a big company would steal an idea before it demonstrated success; such an idea may not succeed at all, and big companies tend to avoid risk, plus there are many other promising ideas. Meanwhile, acqui-hiring or outright buying a rising startup is relatively cheap for a billion-dollar company, though life-changing for the startup founder(s); most deals are in the single- to triple-digit millions.

atchemey | 11 hours ago

I think you're missing the analogy of the dark forest. All it takes is one bad actor to do this for it to become the dominant strategy. The existence of one "hunter" mandates a response that proliferates. It is, quite literally, "if you can't beat them, join them."

I think you are unnecessarily discounting the absolutely trivial risk-to-reward ratio that mass data center compute offers. Why not spend a few tens of dollars in electricity to make a duplicate of something that could gross millions a year? Code ten million projects for half a billion dollars, and if 1% take off, you break even. It's trivial to implement "clean room" rebuilds of extant products with AI and change them modestly to avoid IP/copyright limitations, all while allowing your competition to do the expensive experimentation and optimization for you...more importantly, you deny a competitor their unchallenged market, even as you take a slice. What's the response for a competitor once this happens once or twice? Well, they have to, too. And so do all of their competitors. It is an inherent race to the bottom for digital properties, one where already-thin margins are whittled away as duplicates propagate.
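A quick back-of-the-envelope check of those numbers (every figure below is this comment's hypothetical, not real data):

```python
# Back-of-the-envelope math for the "code ten million projects" scenario.
# All figures are the comment's hypotheticals, not real data.

n_projects = 10_000_000        # projects mass-cloned
total_cost = 500_000_000       # half a billion dollars of compute
hit_rate = 0.01                # 1% "take off"

cost_per_project = total_cost / n_projects     # $50 per clone
n_hits = int(n_projects * hit_rate)            # 100,000 successes

# Average revenue each hit must return just to break even:
breakeven_per_hit = total_cost / n_hits

print(f"${cost_per_project:.0f} per clone")                # $50 per clone
print(f"{n_hits:,} hits")                                  # 100,000 hits
print(f"${breakeven_per_hit:,.0f} per hit to break even")  # $5,000 per hit
```

At $50 a clone, each of the roughly 100,000 hits only needs to average $5,000 to break even, which is the lopsided ratio the comment is pointing at.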

Or.

Cartelization is another possibility. A handful of tech giants could, hypothetically, just agree to snipe new innovators while protecting their own. In such a scenario, margins can remain robust, but innovation from the outside will be absorbed. Why bother to buy a promising app for a few million dollars when five companies can each spend ~$1k on clones, swamp the market, and destroy the original concept? One or two of those apps will survive; the other companies take the modest loss and move on, while the winning companies take profits. Unfortunately, taken to a maximalist extreme, this will still eventually fail due to economic degradation, as the paths for individuals to earn money shrink while extraction increases...hello Black Mirror.

I'm not the author, but I think the author raises thought-provoking points. It may be worthwhile to look into the "Genesis" Project, and the goals (explicit and implicit) it sets out for technology, engineering, medicine, and science, then complement it with Curtis Yarvin's concept of technofeudalism, which is supported by deep-pocketed donors. The dream is a fully vertically integrated economy where the "no money only spend" meme is a reality. In short, CEOs have autonomy and absolute freedom to do as they wish, while others are serfs. (A brief article on Yarvin). Put another way, whether literally or figuratively, "the humans will be discarded." Of course, all the AI absolutist supporters assume they will be the CEOs, and don't think about the inherent absurdism of a self-propagating money machine that turns us all into paperclips, but, hey, they are smart, just look at all the money they have...

post_below | 9 hours ago

One thing the original author assumes, and your post seems to take as fact, is that it's actually possible to just press a button and turn an idea into code.

That could happen in the future; it has not happened in the present. Full stop. If a big company saw an idea online that they thought was worth stealing, they would need to assign a lot of actual humans to the problem. AI agents would make it faster, but it would be far from cheap.

Even if it were possible to press a button and get a shippable application (and I can't stress enough that it isn't), the code is only part of the process. You need to plan the architecture, design the UI, handle devops, hire and train some level of support, do code review, do alpha and beta testing, work out brand strategy and marketing, and so on.

In theory a big company has existing pipelines to make all of the above easier, but in actual practice the more people and bureaucracy involved, the slower things get.

At best AI speeds up the process, it definitely doesn't revolutionize copying ideas, a tradition that predates homo sapiens.

atchemey | 6 hours ago

You raise good and practical points, if the goal is good praxis. The goal is not good praxis; it is to "flood the zone with shit" so nobody new can get a foothold. And as soon as one actor does it, all who can do it need to do it. It's like the NAFTA-era jobs moving from the US to Mexico - if one company does it, all feel the need to do so. In a more direct comparison, it's like exporting the core technical support and coding for many apps to India - lower cost and sloppier, but an offshore "necessity" if one company does it. It's just a techno race to the bottom.

Regrettably, I believe you're also mistaken about the state of the art in LLM coding tools. I'm a scientist, a chemist by training, and I am bad at coding. For focused projects, the free tier of a number of different LLM coding tools is more than sufficient. Is it perfect? No, of course not. Is it very fucking fast and free? Yes. And that's good enough for many things. These are the baby versions of the real tools out there, the heavy compute clusters that require MW of power and cooling, with vast context. Agentic AI with appropriately calibrated loss functions is very much able to do exactly what you're saying at minimal cost once the data centers are there...and Lord knows they are there...one agent to strategize and dialogue with an executor AI, a couple of sub-agents to focus on back end, front end, UX, image generation, etc., and you've got it. Hell, a standard test for self-deployed AI is to make a full travel website and publish it to a domain - while vastly simpler than an app with services in the background, it is made within 10 minutes on a gaming computer with no coding.
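The orchestration I mean can be sketched in a few lines. To be clear, `call_llm`, `build_app`, and the role names here are hypothetical stand-ins for illustration, not any vendor's actual API:

```python
# Rough sketch of the strategist/executor/sub-agent pattern described
# above. call_llm is a hypothetical stub standing in for a real model
# API; a real harness would route each prompt to an LLM service.

SUB_AGENTS = ["backend", "frontend", "ux", "image_generation"]

def call_llm(role: str, prompt: str) -> str:
    # Stub: a real implementation would call a model here.
    return f"[{role}] output for: {prompt}"

def build_app(idea: str) -> dict[str, str]:
    # 1. A strategist agent breaks the idea into a plan.
    plan = call_llm("strategist", f"Plan how to build: {idea}")
    # 2. An executor agent carves the plan into per-domain tasks
    #    and dispatches them to focused sub-agents.
    results = {}
    for agent in SUB_AGENTS:
        task = call_llm("executor", f"Extract the {agent} work from: {plan}")
        results[agent] = call_llm(agent, task)
    return results

artifacts = build_app("travel booking website")
```

The loop is the whole trick: the expensive part is the model calls, not the harness, which is why the marginal cost per cloned project stays so low.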

It's closer than you think, but it'll be worse than we currently have...turboenshittification.

post_below | 6 hours ago

These tools cannot one-shot production software. Or anything remotely close to that. Yes, they can one-shot low-stakes applications that don't have to worry too much about security, user data, performance, reliability, and everything else that goes with being a public-facing app on the internet, or a load-bearing piece of the financial system, etc.

You seem to be suggesting a secret tier of LLM tools that we mortals cannot use. As far as I know there is no evidence that such a tier exists, and (until this moment!) I hadn't heard anyone even suggest it. It would be financially irresponsible for a frontier model company to train a larger/better model and then not release it.

The top AI companies take the race they're in pretty seriously, and so do their investors.

tauon | 5 hours ago

Edit after the fact:
I don’t think my parts of this comment chain are very relevant to the discussion, feel free to collapse this (or read if you have some time, I guess.)


Just joining in, but why does it have to be one-shot?
I’ll try to make a case for this scenario while not being one of the “AGI in 2027” people.

As far as the general public is concerned, the so-called “agentic” development tools take an idea, at least in most areas, and turn it into code today. To be frank, the specific model you end up using doesn’t really matter all that much anymore, and releasing a new one isn’t strictly necessary anymore from here on out.* **

Sure, it might take $time while iterating, but for the most part, for less than the price of a gym subscription you can hire many concurrent (if not quite unlimited) personal developers.
Now, if you’re already a developer at one of the big tech firms, this process is sped up even more, even if all you did was UX/user-walkthrough-style testing at the end, because you get to write ticket-style, specific feature requests and bug reports for the missing functionality. On the other hand, if you can steer the technical direction of the implementation with even a modicum of experience, you could probably avoid the silliest security issues without too much automation overhead.

And the cat’s out of the bag, too. Even if OpenAI and Anthropic were to go down tomorrow, research in this space would continue (at least the part of it which doesn’t require huge GPU clusters and $$$, and there still is quite a lot of potential in everything not related to that) at an academic and hobbyist level. And Google would most likely not go under in this event, so maybe even the big spending type of research could continue somewhat.

As a closing thought for this, here’s a well-known developer person credibly paying out $500 per novel-to-them problem that an agentic LLM harness can’t solve: Xcancel link.
Go ahead and make some money! It ought to be easy (and please update here once successful, I’d be genuinely curious to know too) :-)

Also cf. CCBench, a benchmark specifically designed to check

How well do agents perform on tasks that aren't part of public training data?

– in which an OpenAI model two minor versions behind their latest, i.e. notably behind SOTA, scores 75% on real world tasks. What categories of business models, or for this thought experiment, scummy business tactics are unlocked if you have general-purpose developers that can accomplish more than three quarters of arbitrary, real-world tasks before their cut-off of…, checks notes…, 20 minutes for the task?***


The fact of the matter is, unless you’re like the one company that I can think of in this space next to existing banks, nobody making money via software is working on load-bearing financial infrastructure.**** And there’s an argument to be made that even this will be or already is possible for the “agents” to work on, with human oversight, at a speed only really correlating to and limited by money. Which was, mostly, honestly already the case pre-agentic development.


Notes:

*Unless your company valuation relies on it.

**Although it would be nice to have something less reliant on the training corpora, and more capable to actually do novel reasoning, but I don’t think we’re necessarily getting there with LLMs alone anymore.

***I don’t like to rely on this benchmark for the sake of an argument, as it consists of work in relatively small codebases, which many an existing project decidedly is not (but bigger means harder to test), and it still features results well worth a mention; it’s damn impressive if you ask me. But I’d get it if you dismissed this; just consider it another example of the direction we’re unstoppably headed in.

****Edit: Mea culpa, I forgot about Stripe. But I was exaggerating anyways, hopefully obviously.

post_below | 4 hours ago

It sounds like you're claiming AI coding agents are more capable than they are, and that you're not actually a software engineer. If that's true I imagine nothing I can say will change your vibed thesis.

tauon | 4 hours ago

Valid point, and you don’t have to try, although I like to think I am open to changing my mind, perhaps more so than the average person you’ve had this discussion with elsewhere online.

I should have made it more clear that I don’t think “agents” are a replacement right now, rather an additional tool for existing developers. Like, they need professional oversight and steering (which a company trying to drown out competition would be able to provide).

post_below | 4 hours ago

Like companies trying to drown out the competition could provide long before LLMs. The context of the original article is that implementation is now trivial. For example:

If whole projects can get one-prompted or agent-teamed it becomes just the money game.

Except they can't. Maybe someday, but until then the whole premise falls apart.

tauon | 3 hours ago

You’re right, but my wording was also imprecise – I didn’t talk about time (to market) in the second paragraph you responded to, as I thought it was covered by the example from the benchmark in my first comment (in conjunction with a human developer at the wheel).
Greenfielding is absolutely faster than it used to be.


They definitely can’t at the moment, and I didn’t take the original article to be a literal description of today, more of a “what will be”, yet somehow ended up writing a lengthy comment about the status quo of (also models, but mostly) harnesses rather than continuing the speculation.

In any case, thank you for bearing with me while I came to this conclusion. I kinda wish I hadn’t gotten distracted and instead directed my original comment differently to focus on the actual topic/thought experiment, but I think I’ll leave it up with a disclaimer as a reminder to myself.

Yes, but the Target pregnancy hoax is also much closer to reality these days.

It's becoming increasingly commonplace for this interaction:

  1. Two people have conversation in private
  2. One or more of these people start getting targeted ads directly related to the conversation.
  3. People presume phone is listening in.

While 3 might not be true, the fact that 2 can happen as frequently as it does is obscene.

williams_482 | 9 hours ago

I think this alludes to the Target pregnancy scandal, which was debunked.

When I clicked on a hyperlink of the word "debunked" I expected something more concrete and convincing than an article which on closer reading only claims the story is improbable.

R3qn65 | 9 hours ago

That’s reasonable, but there’s zero proof that the story is true, either. So it’s not like the debunking is debunking a bunch of prior proof - there never was any. It was all hearsay.

carsonc | 11 hours ago

I agree. This has been a long-standing concern with search engines. Yet here we are, doing Google searches to find out if anyone has ever thought of our latest great idea.

I would actually want to believe that our new AI overlords will "steal" all our ideas, but nothing ever happens, and I don't see how this is different from what our old search overlords could have done. If there is a reason this is different, I'm open to hearing it.

skybrian | 9 hours ago

The "dark forest" metaphor presumes that you want to keep things to yourself to prevent copying, but suppose you want to encourage copying? Maybe it's better to put an idea or a vibe-coded demo out there because you want it to become more common?

For software, this is free as in "free puppy." It's less work to get someone else to maintain it.

ThrowdoBaggins | 6 hours ago

I’ve certainly heard the myth (maybe grounded in truth, but I’ve never thought to check) where a dude, frustrated by a bug in software he uses and by having his bug reports and/or error tickets ignored, eventually gets a job at the company, fixes the bug, and immediately resigns. Your comment here feels like the inverse, where you just send your good idea out into the world in order to have a company absorb it and produce their own version.

R3qn65 | 8 hours ago

Thought-provoking post, thanks.

The entire premise is based on the thesis that the world used to be thus:

Ideas are cheap - execution is hard -and- the world ahead is ripe with opportunity.

And that the world has now flipped. Execution is cheap and ideas are hard enough that companies need to steal them.

But with respect to the author, I disagree completely. I guess if we define “idea” as something like “make a phone, but better” then sure, ideas are cheap. But fully-formed ideas that fit into the context of the world were never cheap and they still aren’t. Steve Jobs’ genius wasn’t that he thought of the idea “make a phone, but better,” his genius was in his neurotic obsession with design/quality, his ability to inspire, and his laser focus on execution. I’m not sure I can think of a single enterprise that succeeded based solely on the strength of a big-picture idea.

I asked AI (lol) to find counterexamples and it suggested Twitter as a project that could fit, given Twitter’s success despite notoriously poor performance during the early years and wack backend. I can see that, but the fact that Twitter survived the launch of multiple better-executed clones (e.g. Pownce) suggests that it wasn’t the idea of “public forum, but you can only use 140 characters” that made Twitter successful, but something else.

If ideas are so cheap, why would companies bother stealing them anyway?

My point is that I think there’s an even stronger argument the other way around. The author argues that before, ideas were cheap and execution was hard. I think it’s more like ideas were hard AND execution was hard. Today, ideas are still hard but execution is much easier - and that’s just as true for the 16-year-old with a good idea as it is for Microsoft. AI has a lot of bad effects and I’m worried about a lot of parts of it. But I think one of the positive aspects of AI is that it’s reducing barriers to entry for people with talent and good ideas but not a lot of capital: Microsoft could always execute on a good idea - that hasn’t changed. But now the kid can too.

Isn't this whole metaphor somewhat illogical?

Firstly, I think his point only makes sense when you view it strictly through the lens of making a profit. If you don't want to make money from your ideas, there's no need to keep them secret and hide them from the corpos, right?

But if you want to build a business out of your ideas, you can't hide forever anyway. So this whole metaphor of 'civilizations hiding to avoid annihilation' doesn't transfer at all, because hiding your company from the open market is not exactly a winning strategy.

And the moment you found a company around your idea and try to grow a business with it, the corporations can still copy your special thing in a heartbeat (at least if you believe, like the author does, that LLMs make that possible).

e: english hard