The Cognitive Dark Forest

55 points by refaktor 19 hours ago on lobsters | 46 comments

FeepingCreature | 16 hours ago

I think this presupposes that people build things for credit or reward. Often (I would argue mostly) I build something just to have the thing. A big company snipes me and builds the thing into their own product? Great, all the better, now I have options. Maybe more versions will show up.

Inasmuch as open source is "building things so that they exist", this will be positive for open source.

simonw | 15 hours ago

Yeah, I find arguments that "I shouldn't put work out there since it might be exploited" surprising, because the joy of code and the internet generally is that someone can "exploit" my work without it costing me anything at all - copies are free.

I have to remind myself that different people have different value systems.

Similar to why I always use Apache 2 or MIT or BSD for my open source work rather than GPL or similar. I want people to be able to use my work with as few restrictions as possible.

I find arguments that "I shouldn't put work out there since it might be exploited" surprising, because the joy of code and the internet generally is that someone can "exploit" my work

God, the passive voice here. Your joy. You get joy out of it; the joy is not inherent in companies exploiting work, and there are plenty of people who do not get joy out of being exploited. The difference is that not only are LLMs forcing your view of exploitation upon the rest of the industry, but you're here in the comments constantly defending it, too.

simonw | 6 hours ago

Yup, I'm a villain like that.

talideon | an hour ago

There's no use of passive voice in that sentence. If you consider something to be weasel words, describe it as such.

tumdum | 12 hours ago

I think that, in practice, with (A)GPL the vast majority of people will be able to use your work. It's the corporations that have the biggest issue with (A)GPL.

simonw | 11 hours ago

I want people who work at companies with an allergic reaction to the GPL to be able to use my work without having to fight their legal department first.

kolja | 2 hours ago

Just to state the obvious: people are perfectly able to use your work without fighting legal (given that legal does its work and doesn't sow FUD), even if it's (A)GPLed. It's just the company's products that can't. But you already knew that.

bwbuhse | 12 hours ago

Corporations can and do use copyleft code all the time; it just means they have to share any enhancements if they're modifying or adding to that code, which seems reasonable to me.

singpolyma | 11 hours ago

Or not! Since most corporations are happy to just ignore the requirements of the GPL and do whatever they like anyway.

danielrheath | 10 hours ago

Perhaps a more coherent version of that position is "has frequently been exploited to achieve goals which I fundamentally oppose".

Someone who believes e.g. "FAANG are a net negative to the world" is going to have a hard time feeling good about their OSS work being easy to use at those places.

simonw | 10 hours ago

Both Instagram and Threads are built on Django so I guess I've come to terms with that one already.

That doesn't follow for me. I think Facebook is probably a net negative, but I wouldn't prefer a world where Facebook was built with crappier libraries and tools and had to reinvent more wheels. Sure, maybe the company would be slightly less successful, but that isn't the only effect.

isagalaev | 8 hours ago

Yeah, I find arguments that "I shouldn't put work out there since it might be exploited" surprising, because the joy of code and the internet generally is that someone can "exploit" my work without it costing me anything at all

I think what you're missing is the implicit value that building open projects had before: you could become famous. Your project could become a default choice within some technological niche, attract contributors, and grow. In the new era, if you implement an interesting idea, it just gets trivially AI-washed (as in, the AI re-implementation is called "from scratch" and isn't linked to the original) and sold without even giving you any bragging rights. There's no incentive to contribute to the established original.

Like, if AI had been around when you guys built Django, within a year a large company would just have produced something similar and pushed it onto the Python webdev community with a modicum of marketing. You wouldn't have had the benefit of a head start that consolidated the effort around it.

isagalaev | 7 hours ago

To clarify, I mostly hear this idea of AI being a natural evolution of open-source sharing of ideas from people who got their fame before it happened: you, mitsuhiko, antirez. You guys enjoyed this implied value of building in the open without having to explicitly pursue it: that's just how it happened to work. You telling younger people how to enjoy this brave new world sounds a little tone-deaf, to be honest.

carlomonte | 4 hours ago

apart from bragging rights, what's lost is the steering wheel on your project. that is a huge incentive, too: seeing your idea taken to the end in a way you guide.

FeepingCreature | 4 hours ago

You keep the steering wheel for your own use, of course. What's lost is the chance to be the "default implementation": the cultural cachet of owning the namespace, of being the place where it happens. To be honest, I'm fine with sacrificing that; I've fairly often thought "I wish I could have this project, but without the people who run it."

jonathannen | 3 hours ago

Yeah, I find arguments that "I shouldn't put work out there since it might be exploited" surprising, because the joy of code and the internet generally is that someone can "exploit" my work without it costing me anything at all - copies are free.

I think you're also highlighting a duality here. Saying or implying that it can be exploited implies some kind of ownership or possession of the thing. It's effectively saying "I want to put this in the world, but only for use in ways I endorse."

[OP] refaktor | 16 hours ago

Most people still build most things for some sort of reward. It's an interesting dynamic that this doesn't affect you, but it doesn't directly challenge the points of the blog post.

Either way, as written at the top, this post is a thought experiment and of course there are edge cases it doesn't cover. It explores possible general principles.

I do not think all is over, and in fact writing this helped me find some existential optimism again. By laying out the fears, I was able to see their weaknesses (which might become another post at some point).

bjeanes | 12 hours ago

It is also a somewhat privileged position to be able to make something just to see it exist. I too have done that, and have been similarly privileged. But the Luddite in me has to ask: if the job market keeps worsening and "the forest" keeps growing, will any of us have that privilege, or will we be desperately trying to survive?

Outside of capitalism, this kind of force from AI might really be a boon. Inside it, from my perspective, it's only another tool to widen all the socioeconomic gaps.

So-called "AI" in all its incarnations is solving one problem above all (for a certain class of people): wages.

FeepingCreature | 4 hours ago

Fwiw, I think any AI deployment capable enough that it leaves many people desperately trying to survive will shortly thereafter graduate to a point where it kills everybody anyway. As such, it's not that it isn't a worry; it's that I recommend joining up with #PauseAI instead of worrying about labor economics.

bjeanes | 3 hours ago

Sure, but these options are not mutually exclusive. Participating in #PauseAI does not mean I will stop worrying about labour economics (or unit economics as you said before editing?) because from my perspective and politics, everything here is fundamentally about labour.

FeepingCreature | 2 hours ago

Absolutely. I tend to not worry too much because I suspect the phase where there is labor displacement from AI is a short transition to the end-state where all labor is done by AI, one way or another, and we should probably direct our attention into how to transition to this regime without mass death, again one way or another.

It just seems like any division in "haves" and "have-nots" where the haves are humans is going to be extremely temporary, regardless.

ashwinsundar | 16 hours ago

"Often"? Speak for yourself I guess. I do things because I want to be the one doing it. Otherwise why do anything? Every single skill I have is done better by someone else, somewhere.

beaverBadger | 14 hours ago

I have to agree, given how freely AI companies have been able to (legally and illegally) index our hand-crafted "training data" from GitHub, Reddit, et al. without our permission, and sell it back to us (arguably as increasingly better LLMs) on a subscription. It is eerily similar to stealing the labor of many and keeping the fruits for those who stole it.

Increasingly I feel like the more I share in the open, the more that will train a future LLM that will eventually get used, not just by evil monopolizing corporations, but also by governments or military against me in a direct or indirect form.

mtset | 11 hours ago

Thanks for putting this into words.

Irene | 8 hours ago

As an ex-Googler with a privacy focus who's paid close attention to this concern since long before ML got to its current level of plausible output... I don't think companies are actually very good at using the data available to them to generate product ideas. They react a lot more than they act.

So I don't share this author's concern about megacorps figuring out that an idea "is pregnant" because I simply don't think the people who would do that are good enough at asking the kinds of questions that would lead to such insights.

The thing about critique of capital being subsumed into capital, though, that's a real and very serious problem. That doesn't come from these new technologies; it's always been the case. Think about how rock music was, when it was new, a direct challenge to the status quo... now try to name a mainstream song from the last 30 years that is.

[OP] refaktor | 25 minutes ago

Thanks for the insight from within these internet titans. I agree we are not there yet, technology- and people-wise, but I was exploring where this could go within the theme of "the dark forest", and I think some interesting ideas did come out.

One light counterpoint, maybe not that well (in)formed right now. AI companies have huge valuations. I don't think those valuations make sense if they were to always stay just "coding tools". I suspect they pitch something more like: software eats the world (past) -> AI eats software (present) -> AI eats the world. So they will have to "take it all" to make sense, and this is one view into that.

Another, more practical one. I agree big corporations aren't nimble and flexible enough, but I could imagine a bigger company funding an (external?) team whose sole purpose would be to comb through their logs and generate solutions out of them. Having access to all the conversations is huge, I think, so a lot is possible to do with it. And partial or full solutions are already prompted / generated / debugged in there somewhere.

ashwinsundar | 16 hours ago

By this logic though, shouldn't it be easier than ever to snipe the big companies shamelessly broadcasting their signals across the universe? A solo developer is more empowered than ever to disrupt the status quo, all the while maintaining a very minimal broadcast signal that is virtually undetectable.

[OP] refaktor | 16 hours ago

there is also this dynamic, yes.

jkaye | 14 hours ago

A piece of feedback for the author: give people a reason to care about what you write very, very early in your writing. Because you submitted this here yourself, I'm assuming you want other people to read it.

Two paragraphs in and I have no idea what this is about - I am always going to close the page. Maybe it's awesome but I will never know.

skycam | 14 hours ago

On the contrary, I'm perfectly happy reading a piece like this that doesn't start with a complete summary of itself or some grandiose, exaggerated claim. I'm usually only here on lobsters when I've got time to kill, and personally I don't mind trusting the author to deliver their point in their own style by the end (especially for a piece as short as this one). This isn't a criticism of your reading style (not everybody has the time I do, or would choose to dedicate it the way I do); it's just a reminder that other reading styles exist and the author might not be catering to yours specifically.

jkaye | 10 hours ago

I'm not asking for a complete summary or a grandiose claim. I'm looking for literally any indication of "the point". If there is no point, sure, I agree that different people may engage differently. But usually there is one. Don't bury the lede.

To be sure, I would not have made this comment on a post not submitted by the author, BTW. It's no commentary on the writing itself. If you're sharing your writing somewhere, you're doing it for a reason.

[OP] refaktor | 14 hours ago

I'm not trying to be smart or ironic, I'm really asking: besides the title, subtitle, and text, I'm not sure how I should do that in this case. Please provide more info.

carlomonte | 14 hours ago

the text is ok, but the reader needs to know the reference (Liu Cixin). i don't think that a long, deep dive into the dark forest concept would improve the text; it would probably make it a case of TL;DR.

bjeanes | 12 hours ago

I disagree that they need to know the reference. The key idea being referenced in those works is mentioned in this text directly. The reference serves more as a citation for the analogy. It's just a way to introduce a kind of alternate "Great Filter Hypothesis" / Fermi Paradox and then apply that idea to ideas instead of civilisations.

The Three-Body Problem series as a whole is not a load-bearing concept in this piece, by any means.

jkaye | 10 hours ago

The point of my comment is that there's very obviously important context missing. So I have to disagree with you.

bjeanes | 7 hours ago

What important context is missing? I haven't read the Dark Forest and I don't feel like I missed anything.

jkaye | 9 hours ago

Well, I honestly can't really help you since I don't know the point. Maybe the other thread is the answer - I can't be sure.

dbushell | 3 hours ago

A piece of feedback for the author:

A piece of feedback for the author: ignore this comment. Bro read 25 words and gave up — that's entirely his problem.

Not everything written online has to be dumbed down for the tiktok attention economy.

Student | 13 hours ago

This really presupposes that execution consists solely of writing code.

Irene | 8 hours ago

Yes, excellent point. That ... isn't really what seems to make most startups fail, you know?

[OP] refaktor | 11 hours ago

It's a thought experiment so it presupposes A LOT. You are correct, execution is not just writing code, and even when it is, it's not this simple.

So I won't repeat myself too much: https://lobste.rs/s/bpotqb/cognitive_dark_forest#c_f4lwum

This is great as a think piece and as the theme of a science fiction novel. I'd read it!

But I'm going to say it again:

If whole projects can get one-prompted or agent-teamed it becomes just the money game.

Today, this only works for projects that have been done a hundred times before, and I don't see that changing.

For anything innovative, there will by definition not be enough data for an LLM to work on. And bringing an innovative project to life is not just a matter of coding it once and being done. It often takes piling small innovation on small innovation for years until it clicks. None of these innovations can be one-shot prompted and for each, if you use an LLM as a tool, you have to own or develop the expertise yourself in order to guide it.

In short, I believe that for innovative ideas, execution expertise is still as much a bottleneck as always.

Obviously, if you open-source your innovation, an LLM might be able to reproduce it, just like a human could. But if you choose to keep your innovation closed-source, I don't see any way an LLM could produce a high-quality rip-off without extensive involvement of human expertise.

carlomonte | 13 hours ago

Comment removed by author

laqq3 | 7 hours ago

May I ask if this piece was written with AI assistance? (Not trying to pick an argument or be judgemental, genuinely curious, if only to train my personal "AI detector".)

These snippets made me think so.

But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.

You think of something new and express it - (...) - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.

Resistance isn’t suppressed. It’s absorbed.

[OP] refaktor | 2 hours ago

I use LLMs to find typos/misspellings, and sometimes to find more natural-sounding phrases, because I'm not a native English speaker. The text you quoted is, I'm 99% certain, all mine. On HN somebody said that the "thinkpad" section is llmized, but that is all mine too.

A few times it proposed different words, but I kept mine, because I felt the proposed ones were too general, even if mine were a little odd. One thing I know I took from the LLM, and don't like now, is the phrase "That's the condition." I was searching for something like "that's just how it is now / this is the reality" ("tako pač je / tukaj pač smo" in Slovene).