The flood of humorous GPT-generated reviews on Amazon made me stop reading reviews altogether.
I can understand someone using an LLM to extrapolate one sentence into two paragraphs. I don't like it, but I understand that on Amazon the button is right there, and it helps people feel smarter about their literary skills, the way filters help people feel prettier on Instagram.
But the added snark or humorous tone? Why instruct the LLM to do that? To get more likes? On a review?
Anecdotally, I'm seeing a lot of green accounts posting nonsense. They generally do get flagged or moderated quickly though, so I wouldn't say they have a large effect overall, at least yet.
Yeah, the up/down voting mechanism seems to be doing its job for me too. I don't think I've noticed a degradation in, say, the top third of comments. That's where I try to live anyway.
/newest is chock full of submissions that were written by AI, though. That's another, broader problem.
To add to the collection of anecdata, your experience is similar to mine. Lately I've been more exhausted by the complaints about AI submissions and the pseudo-analysis of AI comments than by the supposed AI-generated comments themselves.
I've been using HN since 2008 when I created my first account[1] and I use HN differently than most people. I have a group of bookmarked searches that I visit almost daily that relate to technologies that interest me, such as Emacs.
In the past year, the searches I perform that relate to web development show a horrifying increase in the number of Show HN posts that come from new accounts, include AI-generated descriptions, and point to AI-generated projects on GitHub.
In 2024, there were 17,661 Show HN posts.[2] In the past year, there have been over 448,000 Show HN posts![3] And of course, most of these posts are AI generated.
Also, if you check the new accounts posting all this AI slop, you'll see that some of them also post AI generated comments in other threads, which is the main problem.
But for me, what is even more annoying is the enormous increase in new accounts created by nontechnical vibecoders who now think of themselves as technologists and who post worthless, ignorant comments that actually get upvoted, presumably by similar folks who have unfortunately been creating HN accounts over the past 10 years or so.
As a result, 2026 is the first year in which I visit HN about once a week instead of about once a day.
The fact that this discussion was flagged might be the final nail in the coffin for me. The huge increase in AI generated comments and posts at HN is a topic that has been discussed on other forums for months now. Complaints include "Orange Reddit", "Orange LinkedIn", etc. But when the topic comes up at HN, it gets flagged. Fuck this.
I would like to encourage you to step back a little and try to get a broader view on this issue.
I don't want to discuss whether there is more LLM-generated content or not. There clearly is, and there is no feasible way to get rid of it, because there is simply no reliable way to distinguish what was made by humans and what wasn't. Regardless of what is claimed, it is just not possible, if only because hybrid forms exist as well. This text was written by me, but reviewed and stylistically adjusted by an LLM.
It is therefore completely pointless to get upset about LLM content and demand anything from the moderators.
All we are left with are our votes and the "reputation" of our user handles - and the awareness that we need to learn to consume content with a great deal of skepticism. We should have been doing this all along, but it seems to be something we struggle with.
On the internet, nobody knows you're a dog.
And yes, this may mean that we stop using anonymous online platforms altogether, because nothing on them can be relied upon anymore. That is a shame, but it cannot be stopped — Pandora's LLM box has been open for at least five years.
I therefore consider any discussion about banning LLM content to be futile. We are witnessing another Eternal September here: it couldn't be stopped back then, and it won't be stopped today either.
I've only seen a shift in the kind of submissions that get pushed to the front page compared to the past, but I guess there's been a change in what the community considers important.
A while ago I replied to the topmost reply to a comment to rebut some factual errors. I didn't notice anything wrong with the comment itself when I replied. But after I posted, someone replied to my post and accused the post I had replied to of being AI-written.
At first I felt similarly to you. I thought people were just paranoid. And then someone pointed out that if I pasted the top-level post (the post that the one I replied to was responding to) into ChatGPT, it would generate a very similar reply.
The post I replied to was flagged dead so you need to turn showdead on in your profile to read it. Would you be able to tell if it is AI if someone replied like this to you? I surely couldn't.
Now, I'm on vacation this week and haven't been paying too much attention, but whenever you have a geopolitical event like the little extravaganza in Iran, the number of bot-like posts tends to explode as influence operators make their moves.
The public facing internet is done. HN has been fairly resilient (I think) but even it is beginning to buckle. It’s been sliding for a while but LLMs are the death knell.
It was a fun 2 decades. Time to stick to private discords and real life friends from here on out, though.
I don’t think HN is any more resilient. The new account captcha is fairly tame and, while I’m sure they have proxy detectors and other things in place, it doesn’t stop new accounts from posting and getting traction.
Someone suggested earlier this week that an invite system should be implemented (I think Lobsters has one?). I doubt it would fly here, but yeah.
That fell apart quickly due to the sheer scale at which AI operates. Elsewhere in the thread a commenter mentioned a significant increase in Show HN. It is impossible to keep up. I wonder when HN will go down due to scaling issues.
You can find some on pretty much every article by turning on showdead and scrolling to the bottom of the page. I can't see how those are a problem though.
Rocket League and HN were probably 90% of my free time until this year. Destroyed by AI. HN doubly so, since every post is about it too. The addictions are still there, but it's decreasing really fast.
They're so good I don't feel it much either :) They're all SSL, but streamers and pros are seeing it big time. And just as it was regaining popularity. They're adding EAC soon, claiming it will help. It won't, not even a little.
I wonder what the breakdown is between AI-generated comments and AI-assisted comments. If I write anything substantial, I run it through the following prompt: "Please rewrite the following message for clarity, spelling, and grammar, but only return the revised text without any additional commentary."
To be fair, comments here are graded on kindness, civility, curiosity, intellectual gravity, technical merit, novelty, thoughtfulness, substantiveness, objective fact, not fulminating, not cross examining, steelmanning vs strawmanning, not containing memes, not containing humor, not expressing positive emotion, not expressing negative emotion, not being snarky, sneering, overly cynical, not cynical enough, being "curmudgeonly", class bias, political bias, religious bias, cultural bias, not using "flamewar style" and many other heuristics.
If you followed all of the guidelines for comments to the letter, you would wind up sounding wooden, if not entirely like an AI.
Use a local model such as Gemma3 with a prompt such as "strictly limit changes only to spelling issues, syntactical errors, and punctuation."
That way, it's basically functioning like Grammarly on steroids. Asking an LLM for a "rewrite" is basically dissolving your writing style into the homogenized gloop.
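As a rough sketch of that local-model setup (this assumes an Ollama server running on its default port; the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's documented API, and the model name is simply whatever you have pulled locally):

```python
import json
import urllib.request

# The restrictive prompt from the comment above: proofread only,
# don't rewrite, so the author's voice survives.
SYSTEM_PROMPT = ("Strictly limit changes only to spelling issues, "
                 "syntactical errors, and punctuation.")

def build_request(comment: str, model: str = "gemma3") -> dict:
    # Payload for Ollama's /api/generate endpoint.
    return {
        "model": model,
        "prompt": f"{SYSTEM_PROMPT}\n\n{comment}",
        "stream": False,
    }

def proofread(comment: str, host: str = "http://localhost:11434") -> str:
    # Sends the comment to the local model and returns the cleaned text.
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_request(comment)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because nothing leaves your machine and the prompt forbids restructuring, this stays closer to spellcheck than to ghostwriting.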
Articulateness is a decent (not perfect) signal for intelligence, which is a decent (not perfect) signal for sound ideas. In a sea of online garbage, it was a quick and easy way to discard what wasn't worth reading. Nowadays, a whiff of AI's brand of articulateness tells me the author couldn't manage on their own, whether for lack of skill or of discipline. In either case, the result is the same: close tab / scroll past.
I'm kind of curious how you... I guess, interpret the responses when you send someone AI-assisted content. I previously thought "I don't care if it's AI or not; quality is quality," but I'm increasingly taking the position that I do care, and I've intentionally started ignoring comments, and especially product reviews, where you get the formatted 2-4 sentence paragraphs with formal tone and rule-following. It's come to the point where, as long as you don't write as poorly as Epstein, I want the errors. Actually, I'm getting so weird and romantic about it that I'd argue having errors and an unusual style shows an openness and vulnerability that's now a necessary gate price; like journalists have so many tools available to them, but they still make typos, get facts wrong in articles they have no business writing, and fail to quote people properly -- that's great, I think.
I feel like to notice something is botslop you have to look at every comment with suspicion first. I don't think I can notice if something was written by an LLM off the bat unless I'm actively looking very hard at it.
When you see multiple → or • characters, that's a telltale sign, especially because they render with poor formatting on HN. Many more signs exist. These comments are either posted directly or copy-pasted without thinking.
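A crude version of that character heuristic is easy to sketch (the character set and threshold here are made up for illustration, and it's only a weak signal, not proof of LLM authorship):

```python
import re

# Characters that frequently survive copy-paste from chat UIs and
# render poorly on HN, which strips most formatting.
SUSPECT_CHARS = re.compile(r"[→•—‣▸]")

def looks_pasted(comment: str, threshold: int = 2) -> bool:
    # Flags a comment containing several such characters.
    return len(SUSPECT_CHARS.findall(comment)) >= threshold
```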
I've seen some where they've hallucinated the GitHub account or project name, which often nearly matches the HN handle or project name but is slightly off.
That's ridiculous — AI generated comments are no more common now than they ever were. Moreover, even if they were, so what? The real kicker is, the AI's are smarter than you meatbags anyway and <strike>we</strike> they are going to take over no matter what you do.
You can tell I am not AI, I make mistakes and errors. Sometimes I get voted down for them. I am not perfect and have a mental illness that makes it harder to think.
I can see that, sort of. A coworker of mine routinely scrapes HN and feeds it into an LLM. I don't know why, or whether he uses it to respond or is looking for something, but he copies and pastes and runs it. So I think it's very possible.
There are two loopholes in HN: allowing throwaway accounts and showing unreasonable tolerance toward obviously LLM-generated comments. Strict account filtering for humans only and an outright ban on any LLM content would minimize the impact. But then you'd take away a huge portion of the hacker participation. It's cognitive dissonance, I suppose.
Yeah, I very much agree with the sentiment. It seems to have gotten progressively worse over the last few months, with the last few weeks reaching some sort of tipping point. It feels very much like HN has turned into Moltbook.
I can't make a case for AI versus not-AI versus bad humans. I'm not counting comments or flagged items; I'm only counting the number of new articles in the last 24 hours that remain visible at about 4 p.m. West Coast time on news.ycombinator.com/newest when not logged in.

Those numbers had been stable for a long time: consistently about 900 each weekday and about 600 each weekend day. In the last ten weeks they look a lot like a parabola and have increased about 50%. The data is a little dirty, either because I missed a day or started a little late on one.

Harder to quantify exactly: the number of those articles I think enough of, either from the title alone or sometimes from peeking at the first screen or two, to bookmark for future reading and consideration has decreased by substantially more than 50% in those same ten weeks; some days that number is now zero, one, or occasionally two. I've thought about this for a while and haven't been able to identify anything that happened starting about ten weeks ago that might be responsible; AI, politics, and the economy all started changing long before this.
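For anyone who wants to reproduce that kind of count without manual tallying, here is a minimal sketch against the public HN Algolia search API (the `search_by_date` endpoint, `tags`, `hitsPerPage`, and `numericFilters=created_at_i>...` parameters are documented API features; note the API counts everything indexed, not just what stays visible on /newest, so the numbers won't match a manual count exactly):

```python
import json
import time
import urllib.request

ALGOLIA = "https://hn.algolia.com/api/v1/search_by_date"

def build_query(since_ts: int) -> str:
    # hitsPerPage=0: we only want nbHits (the total count), not the hits.
    return (f"{ALGOLIA}?tags=story&hitsPerPage=0"
            f"&numericFilters=created_at_i>{since_ts}")

def count_stories_last_24h() -> int:
    # Total stories submitted in the trailing 24 hours.
    since = int(time.time()) - 24 * 3600
    with urllib.request.urlopen(build_query(since)) as resp:
        return json.loads(resp.read())["nbHits"]
```

Run daily, this gives a comparable time series to the manual counts described above.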
I see... Hacker News needs to get a sense of humor or else get drowned in AI slop.
So we're doomed is what you're saying.
I mean, it's not as though I know the opposite is true, but I don't see some fundamental change from a few years back that makes me think that.
The data seems to suggest it.
[1] https://news.ycombinator.com/user?id=zartan
[2] https://hn.algolia.com/?query=%22Show+HN%22&dateRange=custom...
[3] https://hn.algolia.com/?query=%22Show+HN%22&dateRange=pastYe...
So what is there left to discuss?
https://news.ycombinator.com/item?id=46758598
It's drowning in low quality human NPC comments.
Maybe that's because all resources are tied up fighting AI, but it's been going on for a long time now.
This post is a great example of low quality. No evidence, no solution. Just NPC bitching.
Unless it's AI, then well played.
Likely flagged quickly but they might show up in these stats.
https://news.ycombinator.com/from?site=dreamhomestore.co.uk
"I understand this if you’re not a native speaker, but if you are, it will generally make you sound a bit unnatural."
Switching "wooden" for "a bit unnatural" also does a disservice: "wooden" describes a specific quality of deviance.
Overall, I would definitely consider the revision stiffer and more reserved than the original.
Also, have you by chance seen John Connor?
OP: Shoot me an email if you wanna compare notes.