You may dislike it, but I appreciate it. I do not want to waste my time reading generated content. And, frankly, I do not wish to engage with "assisted" writing, because when people defer their writing to these tools, they lose themselves -- and I read for the perspective and style of the writers, not for the content alone. This also matters as a discussion point: why would how the article was written be irrelevant? It gives greater insight into the point the author is making, and into how the author has engaged with that work.
Having an early signal which helps me determine how to engage with the content appropriately is valuable to me. If you don't appreciate it, the [-] button is there for you, just as the hide button is there for me.
That's the biggest problem: if something is 17 paragraphs of slop, and it only dawns on you that it's slop around paragraph 10, you've just wasted a non-trivial amount of time reading something of very little value.
A heads-up in that case would have been nice.
Yes, I do so wish the moderators would add "slop" as a flag reason. It doesn't even have to do anything. Just being able to see "two hundred users flagged this as slop" would save me much time.
A moderator has communicated to me that the "spam" flag should be used for this, but I always do so reluctantly. It's not quite spam. It is content written from a place of no intention.
I get not liking LLM generated content, but there’s a spectrum of LLM use from “find and write the whole thing” to “I don’t speak natively, can you translate / clean up my English”. More importantly, we don’t need the top dozen comments on every article debating whether it was written by an LLM, and I’ve also seen several posts that the author insists they wrote by hand (and I have no reason to disbelieve them) and still people were accusing them of using an LLM. If the comments can’t even reliably identify LLM content then they’re just noise—at best they are as bad as AI slop articles.
I strongly prefer to read obviously-ESL text with foreign language grammatical constructions and everything over reading text that has been translated from that into "standard" English by an LLM.
Seconded, with the caveat that I am still sympathetic to seeing ESL folks use these tools. There are still many people who take it upon themselves to evangelize "good" writing and berate people for grammatical/spelling/style issues at every opportunity. I have sympathy for people using these tools to avoid that, but I still want to know when they are used so I can at least manage expectations and look for points where translation has failed. I also do not tolerate this as an excuse to avoid writing, which I have experienced IRL and online.
I am sympathetic to why they would want to use them. I still have negative interest in reading their output if they use these tools.
"I don’t speak natively, can you translate / clean up my English"
Cool, I would prefer to know this anyway. This helps me know when it's the case.
we don’t need the top dozen comments on every article debating whether it was written by an LLM
I agree! This technology should be more identifiable and people more honest about their usage.
I’ve also seen several posts that the author insists they wrote by hand (and I have no reason to disbelieve them)...
If the comments can’t even reliably identify LLM content then they’re just noise—at best they are as bad as AI slop articles.
This is hard to reconcile. While I would like to give the benefit of the doubt, there are simply too many people outright lying about their usage of these tools as well; for one, it has become shameful, and two, using these tools to generate attention bait has become an especially prominent pattern. On Lobsters, at least, I have seen very few articles mislabelled as slop, and generally, the ones that are get quickly contested with supporting evidence, such as existing samples of the author's writing style. These comments are strong signals, and the subsequent discussions are consistently more reliable. I especially disagree with equating them with LLMs; their error rate is significantly lower, and their faults far more bearable.
I am writing in English and translating to French. In the past, translation was done with nothing, then DeepL, and now LLMs. For copyediting the English version, I was first using nothing, then a spellchecker, then Grammarly (which does more than grammar), then LLMs with a prompt to minimize edits. I plan to write a bit about that, but in the meantime, I disclose LLM usage in the footer with an icon that can be hovered over to get the exact tool used: a brain if there was no AI or just a grammar check, sparkles if there was light copy-editing, and a bot when there is a translation.
My point is that I consider myself "not using AI for writing content", but I use AI to fix content. The translation to French using LLMs is a convenience over DeepL because LLMs can keep Markdown. Since I am French-native, I can validate/edit the translation as a last step.
Why would you intentionally make your stuff read like slop? How is that fixing anything?
Hold on, I dislike purely generated text as much as anyone else. But there is a difference between having an LLM fix a few small style or grammar issues and having an LLM rewrite your entire text. The latter will land you in slop territory; the former will likely not be noticed by anyone if you got the prompt right.
This is also true with translations. For example, you can just say "translate this text to French" and you will likely get something that reads like slop because the LLM will take a lot of liberties. You can also say something along the lines of "translate this text to French. Keep the structure of the translation as close as possible to the original English text".
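A minimal sketch of the two prompting styles, assuming the OpenAI Python SDK purely as a stand-in for whatever chat client is in use (the model name and exact prompt wording are illustrative, not anything the commenters prescribed):

```python
# Sketch of the loose vs. constrained translation prompts discussed above.
# Any chat-completion client would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LOOSE = "Translate this text to French.\n\n{text}"
CONSTRAINED = (
    "Translate this text to French. Keep the structure of the translation "
    "as close as possible to the original English text, and preserve all "
    "Markdown formatting.\n\n{text}"
)

def translate(text: str, prompt: str = CONSTRAINED) -> str:
    # The loose prompt lets the model take liberties; the constrained one
    # pins it to the source structure.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt.format(text=text)}],
    )
    return resp.choices[0].message.content
```

The constrained prompt keeps the output diffable against the source, which is what lets a native speaker validate it as a last step.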
The person you replied to also explicitly says they are a native French speaker, which implies they go back in and make sure it is translated the way they want.
I have done the same in the past as well, though not for articles, rather for comments. I am active here on lobste.rs but also on a tech-related website in a different language that I speak natively. Sometimes I find myself having written a comment in one language and encountering a discussion in a different language where I want to say basically the same thing. In the past I'd have translated the entire thing by hand; if it is short I still do. These days I either put it through DeepL or an LLM and then go through and clean up everything I wouldn't have said in the same way.
That isn't slop in my opinion, that is simply using LLMs as a tool of conversion rather than having them generate content for me.
Edit:
I got carried away on the actual example. But the tl;dr is that I do take issue with people labeling any LLM usage as slop. Such binary thinking is not productive imho.
The person you replied to also explicitly says they are a native French speaker, which implies they go back in and make sure it is translated the way they want.
I'm a native Swedish speaker. I would rather read the original English than a sloppy translation to Swedish, even if the author was also a Swede and pinky promises to have proofread the translation.
That isn't slop in my opinion, that is simply using LLMs as a tool of conversion rather than having them generate content for me.
I'd be careful about dismissing their ensloppifying Midas touch that lightly.
I'm a native Swedish speaker. I would rather read the original English than a sloppy translation to Swedish, even if the author was also a Swede and pinky promises to have proofread the translation.
That's the thing though. If used with attention, you likely wouldn't notice, and I'd argue it might also matter less. Remember, AI slop refers to low-quality, mass-produced generated content. In my opinion if you have an original text, then there is no generation happening. If the conversion then holds up to human standards, I honestly don't see the issue.
You might have an issue with LLM usage outright. Which is also fair, but I'd still argue against labeling anything LLM touched as "slop". It confuses the conversation imho.
I'd be careful about dismissing their ensloppifying Midas touch that lightly.
I can guess what you are trying to imply here. But it would be a guess, so can you clarify it for me?
In my opinion if you have an original text, then there is no generation happening.
You're still generating text, you're just trying to convince the slop machine to pretty please try to resemble the source material. And it's still an unreliable process. To quote yourself:
For example, you can just say "translate this text to French" and you will likely get something that reads like slop because the LLM will take a lot of liberties.
That doesn't sound like a reliable process to me. I don't usually have to worry about magick convert producing an entirely unrelated image.
Which is also fair, but I'd still argue against labeling anything LLM touched as "slop". It confuses the conversation imho.
Yes, the slop industry and its usual stooges have been trying to push that line hard in the last year or so. Suffice to say, I disagree with the premise. (And it's amusing to see the same thing start to happen for their preferred terms, like "agentic engineering".)
I can guess what you are trying to imply here. But it would be a guess, so can you clarify it for me?
As far as I can tell, there appears to be no such thing as responsible ensloppification. Well, other than some coping mechanism that "at least I'm not as bad as those guys!".
I mean, if you are going to quote me, then quote the entire context, where I also explicitly include human editing after the fact.
Personally I think you are arguing from an entrenched position. You don't like LLM usage, period. Which I think is a fair position to hold. What I think is not doing you any favors is calling it all slop.
Simply because slop has a specific meaning, and you are just confusing any conversation about it and making your position less clear to others as well. Just call a spade a spade and say you disagree with LLM usage instead.
As far as I can tell, there appears to be no such thing as responsible ensloppification.
Yeah, see, I have no clue if you mean LLM usage in general, and whether that is from a technical perspective (LLMs still aren't perfect) or from a more fundamental perspective about their training data, energy usage, etc. When you insist on calling it "slop", to me that signals it is about the content. But then I'd expect to be able to have a conversation about it. Yet here we are, with you refusing to even consider that a possibility. In which case I think you have a more fundamental issue with LLM usage, which again is entirely fair, but then just call it that and don't do this angry, snarky lashing out where everything is slop.
I'm going to continue to use the commonly accepted definitions of words, rather than let simonw dictate my language.
I was using the quote to illustrate the nature of the "tool".
I mean, sure. I just feel that this one has a different commonly accepted definition and might cause some confusion. Also, I am honestly trying to get a better understanding of your position, and so far I have only received snark. I am honestly not sure what you feel you are getting out of this.
I don't. Fixing includes grammar, but also some constructs like removing passive voice (avoiding passive voice is not that much of a problem in French, but it's the number one improvement people say you should make in English). IMO, the editing stays very light, and it is a bit like what was done by Grammarly.
Just as people usually want to write good code, they want to write good English.
I tried Pangram on my texts; it says 100% human with high confidence. I mention it as this tool is mentioned indirectly elsewhere.
Although bear in mind that actual grammarians and linguists disagree with/mock the "passive voice" complaints that people normally make[0][1]
[0] https://languagelog.ldc.upenn.edu/nll/index.php?s=passive+voice
[1] https://archive.ph/ff2h8 (using archive.ph because lel.ed.ac.uk has an expired certificate) - section 3 is the good stuff
English is not my native language. My English will always have weirdness to it. And I like it that way. I wish people were not so insecure about their English skills. Just do your best.
I quite like the way you write.
This would go away if we can flag items as slop/spam, yes.
Yeah these comments are clearly working around a shortcoming of the flag system; spam is specifically defined as having a commercial element, making it inappropriate (if interpreted as defined) for many slop posts.
Banning the comments without fixing flags is pointless. Address the root cause rather than the symptoms.
I would also like a clear reminder when submitting that LLM authored content is disallowed.
The mods have indicated that spam is the right flag for this kind of content, at least as far as I can tell, and I definitely have noticed that other people are using "spam" as a flag for articles that are non-commercial in nature.
Disagree; they have not indicated this sufficiently clearly, as the about page clearly contradicts this.
Hmm. Maybe a PR for the about page is warranted, then. But I'd rather see the comments continue, personally.
I'm an old fart who doesn't seem to have as good an instinctive ability to spot 'slop' as others - possibly because slop just looks to me like human-generated marketing drivel from a few years ago, so I don't see it as a distinct thing. I do find the pages of discussion on the provenance of what to me just looks like a mediocre article quite irritating - even more so when there are useful facts buried in it.
Perhaps we could have a way of setting a personal numeric assessment of article quality or 'sloppiness' as a separate thing to the 'interesting' up/downvoting? Then we could average those, and possibly people could have an auto-hide threshold?
That's probably making lobste.rs more complicated than was originally intended, but maybe it's reasonable to do so to deal with the rather dystopian world we now find ourselves in.
possibly because slop just looks to me like human-generated marketing drivel from a few years ago
Honestly, that is likely because a large chunk of the training data is human-generated marketing drivel. That is something I also would have called out a few years ago, so the net result for me is the same tbh. In fact, LLM-generated text has finally gotten more people to notice how shallow a lot of articles and blog posts are, something I wish people had pointed out more a few years ago.
If the text reads like a marketing blurb or very "commercial/professional" and you are reading it on billybob's bear blog, it's likely generated.
It's a context thing: do you think this person would spend all that effort on search engine optimization, or is it just generated like similar texts?
I'm confused? You already can flag items as spam, and I do it regularly.
Currently the rules for "spam" are meant for mostly marketing stuff. https://lobste.rs/about#flags
There have been multiple threads with hundreds of replies each on this subject; the moderators here have indicated (1, 2) that they are not adding a new flag reason and that "spam" is the right flag to use for this kind of content (at least, that is my reading/interpretation of their responses).
This is not site policy; we haven't yet reached consensus on this point. The meaning of "spam" is socially determined, as everything else is. If people disagree with that position of mine, I hope you will talk to each other about that.
The very first comment you link says:
I disagree. Here is me talking about it. Commenting that something is slop is helpful to me, and I hope people keep doing it and don't mark it off-topic.
I would be sympathetic to this, being someone who actively advertises I will not use AI textual content in my writing, if I didn't think that there were some people setting an unusually low bar for what AI is or seeing AI content under every rock. I don't think those people are being disingenuous, but I do think there is a decent possibility they can be wrong. It's becoming a scarlet letter.
Of course, my objection is true as much for posts as it is for flags, though one might argue flagging a post has less community accountability.
I think flagging has less community accountability and less opportunity for learning. And I guess that second part might crystallize why I prefer a comment versus a flag here. The existing flags all have clear meanings that don't usually merit explanation, according to their definitions on /about:
Off-topic - not about computing
Already-posted - dupes or direct responses to something less than a week old
Broken link - 404, 5xx, paywall
Spam - promoting a commercial service
None of those carry a lot to explain. Slop detection is, IMO, such an "art" right now that it probably does merit an explanation if it's not clear commercial slop.
I agree with your observation about the scarlet letter, and don't like it at all.
for the record, I agree that comments have better accountability than flags. I'm glad to hear you see it that way, too.
When an account is repeatedly flagged for spamming, the user who is spamming is banned. How do we get this mechanism in place for slop? Will sufficient comments have a repeat violator banned?
I think you understood my comment exactly in the spirit I intended it. Thank you.
I had no idea, I wish they just would write the rules down.
Comments saying "this is LLM slop" have saved me multiple times, including on at least one story that I posted myself. I don't want to read slop and I definitely don't want to post slop.
I find comments alerting me of the fact that submissions are LLM-generated to be helpful, and use them as guidance to flag the submission as spam, and hide it.
(I don't take such comments as gospel, and do my own verification!)
I find such comments helpful as a reader, in a way that I wouldn't find a flag helpful. Especially when they point out why.
Given the definition for the "spam" flag on the about page:
For stories, these are: "Off-topic" for stories that are not about computing; "Already Posted" for duplicate submissions and links that elaborate on or responses to a thread that's less than a week old (see merging); and "Broken Link" for links that 404, 500, or present a paywall; "Spam" for links that promote a commercial service.
versus the behavior you're proposing (and that I've seen at least one of the moderators endorse before) I'd rather see the comments continue.
My desire to avoid slop is greater than my desire to avoid content marketing. I can contextualize marketing easily enough, and the flag gives me enough information about that. Until/unless there's a separate flag for slop, I'd like the comments (especially the ones with reasoning) to continue, please.
I'm personally using "spam" in a more inclusive manner than the one used on the About page. For me spam is not just what used to be defined as "UCE" (Unsolicited Commercial Email) but the sort of cut-and-paste screeds that infested certain Usenet groups, posted by obsessives or trolls to disrupt different threads.
Bad LLM generated content is indistinguishable from that kind of content.
I like this definition of spam. Instead of focusing on the intent or process of the author (which we can't know for certain), it focuses on the effects on the reader: is it disruptive or a waste of time? Then it's spam.
Like some other commenters, I disagree. LLM-generated articles are ubiquitous, and while it's possible to come up with counterexamples and hypotheticals, the vast majority of that writing represents no cognitive effort on the side of the creator. It's just seen as a hack to get traffic to your website by generating endless clickbait about hot-button topics (notably including "AI good" and "AI bad").
Since it's hard to tell at a glance, I appreciate the warnings. Those who don't mind engaging with machine-generated text just for the sake of it can ignore them. But those who'd rather use the internet to interact with human-generated content can nope out more quickly.
Because some people are in the first group, and because we get sidetracked by arguments that "you can't know for sure", flagging doesn't work reliably. Lobste.rs is nowhere near as afflicted by this as HN (I keep score), but we do get slop stories on the front page at least once a week.
I'm of two minds about this personally... the following are my personal thoughts, not the site's official position, and this is not any sort of considered opinion, I just wanted to share what I'm thinking in case that helps the discussion
your reason for wanting this remedy is a good one, and one that I sympathize with. it for sure must be incredibly demoralizing to have something you put effort into dismissed as if no human was even involved with it, especially when the dismissal is on superficial grounds (em-dashes or whatever). that would push me away really fast, too.
the remedy itself may not be the ideal one for that purpose. I would expect the effect of that intervention to be that we lose our biggest defense against human-free content. when we all set social norms for each other and are consistent about it, that scales in a way that top-down exercise of authority never can. so I'm really reluctant to discourage a social dynamic that's load-bearing in this moment, even when I see harm from it. if other things shifted such that it's no longer load-bearing, I'd want to revisit that.
are there other remedies that might help? I encourage anyone who has something that hasn't been mentioned here to suggest it
I of course encourage all of you to be careful about not throwing accusations around without high confidence. careless reactivity is super-destructive, no matter what one believes about the virtue of whatever cause it's being deployed towards. it's very difficult for a space to remain a community at all when people are just making spur-of-the-moment personal attacks on each other all the time.
it for sure must be incredibly demoralizing to have something you put effort into dismissed as if no human was even involved with it, especially when the dismissal is on superficial grounds (em-dashes or whatever). that would push me away really fast, too.
I strongly encourage everyone to explicitly disclose the use of generative text in their own publications, and its extent.
As a reader, unmarked generative text is a breach of trust, as it blurs the basic social contract of "I put effort into this writing that I'm proposing you read". It leaves me wondering whether someone who took a shortcut and accepted the inhumane, grating, low-information-density generated voice did proper diligence on the discourse itself. It detracts from the credibility of the entire article.
Disclosing the usage and the extent of the usage works towards restoring some of this trust, and gives me an early choice of moving on rather than having to warn others in comments. Sure, that might limit the range of your writing somewhat, but that's a sure way of not having anyone comment to disclose what you concealed.
Similarly, I encourage you to disclose with precision and conciseness when no generative tooling was used in your posts. It's a bit sad that this doesn't go without saying, but stating it will free the mind of your readers.
To be blunt, the inaction of Lobste.rs mods on this topic is encouraging its own social dynamic.
I want to acknowledge this criticism. As I've said in the past, it's been difficult to know what to do. We continue to discuss it, both with all of you and amongst ourselves. We take actions when we feel like we know they'll help.
Also, as a reminder, the last time I spoke about this I quite explicitly called for the community to have dialogue and strive for consensus. The dynamic we've been going through is roughly what I expected that would end up meaning in practice. I could wish it were more civil but, I mean, this was the plan.
are there other remedies that might help? I encourage anyone who has something that hasn't been mentioned here to suggest it
The single most productive change would be blocking submissions with both the show and vibecoding tags. The submitters are often subject to numerous personal attacks and it's not a good look for the site.
People will just not tag show submissions with vibecoding if we do that.
The issue already is (in part) that not enough things are tagged with vibecoding, so they're not caught by our filters and we end up complaining. This will probably only worsen the problem.
and what about text where someone asked the LLM to fix the grammar/spelling? and what about a well-thought-out argument that someone asked the LLM to translate from Spanish? and what about the well-thought-out argument that someone who is maybe dyslexic asked the LLM to write up from a list of bullet points?
please don't simplify something nuanced. you risk excluding interesting contributions. and no, i don't want to see slop devoid of any informational value here either, or worse, factually wrong hallucinations.
We've already rehashed the pros and cons of LLM-generated text to death. All I'm asking is to stop arguing about it in the comments, either flag as spam and move on, or don't.
All I'm asking is to stop arguing about it in the comments
Would comments about LLM usage be more palatable if we took the arguing out of them?
I personally love the matter-of-fact comments that are, "Hey, just FYI this looks like a low-effort AI post because of (this thing) and (that thing)." The heads-up comes with evidence, and I often learn something new. Comments for the win!
But when it turns to arguing, that's when it becomes rehashed and annoying, and I start wishing the commenter had tagged/flagged/hidden the story instead.
I honestly don't think this is viable. As soon as you post your polite, matter-of-fact comment, someone else will reply with a "who cares if it's LLM-written if the content is good", and then someone else will reply outlining all the reasons LLMs are evil. So your initial comment, written with the best of intentions, will nevertheless kick off an argument. There are multiple examples of this on the front page right now.
I agree, it's frankly exhausting to see discussions derailed into arguments regarding LLM generated text. I really don't understand why people can't just use the hide button and move on.
I really don't understand why people can't just use the hide button and move on.
Because I don't want other people to waste time reading AI slop either.
But then, once you leave a comment saying that something is slop, someone is bound to come out of the woodwork saying that AI is fine, this isn't slop, why are you saying this, and it turns into yet another pointless fucking discussion.
This site has a bunch of human-slop-peddlers who will use off-topic or spam for any post that they dislike. It’s frustrating. Previously: https://lobste.rs/s/irxjid/ai_vampire#c_d0zhhi
I totally agree. I often read a post and have no inkling that it's AI-generated, and then someone confidently says that it is. I could be wrong of course, but how do we know for sure if something is AI-generated?
I would think that a true slop article would be naturally moderated away because people won't upvote it.
i’m not convinced there is anything inherent about ai generated text that means people (or most people) will always catch it—and the endgame does seem to be that eventually, there may be very little way to distinguish based on the text alone. so is it “true slop” if an LLM wrote it but no one can tell?
when people talk about wanting perspective and insight from authors, i think there really is something there. if someone were to produce some hypothetical blog post with an LLM, with no human assistance besides a prompt, that just so happens to be coherent, factually accurate, interestingly written, etc: i would argue i am still not particularly interested in reading it. or at least not in the context of lobsters. maybe some kind of “LLM text encyclopedia”, where i know that that is the nature of what i’m getting, would be a reasonable platform to read that sort of content. but i don’t think it fits in a community that is borne from human thought and creativity.
i don’t think my comment here has much bearing on the post at hand, because i can’t actually say with confidence whether most of the people making “smells like LLM” comments are indeed accurate, but i am very sympathetic to the desire of some in the community to have confidence that we are reading something written by a person with a conscious mind.
I agree. It's at least as on-topic as comments about low text contrast or bright site backgrounds that make a post hard to read, which we do not tend to view as noise.
There are probably bloggers whose native language isn't English who use AI to improve the language of their blog posts. People more familiar with English may recognize such text as AI slop, even if it is just a kind of assisted writing.
Since English isn't my native language either, I don't notice when AI has been used in texts like these. For me, what matters most is how the core message comes across.
Sure, this meta thread isn't about why people do or don't use LLMs. All I want is to stop seeing the argument about "this was generated by a robot!" "no it wasn't" "yes it obviously was" in every single comment thread here (hyperbole applied)
I understand that. I just wanted to give some input on why some linked blog postings may look like AI slop, but are not. This is just something to consider before marking such a posting as spam.
My impression is that the most vocal people on lobste.rs are native English speakers. But there are also others around, like myself, who most often need a little more time to write a comment or reply in somewhat proper English. So I just try to make people aware that some submissions could be English texts/blogs written by someone with another mother tongue, who may have used technology (e.g. translation or AI) to help improve the language.
My recommendation is to just be more relaxed when reading a submission, not criticize the style, and only discuss the topic.
I understand where this is coming from, not from personal experience but secondhand. I am the native English speaker in a group of 30-odd researchers at my institution. I see that there is a perceived need to use them, because frankly there are many English speakers who believe grammatical failures are personal failures. This is a kindness issue, not an issue with your writing. If your balance ultimately leads you to use these tools for translations, then that is your choice; I cannot tell you not to use them if they meaningfully improve your experience on the internet. I still want to know that you're using them, because it does genuinely lower my confidence in what is written -- these tools being somewhat bad about injecting meaning or erasing nuance. If anything, this signals to me that I need to look for points where information may have been changed by accident; this has happened in the context of my work and also in prominent forums (see the GitHub comment chain of the recent Go fsnotify drama).
TL;DR: I recognise the rock and hard place of (1) feeling pressure to have better English/wanting to do the work of translating in advance, and (2) the shaming associated with using these tools. I don't believe that removing comments debating whether something is written by an LLM will change that, as ultimately this comes down to a human kindness problem.
I personally do not use AI at all. I occasionally use DeepL to translate a single word or seldom a whole sentence.
On the other hand, I have difficulty recognizing whether a blog posting I am reading was improved with the help of AI or not, especially if the content seems coherent to me and makes sense. So I may submit a link to a blog posting which I think is interesting and makes sense for the community to discuss. It is then very frustrating when this gets modded as spam because the non-native writer may have used AI to tidy up their English writing. I am now even more hesitant to submit a story.
To any commenters who feel threatened by this suggested behavior, please note that I will continue to upvote your comment if I feel you have informatively reported your determination that a submission is LLM slop.
Everything that sharpens my LLM detection skills is appreciated!
For me it goes even beyond just being an off-topic comment and straight into trollish or unkind territory. 90% of the time I see this comment, it feels less like the commenter has engaged with the content and more like they've just built themselves a way to detect neurodivergent writing and it's now gone off.
If people at least stuck to only posting it below content that is clearly just slop or shovelposts, I would be less opposed to them, but as it is right now, they make me, as a neurodivergent person with a style of posting that might not always match that of the normies, feel a tad unwelcome.
You'll have to expand. To me, LLM-generated content is as far away from neurodivergent as can be, as it's a statistical amalgamation of all the text on the internet, maybe specifically skewed towards the sort of content-free pablum that suggests press releases and ad copy.
There's an implicit assumption here that most people who comment that something is LLM-generated are neuronormative, which I'm not sure is true. I'll admit I don't really see the connection here either - do you have any particular examples in mind? Maybe this could help us tune our slop sensors or something; it does suck that you feel this way.
I'm sorry that makes you feel unwelcome. If we were to normalize an explicit statement about the use of LLMs (or lack thereof) at the start of articles, would you feel comfortable doing that?
LLM slop is categorically different from low-effort/quality content. Even low-effort content has human intention behind it, but LLM slop tends to be a meandering mental dump. It is engineered to look like text that will tell you something, but all you're left with are fizzling trains of thought and arguments that lead nowhere. It is almost engineered to waste your time in that sense.
And this is coming from someone that doesn't shy away from AI assistance during development, but it's simply rude to dump your LLM's slop onto other people. You have to review it, iterate on it, research its validity and make sure that in the end every single character has your intention behind it.
If you just dump LLM slop without this process, the result is so much worse than low-effort content, it's almost malicious.
I see LLM usage as a spectrum. On one side you have purely generated text with barely any human involvement; on the other end you have usage where an author might have used an LLM to clean up a sentence or two.
I very much appreciate being alerted to the former, the slop side of the spectrum, and things close to it. But I don't care about the latter. The issue I am seeing is, like with many things, that it is a spectrum and people disagree where the cutoff is. This also leads to things like false positives. Then I also find people who label anything with even a hint of LLM involvement as slop, which I think is a bit ridiculous.
Since I still would like to be alerted about LLM usage on the slopified part of the spectrum, my proposal isn't to flag these comments as off-topic, but I do ask that people posting these comments explain why. Because a bare "this is ai slop" comment is noise to me as well.
I actually "agree". Mostly because these comments do help stories gain more activity. (That is, even if they do not affect hotness/ranking, I browse /comments, and the "this was written by an LLM" comments overrepresent stories for me.)
However, I agree if and only if these comments can be replaced by flagging or we introduce new filtering measures.
Proposal A would be to extend the spam flag or add a new flag that is described in writing as "I think this story/comment does not have enough human effort".
Proposal B would be to add a quick button so I can give a variable-time block for a user (1 week, 1 month, 1 year, forever). Optionally, there would be a way for blocks to have effects beyond the blocking user. (For example, that user's stories/comments get a negative hotness modifier proportional to the logarithm of the number of users currently blocking them. Or even a public global ranking of top-blocked users.)
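One possible reading of that log-proportional modifier, as a sketch (the scaling constant `k` and the +1 guard are assumptions, not part of the proposal):

```python
import math

def adjusted_hotness(base_hotness: float, blockers: int, k: float = 0.5) -> float:
    # Penalize a story/comment in proportion to the logarithm of the
    # number of users currently blocking its author. `k` (the penalty
    # weight) and the +1 (so zero blockers means zero penalty) are
    # illustrative choices.
    return base_hotness - k * math.log(blockers + 1)
```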
My separate proposal would be to experiment a time-boxed no-LLM Lobsters.
For one month, absolutely 0 LLM mentions are tolerated. No stories that mention LLMs, regardless of positive/negative/neutral sentiment. No comments mentioning LLMs either: no "this was authored by an LLM", no "this would be a good use of an LLM", nothing. LLM-authored posts would be fine (unless they are explicit about that), and posts about LLM-assisted software are allowed too (if the post does not mention LLMs).
And then we run a survey at the end to see if this had any effect.
The steady stream of people trying to self-promote on Lobsters, and the effort required to moderate that, in my opinion demonstrates that Lobsters is seen as an influencing force in public tech discourse. Therefore, I think it's appropriate to run a short experiment to see what effect pushing back on LLM discourse (positive, negative, and neutral alike) could have.
I think having an "ai-written" tag for posts would be better. Something neutral-sounding that doesn't necessarily sound negative (unlike 'slop'), but allows folks to filter it out.
I don't mind slop. I actually prefer it to regular human-written texts in many ways. These comments are unfortunately typically deprecating and should go imo.
Now to expand a bit on my personal views why I love slop text:
I don't see the time reading a blob of prose as wasted; at most I find it a fun way to produce noise. If I'm in a hurry I'll just skim through regardless of slop or not slop.
I also find that with slop, ideas are in many ways easier to understand and are communicated in a hierarchy of related concepts that I'm very fond of. "It is not just X but also Y" is a much better flow than what most people think of as an alternative form.
Slop also allows me to adapt better to the new age where less is bore, so at worst I see it as prosaic practice.
I’m not trying to convince anyone, just sharing my personal point of view. People are not all the same.
I am serious, sorry if it came across as something else. To drive the point a bit: if I had written this out as LLM slop, the ambiguity would most likely have been lost, +1 for slop here. Hackers & Painters (& Poets).
Regarding the "I don't think this community is for you":
The destruction of the "AI" brand is amazing. We are witnessing it first-hand. Our future is shaping up to be driven by people who hate everything that AI does. Young people, people from places with datacenters, laid-off people, those of us who made a living from selling commodity goods that AI can now replace entirely (this includes a lot of us techies), etc. The hate is going to keep increasing. The direction we are going is one that considers AI as not being a "greater good". You talk about AI in a commencement speech and get booed.
And I don't think that's wrong. I can understand where it is coming from. We might need a lot more of this hate to get the trillionaires out of their New Zealand bunkers. Ok.
lobste.rs can be such a place and I can take a hit if that's so. We bards find randomness and noise particularly enlightening, but there is a reason there was no "& Poets" in Hackers & Painters.
addison | 17 hours ago
You may dislike it, but I appreciate it. I do not want to waste my time reading generated content. And, frankly, I do not wish to engage with "assisted" writing, because when people defer their writing to these tools, they lose themselves -- and I read for the perspective, and the style of the writers, not for the content alone. This is also important as a discussion point: why is it irrelevant as to how the article is written? It gives greater insight into the point made by the author, and how the author has engaged with that work.
Having an early signal which helps me determine how to engage with the content appropriately is valuable to me. If you don't appreciate it it, the [-] button is there for you, just as the hide button is there for me.
simonw | 11 hours ago
That's the biggest problem: if something is 17 paragraphs of slop, and it only dawns on you that it's slop around paragraph 10, you've just wasted a non-trivial amount of time reading something of very little value.
A heads-up in that case would have been nice.
apropos | 5 hours ago
Yes, I so do wish the moderators would add "slop" as a flag reason. It doesn't even have to do anything. Just being able to see "two hundred users flagged this as slop" would save me much time.
A moderator has communicated to me that the "spam" flag should be used for this, but I always do so reluctantly. It's not quite spam. It is content written from a place of no intention.
weberc2 | 10 hours ago
I get not like LLM generated content, but there’s a spectrum of LLM use from “find and write the whole thing” to “I don’t speak natively, can you translate / clean up my English”. More importantly, we don’t need the top dozen comments on every article debating whether it was written by an LLM, and I’ve also seen several posts that the author insists they wrote by hand (and I have no reason to disbelieve them) and still people were accusing them of using an LLM. If the comments can’t even reliably identify LLM content then they’re just noise—at best they are as bad as AI slop articles.
0x2ba22e11 | 8 hours ago
I strongly prefer to read obviously-ESL text with foreign language grammatical constructions and everything over reading text that has been translated from that into "standard" English by an LLM.
addison | 8 hours ago
Seconded, with the caveat that I am still sympathetic to seeing ESL folks use these tools. There are still many people who take it upon themselves to evangelize "good" writing and berate people for grammatical/spelling/style issues at every opportunity. I have sympathy for people using these tools to avoid that, but I still want to know when they are used so I can at least manage expectations and look for points where translation has failed. I also do not tolerate this as an excuse to avoid writing, which I have experienced IRL and online.
orib | 6 hours ago
I am sympathetic to why they would want to use them. I still have negative interest in reading their output if they use these tools.
addison | 9 hours ago
Cool, I would prefer to know this anyway. This helps me know when it's the case.
I agree! This technology should be more identifiable and people more honest about their usage.
This is hard to reconcile. While I would like to give the benefit of the doubt, there are simply too many people outright lying about their usage of these tools as well; it has become shameful, for one, and two, it has become especially prominent of a pattern to use these tools to generate attention bait. In lobsters, at least, I have seen very few articles mislabelled as slop, and generally, the ones that are are quickly contested with supporting evidence with existing samples of writing style or something to that effect. These comments are strong signals and the subsequent discussions consistently more reliable. I especially disagree with equivocating them with LLMs; their incorrectness rate is significantly lower, and their faults far more bearable.
vbernat | 4 hours ago
I am writing in English and translating to French. In the past, translation was done with nothing, then DeepL, and now LLMs. For copyediting the English version, I was first using nothing, then a spellchecker, then Grammarly (which does more than grammar), then LLMs with a prompt to minimize edits. I plan to write a bit about that, but in the meantime, I disclose LLM usage in the footer with an icon that can be hovered over to get the exact tool used: a brain if no AI or just grammar check, sparkles if there was light copy-editing and a bot when there is a translation.
My point is that I consider myself "not using AI for writing content", but I use AI to fix content. The translation to French using LLMs is a convenience over DeepL because LLMs can keep Markdown. Since I am French-native, I can validate/edit the translation as a last step.
natkr | 4 hours ago
Why would you intentionally make your stuff read like slop? How is that fixing anything?
creesch | 3 hours ago
Hold on, I dislike purely generated text as much as anyone else. But there is a difference between having a LLM fix a few small style or grammar issues and having a LLM rewrite your entire text. The latter will land you in slop territory the former will likely not be noticed by anyone if you did do the prompt right.
This is also true with translations. For example you can do just do "translate this text to French" and you likely will get something that reads like slop because the LLM will take a lot of liberties. You can also do something along the lines of "translate this text to French. Keep the structure of the translation as close as possible to the original English text".
The person you replied to also explicitly says the are a native French speaker, which implies they go back in and make sure it is translated the way they want to.
I have done the same in the past as well though not for articles, rather for comments. I am active here on lobster.rs but also a tech related website in a different language I am native in. Sometimes I find myself having written a comment in one language and encountering a discussion in a different language where I want to basically say the same thing. In the past I'd have translated the entire thing by hand, if it is short I still do. These days I either put it through deepL or an LLM and then go through and clean up everything I wouldn't have said in the same way.
That isn't slop in my opinion, that is simply using LLMs as a tool of conversion rather than having them generate content for me.
Edit:
I got carried away on the actual example. But the tl;dr is that I do take issue with people labeling any LLM usage as slop. Such binary thinking is not productive imho.
natkr | 3 hours ago
I'm a native swedish speaker. I would rather read the original english than a sloppy translation to swedish, even if the author was also a swede and pinky promises to have proofread the translation.
I'd be careful to dismiss their ensloppifying midas touch that lightly.
creesch | 2 hours ago
That's the thing though. If used with attention you'd likely wouldn't notice and I'd argue it might also matter less. Remember AI slop refers to low quality and mass produced generated content. In my opinion if you have an original text, then there is no generation happening. If the conversion then holds up to human standards I honestly don't see the issue.
You might have an issue with LLM usage outright. Which is also fair, but I'd still argue against labeling anything LLM touched as "slop". It confuses the conversation imho.
I can guess what you are trying to imply here. But it would be a guess, so can you clarify it for me?
natkr | 2 hours ago
You're still generating text, you're just trying to convince the slop machine to pretty please try to resemble the source material. And it's still an unreliable process. To quote yourself:
That doesn't sound like a reliable process to me. I don't usually have to worry about
magick convertproducing an entirely unrelated image.Yes, the slop industry and its usual stooges have been trying to push that line hard in the last year or so. Suffice to say, I disagree with the premise. (And it's amusing to see the same thing start to happen for their preferred terms, like "agentic engineering".)
As far as I can tell, there appears to be no such thing as responsible ensloppification. Well, other than some coping mechanism that "at least I'm not as bad as those guys!".
creesch | an hour ago
I mean, if you are going to quote me then do the entire context. Where I also explicitly include human editing after the fact.
Personally I think you are talking from an entrenched position. You don't like LLM usage, period. Which I think is a fair position to hold. What I think is not doing yourself favors is calling it all slop.
Simply because slop has a specific meaning and you are just confusing any conversation over it making your position less clear to others as well. Just call a spade a spade and say you disagree with LLM usage instead.
Yeah, see I have no clue if you mean LLM usage in general and if that is from a technical perspective as in that LLMs still aren't prefect or from a more fundamental perspective about their trainingdata, energy usage, etc. When you insist on calling it "slop", to me that signals it is about the content of it. But then I'd expect to be able to have a conversation about it. Yet here we are with you refusing to even consider that a possibility. In which case I think you have a more fundamental issue with LLM usage, which again is entirely fair, but then just call it that and don't just do this angry snarky lashing out where everything is slop.
natkr | an hour ago
I'm going to continue to use the commonly accepted definitions of words, rather than let simonw dictate my language.
I was using the quote to illustrate the nature of the "tool".
creesch | 57 minutes ago
I mean, sure. I just feel that this one has a different commonly accepted definition and might cause some confusion. Also, I am honestly trying to get a better understanding of your position and so far I have only received snark. I am honestly not sure what you are feel you are getting out of this.
vbernat | 2 hours ago
I don't. Fixing includes grammar, but also some constructs like removing passive voice (avoiding passive voice is not that a problem in French but it's the number one improvement people says you should do in English). IMO, the editing stays very light and it is a bit like what was done by Grammarly.
As people usually wants to write good code, they want to write good English.
I tried pangram on my texts, it says 100% human with high confidence. I mention it as this tool is mentioned indirectly elsewhere.
zimpenfish | 2 hours ago
Although bear in mind that actual grammarians and linguists disagree with/mock the "passive voice" complaints that people normally make[0][1]
[0] https://languagelog.ldc.upenn.edu/nll/index.php?s=passive+voice
[1] https://archive.ph/ff2h8 (using archive.ph because lel.ed.ac.uk has an expired certificate) - section 3 is the good stuff
Aks | 2 hours ago
English is not my native language. My english will always have weirdness to it. And I like it that way. I wish people were not so insecure about their english skills. Just do your best.
mtset | 18 minutes ago
I quite like the way you write.
Aks | 18 hours ago
This would go away if we can flag items as slop/spam, yes.
technomancy | 18 hours ago
Yeah these comments are clearly working around a shortcoming of the flag system; spam is specifically defined as having a commercial element, making it inappropriate (if interpreted as defined) for many slop posts.
Banning the comments without fixing flags is pointless. Address the root cause rather than the symptoms.
orib | 6 hours ago
I would also like a clear reminder when submitting that LLM authored content is disallowed.
[OP] drmorr | 17 hours ago
The mods have indicated that spam is the right flag for this kind of content, at least as far I can tell, and I definitely have noticed that other people are using "spam" as a flag for articles that are non-commercial in nature.
technomancy | 17 hours ago
Disagree; they have not indicated this sufficiently clearly as the about page clearly contradicts this.
hoistbypetard | 16 hours ago
Hmm. Maybe a PR for the about page is warranted, then. But I'd rather see the comments continue, personally.
srtcd424 | 14 hours ago
I'm an old fart who doesn't seem to have as good an instinctive ability to spot 'slop' as others - possibly because slop just looks to me like human-generated marketing drivel from a few years ago, so I don't see it as a distinct thing. I do find the pages of discussion on the provenance of what to me just looks like a mediocre article quite irritating - even more so when there's some use facts buried in it.
Perhaps we could have a way of setting a personal numeric assessment of article quality or 'sloppiness' as separate thing to the 'interesting' up/downvoting? Then we could average those and possibly people could have an auto-hide threshold?
That's probably making lobste.rs more complicated than was originally intended, but maybe it's reasonable to do so to deal with the rather dystopian world we now find ourselves in.
creesch | 3 hours ago
Honestly, likely because a large chunk of the training data is human generated marketing drivel. Something I also would call out a few years ago, so the net result for me is the same tbh. In fact, LLM generated text finally has gotten more people to notice how shallow a lot of the articles and blog posts are, something I wish people would have pointed out more a few years ago.
Aks | 2 hours ago
If the text reads like a marketing blurb or very "commercial/professional" and you are reading it on billybobs bear blog, its likely generated.
It's a context thing: do you think this person would spend all that effort search engine optimization, or is it just generated like similar texts?
[OP] drmorr | 18 hours ago
I'm confused? You already can flag items as spam, and I do it regularly.
Aks | 18 hours ago
Currently the rules for "spam" are meant for mostly marketing stuff. https://lobste.rs/about#flags
[OP] drmorr | 17 hours ago
There have been multiple threads with hundreds of replies each on this subject; the moderators here have indicated (1, 2) that they are not adding a new flag reason and that "spam" is the right flag to use for this kind of content. (at least that is my reading/interpretation of their responses).
hoistbypetard | 16 hours ago
The very first comment you link says:
I disagree. Here is me talking about it. Commenting that something is slop is helpful to me, and I hope people keep doing it and don't mark it off-topic.
classichasclass | 14 hours ago
I would be sympathetic to this, being someone who actively advertises I will not use AI textual content in my writing, if I didn't think that there were some people setting an unusually low bar for what AI is or seeing AI content under every rock. I don't think those people are being disingenuous, but I do think there is a decent possibility they can be wrong. It's becoming a scarlet letter.
Of course, my objection is true as much for posts as it is for flags, though one might argue flagging a post has less community accountability.
hoistbypetard | 14 hours ago
I think flagging has less community accountability and less opportunity for learning. And I guess that second part might crystalize why I prefer a comment versus a flag here. The existing flags all have clear meanings that don't merit explanation, usually, according to their definitions on /about:
None of those carry a lot to explain. Slop detection is, IMO, such an "art" right now that it probably does merit an explanation if it's not clear commercial slop.
I agree with your observation about the scarlet letter, and don't like it at all.
Irene | 11 hours ago
for the record, I agree that comments have better accountability than flags. I'm glad to hear you see it that way, too.
orib | 6 hours ago
When an account is repeatedly flagged for spamming, the user who is spamming is banned. How do we get this mechanism in place for slop? Will sufficient comments have a repeat violator banned?
hoistbypetard | 9 hours ago
I think you understood my comment exactly in the spirit I intended it. Thank you.
Aks | 17 hours ago
I had no idea, I wish they just would write the rules down.
gerikson | 17 hours ago
krig | 12 hours ago
Comments saying "this is LLM slop" has saved me multiple times, including on at least one story that I have posted myself. I don't want to read slop and I definitely don't want to post slop.
gerikson | 17 hours ago
I find comments alerting me of the fact that submissions are LLM-generated to be helpful, and use them as guidance to flag the submission as spam, and hide it.
(I don't take such comments as gospel, and do my own verification!)
hoistbypetard | 16 hours ago
I find such comments helpful as a reader, in a way that I wouldn't find a flag helpful. Especially when they point out why.
Given the definition for the "spam" flag on the about page:
versus the behavior you're proposing (and that I've seen at least one of the moderators endorse before) I'd rather see the comments continue.
My desire to avoid slop is greater than my desire to avoid content marketing. I can contextualize marketing easily enough, and the flag gives me enough information about that. Until/unless there's a separate flag for slop, I'd like the comments (especially the ones with reasoning) to continue, please.
gerikson | 16 hours ago
I'm personally using "spam" in a more inclusive manner than the one used on the About page. For me spam is not just what used to be defined as "UCE" (Unsolicited Commercial Email) but the sort of cut-and-paste screeds that infested certain Usenet groups, posted by obsessives or trolls to disrupt different threads.
Bad LLM generated content is indistinguishable from that kind of content.
bitshift | 10 hours ago
I like this definition of spam. Instead of focusing on the intent or process of the author (which we can't know for certain), it focuses on the effects on the reader: is it disruptive or a waste of time? Then it's spam.
lcamtuf | 7 hours ago
Like some other commenters, I disagree. LLM-generated articles are ubiquitous, and while it's possible to come up with counterexamples and hypotheticals, the vast majority of that writing represents no cognitive effort on the side of the creator. It's just seen as a hack to get traffic to your website by generating endless clickbait about hot-button topics (notably including "AI good" and "AI bad").
Since it's hard to tell at a glance, I appreciate the warnings. Those who don't mind engaging with machine-generated text just for the sake of it can ignore them. But those who'd rather use the internet to interact with human-generated content can nope out more quickly.
Because some people are in the first group, and because we get sidetracked by arguments that "you can't know for sure", flagging doesn't work reliably. Lobste.rs isn't nowhere near as afflicted by this as HN (I keep score), but we do get slop-stories on the front page at least once a week.
Irene | 11 hours ago
I'm of two minds about this personally... the following are my personal thoughts, not the site's official position, and this is not any sort of considered opinion; I just wanted to share what I'm thinking in case that helps the discussion
your reason for wanting this remedy is a good one, that I sympathize with. it for sure must be incredibly demoralizing to have something you put effort into dismissed as if no human was even involved with it, especially when the dismissal is on superficial grounds (em-dashes or whatever). that would push me away really fast, too.
the remedy itself may not be the ideal one for that purpose. I would expect the effect of that intervention to be that we lose our biggest defense against human-free content. when we all set social norms for each other and are consistent about it, that scales in a way that top-down exercise of authority never can. so I'm really reluctant to discourage a social dynamic that's load-bearing in this moment, even when I see harm from it. if other things shifted such that it's no longer load-bearing, I'd want to revisit that.
are there other remedies that might help? I encourage anyone who has something that hasn't been mentioned here to suggest it
I of course encourage all of you to be careful about not throwing accusations around without high confidence. careless reactivity is super-destructive, no matter what one believes about the virtue of whatever cause it's being deployed towards. it's very difficult for a space to remain a community at all when people are just making spur-of-the-moment personal attacks on each other all the time.
dureuill | 5 hours ago
I strongly encourage everyone to explicitly disclose the use of generative text in their own publications, and its extent.
As a reader, unmarked generative text is a breach of trust, as it blurs the basic social contract of "I put effort into this writing that I'm proposing you read". It leaves me wondering whether someone who took a shortcut and accepted the inhuman, grating, low-information-density generated voice did proper diligence on the discourse itself. It detracts from the credibility of the entire article.
Disclosing the usage and its extent works towards restoring some of this trust, and gives me an early choice to move on rather than having to warn others in comments. Sure, that might limit the range of your writing somewhat, but it's a sure way of not having anyone comment to disclose what you concealed.
Similarly, I encourage you to disclose, with precision and conciseness, when no generative tooling was used in your posts. It's a bit sad that this doesn't go without saying, but stating it will free the mind of your readers.
apropos | 5 hours ago
To be blunt, the inaction of Lobste.rs mods on this topic is encouraging its own social dynamic.
Irene | 4 hours ago
I want to acknowledge this criticism. As I've said in the past, it's been difficult to know what to do. We continue to discuss it, both with all of you and amongst ourselves. We take actions when we feel like we know they'll help.
Irene | 3 hours ago
Also, as a reminder, the last time I spoke about this I quite explicitly called for the community to have dialogue and strive for consensus. The dynamic we've been going through is roughly what I expected that would end up meaning in practice. I could wish it were more civil but, I mean, this was the plan.
dbremner | 10 hours ago
The single most productive change would be blocking submissions with both the show and vibecoding tags. The submitters are often subject to numerous personal attacks and it's not a good look for the site.
dzwdz | 3 hours ago
People will just not tag show submissions with vibecoding if we do that. The issue already is (in part) that not enough things are tagged with vibecoding, so they're not caught by our filters and we end up complaining. This will probably only worsen the problem.
intelfx | 5 hours ago
...yet your proposal is to block the submissions, not punish the perpetrators of the personal attacks?
gerikson | 3 hours ago
"Show" has always been spam-adjacent, even before LLMs.
Lobste.rs has a strong reaction against being used as a marketing channel.
kablamooo | 8 hours ago
It’s nice to know when I’m about to click on AI slop; it saves me from clicking on it and then getting my time wasted.
carlomonte | 17 hours ago
and what about text where someone asked the LLM to fix the grammar/spelling? and what about a well-thought-out argument that someone asked the LLM to translate from Spanish? and what about the well-thought-out argument that someone, maybe dyslexic, asked the LLM to write up from a list of bullet points?
please don't simplify something nuanced. you risk excluding interesting contributions. and no, i don't want to see slop devoid of any informational value here either, or worse, factually wrong hallucinations.
[OP] drmorr | 17 hours ago
We've already rehashed the pros and cons of LLM-generated text to death. All I'm asking is to stop arguing about it in the comments, either flag as spam and move on, or don't.
bitshift | 9 hours ago
Would comments about LLM usage be more palatable if we took the arguing out of them?
I personally love the matter-of-fact comments that are, "Hey, just FYI this looks like a low-effort AI post because of (this thing) and (that thing)." The heads-up comes with evidence, and I often learn something new. Comments for the win!
But when it turns to arguing, that's when it becomes rehashed and annoying, and I start wishing the commenter had tagged/flagged/hidden the story instead.
[OP] drmorr | 7 hours ago
I honestly don't think this is viable. As soon as you post your polite, matter-of-fact comment, someone else will reply with a "who cares if it's LLM-written if the content is good" and then someone else will reply outlining all the reasons LLMs are evil. So your initial comment, written with the best of intentions, will nevertheless kick off an argument. There are multiple examples of this on the front page right now.
dureuill | 5 hours ago
Imo the "who cares?" comment is the off-topic one that should be marked as spam, not the original, informative comment pointing out that something is slop
Yogthos | 13 hours ago
I agree, it's frankly exhausting to see discussions derailed into arguments regarding LLM generated text. I really don't understand why people can't just use the hide button and move on.
dzwdz | 10 hours ago
Because I don't want other people to waste time reading AI slop either.
But then, once you leave a comment saying that something is slop, someone is bound to come out of the woodwork saying that AI is fine, this isn't slop, why are you saying this, and it turns into yet another pointless fucking discussion.
orib | 6 hours ago
Yes, what if we roast marshmallows after we set the house on fire? Is it ok then? Marshmallows are tasty.
a5rocks | 11 hours ago
Tangentially from this proposal: why is this proposal being flagged as off-topic or spam? It doesn't meet either standard, I think?
(Yes, I know the about section specifically says this kind of meta-questioning is bad... but I am very curious if I'm missing anything)
hyperpape | 11 hours ago
This site has a bunch of human-slop-peddlers who will use off-topic or spam for any post that they dislike. It’s frustrating. Previously: https://lobste.rs/s/irxjid/ai_vampire#c_d0zhhi
amw-zero | 15 hours ago
I totally agree. I often read a post and have no inkling that it's AI-generated, and someone confidently says that it is. I could be wrong of course, but how do we know for sure if something is AI-generated?
I would think that a true slop article would be naturally moderated away because people won't upvote it.
sloane | 12 hours ago
i’m not convinced there is anything inherent about ai generated text that means people (or most people) will always catch it—and the endgame does seem to be that eventually, there may be very little way to distinguish based on the text alone. so is it “true slop” if an LLM wrote it but no one can tell?
when people talk about wanting perspective and insight from authors, i think there really is something there. if someone were to produce some hypothetical blog post with an LLM, with no human assistance besides a prompt, that just so happens to be coherent, factually accurate, interestingly written, etc: i would argue i am still not particularly interested in reading it. or at least not in the context of lobsters. maybe some kind of “LLM text encyclopedia”, where i know that that is the nature of what i’m getting, would be a reasonable platform to read that sort of content. but i don’t think it fits in a community that is borne from human thought and creativity.
i don’t think my comment here has much bearing on the post at hand, because i can’t actually say with confidence whether most of the people making “smells like LLM” comments are indeed accurate, but i am very sympathetic to the desire of some in the community to have confidence that we are reading something written by a person with a conscious mind.
kemitchell | 7 hours ago
Desirability aside, this seems like a potential user trap. "this is LLM slop" is on the topic of the post.
kevinc | 5 hours ago
I agree. It's at least as on-topic as comments about low text contrast or bright site backgrounds that make a post hard to read, which we do not tend to view as noise.
fab23 | 17 hours ago
There are probably bloggers whose native language isn't English who use AI to improve the language of their blog posts. People more familiar with English may recognize such text as AI slop, even if it is just a kind of assisted writing.
Since English isn't my native language either, I don't notice when AI has been used in texts like these. For me, what matters most is how the core message comes across.
[OP] drmorr | 17 hours ago
Sure, this meta thread isn't about why people do or don't use LLMs. All I want is to stop seeing the argument about "this was generated by a robot!" "no it wasn't" "yes it obviously was" in every single comment thread here (hyperbole applied)
fab23 | 17 hours ago
I understand that. I just wanted to give some input on why some linked blog postings may look like AI slop but are not. This is just something to consider before marking such a posting as spam.
[OP] drmorr | 17 hours ago
Wouldn't the same consideration need to be taken before leaving a comment that says "this is LLM slop"?
fab23 | 16 hours ago
Sure, yes.
My impression is that the most vocal people on lobste.rs are native English speakers. But there are also others around, like myself, who most often need a little more time to write a comment or reply in somewhat proper English. So I just try to make people aware that some submissions may be English texts/blogs written by someone with another mother tongue, who may have used technology (e.g. translation or AI) to help improve the language.
My recommendation is to just be more relaxed when reading a submission, not criticize the style, and only discuss the topic.
addison | 8 hours ago
I understand where this is coming from, not from personal experience but secondhand. I am the native English speaker in a group of 30-odd researchers at my institution. I see that there is a perceived need to use them because frankly there are many English speakers who believe grammatical failures are personal failures. This is a kindness issue, not an issue with your writing. If your balance ultimately leads you to use these tools for translations, then that is your choice; I cannot tell you not to use them if they meaningfully improve your experience on the internet. I still want to know that you're using them, because it does genuinely lower my confidence in what is written -- these tools are somewhat bad about injecting meaning or erasing nuance. If anything, it signals to me that I need to look for points where information may have been changed by accident; this has happened in the context of my work and also in prominent forums (see the GitHub comment chain of the recent Go fsnotify drama).
TL;DR: I recognise the rock and hard place of (1) feeling pressure to have better English/wanting to do the work of translating in advance, and (2) the shaming associated with using these tools. I don't believe that removing comments debating whether something is written by an LLM will change that, as ultimately this comes down to a human kindness problem.
fab23 | 2 hours ago
I personally do not use AI at all. I occasionally use DeepL to translate a single word or, more rarely, a whole sentence.
On the other hand, I have difficulty recognizing whether a blog posting I am reading was improved with the help of AI or not, especially if the content is coherent and makes sense to me. So I may submit a link to a blog posting which I think is interesting and makes sense for the community to discuss. It is then very frustrating when this gets modded as spam because the non-native writer may have used AI to tidy up their English writing. I am now even more hesitant to submit a story.
natkr | 3 hours ago
It sucks that people get taken in by the slop vendors' sales pitches but, no, ensloppifying your writing is not improving or "assisting" it.
gerikson | 3 hours ago
To any commenters who feel threatened by this suggested behavior, please note that I will continue to upvote your comment if I feel you have informatively reported your determination that a submission is LLM slop.
Everything that sharpens my LLM detection skills is appreciated!
cultpony | 10 hours ago
For me it goes even beyond just being an off-topic comment and straight into trollish or unkind territory. 90% of the time I see this comment, it feels less like the commenter has engaged with the content and more like they've just built themselves a way to detect neurodivergent writing and it's now gone off.
If people at least stuck to only posting it below content that is clearly just slop or shovelposts, I would be less opposed to them, but as it is right now, they make me, as a neurodivergent person with a style of posting that might not always match that of the normies, feel a tad unwelcome.
gerikson | 3 hours ago
You'll have to expand. To me, LLM-generated content is as far away from neurodivergent as can be, as it's a statistical amalgamation of all the text on the internet, maybe specifically skewed towards the sort of content-free pablum that suggests press releases and ad copy.
dzwdz | 3 hours ago
There's an implicit assumption here that most people who comment that something is LLM-generated are neuronormative, which I'm not sure is true. I'll admit I don't really see the connection either - do you have any particular examples in mind? Maybe this could help us tune our slop sensors or something; it does suck that you feel this way.
dureuill | 4 hours ago
I'm sorry that makes you feel unwelcome. If we were to normalize an explicit statement about the use of LLMs (or lack thereof) at the start of articles, would you feel comfortable doing that?
enobayram | 4 hours ago
LLM slop is categorically different from low-effort, low-quality content. Even low-effort content has human intention behind it, but LLM slop tends to be a meandering mental dump. It is engineered to look like text that will tell you something, but all you're left with are fizzling trains of thought and arguments that lead nowhere. It is almost engineered to waste your time in that sense.
And this is coming from someone that doesn't shy away from AI assistance during development, but it's simply rude to dump your LLM's slop onto other people. You have to review it, iterate on it, research its validity and make sure that in the end every single character has your intention behind it.
If you just dump LLM slop without this process, the result is so much worse than low-effort content, it's almost malicious.
creesch | 3 hours ago
I see LLM usage as a spectrum. On one end you have purely generated text with barely any human involvement; on the other end you have usage where an author might have used an LLM to clean up a sentence or two.
I very much appreciate being alerted to the former, the slop side of the spectrum, and things close to it. But I don't care about the latter. The issue I am seeing is that, like with many things, it is a spectrum and people disagree where the cutoff is. This also leads to things like false positives. Then I also find people who label anything with even a hint of LLM involvement as slop, which I think is a bit ridiculous.
Since I still would like to be alerted to LLM usage on the slopified part of the spectrum, my proposal isn't to flag these comments as off-topic, but to ask that people posting them explain why. Because a bare "this is ai slop" comment is noise to me as well.
koala | an hour ago
I actually "agree". Mostly because these comments do help stories gain more activity. (That is, even if they do not affect hotness/ranking, I browse /comments, and the "this was written by an LLM" comments overrepresent stories for me.)
However, I agree if and only if these comments can be replaced by flagging, or we introduce new filtering measures.
Proposal A would be to extend the spam flag or add a new flag that is described in writing as "I think this story/comment does not have enough human effort".
Proposal B would be to add a quick button so I can give a variable-time block for a user (1 week, 1 month, 1 year, forever). Optionally, there would be a way for blocks to have effects beyond the blocking user. (For example, that user's stories/comments get a negative hotness modifier proportional to the logarithm of the number of users currently blocking them. Or even a public global ranking of top-blocked users.)
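To sketch what that logarithmic modifier could look like (a minimal sketch only; the function and parameter names and the weight constant are hypothetical, and this is not the site's actual ranking code):

    import math

    # Sketch of Proposal B's optional global effect: penalize a story's
    # hotness in proportion to the logarithm of how many users currently
    # block its submitter. `weight` is a hypothetical tuning constant.
    def adjusted_hotness(base_hotness: float, blockers: int, weight: float = 0.5) -> float:
        if blockers <= 0:
            return base_hotness  # nobody blocks the submitter: no penalty
        # log(blockers + 1), so the penalty starts at the first blocker
        # and grows slowly as blocks accumulate
        return base_hotness - weight * math.log(blockers + 1)

With weight = 0.5, ten blockers cost about 1.2 points of hotness and a thousand cost about 3.5, so a handful of grudges can't bury anyone, but broad consensus does push a repeat offender down.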
My separate proposal would be to experiment with a time-boxed no-LLM Lobsters.
During a month, absolutely 0 LLM mentions are tolerated. No stories that mention LLMs, regardless of positive/negative/neutral sentiment. No comments mentioning LLMs either: no "this was authored by an LLM", no "this would be a good use of an LLM", nothing. LLM-authored posts would be fine (unless they are explicit about that), and posts about LLM-assisted software are allowed too (if the post does not mention LLMs).
And then we run a survey at the end to see if this had any effect.
The steady stream of people trying to self-promote on Lobsters, and the effort required to moderate that, in my opinion demonstrates that Lobsters is seen as an influencing force in public tech discourse. Therefore, I think it's appropriate to run a short experiment to see what effect pushing back on LLM discourse (positive, negative, and neutral alike) could have.
confusedcyborg | 4 hours ago
I think having an "ai-written" tag for posts would be better. Something neutral that doesn't necessarily sound negative (unlike 'slop'), but allows folks to filter it out
gerikson | 4 hours ago
Previously, on lobsters.
Note that that was suggested by me, and I was brought around to the viewpoint that it's a bad idea: https://lobste.rs/c/0vfxye
HugoDaniel | 3 hours ago
I don’t mind slop. I actually prefer it to regular human-written texts in many ways. These comments are unfortunately typically deprecating and should go imo.
Now to expand a bit on my personal views on why I love slop text:
I don’t see the time spent reading a blob of prose as wasted; at most I find it a fun way to process noise. If I’m in a hurry I’ll just skim through, slop or not.
I also find that with slop, ideas are in many ways easier to understand and are communicated in a hierarchy of related concepts that I’m very fond of. "It is not just X but also Y" is a much better flow than most of the alternative forms people come up with.
Slop also allows me to adapt better to the new age where less is bore, so at worst I see it as prosaic practice.
I’m not trying to convince anyone, just sharing my personal point of view. People are not all the same.
gerikson | 2 hours ago
Assuming you are serious, I don't think this community is for you.
HugoDaniel | an hour ago
I am serious, sorry if it came across as something else. To drive the point home a bit: if I had written it out as LLM slop, the ambiguity would most likely have been lost, +1 for slop here. Hackers & Painters (& Poets).
Regarding the "I don't think this community is for you":
The destruction of the "AI" brand is amazing. We are witnessing it first hand. Our future is shaping up to be driven by people who hate everything that AI does. Young people, people from places with datacenters, laid-off people, those who made a living from selling commodity goods that AI can now replace entirely (this includes a lot of us techies), etc... The hate is going to keep increasing. The direction we are going in is one that considers AI as not being a "greater good". You talk about AI in a commencement speech and get booed.
And I don't think that's wrong. I can understand where it is coming from. We might need a lot more of this hate to take the trillionaires out of their New Zealand bunkers. Ok.
lobste.rs can be such a place, and I can take a hit if that's so. We bards find randomness and noise particularly enlightening, but there is a reason there was no "& Poets" in Hackers & Painters.