OpenClaw is dangerous

79 points by theahura a day ago on hackernews | 91 comments

vannevar | a day ago

If you substitute the word "corporation" for OpenClaw, you'll see many of these same problems have plagued us for decades. We've had artificial intelligence that makes critical decisions without specific human accountability for a long time, and we have yet to come up with a really effective way of dealing with them that isn't essentially closing the barn door after the horse has departed. The new LLM-driven AI just accelerates the issues that have been festering in society for many years, and scales them down to the level of individuals.

manofmanysmiles | a day ago

Or "institution", or "legal system", or "government."

vannevar | a day ago

To some extent, yes. Government in particular. Both of them "close the loop" in the sense that they are self-sustaining (corporations through revenue, governments through taxes). Some institutions can be self-sustaining, but many lack strong independent feedback loops. Legal systems are pretty much all dependent on a parent government, or very large corporate entities (think big multi-year contracts).

QuantumGood | 20 hours ago

Oligarchy (, Iron Law of)

peterldowns | a day ago

you may enjoy reading Nick Land, he has written about very similar ideas, specifically the idea that corporations and even "capital" can be considered AI in many ways.

logicprog | a day ago

I second this

gom_jabbar | a day ago

For those who are interested, I'm researching Land's main thesis that capitalism is AI: https://retrochronic.com

HoldOnAMinute | a day ago

Wasn't this already discovered by Bayes ?

Also this https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds

guyomes | 23 hours ago

The flow of ideas goes both ways between AI and economy. Notably, the economist Friedrich Hayek [1] was a source of inspiration in the development of AI.

He wrote in 1945 on the idea that the price mechanism serves to share and synchronise local and personal knowledge [2]. In 1952, he described the brain as a self-ordering classification system based on a network of connections [3]. This last work was cited as a source of inspiration by Frank Rosenblatt in his 1958 paper on the perceptron [4], one of the pioneering studies in machine learning.

[1]: https://en.wikipedia.org/wiki/Friedrich_Hayek

[2]: https://en.wikipedia.org/wiki/The_Use_of_Knowledge_in_Societ...

[3]: https://archive.org/details/sensoryorderinqu00haye

[2]: https://www.ling.upenn.edu/courses/cogs501/Rosenblatt1958.pd...

ModernMech | 21 hours ago

Yup, I have always viewed corporations as a kind of artificial intelligences -- they certainly don't think and behave like human intelligences, at least not healthy well-adjusted humans. If corporations were humans I feel they would have a personality disorder like psychopathy, and I'm starting to feel the same way about AI.

btschaegg | 11 hours ago

cyberge99 | 8 hours ago

A corporation is a very high barrier to entry. Well, it used to be, so the rate at which bad decisions could happen was tepid.

joe_mamba | a day ago

but...but..the developer showed us how openclaw was fixing itself at his command from his phone while he was at the barbershop

m_ke | a day ago

also i wasn't concerned about open chinese models till the latest iteration of agentic models.

most open claw users have no idea how easy it is to add backdoors to these models and now they're getting free reign on your computer to do anything they want.

the risks were minimal with last generation of chat models, but now that they do tool calling and long horizon execution with little to no supervision it's going to become a real problem

8cvor6j844qw_d6 | a day ago

I went with an isolated Raspberry Pi and a separate chat account and network.

The only remaining risk is the API keys, but easily isolated.

Although I think having direct access on your primary PC may make it more useful, the potential risk is too much for my appetite.

oxag3n | a day ago

The only remaining risk? Considering wide range of bad actors and their intent, stealing your API keys is the last thing I'd worry about. People ended up in prison for things done on their computers, usually by them.

8cvor6j844qw_d6 | a day ago

Unless you're proposing never touching OpenClaw, how will you set it up to your satisfaction in terms of security?

> stealing your API keys is the last thing I'd worry about

I don't know, I very much prefer the API credits not being burned needlessly.

Now that I think of it, is there ever a case where an Anthrophic account is banned due to the related API keys being misused?

iugtmkbdfil834 | a day ago

This is genuinely the only way to do it now in a way that will not virtually guarantee some new and exciting ways to subvert your system. I briefly toyed with an idea of giving agent a vm playground, but I scrapped it after a while. I gave mine an old ( by today's standards ) pentium box and small local model to draw from, but, in truth, the only thing it really does is limit the amount of damage it can cause. The underlying issue remains in place.

simonw | a day ago

This piece is missing the most important reason OpenClaw is dangerous: LLMs are still inherently vulnerable to prompt injection / lethal trifecta attacks, and OpenClaw is being used by hundreds of thousands of people who do not understand the security consequences of giving an LLM-powered tool access to their private data, exposure to potentially untrusted instructions and the ability to run tools on their computers and potentially transmit copies of their data somewhere else.

amelius | a day ago

Yeah, if a software engineer came up with such vulnerable idea, they would be fired instantly.

Wait a second, LLMs are the product of software engineers.

koakuma-chan | a day ago

Would they? I don't think anyone cares about security.

jstummbillig | a day ago

People have the grandest ideas about standards in software engineering since right about ai started dabbling in software engineering. It's uncanny.

tstrimple | 18 hours ago

It seems like folks have forgotten again what the S in IOT stands for. This shit has been terrible for... well since always really.

Legend2440 | a day ago

This is just the price of being on the bleeding edge.

Unfortunately, prompt injection does strongly limit what you can safely use LLMs for. But people are willing to accept the limitations because they do a lot of really awesome things that can't be done any other way.

They will figure out a solution to prompt injection eventually, probably by training LLMs in a way that separates instructions and data.

cherioo | a day ago

It’s like money laundering, but now responsibility laundering.

Anthropic released Claude saying “hey be careful. But now that enables the masses to build OpenClaw and go “hold my bear”. Now the masses people using OpenClaw had no idea what responsibility they should hold.

I think eventually we will have laws like “you are responsible for your AI’s work”. Much like how driver is (often) responsible for car crashes, not the car companies.

[OP] theahura | a day ago

Hey, author here. I don't think that the security vulns are the most important reason OC is dangerous. Security vulnerabilities are bad but the blast radius is limited to the person who gets pwnd. By comparison, OpenClaw has demonstrated potential to really hurt _other_ people, and it is not hard to see how it could do so en masse.

enraged_camel | a day ago

>> Security vulnerabilities are bad but the blast radius is limited to the person who gets pwnd

No? Via prompt injection an attacker can gain access to the entire machine, which can have things like credentials to company systems (e.g. env variables). They can also learn private details about the victim’s friends and family and use those as part of a wider phishing campaign. There are dozens of similar scenarios where the blast radius reaches well beyond the victim.

pizlonator | a day ago

Agree with author - it's especially scary that even without getting hacked, openclaw did something harmful

That's not to say that prompt injection isn't also scary. It's just that software getting hacked by bad actors has always been a thing. Software doing something scary when no human did anything malicious is worse.

sejje | 23 hours ago

No? Because I wouldn't give it access to those things. I wouldn't let it loose on my personal PC.

If I store my wallet on the sidewalk, that would probably be a problem. So I won't.

A prompt injection could exfiltrate an LLM API key, and some ai-generated code.

enraged_camel | 16 hours ago

>> No? Because I wouldn't give it access to those things.

Not everyone is like that. In fact, OpenClaw's true "power" is unlocked when the user gives it full access. That's what the overwhelming majority of hype is coming from. Most people who actually get a lot of value out of it don't run it on e.g. docker containers on VPSs that can only be accessed via Tailscale + SSH.

simonw | a day ago

I think there is a much higher risk of it hurting the people are using it directly, especially once bad people realize how vulnerable they are.

Not to mention a bad person who takes control of a network of OpenClaw instances via their insecurities can do the other bad things you are describing at a much greater scale.

cedws | 22 hours ago

It feels like everyone is just collectively ignoring this. LLMs are way less useful when you have to carefully review and approve every action it wants to take, and even that’s vulnerable to review exhaustion and human error. But giving LLMs unrestricted access to a bunch of stuff via MCP servers and praying nothing goes wrong is extremely dangerous.

All it takes is a tiny snippet from any source to poison the context and then an attacker has remote code execution AND can leverage the LLM itself to figure out how best to exfiltrate and cause the most damage. We are in a security nightmare and everyone is asleep. Claude Code isn’t even sandboxed by default for christ sakes, that’s the least it could do!

niyikiza | 20 hours ago

Right on. Human-in-the-loop doesn't scale at agent speed. Sandboxing constrains tool execution environments, but says nothing about which actions an agent is authorized to take. That gets even worse once agents start delegating to other agents.I've been building a capability-based authz solution: task-scoped permissions that can only narrow through delegation, cryptographically enforced, offline verification. MIT/Apache2.0, Rust Core. https://github.com/tenuo-ai/tenuo

llmslave | a day ago

The wright brothers first plane was also dangerous

bogzz | a day ago

Wow what an amazing analogy, you're absolutely right!

paojans | a day ago

This is more titan submersible than first plane.

It’s dumb, everyone knows it’s dumb, and people do it anyways. The unsolved root problem isn’t new but people just moved ahead. At least with the sub the guy had some skin in the game. Openclaw dev is making out like a bandit while saying “tee hee the readme says this isn’t safe”.

lambda | a day ago

But we didn't have thousands of people suddenly flying in their planes a few months from their first flight.

Now, the risks with OpenClaw are lower, you're not likely to die if something goes wrong, but still real. A lot of folks are going to have a lot of accounts hijacked, lose cryptocurrency and money from banks, etc.

SlightlyLeftPad | a day ago

The big difference is that we didn’t incentivize or coerce the entire global population to ride on it.

advisedwang | a day ago

And universally across the globe societies have decided that flying them requires:

* Pilots to have a license and follow strict proceedure

* Every plane to have a government registration which is clearly painted on the side

* ATC to coordinate

* Manufacturers to meet regulations

* Accident review boards with the power to mandate changes to designs and procedures

* Airlines to follow regulations

Not to mention the cost barrier-to-entry resulting in fundamentally different calculation on how they are used.

jstummbillig | a day ago

> And universally across the globe societies have decided

No. Nobody decided anything of the sort about the wright brothers first plane. If they had, planes would not exist.

birdsongs | a day ago

It also had a total of 2 users, if that.

It doesn't hold. This is a prototype aircraft that requires no license and that has been mass produced for nearly the entire population of earth to use.

sejje | 23 hours ago

Speaking of which, prototype aircrafts with no license still exists in aviation. I can build a plane in my backyard and fly it legally, so long as it's small enough.

advisedwang | a day ago

We're already well past wright brothers. We have trillion dollar companies selling LLMs, hundreds of millions of people using chatbots and millions* of OpenClaw agents running.

Talking about regulation now isn't like regulating the wright brothers, it's like regulating lockheed martin.

* Going by moltbook's "AI agent" stat, which might be a bit dubious

birdsongs | a day ago

Flight / aerospace is probably one of the worst analogies to use here!

As you say, it is one of the most regulated industries on earth. Versus whatever AI is now - regulated by vibes? Made mass accessible with zero safety or accountability?

thunfischtoast | a day ago

All the aerospace rules are written in blood. Lots of blood. The comparison pretty much says that we have to expect lethal accidents related to AI

esseph | 20 hours ago

Or just lethal intent.

... Or lethality as a byproduct of vast resource extraction.

mikkupikku | a day ago

In America, any rando can build and fly an ultralight, no pilot license needed, no medical, no mandatory inspection of the ultralight or anything like that. I guess the idea is that 250 lbs (plus pilot) falling from the sky can't do that much damage.

oxag3n | a day ago

To continue the false equivalence - alchemists believed Magnum Opus will lead one day to Philosopher's Stone.

SpicyLemonZest | a day ago

Because the Wright brothers knew their first plane was dangerous, they took care to test it (and its successor, and its successor's successor) only in empty fields where the consequences of failure would be extremely limited.

browningstreet | a day ago

Years and years ago I went to a "Museum of Flight" near San Diego (I think, but not the one in Balboa Park). I joked, after going through the whole thing, that it was more a "Museum of those who died in the earliest days of flying".

myspy | 13 hours ago

What the fuck is wrong with you people? You are glaring over the technology, defending it as the coming of christ and have no sense for security? Are you serious?

selridge | a day ago

Big “why would you hook a perfectly good computer up to the internet” ca 1993 energy.

So it’s dangerous. Who gives a fuck? Don’t run it on your machine.

rw_panic0_0 | a day ago

sky is blue

BeetleB | a day ago

I think the people critical of OpenClaw are not addressing the reason(s) people are trying to use it.

While I don't particularly care for this bot's (Rathburn) goals, people are trying to use OpenClaw for all kinds of personal/productivity benefits. Have a bunch of smallish projects that you don't have time for? Go set up OpenClaw and just have the AI work on them for a week or two - sending you daily updates on progress.

If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

Forget bots messing with Github and posting to social media.

Yes, it's very dangerous.

But do you have a "safe" alternative that one can set up quickly, and can have a non-technical user use it?

Until that alternative surfaces, people will continue to use it. I don't blame them.

almostdeadguy | a day ago

This is like "I like lighting off fireworks at the gas station because its fun, do you have a "safe" alternative?".

Extropy_ | a day ago

That's a total mischaracterization. OP is saying there are no safer fireworks, so some damage will be done, but until someone develops safer and better fireworks, people will continue to use the existing ones

SpicyLemonZest | a day ago

Or we will ban OpenClaw, as many jurisdictions ban fireworks, and start filing CFAA cases against people whose moltbots misbehave. I'm not happy about that option, I remember Aaron Swartz, but it's not acceptable for an industry to declare that they provide a useful service so they're not going to self-regulate.

almostdeadguy | a day ago

My perspective is all AI needs to have way more legal controls around use and accountability, so I’m not particularly sympathetic to “rapidly growing new public ill is unsafe, but there’s no safer option”

mikkupikku | a day ago

Please just let us name the enforcement agents Turing Police.

BeetleB | a day ago

Don't conflate "fun" with "useful".

This is more like driving a car with little safety in the early days. Unsafe? For sure. People still did it. (Or electric bikes these days).

Or the early days of the web where almost no site had security. People still entered their CC number to buy stuff.

sejje | a day ago

It's like driving a car today. It's the most dangerous thing I do, both for myself, and those around me.

The external consequences of driving are horrific. We just don't care.

RIMR | a day ago

I mean, yeah, if you specifically like lighting off fireworks at the gas station, you should buy your own gas station, make sure it's far away from any other structures, ensure that the gas tanks and lines are completely empty, and then do whatever pyromanic stuff you feel like safely.

Same thing with OpenClaw. Install it on its own machine, put it on its own network, don't give it access to your actual identity or anything sensitive, and be careful not to let it do things that would harm you or others. Other than that, have fun playing with the agent and let it do things for you.

It's not a nuke. It can be contained. You don't have to trust it or give it access to anything you aren't comfortable being public.

almostdeadguy | a day ago

There's absolutely no way to contain people who want to use this for misdeeds. They are just getting starting now and will make the web utter fucking hell if they are allowed to continue.

BeetleB | a day ago

If you can come up with a technical and legal approach that contains the misdeeds, but doesn't compromise the positive uses, I'm with you. I just don't see it happening. The most you can do is go after operators if it misbehaves.

I've been around since before the web. You know what made the Internet suck for me? Letting people act anonymously. Especially in forums. Pre-web, I was part of a local network of BBS's, and the best thing about it was anonymity was simply forbidden. Each BBS operator in the network verified the identity of the user. They had to post in their own names or be banned. We had moderators, but the lack of anonymity really ensured people behaved. Acting poorly didn't just affect your access to one BBS, but access to the whole network.

Bots spreading crap on the web? It's merely an increment over the problem of allowing anonymous users. You can't solve one while maintaining anonymity.

almostdeadguy | a day ago

I don't care about the "positive" uses. Whatever convenience they grant is more than tarnished by skill and thought degeneration, lack of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media, LLMs are speed running further brain damage. I know two people who've been treated for AI psychosis. Enough.

RIMR | a day ago

Okay, but what are you actually proposing? This genie isn't going back in the bottle.

almostdeadguy | a day ago

At a minimum, every single who has been slandered, bullied, blackmailed, tricked, has suffered psychological damage, etc. as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If they can't do this, the penalties must be severe.

There are many ways to put the externalities back on model providers, this is just the kernel of a suggestion for a path forward, but all the people pretending like this is impossible are just wrong.

BeetleB | a day ago

> should be entitled to damages from the company authoring the model.

1. How will you know it's a bot?

2. How will you know the model?

Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

> These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Ouch. Throw due process out the door!

> Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

almostdeadguy | 8 hours ago

> 1. How will you know it's a bot? > 2. How will you know the model?

Sounds like a problem for the platforms and model vendors to figure out!

> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.

> Ouch. Throw due process out the door!

There's lots of prior art for this, let's not pretend like this is something new. The NLRB adjudicates labor complaints and disputes, the DoT adjudicates complaints about airlines, etc.

> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!

> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!

BeetleB | 5 hours ago

> Sounds like a problem for the platforms and model vendors to figure out!

> sounds like a problem for the platforms to figure out!

You'd have to fundamentally change how the Internet works to be able to figure these things out. To achieve this, you'd need cooperation from everybody, not just LLM providers.

cheema33 | a day ago

> I don't care about the "positive" uses.

You should have stopped there.

BeetleB | a day ago

Again, I'm not disagreeing with the harm.

But I think drawing the line of banning AI bots is highly convenient. If you want to solve the problem, disallow anonymity.

Of course, there are (very few) positive use cases for online anonymity, but to quote you: "I don't care about the positive uses." The damage it did is significantly greater than the positives.

At least with LLMs (as a whole, not as bots), the positives likely outnumber the negatives significantly. That cannot be said about online anonymity.

cheema33 | a day ago

> There's absolutely no way to contain people who want to use this for misdeeds.

There is no practical way to stop someone from going to a crowded mall during Christmas shopping season and mowing people down with a machine gun. Yet, we still haven't made malls illegal.

> ... if they are allowed to continue.

You may have a fantastic new idea on how we can create a worldwide ban on such a thing. If so, please share it with the rest of us.

SpicyLemonZest | a day ago

The article addresses the reason(s) people are trying to use it at great length, coming to many of the same conclusions as you. The author (and I) just don't agree with your directive to "Forget bots messing with Github and posting to social media." Why should we forget that?

BeetleB | a day ago

The article doesn't really list any cool things people are using it for.

> "Forget bots messing with Github and posting to social media." Why should we forget that?

Go back 20 years, and if HN existed in those days, it will be full of "Forget that peer to peer is used for piracy. Focus on the positive uses."

The web, and pretty much every communication channel in existence magnifies a lot of illegal activity (child abuse, etc). Should we singularly focus on those?

SpicyLemonZest | a day ago

We shouldn't singularly focus on those, but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers. 2006-era hackers affirmatively argued that the dangers of piracy are overblown, often going so far as to say that piracy is perfectly ethical and it's media companies' fault for making their content so hard to access.

BeetleB | a day ago

> but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers.

And who is doing that?

moritzwarhier | a day ago

But aren't you ignoring that the headline might be simply critical of the very idea of autonomous agents with access to personal accounts etc?

I haven't even read the article, but just because we can, it doesn't mean we should (give autonomous AI agents based on LLMs in the cloud access to personal credentials)?

BeetleB | a day ago

You don't need to give OpenClaw access to personal stuff. Yes, people are letting it read email. Risky, but I understand. But lots of others are just using it to build stuff. No need to give it access to your personal information.

Say you want a bot to go through all the HN front page stories, and summarize each one as a paragraph, and message you with that once a day during lunch time.

And you don't want to write a single line of code. You just tell the AI to set it all up.

No personal information leaked.

neodymiumphish | a day ago

Yep, I’m in this camp. My OC instance runs on an old MacBook with no access to my personal accounts, except my “family appointments” calendar and an API key I created for it for a service I self-host. I interact with a Discord bot to chat with it, and it does some things on schedules and other things when asked.

It’s a great tool if you can think of things you regularly want someone/thing else to do for you.

warunsl | 23 hours ago

I have a somewhat similar use case. I do want it to go through my insta feed, specifically one account that breaks down statistical models in their reels, summarize the concepts and dump it to my Obsidian.

mikkupikku | a day ago

> If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

I'm definitely the former, but I just can't see a compelling use for the latter. Besides manage my calendar or automatically responding to my emails, what does OpenClaw get me that claude code doesn't? The premise appeals to me on an aesthetic level, OpenClaw is certainly provocative, but I don't see myself using it.

BeetleB | a day ago

I'll admit I'm not up to speed on Claude Code, but can you get it to look at a company's job openings each day, and notify you whenever there's an opening in your town.

All without writing a single line of code or setting up a cron job manually?

I suppose it could, if you let it execute the crontab commands. But 2 months after you've set it up, can you launch claude code and just say "Hey, stop the job search notifications" and have it know what you're talking about?

This is a trivial example. People are (attempting to) use it for more significant/compex stuff.

tstrimple | 18 hours ago

Yes. I have four devices fully managed by Claude Code at this point. My NAS and Desktop running NixOS. My MBP with nix-darwin and an M2 MBA with a dead screen that I've turned into a headless server also using nix-darwin. I've got a common flake theme and modularized setup. I've got health checks (all written by CC) coming in from all of them aggregated by another script (written by CC) which will send me alerts over various pipelines including to my Matrix server (setup and maintained by CC). I can do things like ask CC to setup radarr and make it available on my internal network and it knows where and how I host containers. It can even look at other *arr tools and pull usenet details from them to use in radarr. What my NAS is for. How to add a service in a "standards" compliant way for my setup. I can ask it to make a service available on the internet and it will configure a cloudflared tunnel exposing it. Including knowing when to make changes on my local dnsmasq instances or the Cloud Flare global DNS for external access.

I think the difference is that all of my scheduled tasks and alerting capabilities are all just normal scripts. They don't depend on CC to exist. CC could disappear tomorrow and all of my setup and config would still be valid and useful and continue to work. CC isn't a critical path for any normal operations of the system. I have explicitly instructed CC to create and use these scripts so it's not something you get "for free" but something you can architect towards. If I wanted to look at a companies job postings each day and provide alerts to me, I'd have CC build a script to scrape and process results and schedule it. At that point CC is outside of the loop and I have a repeatable pattern to use until the website changes significantly enough to justify updating it. But I could ask that CC context to stop the job search service months later and it would know or be able to find what I'm referring to.

I'm open to using more autonomous tools like OpenClaw, but I'm very resistant to building them into critical workflows. I'd happily work with their output, but I don't want them to be a core part of the normal input/output operations of the day to day running of my systems. My using AI to make changes to my system is fine. My system needing AI to run day to day is not.

BeetleB | 5 hours ago

I've heard people do similar stuff in CC. Do you know of any writeups on some of this (CC or OpenCode)?

esafak | a day ago

If OpenClaw users cause negative externalities to others, as they did here, they ought to be deterred with commensurate severity.

dang | a day ago

Recent and related, in reverse order (are there others?):

An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (80 comments)

Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)

An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (620 comments)

AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)

The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)

An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (949 comments)

AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)

ForgotMyUUID | a day ago

I was against LLMs. After reading this, changed my mind. Thanks for sharing such a great use-case ideas!

[OP] theahura | a day ago

Author here -- wanted to briefly summarize the article, since many comments seem to be about things that are not in the article. The article is not about the dangers of leaking credentials. It is about using tools like OpenClaw to automatically attack other people, or AI agents attacking other people even without explicit prompting to do so.

iamnothere | 22 hours ago

> First: bad people doing bad things. I think most people are good people most of the time. Most people know blackmail is bad. But there are some people who would blackmail all the time if it was simply easier to do. The reason they do not blackmail is because blackmail is hard and you’ll probably get caught. AI lowers the barrier to entry for being a terrible person.

> Second: bad AI doing bad things. We do not yet know how to align AI to human values.

Strange that the author doesn’t see the contradiction here. Harassment, hate, etc are human values. Common ones! Just, like, look around. Everyone has the option to choose otherwise, yet we often do not. (This is referred to as a “revealed preference.”)

It may be that AI is such a powerful tool that it’s like giving your asshole neighbor a nuclear weapon. Or it may not be. If it’s more mundane, then it likely falls more in the category of knives, spy cameras, certain common chemicals, and AirTags: things that could (and sometimes will) be misused, but which have legitimate uses and are still typically legal in most parts of the world.

Despite thinking most applications for AI are low value, I am firmly against restricting access to tools because of potential for misuse, unless an individual has shown themselves to be particularly dangerous.

If you want an angle to contain potential damage, make a user responsible for what their AI does. That would be fair.

ianlpaterson | 20 hours ago

The security concerns here are real but solvable with the same discipline we apply to any privileged software.

I run OpenClaw on Apple Silicon with local models (no cloud API dependency). The hardening checklist that actually matters: run the gateway in userspace, bind to loopback not 0.0.0.0, put it behind Tailscale or equivalent - and don't put sensitive data or let it access sensitive systems!

Session bloat is the other real risk nobody talks about - vague task definitions cause infinite tool-call loops that eat your entire context window in hours, which could be expensive if you're paying per API call.

The "dangerous" framing conflates two different problems: (1) users giving agents unrestricted access without understanding the blast radius, and (2) agents being deliberately weaponized. Problem 1 is an education gap. Problem 2 exists with or without OpenClaw.