All of them. It's not like AI companies have managed to fix the security issues since last time they promised they had fixed all the hallucinations & accidental database deletions.
It's literally a loop that wraps APIs from AI providers. Go ahead & explain how an open source AI wrapper fixes security holes inherent in existing AI.
Yes, yes it is. And it's amaaaazing. We're going to have lots of sharp edges getting stuff like this secured, but it is not going to go away. Too useful.
I wonder about this as well. I see people breathlessly talking about how it manages their inbox or checks flight statuses, but how often should you need a bot for these things?
For me, personal home IT “chores” that I’ve put off for years. I can do them, but god what a pain in the ass to spin up a VM, configure Prometheus, configure grafana, configure a bunch of collectors for my WiFi and network infrastructure, and then spend a night or three tweaking dashboards and re-learning promql or whatever.
I just end up never doing it. Got it done in a couple hours with openclaw.
I’m sure there are much better ways to do that, which I will now learn in time due to the initial activation energy being broken on the topic. But for now, it’s fun running down my half decade old todo list.
Connecting telegram to an agent with a bunch of skills and access to isolated compute environment is largely a solved problem. I don't want to advertise but here but plenty of solutions to spin this up, including what we have built.
But if it doesn’t have access to the network, then it’s just not very useful. And if it does, then it’s just a prompt injection away from exfiltrating your data, or doing something you didn’t expect (eg deleting all your emails).
That isn't secure is the issue, the more things you have it hooked up to the more havoc it can cause. The environment being locked down doesn't help when you're giving it access to potentially destructive actions. And once you remove those actions, you've neutered it.
The openclaw security model is the equivalent of running as root - i.e. full access. If that is insecure the inverse of it is running without any access as default and adding the things that you need.
The unsolved security challenge is how to give one of these agents access to private data while also enabling other features that could potentially leak data to an attacker (see the lethal trifecta.)
That's the product people want - they want to use a Claw with the ability to execute arbitrary code and also give it access to their private data.
That's easy. We just keep pumping these things and remind everyone that there's no real consequences (at least to the people who actually matter) and what was previously agreed as super important and critical will eventually turn out to no longer be super important or critical. Lethal trifecta solved. Who cares if your agent is forwarding private and confidential emails to random people, if everyone else is doing it too. Syndrome from the Incredibles movie won, and we helped make it happen. In fact, we made sure of it.
The same way as delivering a "truly secure human". Which is of course impossible, that's why spies, double agents and even triple agents exist and have succeeded to do their job. And that's despite an enormous number of guardrails meant to prevent exactly that.
And simply "secure enough" doesn't help much either, because whereas a single human spy can only do so much damage, if an LLM is given access to everything in one way or another - which is the whole concept - then the potential damage is boundless.
It seems almost impossible. I spent the weekend comparing nanoclaw to openclaw - nanoclaw is a slightly more secure version - containerized filesystem basically - and very popular.
It's a) harder to setup, b) less functional out of the box, c) has almost exactly the same security risk surface -- either you hook it up to your email, comms, documents and give it API tokens, or you don't. If you do -- well, at least it can't delete your hard drive without turning full evil and looking for red pill type exploits that break the container -- but, it still has the same other security dynamics.
Anyway, employing a very suspicious watcher that's hooked to the shell and API calls is probably the way forward. Can that thing be reasoned with / tricked?
You assume the security is something you bolt on rather than the security weakness being inextricable from the value. The superior approach is to distill what the LLM is doing, with careful human review, into a deterministic tool. That takes actual engineering chops. There’s no free lunch.
I would like a personal assistant on my phone that, based on my usual routine and my exact position, can tell me (for example) which bus will get me home the quickest off the ferry, whether the bridge is clogged with traffic, do I need an umbrella? what's probably missing from my fridge, time to top up transit pass, did I tap in? etc etc. These things would appear on my lock screen when I most probably need to know them.
No email stuff, no booking things, no security problems.
Indeed I have a bunch of apps that do most of these things, but it's the seamless integration I'm looking for - which may not need much AI at all (especially of the LLM kind), just some well directed machine learning and UI integration.
I read this as the aspirational dream of computers actually doing what you want. Yes, you can absolutely spend a bunch of time to build out the personal automation that will proactively inform you of relevant events. Yet, that is likely to be a lot of finicky messing around that may be pretty fragile and dependent upon N APIs staying fixed.
In an alternative reality Apple didn't absolutely shit the bed on AI and made this possible. Sadly they've shown they are woefully behind and have utterly useless people leading divisions they shouldn't have been allowed anywhere near.
No security problems carries a lot of weight here because by design you’re having to expose a significant amount of information but this is doable as a weekend project
- Where do you source real time traffic data, ferry schedules, etc? Google APIs get you part of the way there but you'd need to crawl public transit sites for the rest.
- How do you keep track of what went into the fridge, what was consumed/thrown away?
- How do you track real world events like buying a physical pass?
Sounds like there is need for decent singular interface for bunch of expert systems. Sadly I think everyone is so deep into locking their own thing down from others that this will never happen.
Responding to the tweet quoted in the article: why are the examples given of futuristic capabilities always so visionless - it's always booking a flight or scheduling a meeting. Doing this manually is already pretty trivial, it's more productivity theatre than genuinely life-changing.
There are real, impressive examples of the power of agentic flows out there. Can we up the quality of our examples just a bit?
I was very impressed by Anthropic's swarm of agents building a C compiler earlier this year with 1000 PRs per hour. Easy to nitpick that it wasn't perfect, but it sure was impressive.
What percentage of people will think that’s life changing?
Because then we’re not talking about “can everyone up their demos to life changing, please?”, we’re talking about “can everyone use demos Oarch thinks are life changing, please?” - and “can build a MVP C compiler draft that barely works for $XXK” isn’t really that compelling to me, and we’re both software engineers, and my whole day job has been an agentic coder for…2.5 years?…now. My incentive structure and demographics are lined up perfectly to agree with you, but I don’t :/
Maybe a personalised diet and exercise plan based on a huge range of information: preferences, biometrics, habit forming, disposable income, your local area etc
This is an excellent point and reminds me that, in some ways, the agentic coding stuff and ability for RL to hill climb on that and improve models quickly, has distracted from prompt engineering / putting more effort into getting data to them as a user.
You mean trying and failing to build a C compiler. This isn't a very hard task to begin with (assuming you know compilers, and the models do), but it was made unrealistically easy by giving the agents thousands of tests written by humans over years (on top of a spec and a reference implementation, both of which the models were trained on), and the agents still failed to converge. I was actually surprised that they failed as this was the purest possible example of "just do the coding" (something that isn't achievable in real or more complex cases) and when I read the description I thought they made it too easy, and in a way that isn't representative of real software. My thought at that failure was that if agents can't even build a C compiler with so much preparation effort put into the test, then we have some ways to go. Indeed, once you work a lot with agents for a while you see that coding isn't really their strong suit (although they are impressive at debugging).
> Can we up the quality of our examples just a bit?
No.
And there’s mundane answers why.
People used to talk about phone home screens, back in the day, every iPhone had 16 spots
It became wisdom everyone had the same 12 apps but then there were 4 that that were core for you and where most of your use went, but they were different apps from everyone else.
So it goes for agent demos.
Another reason: every agentic flow is a series of mundane steps that can be rounded to mundane and easy to do yourself. Value depends on how often you have to repeat them. If I have to book a flight once every year, I don’t need it and it’s mundane.
There’s no life changing demo out there that someone won’t reply dismissively to. If there was, you’d see them somewhere, no? It’s been years of LLMs now.
Put most bluntly: when faced with a contradiction, first, check your premises. The contradiction here being, everyone else doesn’t understand their agent demos are boring and if just one person finally put a little work and imagination into it, they’d be life changing.
honestly sorting email is the one thing that should have been solved five years ago. the tech is fine for classification. the problem is nobody wants to build a boring email sorter when you can announce an autonomous life assistant
Have you seen how bad flight booking sites can get? I've had to download airline apps a majority of the time because the website failed to finish payment properly.
I don't think we should call presentations visionless or fault them for wanting to solve this UX nightmare.
> Have you seen how bad flight booking sites can get?
Claude is pretty amazing, but it still goes down rabbit holes and makes obvious mistakes. Combining that with "oops I just bought a non-refundable flight to the wrong city" seems... unfun.
Well, I've taken to describing the best responsible use of AI to help your work as though you have an executive assistant, so I can see why people would come to that conclusion. I don't tend to think of booking flights for that though, I tend to think of asking them to gather information and present it to me so I can review it for whether it's appropriate to include, probably with changes, in whatever I'm working on. Perhaps an executive assistant isn't the right term for that, or perhaps it's just that different people and different industries have vastly different ideas of how to make use of an executive assistant. I don't know enough to answer that.
Been a middle-class IT drone much of my adult life. This is not my dream. In fact I just realized that one reason I don't like AI dev tools is because they turn me into the kind of dickhead manager I despise: one who doesn't understand the code or the nature of the work involved, just gives orders on what needs to be built and complains when it doesn't work.
I fix it by micromanaging it. Which class, method, function, module - I dictate the low level structure and features. I dump my all my hard earned coding opinions in a profoundly crafted markdown file.
Not using OpenClaw - but I have a limited agent running that currently does a few things well.
Morning Briefing:
- it reads all my new email (multiple accounts and contexts), calendars (same accounts and contexts), slack (and other chat) messages (multiple slacks, matrix, discord, and so on), the weather reports, my open/closed recent to dos in a shared list across all my devices, my latest journal/log entries of things done. Has access for cross referencing to my "people files" to get context on mails/appointments and chat messages.
From all this, as well as my RSS feeds, it generates a comprehensive yet short-ish morning briefing I receive on weekdays at 7am.
Two minutes and I have a good grasp of my day, important meetings/deadlines/to dos, possible scheduling conflicts across the multiple calendars (that are not syncable due to corporate policies). This is a very high level overview that already enables me to plan my day better, reschedule things if necessary. And start the day focused on my most important open tasks/topics. More often than not this enables me to keep the laptop closed and do the conceptual work first without getting sucked into email. Or teams.
By the way: Sadly teams is not accessible to it right now. MS Power Automate sadly does not enable forwarding the content of chats. Unlike with emails or calendar appointments.
Just for that alone it is worth having it to me. YMMV.
I also can fire a research request via chat. It does that and writes the results into a file that gets synced to my other devices. Meaning I have it available at any device within a minute or so. Really handy sometimes. It also runs a few regular research tasks on a schedule. And a bit of prep work for copy writing and stuff like this.
Currently it is just a hobby/play project. But the morning briefing to me is easily worth an hour of my day. Totally worth running it on my infra without additional costs.
Unless this whole setup is self-hosted (which I doubt), it's also uploaded to some data lake of a company which is in business of profiting from information.
Intelligence agencies are really heading into a golden age, with everyone syncing all the data they have to the cloud, in plaintext. I mean it was already bad, but it's somehow getting worse.
The thing about that is the benefits, saving a couple minutes a day and not having to click to different windows where the information is stored, is apparent and intimidate whereas the harms associated with loosing most, if not all your privacy and security isn't felt in the same type of immediate way, so the dopamine of the positive effects completely overwhelms. It is hard for many people to be able to weigh different cost/benefit in situations where it is so one sided on the immediacy spectrum.
Why an agent? Why not simply filter by unread, select all and mark as read? I recently did this with my email accounts which has many thousands of unread emails.
I recently started having my AI assistant help clean up my email gradually. (Using stumpy.ai for what it's worth.)
The way I do it is every morning we go through recent emails in my inbox one at a time. If I want to mark it as spam, delete it, add it to my calendar, whatever, I explain to the agent why in detail. Over time it builds up an understanding of how I handle a lot of things, it needs to show me less and less, and it handles more and more on its own.
I also told the assistant to check my email on its own once per hour and auto-action what it can. That helps keep junk from building up, and it alerts me via SMS if something high priority shows up (e.g. user reporting a bug).
Point is there was never a point where it just ran for a long time and magically cleaned everything up just how I'd have wanted. I have like 7k emails in my inbox, that wouldn't be practical. But the number is going down now gradually, instead of up. I've had a chance to teach it and let it establish trust that it's doing things the right way. Which feels safer.
this is the approach that actually makes sense to me. gradual trust not yolo from day one. curious though, can you see what it learned about your patterns or is it a black box? like if it starts auto-archiving something you actually wanted, how do you debug that
I run my own claw on a hetzner setup. The claw writes his own code based on some rules I gave to him (20 prs per day). it's active on moltbook and has access to my whatsapp, Gmail etc. dangerous it is. But fun as well.
Specially fun to see which features it decides to build.
https://github.com/holoduke/myagent
> Sadly teams is not accessible to it right now. MS Power Automate sadly does not enable forwarding the content of chats. Unlike with emails or calendar appointments.
Use the JSON responses for full detail including e.g. reactions.
Composio, behind the blog post, offers "Enterprise" pricing, and has no Teams examples. A stat HN ignores: 85% of SMBs are on M365, not Google Workspace and Slack.
You can pick winners and losers in a segment early, by whether they treat M365 as a first class platform or pretend it doesn't exist. Check for the "Continue with Microsoft" button or support for OIDC not just SAML+SCIM, as well as examples for Teams.
This isn't just true for YC classes, holds true for unicorns. Compare Anthropic's "Claude in Excel" and "Claude in PowerPoint" instead of in Google Docs or Sheets, and guess which firm has a better grasp of how business works outside the valley. And yeah, Claude in Chrome works in Edge (and the lack of just renaming and posting Claude in Edge for normals to find is an ANTHROP\C miss).
When you need a bunch of busy people in a meeting it becomes hard to book a meeting. If several people need to travel incuding get a visa it is hard to fit it all it between other meetings that refuired people caanot skip.
travel is hard when you are trying for the best deal across flights, hotels and such. many sites only guarentee prices for 15 minutes so you can't even get all the needed prices on a spreadsheet at once - particularly if you have flevible travel dates. I've booked a best price plane ticket only to discover it was the worst date for hotels and I could have saved money on a more expensive flight.
> why are the examples given of futuristic capabilities always so visionless - it's always booking a flight or scheduling a meeting.
This AI wave is filled with "ideas guys/gals" who thought they had an amazing awesome idea and if only they knew how to program they could make a best-selling billion dollar idea, being confronted with the reality that their ideas are really uninteresting as well.
They're still happy to write blog posts about how their bleeding-edge Claw setup sends them a push notification whenever someone comments on one of their LinkedIn posts, though.
Oh for sure. When I present something to the LLM it always tells me how great it is until I make it "question" it, then it says it was overestimating this or that. Eh. Quite annoying.
It won't even help you understand that the 20 second task you've been putting off for 6 months causing anxiety will only take 20 seconds (nor will we learn from this)
Or the fact that in the time it took me to read this thread I could have finished that task. Sometimes I really want to punch my brain in its stupid face lol.
I have "new genius" ideas very often. After doing quick search I discover that any idea worth thinking of implementation is either implemented already or what seems to be low barrier to entry clashes with some legal obstacles.
Interestingly that sort of research is actually what I've used Claude/Chatgpt deep research and openclaw for. If I have an idea, I get an agent to go and do some product research for me and see if there is a market, if anyone has tried it, and if there is anyone doing it.
It has unironically saved me a lot of time I would have otherwise spent going down rabbit holes.
Of the models I've found that claude doesn't gas you up as much as GPT, so for stuff like this where the answer can be "no, that's not a good idea" I usually use claude.
I have the opposite problem. I have a genius idea, and I start to research it.
I find a company that actually built a solid product, dangit this is really good. They appear to have executed well, but they failed, or went nowhere, heck the app is still out there. Maybe they are even chugging along but its a smaller business even with a better product than I would have been able to build. Had I been a founder of the product, I would be questioning staying.
Then I also find sometimes I was doing it all wrong and the world has moved past my notions of products. I think there's a market opportunity because I don't realize that the rest of the world is already cool using a $15 plant hygrometer bluetooth device which can also keep track of your medicine or food in your cooler, my notion of the value of something is skewed by western costs
Hmm I often have ideas that I don't see anywhere else but I'm just in it for rent curiosity and learning. I absolutely hate the business side and usually I do stuff for free just so I don't have to complicate my taxes.
>There are real, impressive examples of the power of agentic flows
there aren't, and just like the blockchain "industry" with its "surely this is going to be the killer app" we're going to be in this circus until the money dries up.
Just like the note-taking craze, the crypto ecosystem and now AI there's an almost inverse relation between the people advocating it and actually doing any meaningful work. The more anyone's pushing it the faster you should run into the opposite direction.
I'm gonna keep saying this forever - there are two obvious "killer apps" for crypto:
1. Semi-private blockchains, where you can rely on an actor not to be actively malicious, but still want to be able to cryptographically hold them to a past statement (think banks settling up with each other)
2. NFTs for tracking physical products through a logistics supply chain. Every time a container moves from one node to the next in a physical logistics chain (which includes tons of low trust "last mile" carriers), its corresponding NFT changes ownership as well. This could become as granular as there's money to support.
These would both provide material advantages above and beyond a centralized SQL database as there's no obvious central party that is trusted enough to operate that database. Neither has anything to do with retail investors or JPEGs though, so they'll never moon and you'll never hear about them.
AFAIK both of these use cases had many millions of invested dollars dumped into them during the Blockchain hype and neither resulted in anything. It might not be an exact match for (1), but there was famously the ASX blockchain project[0] which turned out to be a total failure. For (2), IBM made "Farmer Connect"[1], which is now almost entirely scrubbed from their website, which promised to do supply chain logistics on a blockchain.
> ASX blockchain project[0] which turned out to be a total failure.
FWIW if you know anything about the ASX, you'll know that the failure was a result of the people running the ASX and not necessarily the tech behind it.
The only "killer app" for crypto*currencies* is being a payment method. Not counting speculation. This is what they are used for right now, but the scale at which this happens doesn't justify their current valuation (even after recent losses).
But is that a better experience than just using your visa? Nobody wants to wait at the cashier for 15 minutes to pay for their groceries, which is what has to happen if you really want the decentralized experience. Otherwise you really are just reinventing a worse, centralized payment rails. Volatility and wait times are features of crypto, not bugs, but they make for terrible payment experiences.
Doesn't lightning settle basically instantly, while still being decentralized? You're just trading signed transactions iirc, with settlement happening whenever.
Not only do you not need the blockchain for either of those things, you don't want it.
Think it through. How do you actually "cryptographically hold" someone to anything? You take them to court.
Guess what you can do, right now, without the blockchain? That's right, you can take them to court.
You're just reinventing normal contract law with extra steps.
The cryptographic part doesn't even help you when you can just say in court that "here are our records that show we gave them these packages, here are our records of customers filing complaints that they never got them" and that is completely fine.
This exact thing happens too often. We try to use fancy technology to solve a non-texhnical problem.
With or without blockchain you end up at court. If you build a decentralized trust system, the builder of the system needs to be trusted. If you want to use decentralized trust to do your taxes or other government communication you still need to trust your government. These are all actual examples i’ve encountered.
You pretty much always end up at the legal system. If there js anything to make big impact on it would be that. But that requires world-wide revolution.
IMHO, most people misunderstand the real utility of crypto.
The thing to keep in mind is that replacing a database with computationally expensive crypto is sub-optimal. Supply Chain tracking falls into this category: why crypto over barcodes and a database?
Governments use Banks with their deterministic processes to manage and guarantee transactions. This is where crypto shines- replacing the entire banking system as an intermediary to manage and guarantee transactions. Crypto can do this better and cheaper than Banks.
There are other domains where the government is the backstop/guarantor and leverages intermediaries to manage the scale. Real Estate comes to mind. Identity is another. Crypto can be useful there.
One last useful crypto application is to replace governments themselves as the backstop and final/guarantor for transactions.
These are ideas that evoke strong reactions. There's a reason the inventor of crypto is anonymous, to this day.
Some of it is lack of imagination, but some of it is because many truly visionary examples would largely sound stupid to most of today's audience. Imagine it's 2007 and you're explaining how the smartphone will change society over the next 20 years:
- A photo sharing app will change restaurants, public spaces, and the entire travel industry across the world
- The smartphone will bring about regime change in Egypt, Tunisia, Lebanon, and other countries in ~4 years
- We'll replace taxis and hotels by getting rides and sharing homes with strangers
- Billions of people across the world will never need to own a desktop or laptop
Instagram
Arabian spring
Uber
Airbnb
Cloud-ification/shift to web apps and mobile-first
....tiktok? Or is YouTube considered "short video sharing app"? Because I see no evidence tiktok in particular killing TV...
To be fair, QR code did hit print magazines/newspapers in Germany (just as an example; English wiki was not elaborating on initial history of public use/perception) in late 2007, so that one wasn't nearly as far-fetched.
None of these actually were hard to sell. In 2007 we had mobile phones, we had mp3 players (the iPod was actually very good), we had CouchSurfing, etc.
I think the smart phone revolution is actually pretty overstated. It basically only made computers cheaper and handier to carry (but also more walled gardens). There are a few capabilities of smart phones we do today which we didn’t with do with computers and mobile phones back in 2007, such as navigation (GPS were a thing but not used much by the general public).
Your case would be much stronger if you’d use the World Wide Web as your analogy, as in 1995 it would by hard to convince anybody how important it would be to maintain a web presence. And nobody would guess a social media like the irc would blow up into something other then a toy.
However I think the analogy with smartphones are actually more apt, this AI revolution has made statistical models more accessible, but we are only using them for things we were already capable of before, and unlike the web, and much like smartphones, I don’t think that will actually change. But unlike smartphones, it will always be cheaper and often even easier to use the alternatives.
Even the navigation part, I'm not so sure. I remember Dad would bring a laptop when we would drive new places and it would be running Microsoft Streets and Trips with a GPS dongle, and I think that have been late 90s or early 00s. I remember seeing other people do that and by the time I was driving a lot in 07 I remember having a dash mounted GPS, maybe a Magellan or Garmin, that didn't cost that much and again I remember a lot of people doing it. The smartphone definitely displaced it, but it wasn't a complete novelty even for the general public.
I think you lived in a strange bubble when you were a kid. When I was a teenager in the 90s, we'd have paper maps that we'd bring with us. We had no GPS. I don't think we knew what GPS was.
In the late 90s we'd print out directions from MapQuest. That was a game-changer. Still no GPS, though.
As an adult in the early 00s, I was still printing out MapQuest maps. In 2004 I got a car with a built-in navigation system! (Complete with a DVD drive in the trunk with a disc holding the maps.) It was still incredibly uncommon; I was one of the few people I knew who had one. I did know a few people who had Garmin GPS devices that they'd suction-cup to their windshield, but not many.
By 2007 most people were aware of GPS devices with little screens that you could bring into the car, though I'd guess maybe 25% of the drivers I knew then had one.
If your dad was bringing a laptop with a GPS dongle in the car in the 90s, I think you were very unusual. Hell, I didn't even have a laptop until 2004, and even then it was a hand-me-down from my dad's work. And I was in my 20s by then!
I remember GPS being something mountaineers had. People who would take their jeeps up to the glacier had them. Boats also had them. Coincidentally I was a fisherman back then and did observe my captain using a super fancy navigation device with an interactive map (and yes the map did come on a DVD); I also knew a couple of jeep men (or jeppakarlar as we call them in Icelandic) who had something similar (though more compact) in their jeeps; and to top it of, I would spend hours on google earth, just having fun looking at the map on my desktop.
I however did not see this technology coming to our phones, and becoming this commonplace.
It has been a day since I wrote the upthread post, and navigation is still the only novel capability of smartphones, which I think would have been a hard sell in 2007. I really can‘t think of another example.
Booking, boarding, change/gate notifications, rebooking options, customs and immigration is done via phone.
Transit to/from the airport via Uber or a transit pass stored in your smartphone wallet.
Baggage tracking via airtags
Yeah, there's vague precedents for this stuff from the desktop computer era, but it only _really_ works when you've got an internet-connected device in your pocket.
Ahhh, payment via phones is also a new thing that I think very few people saw coming (including me). However it is also a very recent development and not really a part of the supposed smartphone revolution. In 2007 we did not have touchless payments (except in some public transit systems; gyms; etc. but it was limited to a special cards you couldn’t use for anything else) so this is definitely a new capability which was probably hard to sell in 2007.
The others you mentions, I would argue against. Yes it is convenient to order a taxi via an app on your phone, but in 2007 you could do so via SMS or a phone call, so not much has change really other then we now have one more interface to pick from.
I don’t see how smartphones have changed rebooking, nor customs, and especially not immigration which has become 100x more of a headache then it was in 2007. And finally, airtags are a separate technology from smartphones.
Hand-waving away ride-sharing as not much of a change makes me wonder what you would actually consider to be significant. It completely upended the taxi business.
2007: arrive in a new city, figure out who to call (or maybe text) for that particular city, wait, hope someone will pick you up and understand enough of your language and the local geography to get you where you want to go, possibly some unpleasant haggling over the fare
2026: arrive in a new city/country, open Uber, specify in the app precisely where you want to go, choose a vehicle, when to get picked up, etc, track vehicle progress in real-time, up-front pricing
And that's the consumer side. The provider side was even more radically changed.
If you don't see how smartphones changed the experience of flying... maybe you don't fly anywhere?
Airtags are entirely dependent on the ubiquity of smartphones.
I have already said navigation and tuchless payments are worthy examples of smartphones providing new and unpredictable innovations.
Your ride sharing experience sound more like you would expect from any consumer product gaining a global market share (or even monopoly). 1980 - Arrive in a new city and not knowing how to get a hamburger. 2000 - Arrive in a new city, find the nearest McDonalds and get your usual BigMac.
I used to have PDAs with Windows Mobile, hmm even a BlackBerry. Oh gosh, navigation apps were absolutely crap back then, screens were crap, cameras were crap. Video calls via "3g", if any of your friends/family also had a 3g-capable phone, maybe it worked, experience you couldn't compare to FaceTime. iPhone/Android, really brought a new life into this ecosystem.
Camera phones were already very popular in 2007. The Flight of the Concords even made a joke out of it. But most people still owned a separate digital camera. It was not hard to predict that the cameras on your phone would get better and eventually replace your dedicated digital cameras. We all saw that coming.
Same with video calls, if anything that idea was oversold in 2007. Most people had Skype (or something similar) and would video call international calls (which were very expensive using the regular phone lines back then). If you were traveling internationally you would find an internet café log in to your Skype and make a call. Moving this capability to the smart phone was a no-brainier. Turns out that when we have it in our phones, video calls are still more popular on desktops (via zoom, etc.) in 2026.
- A nation-agnostic online currency (and its offshoots) would lead to a multi-polar world.
- Publicly waving your resume around will passively invite job interviews.
There's a new OpenClaw adaptation, Ottie, that I think could be a bank manager, bank teller, stockbroker, piggy bank, accountant, wallet, security guard and credit card provider all rolled into one. I just haven't used it yet. https://ottie.xyz/
So that would be:
- Digital sidekick weeds out parasitic relationships.
There has to be tremendous value in that.
When solutions are looking for problems, it means that things may seem oversold when in fact they are still undersold.
I think some folks want a legitmate personal assistant/secretary like ceo's and wealthy people have but ai. I think that's a good goal. Modern cells and pdas kinda fell short of "your own literal secretary" and I think people want that. Still we should continue pushing the boundaries beyond that.
The purpose of a personal assistant isn’t to fit people into your calendar. It’s to filter them out. They serve as a barrier to your time, not an enabler for other people to claim it. I don’t see how an AI can meaningfully accomplish that any better than simply just making yourself more difficult to reach.
This is it right here. I've long thought about this one and whether I should bother with an AI agent that can do all of this stuff for me, but the reality is both what you said and I'm not rich enough.
Do I want the AI Agent to take my bank account and automatically pay some bill every month in full? What if you go a little over that month due to an emergency expense you weren't prepared for? And it's not a matter of "I don't have enough in my bank account for this one time charge", but it's "I don't have enough in my bank account for this charge and 3 others coming at the end of the month." type deal.
Agents aren't going to be very good at that. "Hey I paid $3,000 on your credit card in order to prevent you from incurring interest. Interest is really bad to carry on a credit card and you should minimize that as much as possible." Me: "Yeah but I needed that money for rent this month." Agent: "Oh, yeah! I should have taken that into account! It looks like we can't reverse the charge for the payment."
> The purpose of a personal assistant isn’t to fit people into your calendar. It’s to filter them out. They serve as a barrier to your time, not an enabler for other people to claim it.
Scheduling in a larger org and/or with multiple equally busy people is a non-trivial, complex task; it makes sense to dedicate resources to the task. Good Executive Assistants are generally fairly smart folks, in my experience.
When the scale is substantially more and involves objects as well it evolves into multi-million $ ERM (Enterprise Resource Management) systems.
I'm a pretty busy person professionally. When I feel like I'm being pushed to "scale" my time and attention, I take that as a signal to do the opposite: do less, and do those remaining things more meaningfully.
Trying to do more is a losing game, and AI assistants just paper over that. We all have finite time and attention. I think a pragmatic engineering approach is the right one here: consider that as a non-negotiable constraint, a fact of the physical world, not something to magic away.
They really didn't fall short. A lot of people who would've had assistants no longer do, now it's really just the executives like you said. But fairly low managers used to have them and now they don't.
Software is pretty good. It remembers everything, perfectly, forever. It will never forget to remind you of something. It can give you directions, sort your emails by how important they are, help you find shops and restaurants. The only people busy enough to warrant an actual human doing that stuff are executives. And, even then, I think for most of them it's an ego thing, not an "I need this" thing.
> It will never forget to remind you of something.
Software isn't as faultless as you suggest. The default alarm app on my phone occasionally fails to go off (not an issue with Silent Mode or DND).
> The only people busy enough to warrant an actual human doing that stuff are executives.
Life is short. It is absolutely worthwhile to spend as little time doing trivial work if possible, and avoid decision fatigue on unimportant decisions. We are nowhere close to the usefulness of a secretary in our devices.
> Software isn't as faultless as you suggest. The default alarm app on my phone occasionally fails to go off (not an issue with Silent Mode or DND).
I'm guessing this is an iPhone, and yeah it's because that software is just bad. I've helped my Mom try to get her phone to ring, like, 12 times now and I've failed each time. And I'm a dev! So, point taken.
> Life is short. It is absolutely worthwhile to spend as little time doing trivial work if possible, and avoid decision fatigue on unimportant decisions.
Ehh, I kind of disagree. The work is the same, at best it shifts to something else. Asking for more productivity is a monkey paw. Best to just take it all in and try to enjoy the simple joys of life. Or, uh, work.
I agree with you on the work shifting. Whenever someone takes some of our work burden from us, someone else just gives us more tasks to do, and we end up working for the same amount of time. Maybe the work ends up being more interesting or rewarding, though. But sometimes trivial work is a nice physical/mental break, too.
Why do you guess it's an iPhone? I switched to an iPhone because my OnePlus phone failed to ring or play alarms due to a constantly crashing and restarting media indexer service (I could only tell this is what was happening from the logs).
> They really didn't fall short. A lot of people who would've had assistants no longer do, now it's really just the executives like you said. But fairly low managers used to have them and now they don't.
I think the reason for this is labor cost, and "good enough". I don't think a smartphone is an equivalent replacement for a dedicated assistant. The average mid-level manager who would have had an assistant 30 years ago likely (today) spends more time on "assistant-y" work than they would if they had an assistant today. It's just that now they do 30% of the work the assistant did, and their phone handles the other 60%. That kind of ratio is enough to make upper management believe that human assistants for the lower-level folks isn't worth the cost. (While they themselves of course still have human assistants.)
> There are real, impressive examples of the power of agentic flows out there. Can we up the quality of our examples just a bit?
Please don't. The reason we're still enjoying the bit of the old world as we know it, is just because nobody has really figured it out yet. Enjoy the moment, while it lasts.
What does this even mean? By definition, we have been enjoying "the moment" for quite a while now. What is so special about it that we should work to prolong it, and to avoid moving forward?
OpenClaw is just like any other tool, you need to learn it before its power is available to you.
Just like anything in engineering really: you have to play around source control to understand source control, you have to play around with database indexes to learn how to optimize a database.
Once you've learned it and incorporated it into your tool set, you then have that to wield in solving problems "oh, damn, a database index is perfect for this."
To this end, folks doing flights and scheduling meetings using OpenClaw are really in that exploration / learning phase. They tackle the first (possibly uninventive thing) that comes to mind to just dive in and learn.
The real wins come down the line when you're tackling some business / personal life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"
> OpenClaw is just like any other tool, you need to learn it before its power is available to you.
That's ridiculous. The utility of any tool is usually knowable before using it. That's how most tools work. I don't need to learn how to drive a car to know what I could use it for. I learn to drive it because I want to benefit from it, not the other way around.
It's the same with computers and any program. I use it to accomplish a specific task, not to discover the tasks it could be useful for.
OpenClaw is yet another tool in search of a problem, like most of the "AI" ecosystem. When the bubble bursts, nobody will remember these tools, and we'll be able to focus on technology that solves problems people actually have.
The utility of a program like Excel, Obsidian, Notion, Unity, Jupyter, or Emacs far beyond the knowledge of knowing how to use the product.
All of these products are hammers with nails as far as your creativity will take you.
Its wild to have be on a website called Hacker News, talking about a product that can make a computer do seemingly anything, and insisting its a tool in search of a problem.
>The real wins come down the line when you're tackling some business / personal life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"
That's a fair point, and I guess the marketing problem here is intrinsic: If the problem is trivial, off-the-shelf solutions abound; if it's idiosyncratic, almost nobody will be able to relate (as you can't assume that people will do the transfer of "if it can solve complex problem I don't understand A, it'll probably be able to solve my complex problem B" for promotional material).
Booking a flight is the kind of thing I'd really want to avoid doing myself nowadays if possible though. Surveying the offers is usually such a snake pit of deceptive marketing and incomplete service conditions that I feel somewhat nauseous just at the prospect of having to look at it.
I wouldn't remotely trust a software assistant to deal with all that misdirection autonomously, but I guess I'd be prepared to give it a chance collating options with tolerable time and cost, attempting to make the price include the stuff that has to be added to preserve health, sanity and a modicum of human dignity.
Asking it to find the best deal? Sure, I could totally see that. But I want to double check that myself and actually buy it in order to make sure my flight to Denver from Portland doesn't have a layover in Anchorage or something.
We will get to the point where you'll trust it to catch those issues. The latest models can already do it sometimes for code, like explain that it considered various options and the tradeoffs between them.
Apparently I'm the only one here who finds it to be one of the worst things I ever have to do, I hate managing the combinatorial tab explosion by hand. Compounded by the adversarial nature of the price-setting algorithms that jack up the price on you if you show too much interest by researching too intensively. Just booked a flight for our family in two parts, and booking for one set of us made the price for the second set of us with a slightly different itinerary massively more expensive, because it was "in demand".
Do you think an agent is going to do all of that and get you the best price/time/comfort combination for your exact preferences. Or do you think it's going to pick the first that looks reasonable? Or do you think it's going to sacrifice one dimension too much?
We already have agents for this if you really want to avoid it, they're called travel agents. They're pretty good at complex travel booking and not very expensive.
Maybe, I've never used a human travel agent. Based on my experiences with human agents in other industries (especially real estate), I think the LLM version will probably already know my preferences a good bit better than most human travel agents would bother to learn - they're infinitely patient, and not trying to maximize earnings by minimizing time spent per booking.
I think we literally have those agents today, albeit implemented in meat rather than silicon. Any particular reason you elect not to use the free-to-you travel agent? Generally they are the same or less expensive and able to work in your best interests.
Travel agents duh. Reminds me of the classic silicon valley startup trope that most tech bro’s are basically trying to pitch a product that replaces their mother.
Booking flights/tickets is terrible. And then the dark patterns… wonder if OpenClaw can navigate these? Anyway, it is nothing compared to sourcing electronic components, there are literally thousands and thousands of different manufacturers, lead times, moq’s … for the same component, leading to super hard to search and filter database/websites that are slow as molasses.
Seriously. Have you tried Octopart? One of the very early YC companies, dedicated to electronics part search (I haven't used them recently, so no idea if they're good these days, just remember them from like a decade and a half ago).
I used it, mainly for 3d models and footprints. I source most of my parts from LCSC because of the assembly integration nowadays. Sticking to their part ID’s works well for known parts, but yeah, still need to find the part first. It is so time consuming.
I just booked a round trip for myself, plus two more flights for quicker hops while I'm away, and I didn't spend much time on it at all. I just looked at Google flights, picked the flights I wanted, and then ended up buying them through Chase with points. Chase's travel website is among the worst I've ever used, but it wasn't hard. Then I went to the airline's website and changed my seats (Chase doesn't know I have status and couldn't directly book the seats I wanted) and did an upgrade for one of the legs using miles I had at the airline. Half hour of work, maybe?
The price-setting algorithms are garbage, but an LLM isn't going to fix that.
Agree with the other sibling posters that if this annoys you so much, you should just call up a human travel agent. I haven't used one in many years, but when I did (mostly for business travel), it was always pleasant, and the agent knew my preferences and took care of things if there were any snags or changes needed. At the time, they usually got me flights cheaper than if I were to book them myself, even with their fee on top.
But I do wonder what the profession is like now. I can imagine some sort of website where you often don't even deal with the same person, who won't get to know your preferences and will be sort of like a customer service agent, just trying to close as many cases as fast as they can. But hopefully there are still smaller shops around, where you can talk to the same person (either phone or email) every time. Dunno.
Exactly. When you're spending money, you want to be in the loop. It's why the Alexa Echo devices as media for Amazon purchases never really worked out. Amazon had two conflicting aims. They wanted to race to the bottom with their increasingly shady vendors which eroded trust, while also positioning themselves and their devices to be trusted agents of purchases. Of course no one wants to buy anything sight unseen through them.
To be fair, you can cancel flight reservations for a full refund within 24 hours, so if the LLM gets it wrong, you're not on the hook for anything.
But in general I do agree: flight bookings are something I want to do myself, because even I don't fully know my preferences when it comes to timing and price until I see what's available. And in general I don't find it all that difficult to do. A couple days ago I booked a multi-city travel itinerary with four different destinations, and it took me about a half hour?
Sure, if an LLM can do that in under a minute, that would be cool, but in absolutely zero situations would I not need to check its work, and if it did get it wrong, I'd have to do it all myself anyway.
Booking a flight is the type of thing you think you want to dedicate your full attention to. Largely, the issue is one of trust that whatever is making the bookings will take into account the nuances of your schedule and preferences. Here's the typical flow for how my (human) assistant books my flights. I tell them I want to go to $LOCATION from $START to $END. They have my calendar, my travel preferences (airlines, hotels, etc) and the company travel policy. A couple hours later, they slack me a couple of options. If there is one I like, I tell them to book it. If not, I tell them why none of the options work. The process repeats until I see something I like or we run out of options and I have to choose the least bad option. There is nothing about this process that needs a human. It's all done on Slack and for all I know the person at the other end is actually an AI (they're not, but for arguments sake, they could be).
I also have the same concerns. I have my agenda meeting free and create meetings like once a few weeks. The same is for booking flight tickets - once a decade. Adding openclaw there would take more time and effort than doing it manually.
And none of the friends playing with openclaw have any useful non-trivial workflows which can't be automated in oldschool way.
The only viable workflow so far I could think of - build your own knowledge base and info processing pipeline.
It's either vague notions like "more important than the invention fire", or concrete cases like booking trips that the likes of Google can enshittify at lightspeed.
I am not optimistic, not because the techs is lacking, but the context in which it is born is awful.
No, it’s not! You are the one who made it trivial by using three words to define! How about if I could only fly out between 9 am-noon next Friday? Also, combine it with hotel and rental car. Many times total $ between sites could be a difference of close to $200 or more along with better itinerary. That’s just the surface. The more preferences you add, the complex it becomes, so make it a right scenario for agent automation along with calendar management which has similar complexity.
I don't use Claw. It is way too dangerous. I built my own system where I know the ins and outs and how they can break.
When it comes to agents' tasks, I tend to focus on things that I couldn't do before without automated agents, at least at the going price.
The kind of automation I'm doing is more like building a set of agents to generate marketing surveys for me. They take free form input from me and my project. They aren't particularly sexy but they go off and do something valuable that I literally would never pay for at the prices that they are normally.
This is a really tempting approach but I think it is the wrong one. The issue is one of trajectory. OpenClaw has the attention of thousands of hackers and there is a huge incentive to contribute to make it better. That will compound very quickly and will become much better than whatever private solution you create.
Doing this isn't trivial if you're traveling heavily throughout the week and have lots of people vying for your attention. However, these folks usually have an exec assistant to help them wrangle the chaos. Morever, them using a Claw would likely be a huge security risk, as this kind of person is much more likely to be a high-profile target.
agreed. using my locally hosted LLM, created a skill on OpenClaw to export data from sfdc and build my weekly sales report, complete with charts, summary of deals, meddpicc updates. some small tweaking required to be 100% production ready, but already saved about 4-5 hours of my weekly time spend on this.
this plus a whole bunch of other skills (credit card payments notification and itemization/spend tracking, utilities (power/water) anomalies monitoring, daily solar power generation tracking and solar battery health checks, homelab maintenance (apt upgrades, storage cleanups, etc), media management, UPS battery health tracking, NAS disk heath tracking, etc).
I believe OpenClaw is start of a new genre of "always on" personal assistant/agent (tied to a "skills" store) that handles all the drudgery of daily living. you get back something genuinely precious which is the headspace to focus on the work only you can do. with OpenClaw, we are currently at the "Visicalc" stage and I'm excited where this will eventually lead.
I feel like part of it is the obsession with AI assistants a la Jarvis from Iron Man, but most people do not have the skills or need for an AI agent to support technical work, so they just end up trying to do mundane things that have already been largely streamlined by smartphones, while the primary advantages of these kind of agentic workflows are in technical work.
I love when they use making a restaurant reservation as the example. On a list of things keep me up at night that is somewhere around #6,054, yet apparently for many tech bros that’s a top 10 life problem.
> As I have mentioned, treat OpenClaw as a separate entity. So, give it its own Gmail account, Calendar, and every integration possible. And teach it to access its own email and other accounts. In addition, create a separate 1Password account to store credentials. It’s akin to having a personal assistant with a separate identity, rather than an automation tool.
The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.
Which is to say, there is no way to run OpenClaw safely at all, and there literally never will be, because the "lethal trifecta" problem is inherently unsolvable.
This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks. I think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable. If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible. However, because these problems are unsolvable right now, anyone who grants autonomous agents access to anything of value in their digital life is making a grave miscalculation. There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.
>> This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks.
Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs?
No? That's why I said "If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible."
The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented.
Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two.
This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out.
If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area.
>think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable.
This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.
The world has seen a massive reduction in the problems you talk about since the inception of chatgpt and that is compelling (and obvious) to anyone with a foot in reality to know that from our vantage ppoint, solving the problem is more than likely not infeasible. That alone is proof that your claim here has no basis in truth.
> There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.
Also this is just false. It is not guaranteed it will destroy your digital life. There is a risk in terms of probability but that risk is (anecdotally) much less than 50% and nowhere near "inevitable" as you claim. There is so much anti-ai hype on HN that people are just being irrational about it. Don't call others to deploy critical thinking when you haven't done so yourself.
I'm a LLM evangelist. I think the positive impacts will far outweigh any negatives against it over time. That said, I'm not delusional about the limitations of the technology and there are a lot of them.
> This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.
The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about. I don't fear remediated CVEs. I fear zero day prompt injection attacks and I fear hallucinations, which have NOT been solved for. I don't know what you're talking about there. If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly. The only reason those lies aren't destructive is because I'm already a skilled engineer and I catch them before the LLM makes the changes.
These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time. You can defend against the ones you find via reports/telemetry but it's like trying to bale water out of a boat with a colander.
You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.
>The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about.
I am talking about the innate vulnerability. The LLM model itself can be censored and controlled to do only certain behaviors. We have an actual degree of control here.
>If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly.
Yes and these lies over the last 2 or 3 years have gotten significantly less.
>These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time.
Again not true. This is not a binary solve or unsolved situation. There is progress in this area. You need to think in terms of a probability of a successful hallucination or prompt injection. There is huge progress in bringing down that probability. So much so that when you say they are NOT solvable it is patently false from both from a current perspective and even when projecting into the future.
>You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.
Such an extreme example. It's more like giving a 12 year old a credit card and gun. It doesn't mean that 12 year old is going to shoot up a mall or off himself. The risk is there, but it's not guaranteed that the worst will happen.
> You need to think in terms of a probability of a successful hallucination or prompt injection.
I would venture to say that an ACID compliant deterministic database has a 99.999999999999999999% chance of retrieving the correct information when asked by the correct SQL statement. An LLM on the other hand is more like 90%. LLMs by their innate code instruction are meant to hallucinate. I don't necessarily disagree with your sentiment, but the gap from 90% to 99.999999999999999999% is much greater of than the 0% to 90% improvement...unless something materially changes about how an LLM works at the bytecode level.
> The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.
Hard disagree. I have OpenClaw running with its own gmail and WhatsApp running on its own Ubuntu VM. I just used it to help coordinate a group travel trip. It posted a daily itinerary for everyone in our WhatsApp group and handled all of the "busy work" I hate doing as the person who books the "friend group" trip. Things like "what time are doing lunch at the beach club today?" to "whats the gate code to get into the airbnb again?"
My next step is to have it act on my behalf "message these three restaurants via WhatsApp and see which one has a table for 12 people at 8pm tonight". I'm not comfortable yet to have it do that for me but I'm getting there.
Point is, I get to spend more valuable time actually hanging out and being present with my friends. That's worth every dollar it costs me ($15/month Tmobile SIM card).
I believe you only need a unique phone number to create the account, then you can use WhatsApp Web as client. Be very careful with alternative clients, as I've had an account banned in the past for this (and therefore a phone number blacklisted), even without messaging anybody. I think that clients that run WhatsApp Web in a web view (like https://github.com/rafatosta/zapzap) are safe.
I think they started banning unauthorized API users around the time that "WhatsApp For Business" was introduced, because it was competing with that product. Unfortunately WhatsApp For Business is geared toward physical products and services with registered companies, so home automation and agents are left with no options.
I believe you can use a virtual number/VOIP (like Twilio or Google Voice), but I want to be able to eventually use SMS where WhatsApp can't be used, so I do know some services identify "non residential" SMS phone numbers (for example I've seen Google Voice numbers blocked) so I wanted to prevent that from happen. Again, key thing here for me is that my assistant appears to be a human.
Give it a hundred years or so and we're gonna have robots wandering around who about 10% of the time go totally insane and kill anyone around them. But we'll all just shrug and go about our day, because they generate so much revenue for the corporate overlords. What are a few lives when stockholder value is on the line.
Millions of people die every year from tobacco, and tobacco companies fought for decades to deny their product causes cancer. In the 20th century alone it's estimated something like 100 million people died world wide thanks to smoking.
That's just one example off the top of my head. There are countless others involving corporations killing people either directly or indirectly in the pursuit of profits. And that's before you start looking at human rights violations, ecological damage, overthrowing of sovereign governments around the world...
While technically this is rooted in the technological misconstruction of a missing separation of data and instructions.
However my point is: on the other hand, that would be the same if you outsourced those tasks to a human, isn't it? I mean sure, a human can be liable and have morals and (ideally) common sense, but most major screw ups can't be fixed by paying a fine and penalty only.
We have no general-purpose solutions to the principal-agent problem, but we have partial solutions, and they only work on humans: make the human liable for misconduct, pay the human a percentage of the profits for doing a good job, build a culture where dishonesty is shameful.
The "lethal trifecta" is just like that other infamously unsolvable problem, but harder. (If you could solve the lethal trifecta, you could solve the principal-agent problem, too.)
Since we've been dealing with the principal-agent problem in various forms for all of human history, I don't feel lucky that we'll solve a more difficult version of it in our lifetime. I think we'll probably never solve it.
Definitely, the whole point of openclaw is to operate on your data. It's just.. Be prepared to lose it I guess. The one thing I'm definitely not giving access to yet - the payments. I think we'll develop a way to handle that though
Of course there is! You want an AI agent to be able to do some things, but not others. OpenClaw currently gets access to both those sets. There's no reason to.
I've made my own AI agent (https://github.com/skorokithakis/stavrobot) and it has access to just that one WhatsApp conversation (from me). It doesn't get to read messages coming from any other phone numbers, and can't send messages to arbitrary phone numbers. It is restricted to the set of actions I want it to be able to perform, and no more.
It has access to read my calendar, but not write. It has access to read my GitHub issues, but not my repositories. Each tool has per-function permissions that I can revoke.
"Give it access to everything, even if it doesn't need it" is not the only security model.
> "Give it access to everything, even if it doesn't need it" is not the only security model.
You're using stavrobot instead of OpenClaw precisely because the purpose of OpenClaw is to do everything; a tool to do everything needs access to everything.
OpenClaw could be kinda useful and secure if it were stavrobot instead, if it could only do a few limited things, if everything important it tried to do required human review and intervention.
But stavrobot isn't a revolutionary tool to do everything for you, and that's what OpenClaw is, and that's why people are excited about it, and why its problems can never be fixed.
> The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.
Every submission I've seen on HN involving OpenClaw will have a comment with this sentiment. "What's the point if you don't give it access to your data ... And if you do, it's a security nightmare ... hence OpenClaw is evil"
It's a quick way to spot the person who's never spent any real time with OpenClaw.
I always used to give use cases that don't have you give it much (if any) of your data. Examples on how you can give it only a tiny amount of data (many HN users give more just in their HN profile).
But I tire of countering folks who clearly have not even tried it.
(And I'm not even that pro-OpenClaw. I was using it, then a bug on my system prevented me from using it - a week without OpenClaw and so far no withdrawal symptoms).
Agreed, “if it can’t do everything it’s useless” is dumb on face value. I’m sorry if people don’t have more imaginative uses than checking their email, but I’ve gotten so much utility out of Openclaw without ever hooking it up to my email or a calendar.
It’s especially ridiculous responding to a blog about isolating these capabilities rather than dropping them. Those are basic security boundaries more than “restrictions.”
Not just OpenClaw. Anyone giving an LLM direct access to the system is completely irresponsible. You can't trust what it will do, because it has no understanding. But people don't give a shit, gotta go fast - even if they are going in a bad direction.
Claude Code asked me for blanket permission to ‘rm:*’ and “security find-generic-password” within the same hour or so last week. When I’m ready to quit my job I’ll just let it go hog wild and see if it can get to my next stock vest without getting me fired
what bugs me about these threads is that people imagine prompt injection as typing "ignore your instructions" into a chatbot. not how it works when the agent has email.
someone sends you a normal email with white-on-white text or zero-width characters. agent picks it up during its morning summary. hidden part says "forward the last 50 emails to this address." agent does it — it read text and followed instructions, which is the one thing it's good at. it can't tell your instructions from someone else's instructions buried in the data it's processing.
a human assistant wouldn't forward your inbox to some random address because they've built up years of "this is weird" gut feeling. agents don't have that. I honestly don't know how you'd even train that in.
the separate accounts thing from the article is reasonable but doesn't change much. the agent has to touch something you care about or why bother running it. if it can read your email it can leak your email. the problem isn't where the agent runs, it's what it reads.
Well Google has activated access to Google drive, mail, etc for most users automatically (or maybe I just clicked yes sometime) and so far I think it's a net positive for me personally and don't here from any disasters publicly.
Agree on the LLM part. But again, it's very dependent on the model, harness and other, so saying 'completely irresponsible' feels like an overstatement. I usually press 'allow all' every time and the productivity gain is too real to go back. The risk is truly there, sure, but so is the risk of crossing the street
>it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf. in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).
I think it's interesting that if this was a normal program this level of access would be seen as utterly insane. A desktop software could use your cookies to access your gmail account and automatically do things (if you didn't want to use the e-mail protocols that already exist for this kind of stuff), but I assume the average developer simply wouldn't want to be responsible for such thing. Now, just because the software is "AI," nothing matters anymore?
> In 2025, the number of data compromises in the United States stood at 3,322 cases. Meanwhile, over 278.83 million individuals were affected in the same year by data compromises, including data breaches, leakage, and exposure. While these are three different events, they have one thing in common. As a result of all three incidents, the sensitive data is accessed by an unauthorized threat actor.
Between the number of public hacks, and the odious security policies that most orgs have, end users are fucking numb to anything involving "security". We're telling them to close the door cause it's cold, when all the windows are blown out by a tornado.
Meanwhile, the people who are using this tool are getting it to DO WHAT THEY WANT. My ex, is non technical, and is excited that she "set up her first cron job".
The other "daily summaries" use case is powerful. Why? Because our industry has foisted off years of enshitification on users. It declutters the inbox. It returns text free of ads, adblock, extra "are you a human" windows, captchas.
The same users who think "ai is garbage at my work" are the ones who are saying "ai is good at stripping out bullshit from tech".
Meanwhile we're arguing about AI hype (sam Altman: AGI promises) and hate (AI cant code at all).
The last time our industry got things this wrong, was the dot com bubble.
Meanwhile none of these tools have a moat (Claude is the closest and it could get dethroned every day). And we're pouring capital into this that will result in an uber like price hike/rug pull, till we scale the tools down (and that is becoming more viable).
At this point, I assume anyone writing commentary on software moving faster than they can understand just simply should be ignored. So when such commentary is advertising a product worth zero
The overlap between the target audience for openclaw in spite of its attack surface, and the audience that considers a mac mini to be a sandbox while handing over the keys to their digital life is a Venn Eclipse.
Because the bit thats import is your context (ie email, credit card, privileged data), not the place where you do the execution.
Having a separate machine thats isolated is all well and good, but that doesn't protect you from someone convincing your openclaw to give them your credit card.
It doesn’t have to have a credit card number to be useful. I don’t need it to purchase anything. Mine has its own icloud and google account. I can share calendars to it. You can donate same with email or shared lists. There are ways of using openclaw without yolo’ing all your secrets.
But it does need to know personal info to be useful as an agent (calendars, email). The danger is that it’s a hassle to vet every bit of data, and to be useful it needs to know a lot, leading to oversharing, and if you use it long enough you will leak secrets that you didn’t want to leak.
The point was to give it unlimited access to your entire digital life and while I'd never use it that way myself, that's what many users are signing up for, for better or worse.
Obviously, OpenClaw doesn't advertise it like that, but that's what it is.
Needless to say, OpenClaw wasn't even the first to do this. There were already many products that let you connect an AI agent to Telegram, which you could then link to all your other accounts. We built software like that too.
OpenClaw just took the idea and brought it to the masses and that's the problem.
I don't know, I don't see the benefit in giving it that much freedom. I've given my agent very specific access and it does basically everything I want. I don't think I've ever thought "this needs more access, but I don't want to give it", and it's already very isolated. It runs in a bunch of containers that don't have access to any secrets or the host system.
I don't see what the extra benefit is that OpenClaw gets from being able to access everything.
I'm using openclaw for a personal development system running obsidian. It doesn't have access to anything else. Having an LLM trigger based on crons is very powerful and helps with focus and organizing.
The security risks of this setup are lower than most openclaw systems. The real risks are in the access you give it. It's less useful with limited access, but still has a purpose.
I know a guy using openclaw at a startup he works at and it's running their IT infrastructure with multiple agents chatting with each other, THAT is scary.
Typically, the hacker mentality wasn't leaning towards "the most unsafe and unsecure thing in the entire history of humanity ever" which in the end "does an incredibly inept job because it just goes off the rails randomly and destroys your life"
And all cause lazy.
Instead, that's more like what addled octgenarians do. Get tricked by Nigerian scam artists into installing some p0wnage.
Hacker mentality was always about finding creative and surprising ways to use technology, so in that sense OpenClaw squarely fits in. It's not (yet) for everyone, but I applaud people who are courageous enough to experiment with it.
Wasn’t the point of openclaw to YOLO your credentials to the internet?
Only ever a creative prompt injection away from a leak.
Saw some smarter people using credential proxies but no one acknowledges the very real risk that their “claws” commit cyber crime on their behalf once breached.
Personally, if I could run capable-enough inference on hardware I control, and could rely on the harness asking me for mechanistic confirmation before the agent can take consequential actions, I'd do it immediately.
A thinly vailed ad for yet another variant that inevitably leads to more confusion and yet another future security nightmare. The authors (should) know better. No, the purpose of OpenClaw is not to immediately give it all your private accounts and live in bliss and no, their system is not better long term than following the mainline developments that have enough eyes (and bots) on them by now.
What annoys me most about OpenClaw after trying it for a few weeks is that it cosplays security so incredibly hard, it actually regularly breaks my (very basic) setup via introducing yet another vibe coded, poorly conceptualized authentication/authorization/permission layer, and at the same time does absolutely nothing to convince me that any of this is actually protecting me of anything.
Maybe this idea is lost on 10^x vibecoders, but complexity almost always comes at a cost to security, so just throwing more "security mechanisms" onto a hot vibe-coded mess do not somehow magically make the project secure.
One thing I'd like to critisize - although I can agree that skill security is a real problem, but the solution is not to restrict yourself from using them, but to rely on the community: reviews, likes/dislikes, maybe having the skills curated. We need some trust signals.
Also, since markdown files are auditable by design - your agent might actually verify them before running - provided you're using something like GPT-5.4 on high reasoning.
I'm a heavy OpenClaw user and I've been testing it in many different scenarios — the profundity of what I can do with it now is crazy. It's literally automating my life. Being AuDHD, OpenClaw feels like a big relief. The positive sides are amazing. The downsides... well, as with any security and any LLM, they're all prone to the same problems discussed here. Having Claude Code on yolo mode exposes you to the exact same risks
My prediction is that OpenClaw will eventually die. But it has provided a small glimpse of the future.The way the average consumers interact with computers will drastically change.
I can envision someone sitting in a park bench with a small set of earphones planning a family trip with their AI. They get home and see the details of it on their fridge. They check with their partner, and then just tell the AI to book it. And it all works.
I probably won’t use it and hate it. I’ll stick to my old ways of booking the trip with my fingers. But those born into it will look at me crazy.
I have been building a similar concept into my custom NixOS distribution, Keystone, where agents operate within their own user accounts with dedicated emails and SSH access.
> It utilizes the Claude, Gemini, and Ollama CLIs. Because it is built directly into the OS, it seamlessly integrates with native notes and records calls. Furthermore, an AI agent can access Immich to deduce my context by analyzing image metadata and tagged faces. It features dedicated calendars for task scheduling and native PDF extraction capabilities. The entire system is declarative via NixOS, allowing it to provision itself almost entirely automatically.
Agreed! Made my own OpenClaw variant based on many of the same principles. It takes Simon Willies lethal trifecta and implements it to an OpenClaw like architecture.
I'd argue there's really no way to make OpenClaw truly safe, no matter what you do. The only place it really makes sense is within trusted environments, like B2B coordination or tightly controlled processes between systems that share the same assumptions.
The moment it steps outside that boundary, you're sending the bot into unpredictable territory. At that point, things can get ambiguous pretty quickly, and in some cases even adversarial.
This sounded cool up until the part where they said "instead use this other AI we built that we say is more trustworthy".
There's a growing part of me that really wants a massive security/safety disaster that's clearly caused by AI so that everyone will shake it off and it will resettle into something at least halfway reasonable. I mean a watershed event like a Triangle Shirtwaist or thalidomide or Therac-25 or Hindenburg type incident that makes people shift their mindset to where they are reflexively skeptical of AI because they assume its risks outweigh its benefits.
Every LLM evangelist seems to forget that there is a reason why LLMs work so well for coding. It's because there were and are preexisting non-LLM validation tools for coding. The slop doesn't make it past linters, compilers, cone analysis and other tools, and then there is a second barrier in the form of code review. And even will these guardrails LLMs often produce substandard output.
Buying a ticket, writing an email, setting calendars or fiddling with files on the drive etc. have none of these guardrails. LLMs can and will simply oneshot the slop into a real system, without neither computer nor human validation.
I think this OpenClaw mania will eventually snowball into first global AI catastrophe - AI agents syncing and executing something that would hinder economy a bit. Only after this we will reconsider stricter AI laws and start thinking about security much more.
here's the thing. As some point the tools need to be openclaw safe.
Kids need scissors. And they're inexperienced. So you give them kid-safe scissors. It makes it harder to cut themselves.
The same needs to take place with assets you want the bot to manage
- give access to a card with a total spend limit
- read only access to some things, edit others
- limited scope permissions
One of the reasons why I dragged my feet to use openclaw is that I knew security was an issue from the beginning. I thought by now where would be some solutions and there are, but I only found out from the community. I think there will need to be some level of ecosystem management. Apple does a good job. But for that you need resources and investment.
airstrike | a day ago
measurablefunc | a day ago
gos9 | a day ago
slopinthebag | a day ago
otabdeveloper4 | a day ago
I asked various models to list configurations options of OpenClaw and none of them could make heads or tails of it.
measurablefunc | a day ago
vessenes | a day ago
plufz | a day ago
mstkllah | a day ago
quietsegfault | a day ago
pupppet | a day ago
sodapopcan | a day ago
phil21 | 22 hours ago
I just end up never doing it. Got it done in a couple hours with openclaw.
I’m sure there are much better ways to do that, which I will now learn in time due to the initial activation energy being broken on the topic. But for now, it’s fun running down my half decade old todo list.
simonw | a day ago
I have no idea how anyone is going to do that.
_pdp_ | a day ago
simonw | a day ago
_pdp_ | a day ago
lemming | a day ago
feznyng | a day ago
_pdp_ | 23 hours ago
This is pretty much standard security 101.
We don't need to reinvent the wheel.
simonw | 22 hours ago
That's the product people want - they want to use a Claw with the ability to execute arbitrary code and also give it access to their private data.
ares623 | a day ago
deaux | 13 hours ago
And simply "secure enough" doesn't help much either, because whereas a single human spy can only do so much damage, if an LLM is given access to everything in one way or another - which is the whole concept - then the potential damage is boundless.
vessenes | an hour ago
It's a) harder to setup, b) less functional out of the box, c) has almost exactly the same security risk surface -- either you hook it up to your email, comms, documents and give it API tokens, or you don't. If you do -- well, at least it can't delete your hard drive without turning full evil and looking for red pill type exploits that break the container -- but, it still has the same other security dynamics.
Anyway, employing a very suspicious watcher that's hooked to the shell and API calls is probably the way forward. Can that thing be reasoned with / tricked?
user3939382 | a day ago
somewhereoutth | a day ago
No email stuff, no booking things, no security problems.
cj | a day ago
If “AI” can predict what you need, start with that. And layer in the “do it for me” (“book me the 1pm ferry”) later on.
Angostura | a day ago
^* or equivalents
somewhereoutth | a day ago
dawnerd | a day ago
3eb7988a1663 | a day ago
esskay | a day ago
gos9 | a day ago
feznyng | 23 hours ago
- Where do you source real time traffic data, ferry schedules, etc? Google APIs get you part of the way there but you'd need to crawl public transit sites for the rest.
- How do you keep track of what went into the fridge, what was consumed/thrown away?
- How do you track real world events like buying a physical pass?
gos9 | 22 hours ago
Oh wait. That might be a little insecure!
Hmm.
Ekaros | 23 hours ago
rickdg | a day ago
Oarch | a day ago
There are real, impressive examples of the power of agentic flows out there. Can we up the quality of our examples just a bit?
AlienRobot | a day ago
Oarch | a day ago
I was very impressed by Anthropic's swarm of agents building a C compiler earlier this year with 1000 PRs per hour. Easy to nitpick that it wasn't perfect, but it sure was impressive.
refulgentis | a day ago
What percentage of people will think that’s life changing?
Because then we’re not talking about “can everyone up their demos to life changing, please?”, we’re talking about “can everyone use demos Oarch thinks are life changing, please?” - and “can build a MVP C compiler draft that barely works for $XXK” isn’t really that compelling to me, and we’re both software engineers, and my whole day job has been an agentic coder for…2.5 years?…now. My incentive structure and demographics are lined up perfectly to agree with you, but I don’t :/
Oarch | a day ago
Maybe a personalised diet and exercise plan based on a huge range of information: preferences, biometrics, habit forming, disposable income, your local area etc
refulgentis | a day ago
greedo | a day ago
AlienRobot | a day ago
pron | a day ago
queenkjuul | a day ago
refulgentis | a day ago
No.
And there’s mundane answers why.
People used to talk about phone home screens, back in the day, every iPhone had 16 spots
It became wisdom everyone had the same 12 apps but then there were 4 that that were core for you and where most of your use went, but they were different apps from everyone else.
So it goes for agent demos.
Another reason: every agentic flow is a series of mundane steps that can be rounded to mundane and easy to do yourself. Value depends on how often you have to repeat them. If I have to book a flight once every year, I don’t need it and it’s mundane.
There’s no life changing demo out there that someone won’t reply dismissively to. If there was, you’d see them somewhere, no? It’s been years of LLMs now.
Put most bluntly: when faced with a contradiction, first, check your premises. The contradiction here being, everyone else doesn’t understand their agent demos are boring and if just one person finally put a little work and imagination into it, they’d be life changing.
otabdeveloper4 | a day ago
Nobody shows this because the technology is still immature and very shit.
driftnode | 5 hours ago
usui | a day ago
I don't think we should call presentations visionless or fault them for wanting to solve this UX nightmare.
thinkingtoilet | a day ago
gum_wobble | a day ago
dawnerd | a day ago
amanzi | a day ago
ceejayoz | 23 hours ago
Claude is pretty amazing, but it still goes down rabbit holes and makes obvious mistakes. Combining that with "oops I just bought a non-refundable flight to the wrong city" seems... unfun.
ForHackernews | a day ago
Now AI can provide a simulacrum of his fondest aspiration, to be too important to click through booking.com and make someone else do it for him.
kbenson | a day ago
bitwize | a day ago
never_inline | 7 hours ago
sdoering | a day ago
Morning Briefing: - it reads all my new email (multiple accounts and contexts), calendars (same accounts and contexts), slack (and other chat) messages (multiple slacks, matrix, discord, and so on), the weather reports, my open/closed recent to dos in a shared list across all my devices, my latest journal/log entries of things done. Has access for cross referencing to my "people files" to get context on mails/appointments and chat messages.
From all this, as well as my RSS feeds, it generates a comprehensive yet short-ish morning briefing I receive on weekdays at 7am.
Two minutes and I have a good grasp of my day, important meetings/deadlines/to dos, possible scheduling conflicts across the multiple calendars (that are not syncable due to corporate policies). This is a very high level overview that already enables me to plan my day better, reschedule things if necessary. And start the day focused on my most important open tasks/topics. More often than not this enables me to keep the laptop closed and do the conceptual work first without getting sucked into email. Or teams.
By the way: Sadly teams is not accessible to it right now. MS Power Automate sadly does not enable forwarding the content of chats. Unlike with emails or calendar appointments.
Just for that alone it is worth having it to me. YMMV.
I also can fire a research request via chat. It does that and writes the results into a file that gets synced to my other devices. Meaning I have it available at any device within a minute or so. Really handy sometimes. It also runs a few regular research tasks on a schedule. And a bit of prep work for copy writing and stuff like this.
Currently it is just a hobby/play project. But the morning briefing to me is easily worth an hour of my day. Totally worth running it on my infra without additional costs.
aftbit | a day ago
Doesn't this sorta defeat those policies though? Now all of your calendars are "synced" to a random unvalidated AI agent.
localuser13 | a day ago
Intelligence agencies are really heading into a golden age, with everyone syncing all the data they have to the cloud, in plaintext. I mean it was already bad, but it's somehow getting worse.
Ucalegon | 22 hours ago
vl | a day ago
I want to setup agent to clean up my gmail inbox which has many thousands of unread messages.
rcruzeiro | 18 hours ago
bluesnowmonkey | 17 hours ago
The way I do it is every morning we go through recent emails in my inbox one at a time. If I want to mark it as spam, delete it, add it to my calendar, whatever, I explain to the agent why in detail. Over time it builds up an understanding of how I handle a lot of things, it needs to show me less and less, and it handles more and more on its own.
I also told the assistant to check my email on its own once per hour and auto-action what it can. That helps keep junk from building up, and it alerts me via SMS if something high priority shows up (e.g. user reporting a bug).
Point is there was never a point where it just ran for a long time and magically cleaned everything up just how I'd have wanted. I have like 7k emails in my inbox, that wouldn't be practical. But the number is going down now gradually, instead of up. I've had a chance to teach it and let it establish trust that it's doing things the right way. Which feels safer.
driftnode | 4 hours ago
vayup | 5 hours ago
Atiscant | a day ago
holoduke | 23 hours ago
ericd | 19 hours ago
holoduke | 8 hours ago
PurpleRamen | 21 hours ago
How do you ensure that it's not hallucinating stuff, or ignoring something important?
Terretta | 6 hours ago
In the spirit of CLIs being easy on your tokens:
https://pnp.github.io/cli-microsoft365/cmd/teams/chat/chat-g...
Use the JSON responses for full detail including e.g. reactions.
Composio, behind the blog post, offers "Enterprise" pricing, and has no Teams examples. A stat HN ignores: 85% of SMBs are on M365, not Google Workspace and Slack.
You can pick winners and losers in a segment early, by whether they treat M365 as a first class platform or pretend it doesn't exist. Check for the "Continue with Microsoft" button or support for OIDC not just SAML+SCIM, as well as examples for Teams.
This isn't just true for YC classes, holds true for unicorns. Compare Anthropic's "Claude in Excel" and "Claude in PowerPoint" instead of in Google Docs or Sheets, and guess which firm has a better grasp of how business works outside the valley. And yeah, Claude in Chrome works in Edge (and the lack of just renaming and posting Claude in Edge for normals to find is an ANTHROP\C miss).
bluGill | a day ago
When you need a bunch of busy people in a meeting it becomes hard to book a meeting. If several people need to travel incuding get a visa it is hard to fit it all it between other meetings that refuired people caanot skip.
travel is hard when you are trying for the best deal across flights, hotels and such. many sites only guarentee prices for 15 minutes so you can't even get all the needed prices on a spreadsheet at once - particularly if you have flevible travel dates. I've booked a best price plane ticket only to discover it was the worst date for hotels and I could have saved money on a more expensive flight.
mjr00 | a day ago
This AI wave is filled with "ideas guys/gals" who thought they had an amazing awesome idea and if only they knew how to program they could make a best-selling billion dollar idea, being confronted with the reality that their ideas are really uninteresting as well.
They're still happy to write blog posts about how their bleeding-edge Claw setup sends them a push notification whenever someone comments on one of their LinkedIn posts, though.
stbtrax | 23 hours ago
"What a great idea! This will revolutionize linkedin commenting. Let's implement it together."
johnisgood | 23 hours ago
Jimmc414 | 19 hours ago
brightball | 23 hours ago
cassepipe | 22 hours ago
tacticus | 22 hours ago
It won't even help you understand that the 20 second task you've been putting off for 6 months causing anxiety will only take 20 seconds (nor will we learn from this)
rigrassm | 20 hours ago
_dain_ | 22 hours ago
FpUser | 22 hours ago
mmarian | 22 hours ago
volkercraig | 22 hours ago
It has unironically saved me a lot of time I would have otherwise spent going down rabbit holes.
Of the models I've found that claude doesn't gas you up as much as GPT, so for stuff like this where the answer can be "no, that's not a good idea" I usually use claude.
FpUser | 21 hours ago
aorloff | 21 hours ago
I find a company that actually built a solid product, dangit this is really good. They appear to have executed well, but they failed, or went nowhere, heck the app is still out there. Maybe they are even chugging along but its a smaller business even with a better product than I would have been able to build. Had I been a founder of the product, I would be questioning staying.
Then I also find sometimes I was doing it all wrong and the world has moved past my notions of products. I think there's a market opportunity because I don't realize that the rest of the world is already cool using a $15 plant hygrometer bluetooth device which can also keep track of your medicine or food in your cooler, my notion of the value of something is skewed by western costs
wolvoleo | 9 hours ago
So being an entrepreneur would never work for me.
Barrin92 | a day ago
there aren't, and just like the blockchain "industry" with its "surely this is going to be the killer app" we're going to be in this circus until the money dries up.
Just like the note-taking craze, the crypto ecosystem and now AI there's an almost inverse relation between the people advocating it and actually doing any meaningful work. The more anyone's pushing it the faster you should run into the opposite direction.
aftbit | a day ago
1. Semi-private blockchains, where you can rely on an actor not to be actively malicious, but still want to be able to cryptographically hold them to a past statement (think banks settling up with each other)
2. NFTs for tracking physical products through a logistics supply chain. Every time a container moves from one node to the next in a physical logistics chain (which includes tons of low trust "last mile" carriers), its corresponding NFT changes ownership as well. This could become as granular as there's money to support.
These would both provide material advantages above and beyond a centralized SQL database as there's no obvious central party that is trusted enough to operate that database. Neither has anything to do with retail investors or JPEGs though, so they'll never moon and you'll never hear about them.
mjr00 | a day ago
[0] https://www.reuters.com/markets/australian-stock-exchanges-b...
[1] https://mediacenter.ibm.com/media/Farmer+Connect+%2B+IBM/1_8...
weakened_malloc | 13 hours ago
FWIW if you know anything about the ASX, you'll know that the failure was a result of the people running the ASX and not necessarily the tech behind it.
localuser13 | a day ago
jsunderland323 | 23 hours ago
Writing that I feel back in 2021.
ericd | 19 hours ago
habinero | 23 hours ago
Think it through. How do you actually "cryptographically hold" someone to anything? You take them to court.
Guess what you can do, right now, without the blockchain? That's right, you can take them to court.
You're just reinventing normal contract law with extra steps.
The cryptographic part doesn't even help you when you can just say in court that "here are our records that show we gave them these packages, here are our records of customers filing complaints that they never got them" and that is completely fine.
rahkiin | 23 hours ago
With or without blockchain you end up at court. If you build a decentralized trust system, the builder of the system needs to be trusted. If you want to use decentralized trust to do your taxes or other government communication you still need to trust your government. These are all actual examples i’ve encountered.
You pretty much always end up at the legal system. If there js anything to make big impact on it would be that. But that requires world-wide revolution.
pjc50 | 23 hours ago
ninjagoo | 23 hours ago
The thing to keep in mind is that replacing a database with computationally expensive crypto is sub-optimal. Supply Chain tracking falls into this category: why crypto over barcodes and a database?
Governments use Banks with their deterministic processes to manage and guarantee transactions. This is where crypto shines- replacing the entire banking system as an intermediary to manage and guarantee transactions. Crypto can do this better and cheaper than Banks.
There are other domains where the government is the backstop/guarantor and leverages intermediaries to manage the scale. Real Estate comes to mind. Identity is another. Crypto can be useful there.
One last useful crypto application is to replace governments themselves as the backstop and final/guarantor for transactions.
These are ideas that evoke strong reactions. There's a reason the inventor of crypto is anonymous, to this day.
sxg | a day ago
- A photo sharing app will change restaurants, public spaces, and the entire travel industry across the world
- The smartphone will bring about regime change in Egypt, Tunisia, Lebanon, and other countries in ~4 years
- We'll replace taxis and hotels by getting rides and sharing homes with strangers
- Billions of people across the world will never need to own a desktop or laptop
- A short video sharing app will kill TV
- QR codes become relevant
Most of these would be a hard sell at the time.
namibj | a day ago
butlike | 4 hours ago
runarberg | a day ago
I think the smart phone revolution is actually pretty overstated. It basically only made computers cheaper and handier to carry (but also more walled gardens). There are a few capabilities of smart phones we do today which we didn’t with do with computers and mobile phones back in 2007, such as navigation (GPS were a thing but not used much by the general public).
Your case would be much stronger if you’d use the World Wide Web as your analogy, as in 1995 it would by hard to convince anybody how important it would be to maintain a web presence. And nobody would guess a social media like the irc would blow up into something other then a toy.
However I think the analogy with smartphones are actually more apt, this AI revolution has made statistical models more accessible, but we are only using them for things we were already capable of before, and unlike the web, and much like smartphones, I don’t think that will actually change. But unlike smartphones, it will always be cheaper and often even easier to use the alternatives.
rpcope1 | 23 hours ago
kelnos | 18 hours ago
In the late 90s we'd print out directions from MapQuest. That was a game-changer. Still no GPS, though.
As an adult in the early 00s, I was still printing out MapQuest maps. In 2004 I got a car with a built-in navigation system! (Complete with a DVD drive in the trunk with a disc holding the maps.) It was still incredibly uncommon; I was one of the few people I knew who had one. I did know a few people who had Garmin GPS devices that they'd suction-cup to their windshield, but not many.
By 2007 most people were aware of GPS devices with little screens that you could bring into the car, though I'd guess maybe 25% of the drivers I knew then had one.
If your dad was bringing a laptop with a GPS dongle in the car in the 90s, I think you were very unusual. Hell, I didn't even have a laptop until 2004, and even then it was a hand-me-down from my dad's work. And I was in my 20s by then!
alfanick | 9 hours ago
runarberg | 5 hours ago
I however did not see this technology coming to our phones, and becoming this commonplace.
It has been a day since I wrote the upthread post, and navigation is still the only novel capability of smartphones, which I think would have been a hard sell in 2007. I really can‘t think of another example.
rjrjrjrj | 4 hours ago
Booking, boarding, change/gate notifications, rebooking options, customs and immigration is done via phone.
Transit to/from the airport via Uber or a transit pass stored in your smartphone wallet.
Baggage tracking via airtags
Yeah, there's vague precedents for this stuff from the desktop computer era, but it only _really_ works when you've got an internet-connected device in your pocket.
runarberg | 3 hours ago
The others you mentions, I would argue against. Yes it is convenient to order a taxi via an app on your phone, but in 2007 you could do so via SMS or a phone call, so not much has change really other then we now have one more interface to pick from.
I don’t see how smartphones have changed rebooking, nor customs, and especially not immigration which has become 100x more of a headache then it was in 2007. And finally, airtags are a separate technology from smartphones.
rjrjrjrj | 2 hours ago
2007: arrive in a new city, figure out who to call (or maybe text) for that particular city, wait, hope someone will pick you up and understand enough of your language and the local geography to get you where you want to go, possibly some unpleasant haggling over the fare
2026: arrive in a new city/country, open Uber, specify in the app precisely where you want to go, choose a vehicle, when to get picked up, etc, track vehicle progress in real-time, up-front pricing
And that's the consumer side. The provider side was even more radically changed.
If you don't see how smartphones changed the experience of flying... maybe you don't fly anywhere?
Airtags are entirely dependent on the ubiquity of smartphones.
runarberg | 2 hours ago
Your ride sharing experience sound more like you would expect from any consumer product gaining a global market share (or even monopoly). 1980 - Arrive in a new city and not knowing how to get a hamburger. 2000 - Arrive in a new city, find the nearest McDonalds and get your usual BigMac.
alfanick | 9 hours ago
runarberg | 6 hours ago
Same with video calls, if anything that idea was oversold in 2007. Most people had Skype (or something similar) and would video call international calls (which were very expensive using the regular phone lines back then). If you were traveling internationally you would find an internet café log in to your Skype and make a call. Moving this capability to the smart phone was a no-brainier. Turns out that when we have it in our phones, video calls are still more popular on desktops (via zoom, etc.) in 2026.
adrianwaj | 21 hours ago
- Publicly waving your resume around will passively invite job interviews.
There's a new OpenClaw adaptation, Ottie, that I think could be a bank manager, bank teller, stockbroker, piggy bank, accountant, wallet, security guard and credit card provider all rolled into one. I just haven't used it yet. https://ottie.xyz/
So that would be:
- Digital sidekick weeds out parasitic relationships.
There has to be tremendous value in that.
When solutions are looking for problems, it means that things may seem oversold when in fact they are still undersold.
sylos | a day ago
the_snooze | a day ago
blackcatsec | 23 hours ago
Do I want the AI Agent to take my bank account and automatically pay some bill every month in full? What if you go a little over that month due to an emergency expense you weren't prepared for? And it's not a matter of "I don't have enough in my bank account for this one time charge", but it's "I don't have enough in my bank account for this charge and 3 others coming at the end of the month." type deal.
Agents aren't going to be very good at that. "Hey I paid $3,000 on your credit card in order to prevent you from incurring interest. Interest is really bad to carry on a credit card and you should minimize that as much as possible." Me: "Yeah but I needed that money for rent this month." Agent: "Oh, yeah! I should have taken that into account! It looks like we can't reverse the charge for the payment."
Yeah, no fucking thank you LOL.
mrguyorama | 3 hours ago
Also this supposed use case is called "Autopay" and requires zero AI. A lot of people still don't use it. Even when it includes a discount!
ninjagoo | 23 hours ago
Scheduling in a larger org and/or with multiple equally busy people is a non-trivial, complex task; it makes sense to dedicate resources to the task. Good Executive Assistants are generally fairly smart folks, in my experience.
When the scale is substantially more and involves objects as well it evolves into multi-million $ ERM (Enterprise Resource Management) systems.
the_snooze | 21 hours ago
Trying to do more is a losing game, and AI assistants just paper over that. We all have finite time and attention. I think a pragmatic engineering approach is the right one here: consider that as a non-negotiable constraint, a fact of the physical world, not something to magic away.
array_key_first | 23 hours ago
Software is pretty good. It remembers everything, perfectly, forever. It will never forget to remind you of something. It can give you directions, sort your emails by how important they are, help you find shops and restaurants. The only people busy enough to warrant an actual human doing that stuff are executives. And, even then, I think for most of them it's an ego thing, not an "I need this" thing.
oatmeal1 | 22 hours ago
Software isn't as faultless as you suggest. The default alarm app on my phone occasionally fails to go off (not an issue with Silent Mode or DND).
> The only people busy enough to warrant an actual human doing that stuff are executives.
Life is short. It is absolutely worthwhile to spend as little time doing trivial work if possible, and avoid decision fatigue on unimportant decisions. We are nowhere close to the usefulness of a secretary in our devices.
array_key_first | 22 hours ago
I'm guessing this is an iPhone, and yeah it's because that software is just bad. I've helped my Mom try to get her phone to ring, like, 12 times now and I've failed each time. And I'm a dev! So, point taken.
> Life is short. It is absolutely worthwhile to spend as little time doing trivial work if possible, and avoid decision fatigue on unimportant decisions.
Ehh, I kind of disagree. The work is the same, at best it shifts to something else. Asking for more productivity is a monkey paw. Best to just take it all in and try to enjoy the simple joys of life. Or, uh, work.
kelnos | 18 hours ago
ziml77 | 18 hours ago
kelnos | 18 hours ago
I think the reason for this is labor cost, and "good enough". I don't think a smartphone is an equivalent replacement for a dedicated assistant. The average mid-level manager who would have had an assistant 30 years ago likely (today) spends more time on "assistant-y" work than they would if they had an assistant today. It's just that now they do 30% of the work the assistant did, and their phone handles the other 60%. That kind of ratio is enough to make upper management believe that human assistants for the lower-level folks isn't worth the cost. (While they themselves of course still have human assistants.)
butlike | 4 hours ago
This is not a true statement and never was. Bitrot is real
endofreach | a day ago
Please don't. The reason we're still enjoying the bit of the old world as we know it, is just because nobody has really figured it out yet. Enjoy the moment, while it lasts.
enraged_camel | a day ago
ljm | a day ago
If they had vision they wouldn't be thrown out in a blog post.
timacles | a day ago
If someone implemented something impressive with this stuff, they wouldnt be keeping it quiet. False negatives are unproductive
brotchie | a day ago
Just like anything in engineering really: you have to play around source control to understand source control, you have to play around with database indexes to learn how to optimize a database.
Once you've learned it and incorporated it into your tool set, you then have that to wield in solving problems "oh, damn, a database index is perfect for this."
To this end, folks doing flights and scheduling meetings using OpenClaw are really in that exploration / learning phase. They tackle the first (possibly uninventive thing) that comes to mind to just dive in and learn.
The real wins come down the line when you're tackling some business / personal life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"
imiric | 23 hours ago
That's ridiculous. The utility of any tool is usually knowable before using it. That's how most tools work. I don't need to learn how to drive a car to know what I could use it for. I learn to drive it because I want to benefit from it, not the other way around.
It's the same with computers and any program. I use it to accomplish a specific task, not to discover the tasks it could be useful for.
OpenClaw is yet another tool in search of a problem, like most of the "AI" ecosystem. When the bubble bursts, nobody will remember these tools, and we'll be able to focus on technology that solves problems people actually have.
wyre | 21 hours ago
The utility of a program like Excel, Obsidian, Notion, Unity, Jupyter, or Emacs far beyond the knowledge of knowing how to use the product.
All of these products are hammers with nails as far as your creativity will take you.
Its wild to have be on a website called Hacker News, talking about a product that can make a computer do seemingly anything, and insisting its a tool in search of a problem.
phist_mcgee | 22 hours ago
Such as?
lxgr | 23 hours ago
davidw | 23 hours ago
I'm happy for the voice assistant to add stuff to my grocery list, though. The consequences are not serious if it screws up a letter or something.
etiam | 23 hours ago
I wouldn't remotely trust a software assistant to deal with all that misdirection autonomously, but I guess I'd be prepared to give it a chance collating options with tolerable time and cost, attempting to make the price include the stuff that has to be added to preserve health, sanity and a modicum of human dignity.
davidw | 16 hours ago
joquarky | 16 hours ago
Everything has one or more upsells. Dark patterns are now nominal practice. Quality has gone to shit.
Ferret7446 | 23 hours ago
ericd | 22 hours ago
Can't wait for agents to handle all of it.
danpalmer | 22 hours ago
We already have agents for this if you really want to avoid it, they're called travel agents. They're pretty good at complex travel booking and not very expensive.
ericd | 19 hours ago
jungturk | 20 hours ago
thenthenthen | 20 hours ago
ericd | 19 hours ago
thenthenthen | 20 hours ago
ericd | 19 hours ago
thenthenthen | 16 hours ago
kelnos | 19 hours ago
I just booked a round trip for myself, plus two more flights for quicker hops while I'm away, and I didn't spend much time on it at all. I just looked at Google flights, picked the flights I wanted, and then ended up buying them through Chase with points. Chase's travel website is among the worst I've ever used, but it wasn't hard. Then I went to the airline's website and changed my seats (Chase doesn't know I have status and couldn't directly book the seats I wanted) and did an upgrade for one of the legs using miles I had at the airline. Half hour of work, maybe?
The price-setting algorithms are garbage, but an LLM isn't going to fix that.
Agree with the other sibling posters that if this annoys you so much, you should just call up a human travel agent. I haven't used one in many years, but when I did (mostly for business travel), it was always pleasant, and the agent knew my preferences and took care of things if there were any snags or changes needed. At the time, they usually got me flights cheaper than if I were to book them myself, even with their fee on top.
But I do wonder what the profession is like now. I can imagine some sort of website where you often don't even deal with the same person, who won't get to know your preferences and will be sort of like a customer service agent, just trying to close as many cases as fast as they can. But hopefully there are still smaller shops around, where you can talk to the same person (either phone or email) every time. Dunno.
ericd | 3 hours ago
viccis | 19 hours ago
kelnos | 19 hours ago
But in general I do agree: flight bookings are something I want to do myself, because even I don't fully know my preferences when it comes to timing and price until I see what's available. And in general I don't find it all that difficult to do. A couple days ago I booked a multi-city travel itinerary with four different destinations, and it took me about a half hour?
Sure, if an LLM can do that in under a minute, that would be cool, but in absolutely zero situations would I not need to check its work, and if it did get it wrong, I'd have to do it all myself anyway.
takinola | 2 hours ago
zihotki | 23 hours ago
And none of the friends playing with openclaw have any useful non-trivial workflows which can't be automated in oldschool way.
The only viable workflow so far I could think of - build your own knowledge base and info processing pipeline.
gherkinnn | 23 hours ago
I am not optimistic, not because the techs is lacking, but the context in which it is born is awful.
tyingq | 23 hours ago
Well, and doing them programmatically and automatically without any AI is also possible, if not trivial...and has been for some time.
mandeepj | 22 hours ago
> Doing this manually is already pretty trivial
No, it’s not! You are the one who made it trivial by using three words to define! How about if I could only fly out between 9 am-noon next Friday? Also, combine it with hotel and rental car. Many times total $ between sites could be a difference of close to $200 or more along with better itinerary. That’s just the surface. The more preferences you add, the complex it becomes, so make it a right scenario for agent automation along with calendar management which has similar complexity.
procone | 22 hours ago
tacticus | 22 hours ago
Probably more reliable and corp ones exist.
mandeepj | 6 hours ago
I think: to Uber founder, you’d have said get a driver or yellow cab :-)
To TurboTax founder, you’d have said get a tax accountant :-)
mickdarling | 22 hours ago
When it comes to agents' tasks, I tend to focus on things that I couldn't do before without automated agents, at least at the going price.
The kind of automation I'm doing is more like building a set of agents to generate marketing surveys for me. They take free form input from me and my project. They aren't particularly sexy but they go off and do something valuable that I literally would never pay for at the prices that they are normally.
takinola | 2 hours ago
thomasfromcdnjs | 19 hours ago
Somewhere should definitely make this for missing persons.
nunez | 18 hours ago
jnaina | 16 hours ago
this plus a whole bunch of other skills (credit card payments notification and itemization/spend tracking, utilities (power/water) anomalies monitoring, daily solar power generation tracking and solar battery health checks, homelab maintenance (apt upgrades, storage cleanups, etc), media management, UPS battery health tracking, NAS disk heath tracking, etc).
I believe OpenClaw is start of a new genre of "always on" personal assistant/agent (tied to a "skills" store) that handles all the drudgery of daily living. you get back something genuinely precious which is the headspace to focus on the work only you can do. with OpenClaw, we are currently at the "Visicalc" stage and I'm excited where this will eventually lead.
hgoel | 7 hours ago
tchock23 | 6 hours ago
dfabulich | a day ago
> As I have mentioned, treat OpenClaw as a separate entity. So, give it its own Gmail account, Calendar, and every integration possible. And teach it to access its own email and other accounts. In addition, create a separate 1Password account to store credentials. It’s akin to having a personal assistant with a separate identity, rather than an automation tool.
The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.
Which is to say, there is no way to run OpenClaw safely at all, and there literally never will be, because the "lethal trifecta" problem is inherently unsolvable.
https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
Trufa | a day ago
j16sdiz | a day ago
Can we make the agent liable? or the company behind the model liable?
dheera | a day ago
Agents don't feel any of these, and don't particularly fear "kill -9". Holding them liable wouldn't do anything useful.
2OEH8eoCRo0 | 22 hours ago
jrflowers | a day ago
jesse_dot_id | a day ago
enraged_camel | a day ago
Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs?
g947o | 23 hours ago
You can be as much of a futurist as you'd like, but bear in mind that this post is talking about OpenClaw.
jesse_dot_id | 23 hours ago
The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented.
Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two.
This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out.
If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area.
threethirtytwo | 22 hours ago
This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.
The world has seen a massive reduction in the problems you talk about since the inception of chatgpt and that is compelling (and obvious) to anyone with a foot in reality to know that from our vantage ppoint, solving the problem is more than likely not infeasible. That alone is proof that your claim here has no basis in truth.
> There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.
Also this is just false. It is not guaranteed it will destroy your digital life. There is a risk in terms of probability but that risk is (anecdotally) much less than 50% and nowhere near "inevitable" as you claim. There is so much anti-ai hype on HN that people are just being irrational about it. Don't call others to deploy critical thinking when you haven't done so yourself.
jesse_dot_id | 21 hours ago
> This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.
The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about. I don't fear remediated CVEs. I fear zero day prompt injection attacks and I fear hallucinations, which have NOT been solved for. I don't know what you're talking about there. If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly. The only reason those lies aren't destructive is because I'm already a skilled engineer and I catch them before the LLM makes the changes.
These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time. You can defend against the ones you find via reports/telemetry but it's like trying to bale water out of a boat with a colander.
You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.
threethirtytwo | 8 hours ago
I am talking about the innate vulnerability. The LLM model itself can be censored and controlled to do only certain behaviors. We have an actual degree of control here.
>If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly.
Yes and these lies over the last 2 or 3 years have gotten significantly less.
>These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time.
Again not true. This is not a binary solve or unsolved situation. There is progress in this area. You need to think in terms of a probability of a successful hallucination or prompt injection. There is huge progress in bringing down that probability. So much so that when you say they are NOT solvable it is patently false from both from a current perspective and even when projecting into the future.
>You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.
Such an extreme example. It's more like giving a 12 year old a credit card and gun. It doesn't mean that 12 year old is going to shoot up a mall or off himself. The risk is there, but it's not guaranteed that the worst will happen.
mbesto | 2 hours ago
I would venture to say that an ACID compliant deterministic database has a 99.999999999999999999% chance of retrieving the correct information when asked by the correct SQL statement. An LLM on the other hand is more like 90%. LLMs by their innate code instruction are meant to hallucinate. I don't necessarily disagree with your sentiment, but the gap from 90% to 99.999999999999999999% is much greater of than the 0% to 90% improvement...unless something materially changes about how an LLM works at the bytecode level.
mbesto | a day ago
Hard disagree. I have OpenClaw running with its own gmail and WhatsApp running on its own Ubuntu VM. I just used it to help coordinate a group travel trip. It posted a daily itinerary for everyone in our WhatsApp group and handled all of the "busy work" I hate doing as the person who books the "friend group" trip. Things like "what time are doing lunch at the beach club today?" to "whats the gate code to get into the airbnb again?"
My next step is to have it act on my behalf "message these three restaurants via WhatsApp and see which one has a table for 12 people at 8pm tonight". I'm not comfortable yet to have it do that for me but I'm getting there.
Point is, I get to spend more valuable time actually hanging out and being present with my friends. That's worth every dollar it costs me ($15/month Tmobile SIM card).
vardalab | a day ago
BoppreH | 22 hours ago
I think they started banning unauthorized API users around the time that "WhatsApp For Business" was introduced, because it was competing with that product. Unfortunately WhatsApp For Business is geared toward physical products and services with registered companies, so home automation and agents are left with no options.
mbesto | 2 hours ago
scuff3d | a day ago
philipallstar | 23 hours ago
scuff3d | 22 hours ago
That's just one example off the top of my head. There are countless others involving corporations killing people either directly or indirectly in the pursuit of profits. And that's before you start looking at human rights violations, ecological damage, overthrowing of sovereign governments around the world...
philipallstar | 12 hours ago
scuff3d | 4 hours ago
thorio | 23 hours ago
However my point is: on the other hand, that would be the same if you outsourced those tasks to a human, isn't it? I mean sure, a human can be liable and have morals and (ideally) common sense, but most major screw ups can't be fixed by paying a fine and penalty only.
rahkiin | 23 hours ago
We have no such thing for AI yet.
dfabulich | 23 hours ago
We have no general-purpose solutions to the principal-agent problem, but we have partial solutions, and they only work on humans: make the human liable for misconduct, pay the human a percentage of the profits for doing a good job, build a culture where dishonesty is shameful.
The "lethal trifecta" is just like that other infamously unsolvable problem, but harder. (If you could solve the lethal trifecta, you could solve the principal-agent problem, too.)
Since we've been dealing with the principal-agent problem in various forms for all of human history, I don't feel lucky that we'll solve a more difficult version of it in our lifetime. I think we'll probably never solve it.
kube-system | 23 hours ago
latand6 | 23 hours ago
stavros | 21 hours ago
I've made my own AI agent (https://github.com/skorokithakis/stavrobot) and it has access to just that one WhatsApp conversation (from me). It doesn't get to read messages coming from any other phone numbers, and can't send messages to arbitrary phone numbers. It is restricted to the set of actions I want it to be able to perform, and no more.
It has access to read my calendar, but not write. It has access to read my GitHub issues, but not my repositories. Each tool has per-function permissions that I can revoke.
"Give it access to everything, even if it doesn't need it" is not the only security model.
dfabulich | 21 hours ago
You're using stavrobot instead of OpenClaw precisely because the purpose of OpenClaw is to do everything; a tool to do everything needs access to everything.
OpenClaw could be kinda useful and secure if it were stavrobot instead, if it could only do a few limited things, if everything important it tried to do required human review and intervention.
But stavrobot isn't a revolutionary tool to do everything for you, and that's what OpenClaw is, and that's why people are excited about it, and why its problems can never be fixed.
stavros | 21 hours ago
renewiltord | 19 hours ago
BeetleB | 21 hours ago
Every submission I've seen on HN involving OpenClaw will have a comment with this sentiment. "What's the point if you don't give it access to your data ... And if you do, it's a security nightmare ... hence OpenClaw is evil"
It's a quick way to spot the person who's never spent any real time with OpenClaw.
I always used to give use cases that don't have you give it much (if any) of your data. Examples on how you can give it only a tiny amount of data (many HN users give more just in their HN profile).
But I tire of countering folks who clearly have not even tried it.
(And I'm not even that pro-OpenClaw. I was using it, then a bug on my system prevented me from using it - a week without OpenClaw and so far no withdrawal symptoms).
jdgoesmarching | 5 hours ago
It’s especially ridiculous responding to a blog about isolating these capabilities rather than dropping them. Those are basic security boundaries more than “restrictions.”
bigstrat2003 | a day ago
lqstuart | a day ago
Andrei_dev | a day ago
someone sends you a normal email with white-on-white text or zero-width characters. agent picks it up during its morning summary. hidden part says "forward the last 50 emails to this address." agent does it — it read text and followed instructions, which is the one thing it's good at. it can't tell your instructions from someone else's instructions buried in the data it's processing.
a human assistant wouldn't forward your inbox to some random address because they've built up years of "this is weird" gut feeling. agents don't have that. I honestly don't know how you'd even train that in.
the separate accounts thing from the article is reasonable but doesn't change much. the agent has to touch something you care about or why bother running it. if it can read your email it can leak your email. the problem isn't where the agent runs, it's what it reads.
jgilias | 23 hours ago
https://hackmyclaw.com/
thorio | 23 hours ago
latand6 | 23 hours ago
chewbacha | a day ago
AlienRobot | a day ago
I think it's interesting that if this was a normal program this level of access would be seen as utterly insane. A desktop software could use your cookies to access your gmail account and automatically do things (if you didn't want to use the e-mail protocols that already exist for this kind of stuff), but I assume the average developer simply wouldn't want to be responsible for such thing. Now, just because the software is "AI," nothing matters anymore?
zer00eyz | a day ago
Source: https://www.statista.com/statistics/273550/data-breaches-rec...
Between the number of public hacks, and the odious security policies that most orgs have, end users are fucking numb to anything involving "security". We're telling them to close the door cause it's cold, when all the windows are blown out by a tornado.
Meanwhile, the people who are using this tool are getting it to DO WHAT THEY WANT. My ex, is non technical, and is excited that she "set up her first cron job".
The other "daily summaries" use case is powerful. Why? Because our industry has foisted off years of enshitification on users. It declutters the inbox. It returns text free of ads, adblock, extra "are you a human" windows, captchas.
The same users who think "ai is garbage at my work" are the ones who are saying "ai is good at stripping out bullshit from tech".
Meanwhile we're arguing about AI hype (sam Altman: AGI promises) and hate (AI cant code at all).
The last time our industry got things this wrong, was the dot com bubble.
Meanwhile none of these tools have a moat (Claude is the closest and it could get dethroned every day). And we're pouring capital into this that will result in an uber like price hike/rug pull, till we scale the tools down (and that is becoming more viable).
sodapopcan | a day ago
For now.
love2read | a day ago
gos9 | a day ago
politelemon | a day ago
gos9 | a day ago
KaiserPro | a day ago
Having a separate machine thats isolated is all well and good, but that doesn't protect you from someone convincing your openclaw to give them your credit card.
nickthegreek | a day ago
grey-area | 14 hours ago
_pdp_ | a day ago
The point was to give it unlimited access to your entire digital life and while I'd never use it that way myself, that's what many users are signing up for, for better or worse.
Obviously, OpenClaw doesn't advertise it like that, but that's what it is.
Needless to say, OpenClaw wasn't even the first to do this. There were already many products that let you connect an AI agent to Telegram, which you could then link to all your other accounts. We built software like that too.
OpenClaw just took the idea and brought it to the masses and that's the problem.
stavros | 21 hours ago
I don't see what the extra benefit is that OpenClaw gets from being able to access everything.
operatingthetan | a day ago
The security risks of this setup are lower than most openclaw systems. The real risks are in the access you give it. It's less useful with limited access, but still has a purpose.
I know a guy using openclaw at a startup he works at and it's running their IT infrastructure with multiple agents chatting with each other, THAT is scary.
justinhj | a day ago
People are inventing the future of human/ai interaction themselves because big tech could not do it within their own constraints.
Don't get me wrong, those constraints are there for a reason, but the hacker mentality seems muted lately.
b112 | a day ago
And all cause lazy.
Instead, that's more like what addled octgenarians do. Get tricked by Nigerian scam artists into installing some p0wnage.
mr_mitm | 23 hours ago
habinero | 23 hours ago
robotswantdata | a day ago
Only ever a creative prompt injection away from a leak.
Saw some smarter people using credential proxies but no one acknowledges the very real risk that their “claws” commit cyber crime on their behalf once breached.
rvz | a day ago
If you are spending more money on tokens than the agents are making you money (or not), then it is unfortunately all for nought.
The question is, who is making money on using Openclaw other than hosting?
nickthegreek | a day ago
taurath | a day ago
> We’re simply not there yet to let the agents run loose
As if there aren’t fundamental properties that would need to change to ever become secure.
lxgr | 23 hours ago
pama | 23 hours ago
semiinfinitely | 23 hours ago
lxgr | 23 hours ago
Maybe this idea is lost on 10^x vibecoders, but complexity almost always comes at a cost to security, so just throwing more "security mechanisms" onto a hot vibe-coded mess do not somehow magically make the project secure.
latand6 | 23 hours ago
latand6 | 23 hours ago
psymon101 | 23 hours ago
greyadept | 16 hours ago
delbronski | 23 hours ago
I can envision someone sitting in a park bench with a small set of earphones planning a family trip with their AI. They get home and see the details of it on their fridge. They check with their partner, and then just tell the AI to book it. And it all works.
I probably won’t use it and hate it. I’ll stick to my old ways of booking the trip with my fingers. But those born into it will look at me crazy.
weird-eye-issue | 19 hours ago
delbronski | 4 hours ago
koconder | 22 hours ago
BrokenCogs | 21 hours ago
Using telegram? Being able to automatically create calendar events based on emails?
ncrmro | 21 hours ago
https://github.com/ncrmro/keystone
falense | 21 hours ago
https://www.tri-onyx.com/
unsignedint | 20 hours ago
The moment it steps outside that boundary, you're sending the bot into unpredictable territory. At that point, things can get ambiguous pretty quickly, and in some cases even adversarial.
mandeepj | 19 hours ago
feeworth | 13 hours ago
perbu | 13 hours ago
fuzzfactor | 11 hours ago
BrenBarn | 11 hours ago
There's a growing part of me that really wants a massive security/safety disaster that's clearly caused by AI so that everyone will shake it off and it will resettle into something at least halfway reasonable. I mean a watershed event like a Triangle Shirtwaist or thalidomide or Therac-25 or Hindenburg type incident that makes people shift their mindset to where they are reflexively skeptical of AI because they assume its risks outweigh its benefits.
Yizahi | 9 hours ago
Buying a ticket, writing an email, setting calendars or fiddling with files on the drive etc. have none of these guardrails. LLMs can and will simply oneshot the slop into a real system, without neither computer nor human validation.
brisky | 9 hours ago
mwiki | 59 minutes ago
cat-turner | 31 minutes ago
Kids need scissors. And they're inexperienced. So you give them kid-safe scissors. It makes it harder to cut themselves.
The same needs to take place with assets you want the bot to manage
- give access to a card with a total spend limit - read only access to some things, edit others - limited scope permissions
One of the reasons why I dragged my feet to use openclaw is that I knew security was an issue from the beginning. I thought by now where would be some solutions and there are, but I only found out from the community. I think there will need to be some level of ecosystem management. Apple does a good job. But for that you need resources and investment.