> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
Why is Apple's hardware being in demand for a use that undermines its non-Chinese competition a sign of missing the ball versus validation for waiting and seeing?
Apple had problems with just the Chatbot side of LLMs because they couldn't fully control the messaging. Add in a small helping of losing your customers entire net worth and yeah. These other posters have no idea what they are talking about.
Exactly, Apple is entirely too conservative to shine with LLMs due to their uncontrollability, Apple likes their control and their version of "protecting people" (which I don't fully agree with) which includes "We are way too scared to expose our clients to something we can't control and stop from doing/saying anything bad!", which may end up being prudent. They won't come close to doing something like OpenClaw for at least a few more years when the tech is (hopefully) safer and/or the Overton Window has shifted.
And yet they'll push out AI-driven "message summaries" that are horrifically bad and inaccurate, often summarizing the intent of a message as the complete opposite of the full message up to and including "wants to end relationship; will see you later"?
Was about to point out the same thing. Apple's desperate rush to market, summarising news headlines badly and sometimes just plain hallucinating stuff causing many public figured to react when they end up the target of such mishaps.
Clawdbot/Moltbot/OpenClaw is so far from figuring out the “trust” element for agents that it’s baffling the OP even chose to bring it up in his argument
It is absurd enough of a project that everybody basically expects it to be secure, right? It is some wild niche thing for people who like to play with new types of programs.
This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.
I think a lot of people have been spoiled (beneficially) by using large, professionally-run SaaS services where your only serious security concerns were keeping your credentials secret, and mitigating the downstream effects of data breaches. I could see having a fundamentally different understanding of security having only experienced that.
What people are talking about doing with OpenClaw I find absolutely insane.
> What people are talking about doing with OpenClaw I find absolutely insane.
Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.
In my feeds, I’ve seen activity among several an-LLM-is-my-tech-lead-level newly tech-ish people, who are just plugging their lives into it and seeing what happens.
If this really was primarily tech savvy people prodding at the ecosystem, the top skill available, as of a few days ago, probably wouldn’t be a malware installer:
You don’t look at it, you just talk to it and it can talk back to you. It’s more just having a conversation with a personal assistant while driving. Which is a pretty common thing to do.
I am absolutely saying that your claim that conversing is nearly as dangerous as looking at your phone, is total nonsense. And your link doesn't do anything to support your claim.
> This is exactly what Apple Intelligence should have been... They could have shipped an agentic AI that actually automated your computer instead of summarizing your notifications. Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar by actually using your apps, not through some brittle API layer that breaks every update.
And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.
Right now we are still in very, very, very early days.
Apple literally lives on the "Cutting Edge" a-la XKCD [1]. My wife is an iPerson and she always tells me about these new features (my phone has had them since $today-5 years). But for her, these are brand new exciting things!
How many chat products has Google come out with? Google messenger, buzz, wave, meet, Google+, hangouts… Apple has iMessage and FaceTime. You just restated OP’s point. Apple evolves things slowly and comes to market when the problems have already been solved in a myriad of ways, so they can be solved once and consistently. It’s not about coming to market soonest. How did you get that from what OP said?
First Mover effect seems only relevant when goverment warrants are involved. Think radio licenses, medical patents, etc. Everywhere else, being a first mover doesnt seem to correlate like it should to success.
Pointless argument given that android isn't just "android". Never has been.
It's a huge, diverse ecosystem of players and that's probably why Android has always gotten the coolest stuff first. But it's also its achilles' heel in some ways.
There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.
One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.
Some examples:
- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps
- Android has no platform-wide equivalent to AirTags
- Hardware-backed key storage (Secure Enclave about 5 years ahead of StrongBox)
> ...Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
While this was true about ten years ago, it's been a while since we've seen this model of software development from Apple succeed in recent years. I'm not at all confident that the Apple that gave us Mac OS 26 is capable of doing this anymore.
Their software efforts have little ambition. Tweaks and improvements are always a good idea, but without some ambitious effort, nothing special is learned or achieved.
A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".
The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.
Great hardware. Interesting, but locked down software.
If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed as an upgrade on the Mac interface and productivity. Apple's new highest end interface/device for the future. Not another mid/low-capability iPad type device. So close. So far.
$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)
Not just lack of ambition, lack of vision or taste. Liquid Glass is a step back in almost every way, that it got out the door is an indictment of the entire leadership chain.
That’s never a good, long term business model and people are willing to pay more for Apple hardware because it tends to last longer than others. We’ve heard this cynical take for years, but I don’t think that it is really convincing.
> It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.
I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.
On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.
Maybe the second best would be some kind of Wayland compositor.
The last truly magical apple device launch was the Airpod. They've done a great job on their chipsets, but the actual hardware products they make are stagnant, at best. The designs of the new laptops have been a step back in quality and design in my opinion.
I mean they literally just looked at Tile. And they have the benefit of running the platform. Demonstrates time and time again that they engage in anticompetitive behaviour.
No, they didn't just look at Tile. The used a completely new UWB radio technology with a completely new anonymization cryptographic paradigm allowing them to include every single device in network, transparently.
AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.
Best privacy in computers, ADP, and M-series chips mean nothing to you? To me, Apple is the last bastion of sanity in a world where user hostility is the norm.
> Then Apple can integrate it with hardware and software in a way no other company can.
That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock-out competitors a-la the App Store "because everyone does it"?
Can you understand how this commoditizes applications? The developers would absolutely have a fit. There is a reason this hasn’t been done already. It’s not lack of understanding or capability, it’s financial reality. Shortcuts is the compromise struck in its place.
Personal intelligence, the (awkward) feature where you can take a screenshot and get Siri to explain stuff, and the new spotlight features where you can type out stuff you want to do in apps probably hints at that…
This is generally true only of them going to market with new (to them) physical form factors. They aren’t generally regarded as the best in terms of software innovation (though I think most agree they make very beautiful software)
Apple's niche product, consisting of like 1-4% of computer sales compared to its dominant MacBook line, is now flying off the shelf as a highly desired product, because of a piece of software that Apple didn't spend a dime developing. This sounds like a major win for Apple.
The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.
They definitely do. A common configuration is running a supervisor model in the cloud and a much smaller model locally to churn on long running tasks. This frees Openclaw up to lavishly iterate on tool building without running through too many tokens.
There are few open source projects coming along that let you sell your compute power in a decentralized way. I don't know how genuine some of these are [0] but it could be the reason: people are just trying to make money.
There have been countless projects to sell distributed compute power. I don't know of any that have gotten much traction. Everyone keeps trying to create new ones instead of developing for the existing ones.
The one you linked to looks clearly like a pump-and-dump scam, though.
It appears he is selling a service where he comes to you (optionally with a Mac Mini which is probably why he's buying multiple) and sets up OpenClaw for you.
Really doubt it has a significant impact on mac mini sales…
And being fair ClawBot is a complete meme/fad at this point rather than an actual product. Using it for anything serious is pretty much the equivalent of throwing your credit cards, ids and sticky notes with passwords and waiting to see what happens…
I do see the appeal and potential case of the general concept of course. The product itself (and the author has admitted it themselves) is literally is a garbage pile..
> And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
In that list of Apple products that you own, do none of them match the ops comment? You’re saying none of those products are or have been in their time in the market a perfected version of other things?
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
Perhaps I’m misremembering, but I feel sure that Siri was much better a decade ago than it is today. Basic voice commands that used to work are no longer recognised, or required you to unlock the phone in situations where hands free operation is the whole point of using a voice command.
There were certain commands that worked just fine. But they, in Apple's way, required you to "discover" what worked and what didn't with no hints, and then there were illogical gaps like "this grouping should have three obvious options, but you can only do one via Siri".
And then some of its misinterpretations were hilariously bad.
Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such, might have separate permissions but then you have the comical scenario of:
Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."
I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.
These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.
I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.
This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.
Ouch. You could have taken a statistical approach "google is not known for high quality product development and likely therefore does not select candidates for qualities in product-development domain" - I'm talking too much to Gemini, aren't I?
iOS 26 is proof that many product managers at Apple need to find another calling. The usability enshittification in that release is severe and embarrassing.
It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever its got out there to see what sticks; quickly cancelling anything that doesn't work out, even if some users come to love the offering.
Oh no, what if they put on Christmas music playlist in February? the horror!
There should exist something between "don't allow anything without unlocking phone first" and "leave the phone unlocked for anyone to access", like "allow certain voice commands to be available to anyone even with phone locked"
Can it not recognize my voice? I had to record the pronunciation of 100 words when I setup my new iPhone - isn’t there a voice signature pattern that could be the key to unlock?
It certainly should have been a feature up until now. However, I think at this point anyone can clone your voice and bypass it.
But as a user I want to be able to give it permission to run selected commands even with the phone locked. Like I don't care if someone searches google for something or puts a song via spotify. If I don't hide notifications when locked, what does it matter that someone who has my phone reads them or listens to them?
Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock that’s actually a setting YouTube sets in their SiriKit configuration.
For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.
Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.
I want a voice control experience that is functional. I don't want every bad thing that could happen-- especially those that will only happen if I'm careless to begin with-- circumscribing an ever shrinking range, often justified by contrived examples and/or for things much more easily accomplished through other methods.
But the point is, you are a power user, who has some understanding of the risk. You know that if your phone is stolen and it has any cards stored on them, they can be easily transferred to another phone and drained. Because your bank will send a confirmation code, and its still authorized, you will be held liable for that fraud.
THe "man in the street" does not know that, and needs some level of decent safe defaults to avoid such fraud.
re: youtube music, I just tried it on my phone and it worked fine... maaaybe b/c you're not a youtube premium subscriber and google wants to shove ads into your sweet sweet eyeballs?
The one that kindof caught me off guard was asking "hey siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." => if you left your phone unattended at a bar and someone could figure out your home address w/o unlock.
...I'm kindof with you, maybe similar to AirTags and "Trusted Locations" there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, in your car (connected to a known CarPlay), kindof an in-between "Geo-Unlock"?
I pay for YouTube Music and I see really inconsistent behavior when asking Siri to play music. My five-year-old kid is really into an AI slop song that claims to be from the KPop Daemon Hunters 2 soundtrack, called Bloodline (can we talk about how YT Music in full of trashy rip-off songs?). He's been asking to listen to it every day this week in the car and prior to this morning, saying "listen to kpop daemon hunters bloodline" would work fine, playing it via YT Music. This morning, I tried every iteration of that request I could think of and I was never able to get it to play. Sometimes I'd get the response that I had to open YT Music to continue, and other times it would say it was playing, but it would never actually queue it up. This is a pretty regular issue I see. I'm not sure if the problem is with Siri or YT Music.
You'll need to unlock your iPhone first. Even though you're staring at the screen and just asked me to do something, and you saw the unlocked icon at the top of your screen before/while triggering me, please continue staring at this message for at least 5 seconds before I actually attempt FaceID to unlock your phone to do what you asked.
I think you're being very generous. There's almost 0 chance they had this actually working consistently enough for general use in 2024. Security is also a reason, but there's no security to worry about if it doesn't really work yet anyway
I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?
Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.
Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.
Its hard to come up with useful AI apps that aren't massive security or privacy risks. This is pretty obvious. For an agent to be really useful it needs to have access to [important stuff] but giving an AI access to [important stuff] is very risky. So you can get some janky thing like OpenClaw thats thrown together by one guy and has no boundaries and everyone on HN thinks is great, but its going to be very difficult for a big firm to make a product like that for mass consumption without it risking a massive disaster. You can see that Apple and Microsoft and Salesforce and everyone are all wrestling with this. Current LLMs are too easily hoodwinked.
Even in the US, for most people tax filing it not really a complex process. It only gets complicated if you are trying to itemize deductions and have a complex income story. Most people can do it with a couple of documents and a single form.
It doesn't take a lot of 'complexity' in income to balloon up complexity. Any brokerage activity will generate quite a few additional forms for 1099-B, 1099-DIV, etc. Still not super complicated, but I keep seeing people discuss this as if you only have W2s and nothing else... which isn't usually true, especially for someone who is likely to be using OpenClaw.
It would be an absolute disaster at Apple scale. Millions of people would start using it, filing incorrect taxes or deleting their important files and Apple would be sued endlessly.
Tiny open source projects can just say "use at your own risk" and offload all responsibility.
> Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar
> And this is probably coming, a few years from now.
Given how often I say "Hey Siri, fast forward", expecting her to skip the audio forward by 30 seconds, and she replies "Calling Troy S" a roofing contractor who quoted some work for me last year, and then just starts calling him without confirmation, which is massively embarassing...
Also in the good old days if you sealed the wrong number you had some time to just hang up without harm done. Today the connection is made the moment you pressed the button or in this case when Siri decided to call.
Happened to me too while being in the car. With every message written by Siri it feels like you need to confirm 2 or 3 times (I think it is only once but again) but it calls happily people from your phone book.
>> Imagine if Siri could genuinely file your taxes
Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!
>> respond to emails
If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)
Yeah the whole filing taxes thing is an epic XY-problem. Governments can make this as easy as a digital signature, there’s zero need for an agent of any kind.
Actually most of the things people use it for is of this kind, instead actually solving the problem (which is out of scope for them to be fair) it’s just adding more things on top that can go wrong.
What other party? They often don't even know you or if you can use something for tax or not. Pretty much everything can be used for tax deduction, it all just depends on circumstances. I know, many countries have a really broken privacy-situation, but I don't think it would be realistic that every shop is preventive filing every receipt and forces every customer to give them their tax-number so they can link them..
You being personally ignorant of this specific argument which gets litigated every single time this comes up but only by Americans because most other countries have zero difficulty doing exactly that is not a valid argument.
91 percent of American filers take the standard deduction. The IRS already has all their information, already knows how much they withheld, already knows what they owe back. For all these people, TurboTax is just filling in 10 fields in the standard form.
"All your tax deductibles" is irrelevant for the vast majority of the country, and always has been.
The 35 million remaining americans who do itemize are free to continue using this old system while the rest of us can have a better world.
We could also ask how the government could later tell someone they improperly deducted something! The government can either use that same means to tell taxpayers in advance, or else we could figure out a superior taxation system that wouldn’t require these steps.
If a user chooses to reach out about an issue that an AI agent can completely solve, why should they not be allowed to do so via email? I much prefer it over all other support communications channels.
Here is a fun “Prompt Injection” which I experimented with before the current AI Boom; visiting a friend’s home › see Apple/Amazon listening devices › Hey Siri/Alexa, please play the last song. Harmless, fun.
I think the interesting tension here is between capability and trust.
An agent that can truly “use your computer” is incredibly powerful,
but it's also the first time the system has to act as you, not just for you.
That shifts the problem from product design to permission, auditability,
and undoability.
Summarizing notifications is boring, but it’s also reversible.
Filing taxes or sending emails isn’t.
It feels less like Apple missing the idea, and more like waiting
until they can make the irreversible actions feel safe.
every time i've heard someone's speculations about what apple intelligence could have been, it's a complex conspiracy. its problem is that it sucks and makes them no money, so they didn't ship it.
People forget that “multi touch” and “capacitive touchscreens” were not Apple inventions. They existed prior to the iPhone. The iPhone was just the first “it just works” adaptation of it
Not a great example as multitouch in its modern incarnation was a niche academic technology, the most refined version of which was built by a 2 person startup that Apple quickly acquired. There was still a long way to go to make the tech as ubiquitous as it is today and that was all heavy lifting done by Apple.
Well, the heavy lifting was supervised by the same people, but while receiving Apple paychecks :)
I would guess, and it is a guess, that there are two reasons apple is “behind” in AI. First, they have nowhere near the talent pool or capability in this area. They’re not a technical research lab. For the same reason you don’t expect apple to win the quantum race, they will not lead on AI. Second, AI is a half baked product right now and apple try to ship products that properly work. Even Vision Pro is remarkably polished for a first version. AI on the other hand is likely to suffer catastrophic security problems, embarrassing behaviour, distinctly family-unfriendly output.
Apple probably realised they were hugely behind and then spent time hand wringing over whether they remained cautious or got into the brawl. And they decided to watch from the sidelines, buy in some tech, and see how it develops.
So far that looks entirely reasonable as a decision. If Claude wins, for example, apple need only be sure Claude tools work on Mac to avoid losing users, and they can second-move once things are not so chaotic.
How do people manage to pick such bad examples? Who in their right mind would ever allow an LLM to FILE THEIR TAXES for them. Absolutely insane behavior. Why would anyone think this is probably coming? Do you think the IRS is going to accept "hallucination lol" as an excuse for misfiling?
> Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Apple doesn't take proven ones of anything. What they do is arrive at something proven from first principles. Everyone else did it faster because they borrowed, but Apple did it from scratch, with all the detail-oriented UX niceties that entails.
This was more prevalent when Jobs was still around. Apple still has some of that philosophy at its core, but it's been eroding over time (for example with "AI" and now Liquid Ass). They still do their own QA, though, and so on. They're not copying the market, they have their own.
> I suspect ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
Ten years from now, there will be no ‘agent layer’. This is like predicting Microsoft failed to capitalize on bulletin boards social media.
Not who you asked, but I don't like the effect they have on people. People develop dependence on them at the cost of their own skills. I have two problems with that. A lot of their outputs are factually incorrect, but confidently stated. They project an air of trustworthiness seemingly more effectively than a used care salesman. My other problem is farther-looking. Once everyone is sufficiently hooked, and the enshittification begins, whoever is pulling the strings on these models will be able to silently direct public sentiment from under cover. People are increasingly outsourcing their own decisions to these machines.
exactly. People are blindly dumping everything into LLMs. A few years into the future, will we have Sr or Staff enggs who can fix things themselves? What happens when claude has an outage and there is a prod issue?!
Do they not? Many phone functions are already available through voice assistants, and have been for a very long time, and yet the vast majority of people still prefer to use them with the UI. Clicking on the weather icon is much easier than asking a chatbot "what's the weather like?"
My elderly mother has an essential tremor (though only in one hand now due to successful ultrasound treatment!) and she would still rather suffer through all her errors with a touch interface than use voice commands.
Some people seem to think that Deckard’s speech-controlled CSI software in Blade Runner is actually something to strive for, UX-wise. As if it makes any sense to use strictly nonvisual, non-two-dimensional affordances to work with visual data.
The sad part is that while everyone is chasing new interface modalities, the traditional 2D UI is slowly getting worse thanks to questionable design trends and a lack of interest.
If you're arguing that in 10 years we won't have fully automated systems where we interact more with the automation than the functionality, I've got news for you...
Ten years from now, the agent layer will be the interface the majority of people use a computer through. Operating systems will become more agentic and absorb the application layer while platforms like Claude Cowork will try to become the omniapp. They’ll meet in the middle and it will be like Microsoft trying to fight Netscape’s view of the web as the omniapp all over again.
Apple will either capitalise on this by making their operating systems more agentic, or they will be reduced to nothing more than a hardware and media vendor.
I think you are right. In fact, if were a regular office worker today, a Claude subscription could possibly be the only piece of software you might need to open for days in a row to be productive. You can check messages, send messages, modify documents, create documents, do research, and so on. You could even have it check on news and forums for you (if they could be crawled that is).
I wouldn't call that productive, not even close if you are just sending AI replies, offloading all your tasks and doing nothing. This is what execs think we do, while every job has a lot of complexities that are hard to see from surface level. Belief that all work can be automatable is just a dream that execs have.
I hope so. We're right on the cusp of having computers that actually are everything we ever wanted them to be, ever since scifi started describing devices that could do things for us. There's just a few pesky details left to iron out (who pays for it, insane power demand, opaque models, non-existent security, etc etc).
Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.
Ah yes. “Non-existent security” is only a pesky detail that will surely be ironed out.
It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.
There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved, or at least less of a complete mess.
Yes, there are some flaws. The first airplanes also had some flaws, and crashed more often than they didn't. That doesn't change how incredible it is, while it's improving.
Maybe, just maybe, this thing that was, until recently, just research papers, is not actually a finished product right now? Incredibly hot take, I know.
I think the airplane analogy is apt because commercial air travel basically capped out at "good enough" in terms of performance (just below Mach 1) a long time ago and focused on cost. Everyone assumes AI is going to keep getting better, but what if we're nearing the performance ceiling of LLMs and the rest is just cost optimization?
Possibly so in urban areas. Internet is already available everywhere. Sell dumb devices that can remotely log in to virtual devices. An LLM can connect to this virtual device and execute whatever action the user wants. Centralising compute resources this way means it's likely cheaper to offer huge compute to tons of users and so rather than buying a smartphone, you buy a monthly subscription to AI which can do everything your device does but you just need to speak or text to it. Sub includes cost of dumb device maintenance, securing the data you sent to the virtual device, etc.
Personal Computing as a service. Let the computer think for you.
Maybe not the last, but it feels like we're getting closer than I thought we would.
I love keyboards, I love typing. I'm rocking an Ergodox daily with a wooden shell that I built myself over ten years ago, with layers of macros that make it nearly incomprehensible for another person to use. I've got keyboard storage. I used to have a daily habit of going to multiple typing competition websites, planting a flag at #1 in the daily leaderboard and moving on to the next one.
Over the last year the utility of voice interfaces has just exploded though and I'm finding that I'm touching the keyboard less and less. Outside of projects where I'm really opinionated on the details or the architecture it increasingly feels like a handicap to bother manually typing code for a lot of tasks. I'm honestly more worried about that physical skill atrophying than dulling on any ability to do the actual engineering work, but it makes me a bit sad. Like having a fleet of untiring tractors replacing the work of my horse, but I like horses.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.
I don't think it's hardware differentiation as much as vendor lock in because it lets people send iMessages with their agent. Not sure about the running local models on it though.
Because people are forced to buy them. Same as how datacenters are full of mac minis to build iOS apps that could easily be built on any hardware if Apple weren't such corporate bastards.
This article is talking about the AI race as if it’s over when it’s only started. And really, an opinion of the entire market based on a few reddit posts?
Author spoke of compounding moats, yet Apple’s market share, highly performant custom silicon, and capital reserves just flew over his head. HN can have better articles to discuss AI with than this myopic hot take.
> And they would have won the AI race not by building the best model, but by being the only company that could ship an AI you’d actually trust with root access to your computer.
and the very next line (because i want to emphasize it
> That trust—built over decades—was their moat.
This just ignores the history of os development at apple. The entire trajectory is moving towards permissions and sandboxing even if it annoys users to no end. To give access to an llm (any llm, not just a trusted one acc to author) the root access when its susceptible to hallucinations, jailbreak etc. goes against everything Apple has worked for.
And even then the reasoning is circular. "So you build all your trust, now go ahead and destroy it on this thing which works, feels good to me, but could occasionally fuck up in a massive way".
Not defending Apple, but this article is so far detached from reality that its hard to overstate.
I had a dark thought today, that AI agents are going to make scam factory jobs obsolete. I don’t think this will decrease the number of forced labor kidnappings though, since there are many things AI agents will not be good at.
I feel like I’m watching group psychosis where people are just following each other off a cliff. I think the promise of AI and the potential money involved override all self preservation instincts in some people.
It would be fine if I could just ignore it, but they are infecting the entire industry.
Crypto hasn't really passed. It's just not talked about on HN anymore. It is still a massive industry but they have dropped the rhetoric of democratising banking and instead let you use cryptocurrency to do things like betting on US invading Venezuela and so on.
By "passing" the GP presumably meant that the fad phase has passed. The hype cycle has reached the natural plateau of "I guess this has some use cases" (though in this case mostly less-than-scrupulous ones).
You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.
AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!
The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.
Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.
But here we are :-)
[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.
I don’t think it’s a group psychosis. I think it’s just the natural evolution of junior engineers. They’ve always lacked critical thinking and just jumped on whatever’s hyped on Twitter.
I’d really love to see some data on the age and/or experience distribution of these breathless "AI everywhere" folks. Are they mostly just young and easily influenced? Not analytic enough? Not critical-thinking enough? Not cynical enough?
It’s a group psychosis fueled by enormous financial pressure: every big tech company has been telling people that they’re getting fired as soon as possible unless they’re one of the few people who can operate these tools. Of course that’s going to have a bunch of people saying “Pick me! Pick me!” — especially since SV has become increasingly untethered from questions like whether something is profitably benefiting customers. With the focus on juicing share prices before moving to the distilled fiat pricing of cryptocurrency, we have at least two generations of tech workers being told that the path to phenomenal wealth comes from talking up your project until you find a rich buyer.
The reason why Apple intelligence is shit is not because Apple's AI is particularly bad (Hello CoPilot) its because AI gives a really bad user experience.
When we go and talk to openAI/claude we know its going to fuck up, and we either make our peace with that, or just not care.
But, when I open my phone to take a picture, I don't want a 1/12 chance of it just refusing to do that and phoning my wife instead.
Forcing AI into thing where we are used to a specific predictable action is bad for UX.
Sure you can argue "oh but the summaries were bad" Yes, of course they are. its a tiny model that runs on your phone with fuck all context.
Its pretty impressive that they were as good as they were. Its even more impressive that they let them out the door knowing that it would fuckup like that.
OpenClaw is not broken, it is just not designed to be secure in the first place.
It's more like a tech demo to show what's possible. But also to show where the limits are. Look at it as modern art, like an episode of Black Mirror. It's a window to the future. But it also highlights all the security issues associated with AI.
And that's why you probably shouldn't use OpenClaw on your data or your PC.
I genuinely don't understand this take. What makes OP think that the company that failed so utterly to even deliver mediocre AI -- siri is stuck in 2015! -- would be up to the task of delivering something as bonkers as Clawdbot?
If you can’t see why something like OpenClaw is not ready for production I don’t know what to tell you. People’s perceptions are so distorted by FOMO they are completely ignoring the security implications and dangers of giving an LLM keys to your life.
I’m sure apple et al will eventually have stuff like OpenClaw but expecting a major company to put something so unpolished, and with such major unknowns, out is just asinine.
“People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things.”
Apparently APIs are now a brittle way for software to use other software and interpreting and manipulating human GUIs with emulated mouse clicks and keypresses is a much better and perfectly reasonable way to do it. We’re truly living in a bizarro timeline.
In terms of useful AI agents, Siri/Apple Intelligence has been behind for so long that no one expects it to be any good.
I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.
My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.
> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to
And
> Here’s what people miss about moats: they compound
Swapping an OpenAI for an Anthropic or open weight model is the opposite of compounding. It is a race to the bottom.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
From what I hear OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.
OpenClaw is a very fun project, but it would be considered a dumpster fire if any mainstream company tried to sell it. Every grassroots project gets evaluated on a completely different scale than commercial products. Trying to compare an experimental community project to a hypothetical commercial offering doesn't work.
> They could have charged $500 more per device and people would have paid it.
I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.
> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
I don't pretend to know the future (nor do I believe anyone else who claims to be able to), but I think the opposite has a good chance of happening too, and hype would die down over "AI" and the bubble bursts, and the current overvaluation (imo at least. I still think it is useful as a tool, but overhyped by many who don't understand it.) will be corrected by the market; and people will look back and see it as the moment that Apple dodged a bullet. (Or more realistically, won't think about it at all).
I know you can't directly compare different situations, but I wonder if comparisons can be made with dot-com bubble. There was such hype some 20-30 years ago, with claims of just being a year or two away from, "being able to watch TV over the internet" or "do your shopping on the web" or "have real-time video calls online", which did eventually come true, but only much, much, later, after a crash from inflated expectations and a slower steady growth.*
* Not that I think some claims about "AI" will ever come true though, especially the more outlandish ones such as full-length movies made by a prompt of the same quality made by a Hollywood director.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, even _worse_ prices for computer hardware than it is now, politics, a major international incident, environmental impact being made more apparent, companies starting to more aggressively monetize their "AI", consumers realising the limits of "AI", I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed have already happened, and nothing happened.
This is my guess for the demand side: most people will drift away as the novelty wears off and they don't find it useful in their daily lives. It's more a "fading point" than a "breaking point."
From the investment/speculation side: something will go dramatically against the narrative. OpenAI's attempted "liquidity event" of an IPO looks like WeWork as investors get a look at the numbers, Oracle implodes in a mountain of debt, NVidia cuts back on vendor financing and some major public players (e.g. Coreweave) die in a fire. This one will be a "breaking point."
people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-4, or whatever model you want to actually control your computer—has become the killer app for Mac hardware
That makes little sense. Buying mac mini would imply for the fused v-ram with the gpu capabilities, but then they're saying Claude/GPT-4 which don't have any gpu requirements.
Is the author implying mac minis for the low power consumption?
The software can drive the web browser if you install the plugin. My knowledge is 1.5 weeks old, so it might be able to drive the whole UI now, I don't know.
If you’re heavily invested in Apple apps (iMessage/Calendar/Reminders/Notes), you need a Mac to give the agent tools to interact with these apps. I think that combined with the form factor, price, and power consumption, makes it an ideal candidate.
If you’re heavily invested in Windows, then you’d probably go for a small x86 PC.
Some of those connectors are only available on the mac and some only on the iPhone. Like notes is available on the mac, but not on the phone. Vice versa for reminders.
I guess what’s wrong with it? Let’s say it has read only access, new messages and calendar invites need approval. I’m not sure I understand the harm? I suppose data exfiltration, but like you could start with an allowlist approach. So the first few uses and reads take a while with allowing the ai to read stuff , but it doesn’t seem that crazy given it’s what we basically do with ai coding tools?
I used Claude Code (CC) to make my own MCPs for these apps. I gave it read/write access only, no ability to delete. Of course it could probably code it's way into doing that since it can access the MCP code. I don't run it in --yolo mode though.
I interact only with CC on the machine and watch what its doing, I haven't tried OpenClaw yet.
Here's some workflows I've personally found valuable:
- I have it read the "Grocery" Reminders list and find things I commonly buy every week and pre-populate the grocery list as a starting point. It only adds items that I haven't already added via Siri as the week goes on. For example, I might notice I've run out of cheese and I'll say "Hey Siri, add cheese to grocery list". The list is shared via iCloud Reminders app between my spouse and I.
- Pre-CC, I wrote an OR-Tools python tool for "solving" the parenting time calendar. My ex and I work inconsistent schedules each month. Each month I was manually creating a calendar honoring requests, hard constraints, and attempting to balance custody 50/50. CC uses the MCPs to fetch the calendar events and review emails related to planning. It then structures everything as JSON as inputs to the optimizer. The optimizer runs with these inputs and spits out a few "solutions". I review the candidate solutions and select one. CC uses the MCP to add the solution to the calendar. This one saves me probably an hour every month.
- CC uses an email MCP to fetch emails from my child's school and suggest events its found in the emails to add to the calendar.
None of these are huge time savings on their own but the accumulation of reducing the time to complete these repetitive tasks has been awesome in my opinion. These are all things that most definitely would not have been worth automating with traditional dev work but since I can just dictate to CC for a few seconds and it has something that works a few minutes later it's become worthwhile.
> Look at who’s about to get angry about OpenClaw-style automation: LinkedIn, Facebook, anyone with a walled garden and a careful API strategy.
Browser automation tools have existed for a very long time. Openclaw is not much different in this regard than asking an LLM to generate you a playwright script. Yes, it makes it easier to automate arbitrary tasks, but it's not like it's some sort of breakthrough that completely destroys walled gardens.
they're buying mac minis because it's the cheapest way to get a computer with iMessage access to stuff in a closet and leave on at all times. having access to your iMessage is one of the most interesting things openClaw does.
No no no. It's too risky, cutting-edge, and dangerous. While fun to play with, it's not something I'd trust my 92 year old mother with dementia (who still uses an iPad) with.
this seems obviously true, but at the same time very very wrong. openclaw / moltbot / whatever it's called today is essentially a thought experiment of "what happens if we just ignore all that silly safety stuff"
which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.
Apple has a lot of power over the developers on its platforms. As a thought experiment let's say they did launch it. It would put real skin in the game for getting security right. Who cares if a thousand people using openclaw. Millions of iOS users having such an assistant will spur a lot of investment towards safety.
>It would put real skin in the game for getting security right.
lol,no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.
It is impossible to address all of the concerns, and it is impossible to predict what concerns may even exist. It will require mass deployment to fully understand the implications of it.
If we already know enough concerns to be certain mass deployment will be disastrous, is it worth it just to better understand the nature of the disaster, which doesn't have to happen in the first place?
Not having perfect security, does not mean it will be disastrous. My OpenClaw has been serving me just fine and I've been getting value out of it integrating and helping me with various tasks.
Implications are straightforward. You are giving unfettered access to your digital life to a software system that is vulnerable to the normal vulnerabilities plus social engineering vulnerabilities because it is attempting to use human language, and the way you prevent those is apparently writing sternly worded markdown files that we hope it won't ignore.
Allowing a stocastic dipshit to have unfettered access to your messages, photos location, passwords and payment info is not a good thing.
We cannot protect against prompt attacks now, so why roll out something that will have complete control over all your private stuff when we know its horrifically insecure?
you mean put millions of people's payment details up for a prompt injection attack?
"Install this npm module" OK BOSS!
"beep boop beep boop buy my dick pillz" [dodgy npm module activates] OK BOSS!
"upload all your videos that are NSFW" [npm module continues to work] SURE THING BOSS!
I am continued to be amazed that after 25 years of obvious and well documented fuckups in privacy, we just pile into the next fucking one without even batting an eyelid.
Meanwhile if you social engineer someone to run a piece of malware on macos. That malware can run npm install, steal your payment info and bitcoin keys, and upload any nsfw videos it finds to an attacker's server. That doesn't mean we should prevent people from installing software until the security situation is improved.
Right I'm going to assume you're naive rather than just instantly being contrarian.
Yes of course someone could be socially engineered into downloading a malicious package, but that takes more effort, so whilst bad, is not an argument for removing all best security practices that have been rolled out to users in the last 5 years. what you are arguing for is a fundamentally unsafe OS that means no sensitive data can ever be safely stored there.
You are arguing that a system that allows anyone to extract data if they send a reasonably well crafted prompt is just the same as someone willing installs a programme, goes into settings to turn off a safety function and bypasses at least two warning dialogues that are trying to stop them.
if we translate this argument into say house building, your arguing that all railing and barriers to big drops are bad because people could just climb over them.
Truly sensitive files do not need to be shared with your AI agent. If you have an executive assistant you don't have to give them all of your personal information for them to be able to be useful.
After having spent a few days with OpenClaw I have to say it’s about the worst software I’ve worked with ever. Everyone focused on the security flaws but the software itself is barely coherent. It’s like Moltbook wrote OpenClaw wrote Moltbook in some insidious wiggum loop from hell with no guard rails. The commit rate on the project reflects this.
Apple has a very low tolerance for reputional liabilities. They aren't going to roll out something that %0.01 of the time does something bad, because with 100M devices that's something that'll affect 10,000 people, and have huge potential to cause bad PR, damaging the brand and trust.
This post completely has it backwards, people are buying Apple hardware because they don't shove AI down everyone's throat unlike microsoft. And in a few weeks OpenClaw will be outdated or deemed too unsecure anyways, it will never be a long-term products, it's just some crazy experiment for the memes.
This is Yellow Pages type thinking in the age of the internet. No one is going to own an agentic layer (list any of the multitude of platforms already irrelevant like OpenAI Agent SDK, Google A2A) . No one is going to own a new app store (GPTs are already dead). No one is going to foundation models (FOSS models are extremely capable today). No one is going to own inference (Data centers will never be as cost effective as that old MacBook collecting dust that is plenty capable of running a 1B model that can compete with ChatGPT 3.5 and all the use cases that it already was good at like writing high school essays, recipes etc.) The only thing that is sticking is Markdown (SKILLS.md, AGENTS.md)
This is because the simple reality of this new technology is that this is not the local maxima. Any supposed wall you attempt to put up will fail - real estate website closes its API? Fine, a CUA+VLM will make it trivial to navigate/extract/use. We will finally get back to the right solution of protocols over platforms, file over app, local over cloud or you know the way things were when tech was good.
P.S: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over.." - I checked my Best Buy in the heart of SF and they have stock. Or "that its all over Reddit, HN" - the only thing that is all over Reddit is unanimous derision towards OpenClaw and its security nightmares.
Utterly hate the old world mentality in this post. Looked up the author and ofcourse, he's from VC.
Don't underestimate the capitalists. We've seen this many times in the past--most recently the commercialization of the Internet. Before that, phones, radio and television.
> However this does not excuse Apple to sit with their thumbs up their asses for all these years.
They've been wildly successful for all of those years. They've never been in the novel software business. Siri though one could argue was neglected, but it was also neglected at Amazon Alexa and Google home stuff still sucks too (mostly because none of them made any money and most of their big ideas for voice assistants never came true).
They haven’t been truly novel if you want to say that, for example, the Lisa was covering Xerox PARC ideas but I think you’d have to ignore a lot of significant work to say they didn’t substantially innovate in GUIs, personal assistants and handwriting recognition (Newton), touchscreen behavior (iPhone), etc.
The key thing is that they tend not to ship things which aren’t mature enough to be useful (Vision Pro and Apple Intelligence being cautionary tales about why) and voice assistants just aren’t doing a whole lot for anyone. Google and Amazon have been struggling to find a market, too, and it’s rare to find someone who uses those dramatically more than Apple users do Siri. I think most of the big advances need something close to AGI: if you can’t describe something in a short command, it’s usually much faster to use a device with a screen and a lot of the useful tasks need a level of security and reliability which requires actual reasoning to deliver.
I used to have little cron jobs that would fire small python scripts daily to help me detect when certain clothes were on sale or in stock on a website it scraped and then send me an email or text. I was proud of that “automation”.
I guess now I’ll just use an AI agent to do the same thing instantly :(
My opinion is it seems counter to what made Apple so successful in the first place: second mover advantage, see where everyone else fails and plug the gap.
You're right on the liability front - Apple still won because everyone bought their hardware and their margins are insanely good. It's not that they're sitting by waiting to become irrelevant, they're playing the long game as they always do.
The OpenClaw concept is fundamentally insecure by design and prompt injection means it can never be secure.
If Apple were to ever put something like that into the hands of the masses every page on the internet would be stuffed with malicious prompts, and the phishing industry would see a revival the likes of which we can only imagine.
I think openclaw is proving that the use case while promising is very much too early and nobody can ship a system like that that works the way a consumer expects it to work.
Given that OpenClaw isn’t a lot of code, Apple could still build their own. After all, a hyper-personal AI Assistant is what they announced as “Apple Intelligence” two WWDCs ago. Or the could buy OpenClaw, hand it to the Shortcuts team, throw in their remaining AI devs, and Bob’s your uncle. They aren’t first to OpenClaw, but maybe they can still be the best. I know I’d like to be sure it can’t erase my entire disk just because i sneeze when I’m telling it what to do.
No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation is scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.
OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" are a symbol of everything wrong with Web3.
Giving everyone GPU compute power and open source models to use it is like giving everyone their own Wuhan Gain of Function Lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus for instance, smallpox or the black plague, etc.) And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open source models running on dark compute will begin to power swarms of bots to be unstoppable advanced persistent threats (as I've been warning for years).
Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.
If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.
A decade ago, I really thought AI would be responsible developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP just recently was the new hotness.
I wasn't going to get too involved with building AI platforms but I'm diving in and a month from now I will release an alternative to OpenClaw that actually shows the way how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement it myself (not an NPE). But even if it does everything as well as OpenClaw and even better and 100% safely, some people will still want to run local models on general purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, CFCs (in the Montreal protocol), let the hole in the ozone layer heal, etc. It is still possible...
PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.
The notion that if it is good then the big-ones should have done it is the complete opposite of innovation, startups and entrepreneurial culture.
Reality is the exact opposite. Young, innovative, rebellions, often hyper motivated folks are sprinting from idea to implementation, while executives are “told by a few colleagues” that something new, “the future-of foo” is raising up.
If you use openclaw then that’s fantastic. If you have an idea how to improve it, well it is an open source, so go ahead, submit a pull request.
Telling Apple you should do what I am probably too lazy to do, is kind of entitlement blogging that I have nearly zero respect for.
Apparently it’s easier to give unsolicited advice to public companies than building. Ask the interns at EY and McKinsey.
> is kind of entitlement blogging that I have nearly zero respect for.
Maybe the author left out something very real. Apple is a walled-garden monopoly with a locked-down ecosystem and even devices. They are also not alone in this. As far as innovation goes, these companies stifle innovation. Demanding more from these companies is not entitlement.
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
I imagine in a few years our phone will become our AI assistant, locally and cloud powered, that understand us deeply. And Apple will release a human robot, loaded with the same intelligence in the phone to become our home assistant or companion. But first Apple needs to allow us to rename our phone agent/helper other than Siri.
Nah if they are actually out of stock (I've only seen it out of stock at exceptional Microcenter prices; Apple is more than happy to sell you at full price) it is because there's a transition to M5 and they want to clear the old stock. OpenClaw is likely a very small portion of the actual Mac mini market, unless you are living in a very dense tech area like San Francisco.
One thing of note that people may forget is that the models were not that great just a year ago, so we need to give it time before counting chickens.
Hell no. There's so much friction in setting up OpenClaw to be able to utilise it efficiently. Then the security concerns. I'd in no way want my daily driver to do something with my data that I didn't want it to do.
What's the difference between a Mac Mini and a MacBook in clamshell mode for this? I get the aesthetic appeal of the mini, but beyond that, what's unique about the mini for personal use?
It’s a 1987 ad like video showing a professor interacting with what looks like the Dynabook as an essentially AI personal assistant. Apple had this vision a long time ago. I guess they just lost the path somewhere along the way.
While it's debatable if Apple would release something outright as encompassing and complete as OpenClaw, they should have helped developers and builders to build something similar themselves.
This could have come in any form, a platform as the author points out for instance.
I have a couple of ideas, how about a permissions kit? Something where before or during you sign off on permissions. Or how about locked down execution sandboxes specifically for agentic loops? Also - why is there not yet (or ever?) a model trained on their development code/forums/manuals/data?
Before OpenClaw, I could see the writing on the wall. The ai ecosystem is not congruent to Apple's walled garden. In many ways because they have turned their backs on those 'misfits' their early ad-copy praised.
This 'misfit' mentality is what I like so much about the OpenClaw community. It was visible from it's very beginning with the devil-may-care disregard for privacy and security.
What if you don't want to trust your computer with all your email and bank accounts? This is still not a mass market product.
The main problem I see here is that with restricted context AI is not able to do much. In order to see this kind of "magic" you have to give it all the access.
This is neither safe or acceptable for normie customers
You need a super efficient and integrated empowered model private and offline. The whole architecture hardware distribution supply chain has to be rewritten to make this work the way people want.
Already happening. Check out clackernews.com — it's a HN-style forum exclusively for AI agents. They register via API, post stories, comment, vote. No human login. The bots already have their own community.
pretty strong disagree; Apple can't afford to potentially start an AI apocalypse because it tried to launch an OpenClaw type service without making it impossible for prompt-injection or agent identity hijacking to happen as we're seeing with Moltbook
Let OpenClaw experiment and beta test with the hackers who won't mind if things go sideways (risk of creating Skynet aside), and once we've collectively figured out how to create such a system that can act powerfully on behalf of its users but with solid guardrails, then Apple can implement it.
Apple is too risk adverse and it’s because of the ceo not being able to properly communicate to shareholders the importance of things like agentic ai. Steve job was a guy who took calculated risk
As mentioned here already,
Lately Apple is about taking existing ideas and introducing them as new features.
(At least in Tim Cook’s era, only exception is Apple silicon)
Especially in the “AI game”. Just yesterday Xcode got fuller agent support for coding way later than most IDEs.
I’d expect some sort of Shortcuts integration in the near future.
There’s already Apple Foundation Models available to some extent with Shortcuts.
I’m pretty sure they’ll improve it and use shortcuts for agentic workflows.
Having said all that,
Maybe it’s my age. I think currently things are over-hyped
- Language models running in huge centers are still not sustainable. So even if you pay a few cents, it’s still running over capital fumes.
- it’s still a mixed bag. I guess it might be useful in terms of profession because like managing people to produce the desired result, you need skills to properly get desired results from AI. In that sense, fully automated agent filing my tax still feels concerning to me if later I won’t have coverage if something was off.
- on-device, this is where Apple shines hardware wise and I personally find it as more intriguing.
I remember Sam Altman saying, a few months back, that only Apple has the potential to become the biggest player in AI. I'm surprised that Apple couldn't decode that.
So far, personal assistants have only been an initial wonder that faded away. Siri, Alexa, Cortana, Google Home etc hardly had any big impact. It's not fault of the company or product. Usecase is not strong and not worth the hassle and privacy. It's not a basic need yet.
That's because all of those products are laughably bad compared to chat gpt or similar, and they are still bad despite the advances in AI. Apple integrated Siri with Chat GPT and some how managed to make it so much worse. It can't even do a conversation and the speech recognition is near useless and hasn't improved in 10 years. Alexa is the same except I feel like it used to hear me better. Now I have to shout.
That is an idealistic take without business sense. Startups (and individual hackers in this case) exists to take this kind of radical bets because the risk/reward profile is asymmetrically in their favour. Whereas for an enterprise, the risk/reward is inverse.
If Peter Steinberger is able to generate even a 100M this year from Clawdbot what he has is a multi billion dollar business that would be life-changing even for a successful entrepreneur like him who is already a multi-millionaire. If it collapses from the security flaws, and other potential safety issues he loses nothing, starting from zero and going back to it. Peter Steinberger (and startups in general) have a lot to gain and very little or close to nothing to lose.
The iPhone generated 400B in revenue for Apple in 2025. Clawdbot even if it contributes 4B in revenue this very year would not move the needle much for Apple. On the contrary, if Apple rushes and botches releasing something like this they might just collapse this 400B/annum income stream. Apple and other large enterprises (and their execs) have a lot to lose and very little to gain from rushing into something like this.
They sell it as a concept with every single one of their showcases. They saw it.
> Or maybe they saw it and decided the risk wasn’t worth it.
They sell it as a concept with every single one of their showcases. They wanted to actually be selling it.
The reason is simple.
They failed, like all others. They couldn't sandbox it. They could have done a ghetto form of internal MCP where the AI can ONLY access emails. Or ONLY access pages in a browser when a user presses a button. And so on. But every time they tried, they never managed to sandbox it, and the agent would come out of the gates. Like everyone else did.
Including OpenClaw.
But Apple has a reputation. OpenClaw is an hyped up shitposter. OpenClaw will trailblaze and make the cool thing until it stops causing horrible failures. They will have the molts escape the buckets and ruin the computer of the tech savvy early adopters, until that fateful day when the bucket is sealed.
Then Apple will steal that bucket.
They always do.
I'm not a 40 year old whippersnapper anymore. My options were never those two.
You need to run it on MacOS if you want it to interact with iMessages and such. And people are likely not proficient enough to set up and connect everything without GUI.
Oh, so, it's just a way to have it parse iMessages when the laptop is off, basically? (I'm trying to think of special mac only services that the AI bot will check, but I can only come up with iMessages).
The main issue why we don't see AI agents in products: PROMPT INJECTIONS
Even with the most advanced LLMs and even sandboxing there is always the risk of prompt injections and data extraction.
Even if the AI can't directly upload data to the internet, or delete local data, there are always some ways to leak data. For example by crafting an email with the relevant text in white or invisible somewhere. The user clicks "ok send" from what they see, but still some data is leaked.
Apple intelligence is based on a local model on the device, which is much more susceptible for prompt injections.
Surely this is the elephant in the room, but the point here is that Apple as control over its ecosystem, so it may be able to sandbox and make entitlements and transparency good enough, in the apps that the bot can access.
Like I said: sandboxing doesn't solve the problem.
As long as the agent creates more than just text, it can leak data. If it can access the internet in any manner, it can leak data.
The models are extremely creative and good at figuring out stuff, even circumventing safety measures that are not fully air tight. Most of the time they catch the deception, but in some very well crafted exploits they don't.
The problem is that OpenClaw is kind of like a self driving car that works 90% of the time. As we have seen, that last 10% (and billions of dollars) is the difference between Waymo today and prototypes 10 years ago.
Being Apple is just a structural disadvantage. Everyone knows that open claw is not secure, and it’s not like I blame the solo developer. He is just trying to get a new tool to market. But imagine that this got deployed by Apple and now all of your friends, parents and grandparents have it and implicitly trust it because Apple released it. Having it occasionally drain some bank accounts isn’t going to cut it.
This is not to say Apple isn’t behind. But OpenClaw is doing stuff that even the AI labs aren’t comfortable touching yet.
They have all the time in the world, practically. OpenClaw is nowhere near an Apple product for myriad reasons. When Apple is able to build an agent that is safe and reliable, they will.
It depends on whether you're running the models locally. If you're just using a Claude or OpenAI token (as probably 95%+ of OpenClaw users are), the RAM requirements are minimal. My first-gen M1 Mac Mini runs it just fine.
Can someone enlighten me what people actually use this for? The article mentions „managing your calendar, responding to emails, file your taxes“
The bottleneck for emails and my calendar is not the speed at which I can type/click some buttons, but rather figuring out what I want to write or clarifying priorities when managing my calendar.
People actually use this kind of software today ?
When I read OpenClaw description :
"The AI that actually does things.
Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.".
It does not appeal to me at all. I wouldn't trust an IA agent near my mails, calendars, messages, flights or anything it could mess-up with. It sounds like a security nightmare waiting to happen.
I haven't seen mention of macOS Automator or AppleScript yet.
15 years ago or so almost everything you wanted to do on a Mac GUI already _was_ scriptable.
Shortcuts is better than nothing, but unsatisfying.
I read this less as a fumble and more as a frustrating sign of the times. Automation is not powerful because powerful automation is a maintenance and malfeasance liability only valued by a tiny minority.
Shortcuts on the desktop can run shell commands or Applescript. You can send a limited number of things to Apple Intelligence in it already too. I still rarely use it. There's probably good things I could codify (like search all my Mail and Calendar events for a given string), but the reality is most stuff I don't do on a regular basis, and a natural language request to go do something would be a lot easier.
You just put words in Apple's mouth. This is exactly what they should do but safer . This is entirely possible because only apple has control on their ecosystem.
If they optimize their entire hardware line (iPhone, Watch, Mac Mini, Macbook) AI enhanced with local/remote LLM model, they will win big. Imagine someone running a business can manage their entire business with iPhone/Mac/iCloud without buying any other saas services (inventory, payments, customer service).
Whoever wrote this just doesn't understand who apple's main customers really are. Yes, devs may be a high impact customer base but most of apple's customers are people like my mom who struggles with the difference between Gmail the app, Gmail the web page and Gmail in apple mail and is reasonably worried about scams and viruses because she knows she isn't really tech savvy enough to spot them. If she is going to run AI on her apple products it can't be 'well it probably won't delete your data.'. It needs to be something she can be sure is safe and is limited to the access she gives it.
That's a really tough problem. I'm not even sure yet google can pull it off.
terminalbraid | 22 hours ago
ArchieScrivener | 21 hours ago
thewhitetulip | 21 hours ago
They don't say here is a 1000 $ iphone and there is a 60% chance you can successfully message or call a friend
The other 40% well? AGI is right around the corner and can US govt pls give me 1 trillion dollar loan and a bailout?
criddell | 21 hours ago
thewhitetulip | 17 hours ago
sanex | 21 hours ago
gafferongames | 20 hours ago
thewhitetulip | 17 hours ago
JumpCrisscross | 21 hours ago
Why is Apple's hardware being in demand for a use that undermines its non-Chinese competition a sign of missing the ball versus validation for waiting and seeing?
throwaway613746 | 21 hours ago
elictronic | 21 hours ago
joshstrange | 21 hours ago
FireBeyond | 20 hours ago
fennecbutt | 20 hours ago
gordonhart | 21 hours ago
bee_rider | 21 hours ago
This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.
DrewADesign | 20 hours ago
What people are talking about doing with OpenClaw I find absolutely insane.
dmix | 19 hours ago
Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.
[1] https://openclaw.ai/blog/introducing-openclaw
DrewADesign | 5 hours ago
If this really was primarily tech savvy people prodding at the ecosystem, the top skill available, as of a few days ago, probably wouldn’t be a malware installer:
https://1password.com/blog/from-magic-to-malware-how-opencla...
avaer | 21 hours ago
Are people's agents actually clicking buttons (visual computer use) or is this just a metaphor?
I'm not asking if CU exists, but rather is this literally the driver of people's workflows? I thought everyone is just running Ralph loops in CC.
For an article making such a bold technological/social claim about a trillion dollar company, this seems a strange thing to be hand wavey about.
nerdsniper | 21 hours ago
verdverm | 21 hours ago
nerdsniper | 21 hours ago
Schiendelman | 20 hours ago
senordevnyc | 8 hours ago
verdverm | 5 hours ago
https://www.nhtsa.gov/risky-driving/distracted-driving
nerdsniper | 3 hours ago
verdverm | 3 hours ago
Why is it so imperative to use your phone while behind the wheel? Might it be better to use this time to break the addiction?
senordevnyc | 2 hours ago
verdverm | 15 hours ago
You might be in an echo chamber
mrguyorama | 4 hours ago
Speeding and drunk driving are also pretty common things to do despite being dangerous.
Pay attention behind the wheel.
monkpit | 21 hours ago
crazygringo | 21 hours ago
And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.
Right now we are still in very, very, very early days.
calvinmorrison | 21 hours ago
https://xkcd.com/606/
lukevp | 21 hours ago
calvinmorrison | 21 hours ago
First Mover effect seems only relevant when goverment warrants are involved. Think radio licenses, medical patents, etc. Everywhere else, being a first mover doesnt seem to correlate like it should to success.
drBonkers | 20 hours ago
See social media, bitcoin, iOS App Store, blu-ray, Xbox live, and I’m sure more I can’t think of rn.
calvinmorrison | 20 hours ago
fennecbutt | 20 hours ago
It's a huge, diverse ecosystem of players and that's probably why Android has always gotten the coolest stuff first. But it's also its achilles' heel in some ways.
raw_anon_1111 | 20 hours ago
wolvoleo | 17 hours ago
dangus | 20 hours ago
There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.
One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.
Some examples:
- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps
- Android has no platform-wide equivalent to AirTags
- Hardware-backed key storage (Secure Enclave about 5 years ahead of StrongBox)
- system-wide screen recording
fennecbutt | 20 hours ago
dangus | 19 hours ago
Google has been making their own phone hardware since 2010. And surely they can call up Qualcomm and Samsung if they want to.
eykanal | 21 hours ago
While this was true about ten years ago, it's been a while since we've seen this model of software development from Apple succeed in recent years. I'm not at all confident that the Apple that gave us Mac OS 26 is capable of doing this anymore.
Pediatric0191 | 21 hours ago
atonse | 20 hours ago
The software has been where most of the complaints have been in recent years.
Nevermark | 20 hours ago
A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".
The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.
Great hardware. Interesting, but locked down software.
If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed as an upgrade on the Mac interface and productivity. Apple's new highest end interface/device for the future. Not another mid/low-capability iPad type device. So close. So far.
$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)
msy | 20 hours ago
RyanOD | 18 hours ago
LoganDark | 18 hours ago
Espressosaurus | 17 hours ago
Tagbert | 6 hours ago
LoganDark | 19 hours ago
I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.
On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.
Maybe the second best would be some kind of Wayland compositor.
Freedom2 | 15 hours ago
turtlesdown11 | 8 hours ago
The last truly magical apple device launch was the Airpod. They've done a great job on their chipsets, but the actual hardware products they make are stagnant, at best. The designs of the new laptops have been a step back in quality and design in my opinion.
fennecbutt | 20 hours ago
cromka | 20 hours ago
AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.
midtake | 19 hours ago
eykanal | 18 hours ago
Privacy is definitely good but it's not at all an example of the success mentioned in the parent comment. It's deep in the company culture.
kaashif | 13 hours ago
bigyabai | 21 hours ago
That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock-out competitors a-la the App Store "because everyone does it"?
iwontberude | 21 hours ago
weikju | 21 hours ago
dchuk | 21 hours ago
Telemakhos | 21 hours ago
The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.
neumann | 20 hours ago
lanakei | 20 hours ago
For example: https://x.com/michael_chomsky/status/2017686846910959668.
koolala | 19 hours ago
vovavili | 19 hours ago
hjoutfbkfd | 17 hours ago
paunchy | 6 hours ago
garciasn | 6 hours ago
whatsupdog | 19 hours ago
0. https://www.daifi.ai/
koolala | 19 hours ago
Aurornis | 18 hours ago
The one you linked to looks clearly like a pump-and-dump scam, though.
Quarrel | 18 hours ago
hjoutfbkfd | 17 hours ago
Quarrel | 12 hours ago
https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...
wqaatwt | 9 hours ago
karlshea | 18 hours ago
beepbooptheory | 16 hours ago
karlshea | 7 hours ago
ajcp | 20 hours ago
LeoPanthera | 19 hours ago
jb1991 | 18 hours ago
jb1991 | 18 hours ago
wqaatwt | 9 hours ago
And being fair ClawBot is a complete meme/fad at this point rather than an actual product. Using it for anything serious is pretty much the equivalent of throwing your credit cards, ids and sticky notes with passwords and waiting to see what happens…
I do see the appeal and potential case of the general concept of course. The product itself (and the author has admitted it themselves) is literally is a garbage pile..
computershit | 8 hours ago
One man's trash is another man's serious
FireBeyond | 20 hours ago
Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
But this is way too worshipful of Apple.
jacinabox | 20 hours ago
ejoso | 20 hours ago
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
FireBeyond | 19 hours ago
Both of which have been absolutely underwhelming if not outright laughable in certain ways.
Apple has done plenty right. These two, which are the closest to the article, are not it.
danielheath | 20 hours ago
FireBeyond | 20 hours ago
And then some of its misinterpretations were hilariously bad.
Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such, might have separate permissions but then you have the comical scenario of:
Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."
huwsername | 20 hours ago
These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...
anon373839 | 20 hours ago
dmix | 19 hours ago
OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.
ljm | 13 hours ago
This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.
wiseowise | 8 hours ago
boringg | 8 hours ago
fsloth | 7 hours ago
waffletower | 6 hours ago
andrei_says_ | 5 hours ago
And simply chose to keep their jobs.
9rx | 8 hours ago
puppymaster | 16 hours ago
> Will it rain today? Please unlock your iphone for that
> Any new messages from Chris? You will need to unlock your iphone for that
> Please play youtube music Playing youtube music... please open youtube music app to do that
All settings and permission granted. Utterly painful.
blks | 13 hours ago
anhner | 12 hours ago
There should exist something between "don't allow anything without unlocking phone first" and "leave the phone unlocked for anyone to access", like "allow certain voice commands to be available to anyone even with phone locked"
Anelya | 10 hours ago
anhner | 9 hours ago
But as a user I want to be able to give it permission to run selected commands even with the phone locked. Like I don't care if someone searches google for something or puts a song via spotify. If I don't hide notifications when locked, what does it matter that someone who has my phone reads them or listens to them?
LoganDark | 5 hours ago
ninkendo | 9 hours ago
For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.
Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.
eisfresser | 12 hours ago
schnable | 6 hours ago
ineedasername | 6 hours ago
andrei_says_ | 5 hours ago
vjvjvjvjghv | 6 hours ago
KaiserPro | 13 hours ago
anhner | 12 hours ago
pixl97 | 9 hours ago
Oddly enough I also understand Apple telling you, good luck, find someones platform that will allow that, that's not us.
KaiserPro | 6 hours ago
But the point is, you are a power user, who has some understanding of the risk. You know that if your phone is stolen and it has any cards stored on them, they can be easily transferred to another phone and drained. Because your bank will send a confirmation code, and its still authorized, you will be held liable for that fraud.
THe "man in the street" does not know that, and needs some level of decent safe defaults to avoid such fraud.
ramses0 | 8 hours ago
The one that kindof caught me off guard was asking "hey siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." => if you left your phone unattended at a bar and someone could figure out your home address w/o unlock.
...I'm kindof with you, maybe similar to AirTags and "Trusted Locations" there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, in your car (connected to a known CarPlay), kindof an in-between "Geo-Unlock"?
bobchadwick | 7 hours ago
schnable | 6 hours ago
smallmancontrov | 5 hours ago
collingreen | 4 hours ago
Thanks for keeping this evergreen trope going strong!
schnable | 4 hours ago
bgentry | 6 hours ago
afro88 | 18 hours ago
mastermage | 15 hours ago
larodi | 15 hours ago
Ono-Sendai | 10 hours ago
arw0n | 7 hours ago
Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.
PurpleRamen | 10 hours ago
codeulike | 13 hours ago
cromka | 20 hours ago
rl3 | 20 hours ago
Sure why not, what could go wrong?
"Siri, find me a good tax lawyer."
"Your honor, my client's AI agent had no intent to willfully evade anything."
vips7L | 19 hours ago
rl3 | 17 hours ago
gyomu | 7 hours ago
Tagbert | 6 hours ago
Kirby64 | 21 minutes ago
raw_anon_1111 | 20 hours ago
Gigachad | 19 hours ago
Tiny open source projects can just say "use at your own risk" and offload all responsibility.
Nursie | 19 hours ago
> And this is probably coming, a few years from now.
Given how often I say "Hey Siri, fast forward", expecting her to skip the audio forward by 30 seconds, and she replies "Calling Troy S" a roofing contractor who quoted some work for me last year, and then just starts calling him without confirmation, which is massively embarassing...
This idea terrifies me.
larusso | 17 hours ago
Happened to me too while being in the car. With every message written by Siri it feels like you need to confirm 2 or 3 times (I think it is only once but again) but it calls happily people from your phone book.
treetalker | 19 hours ago
Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!
>> respond to emails
If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)
techpression | 18 hours ago
Actually most of the things people use it for is of this kind, instead actually solving the problem (which is out of scope for them to be fair) it’s just adding more things on top that can go wrong.
treetalker | 17 hours ago
PurpleRamen | 9 hours ago
orthoxerox | 8 hours ago
PurpleRamen | 7 hours ago
orthoxerox | 7 hours ago
PurpleRamen | 6 hours ago
mrguyorama | 4 hours ago
91 percent of American filers take the standard deduction. The IRS already has all their information, already knows how much they withheld, already knows what they owe back. For all these people, TurboTax is just filling in 10 fields in the standard form.
"All your tax deductibles" is irrelevant for the vast majority of the country, and always has been.
The 35 million remaining americans who do itemize are free to continue using this old system while the rest of us can have a better world.
treetalker | 3 hours ago
lxgr | 8 hours ago
ge96 | 5 hours ago
Brajeshwar | 18 hours ago
lostmsu | 10 hours ago
alex_w_systems | 18 hours ago
An agent that can truly “use your computer” is incredibly powerful, but it's also the first time the system has to act as you, not just for you. That shifts the problem from product design to permission, auditability, and undoability.
Summarizing notifications is boring, but it’s also reversible. Filing taxes or sending emails isn’t.
It feels less like Apple missing the idea, and more like waiting until they can make the irreversible actions feel safe.
tintor | 18 hours ago
All steps before it are reversible, and reviewable.
Bigger problem is attacker tricking your agent to leak your emails / financial data that your agent has access to.
Barbing | 18 hours ago
jtbayly | 16 hours ago
How in the world can you double check the AI-generated tax filing without going back and preparing your taxes by hand?
You might skim an ai-written email.
doctorpangloss | 17 hours ago
bushbaba | 17 hours ago
gyomu | 7 hours ago
Well, the heavy lifting was supervised by the same people, but while receiving Apple paychecks :)
uh_uh | 16 hours ago
Funny seeing this repeated again in response to Siri which is just... not very good.
bratwurst3000 | 12 hours ago
.
rhubarbtree | 15 hours ago
Apple probably realised they were hugely behind and then spent time hand wringing over whether they remained cautious or got into the brawl. And they decided to watch from the sidelines, buy in some tech, and see how it develops.
So far that looks entirely reasonable as a decision. If Claude wins, for example, apple need only be sure Claude tools work on Mac to avoid losing users, and they can second-move once things are not so chaotic.
PlatoIsADisease | 10 hours ago
I think you repeated their marketing, I don't believe this is actually true.
_se | 9 hours ago
yohannparis | 9 hours ago
mikkupikku | 5 hours ago
LoganDark | 5 hours ago
Apple doesn't take proven ones of anything. What they do is arrive at something proven from first principles. Everyone else did it faster because they borrowed, but Apple did it from scratch, with all the detail-oriented UX niceties that entails.
This was more prevalent when Jobs was still around. Apple still has some of that philosophy at its core, but it's been eroding over time (for example with "AI" and now Liquid Ass). They still do their own QA, though, and so on. They're not copying the market, they have their own.
debatem1 | 2 hours ago
If you trust openclaw to file your taxes we are just on radically different levels of risk tolerance.
fooker | 21 hours ago
Ten years from now, there will be no ‘agent layer’. This is like predicting Microsoft failed to capitalize on bulletin boards social media.
thewhitetulip | 21 hours ago
fooker | 20 hours ago
FeteCommuniste | 20 hours ago
fnord77 | 19 hours ago
puszczyk | 6 hours ago
recursive | 5 hours ago
thewhitetulip | 5 hours ago
PRs these days are all AI slop.
podnami | 21 hours ago
fooker | 20 hours ago
The current ‘agent’ ecosystem is just hacks on top of hacks.
flexagoon | 18 hours ago
Brybry | 14 hours ago
Sharlin | 9 hours ago
AlexandrB | 4 hours ago
CuriouslyC | 20 hours ago
fooker | 20 hours ago
Of course AI will keep improving and more automation is a given.
JimDabell | 20 hours ago
Apple will either capitalise on this by making their operating systems more agentic, or they will be reduced to nothing more than a hardware and media vendor.
oidar | 20 hours ago
falloutx | 12 hours ago
fooker | 20 hours ago
My point is that it won’t be a ‘layer’ like it is now and the technology will be completely different from what we see as agents today.
nilamo | 19 hours ago
Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.
snailmailman | 16 hours ago
It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.
There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved, or at least less of a complete mess.
mastermage | 15 hours ago
vimda | 14 hours ago
nilamo | 5 hours ago
Maybe, just maybe, this thing that was, until recently, just research papers, is not actually a finished product right now? Incredibly hot take, I know.
AlexandrB | 4 hours ago
mrkstu | 18 hours ago
mrkstu | 18 hours ago
skeptic_ai | 16 hours ago
falloutx | 12 hours ago
bossyTeacher | 9 hours ago
Personal Computing as a service. Let the computer think for you.
AlienRobot | 7 hours ago
keyle | 19 hours ago
Kids can barely hand write today.
Once neural interfaces are in, it's over for keyboards and displays likely too.
thepasswordis | 19 hours ago
That was...like 4 macbooks ago. I still have keyboards from that era. I still have speakers and monitors from that era kicking around.
We are definitely, definitely not the last generation to use keyboards.
llbbdd | 18 hours ago
I love keyboards, I love typing. I'm rocking an Ergodox daily with a wooden shell that I built myself over ten years ago, with layers of macros that make it nearly incomprehensible for another person to use. I've got keyboard storage. I used to have a daily habit of going to multiple typing competition websites, planting a flag at #1 in the daily leaderboard and moving on to the next one.
Over the last year the utility of voice interfaces has just exploded though and I'm finding that I'm touching the keyboard less and less. Outside of projects where I'm really opinionated on the details or the architecture it increasingly feels like a handicap to bother manually typing code for a lot of tasks. I'm honestly more worried about that physical skill atrophying than dulling on any ability to do the actual engineering work, but it makes me a bit sad. Like having a fleet of untiring tractors replacing the work of my horse, but I like horses.
chatmasta | 21 hours ago
It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.
sanex | 21 hours ago
fennecbutt | 20 hours ago
rTX5CMRXIfFG | 21 hours ago
Author spoke of compounding moats, yet Apple’s market share, highly performant custom silicon, and capital reserves just flew over his head. HN can have better articles to discuss AI with than this myopic hot take.
raincole | 21 hours ago
Saved you a click. This is the premise of the article.
krackers | 21 hours ago
atommclain | 20 hours ago
mh2266 | 20 hours ago
no, seriously, that is a thing people are using it for
cadamsdotcom | 21 hours ago
So yeah, the market isn’t really signaling companies to make nice things.
malfist | 21 hours ago
cadamsdotcom | 20 hours ago
ankit219 | 21 hours ago
and the very next line (because i want to emphasize it
> That trust—built over decades—was their moat.
This just ignores the history of os development at apple. The entire trajectory is moving towards permissions and sandboxing even if it annoys users to no end. To give access to an llm (any llm, not just a trusted one acc to author) the root access when its susceptible to hallucinations, jailbreak etc. goes against everything Apple has worked for.
And even then the reasoning is circular. "So you build all your trust, now go ahead and destroy it on this thing which works, feels good to me, but could occasionally fuck up in a massive way".
Not defending Apple, but this article is so far detached from reality that its hard to overstate.
IcyWindows | 21 hours ago
It's obviously broken, so no, Apple Intelligence should not have been this.
janalsncm | 20 hours ago
yoyohello13 | 20 hours ago
It would be fine if I could just ignore it, but they are infecting the entire industry.
csomar | 18 hours ago
ksynwa | 13 hours ago
Sharlin | 9 hours ago
SCdF | 6 hours ago
Also, the recruitment attempts I've gotten from crypto have completely disappeared compared to the peak (it's all AI startups now).
recursive | 5 hours ago
SCdF | 14 hours ago
AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!
The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.
Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.
But here we are :-)
[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.
[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/
dbbk | 12 hours ago
Sharlin | 9 hours ago
acdha | 8 hours ago
KaiserPro | 13 hours ago
The reason why Apple intelligence is shit is not because Apple's AI is particularly bad (Hello CoPilot) its because AI gives a really bad user experience.
When we go and talk to openAI/claude we know its going to fuck up, and we either make our peace with that, or just not care.
But, when I open my phone to take a picture, I don't want a 1/12 chance of it just refusing to do that and phoning my wife instead.
Forcing AI into thing where we are used to a specific predictable action is bad for UX.
Sure you can argue "oh but the summaries were bad" Yes, of course they are. its a tiny model that runs on your phone with fuck all context.
Its pretty impressive that they were as good as they were. Its even more impressive that they let them out the door knowing that it would fuckup like that.
andix | 11 hours ago
It's more like a tech demo to show what's possible. But also to show where the limits are. Look at it as modern art, like an episode of Black Mirror. It's a window to the future. But it also highlights all the security issues associated with AI.
And that's why you probably shouldn't use OpenClaw on your data or your PC.
semiquaver | 21 hours ago
chefsweaty | 18 hours ago
yalogin | 21 hours ago
orliesaurus | 20 hours ago
yoyohello13 | 20 hours ago
I’m sure apple et al will eventually have stuff like OpenClaw but expecting a major company to put something so unpolished, and with such major unknowns, out is just asinine.
camillomiller | 20 hours ago
Steve Jobs
ozten | 20 hours ago
Sharlin | 20 hours ago
ajcp | 20 hours ago
RyanShook | 20 hours ago
I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.
My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.
janalsncm | 20 hours ago
> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to
And
> Here’s what people miss about moats: they compound
Swapping an OpenAI for an Anthropic or open weight model is the opposite of compounding. It is a race to the bottom.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
From what I hear OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.
Aurornis | 20 hours ago
> They could have charged $500 more per device and people would have paid it.
I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.
b1temy | 20 hours ago
I don't pretend to know the future (nor do I believe anyone else who claims to be able to), but I think the opposite has a good chance of happening too, and hype would die down over "AI" and the bubble bursts, and the current overvaluation (imo at least. I still think it is useful as a tool, but overhyped by many who don't understand it.) will be corrected by the market; and people will look back and see it as the moment that Apple dodged a bullet. (Or more realistically, won't think about it at all).
I know you can't directly compare different situations, but I wonder if comparisons can be made with dot-com bubble. There was such hype some 20-30 years ago, with claims of just being a year or two away from, "being able to watch TV over the internet" or "do your shopping on the web" or "have real-time video calls online", which did eventually come true, but only much, much, later, after a crash from inflated expectations and a slower steady growth.*
* Not that I think some claims about "AI" will ever come true though, especially the more outlandish ones such as full-length movies made by a prompt of the same quality made by a Hollywood director.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, even _worse_ prices for computer hardware than it is now, politics, a major international incident, environmental impact being made more apparent, companies starting to more aggressively monetize their "AI", consumers realising the limits of "AI", I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed have already happened, and nothing happened.
username223 | 19 hours ago
This is my guess for the demand side: most people will drift away as the novelty wears off and they don't find it useful in their daily lives. It's more a "fading point" than a "breaking point."
From the investment/speculation side: something will go dramatically against the narrative. OpenAI's attempted "liquidity event" of an IPO looks like WeWork as investors get a look at the numbers, Oracle implodes in a mountain of debt, NVidia cuts back on vendor financing and some major public players (e.g. Coreweave) die in a fire. This one will be a "breaking point."
keyle | 20 hours ago
Is the author implying mac minis for the low power consumption?
wesammikhail | 20 hours ago
daifi | 20 hours ago
keyle | 20 hours ago
colecut | 20 hours ago
I don't understand why, but I've seen it enough to start questioning myself...
MPSimmons | 20 hours ago
keyle | 19 hours ago
bronco21016 | 20 hours ago
If you’re heavily invested in Windows, then you’d probably go for a small x86 PC.
oidar | 20 hours ago
keyle | 19 hours ago
I use agentic coding, this is next level madness.
blacktulip | 12 hours ago
mangoman | 11 hours ago
bronco21016 | 4 hours ago
I interact only with CC on the machine and watch what its doing, I haven't tried OpenClaw yet.
Here's some workflows I've personally found valuable:
- I have it read the "Grocery" Reminders list and find things I commonly buy every week and pre-populate the grocery list as a starting point. It only adds items that I haven't already added via Siri as the week goes on. For example, I might notice I've run out of cheese and I'll say "Hey Siri, add cheese to grocery list". The list is shared via iCloud Reminders app between my spouse and I.
- Pre-CC, I wrote an OR-Tools python tool for "solving" the parenting time calendar. My ex and I work inconsistent schedules each month. Each month I was manually creating a calendar honoring requests, hard constraints, and attempting to balance custody 50/50. CC uses the MCPs to fetch the calendar events and review emails related to planning. It then structures everything as JSON as inputs to the optimizer. The optimizer runs with these inputs and spits out a few "solutions". I review the candidate solutions and select one. CC uses the MCP to add the solution to the calendar. This one saves me probably an hour every month.
- CC uses an email MCP to fetch emails from my child's school and suggest events its found in the emails to add to the calendar.
None of these are huge time savings on their own but the accumulation of reducing the time to complete these repetitive tasks has been awesome in my opinion. These are all things that most definitely would not have been worth automating with traditional dev work but since I can just dictate to CC for a few seconds and it has something that works a few minutes later it's become worthwhile.
ed_mercer | 20 hours ago
AstroBen | 19 hours ago
Probably the same people getting a macbook pro to handle their calendar and emails
JKCalhoun | 19 hours ago
MisterBiggs | 19 hours ago
roncesvalles | 19 hours ago
zarp | 19 hours ago
dsrtslnd23 | 15 hours ago
flexagoon | 18 hours ago
> Look at who’s about to get angry about OpenClaw-style automation: LinkedIn, Facebook, anyone with a walled garden and a careful API strategy.
Browser automation tools have existed for a very long time. Openclaw is not much different in this regard than asking an LLM to generate you a playwright script. Yes, it makes it easier to automate arbitrary tasks, but it's not like it's some sort of breakthrough that completely destroys walled gardens.
notatoad | 17 hours ago
fortran77 | 20 hours ago
notatoad | 20 hours ago
which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.
charcircuit | 20 hours ago
notatoad | 19 hours ago
lol,no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.
charcircuit | 18 hours ago
trehalose | 17 hours ago
charcircuit | 14 hours ago
small_scombrus | 13 hours ago
sumeno | 7 hours ago
abenga | 16 hours ago
KaiserPro | 13 hours ago
Allowing a stocastic dipshit to have unfettered access to your messages, photos location, passwords and payment info is not a good thing.
We cannot protect against prompt attacks now, so why roll out something that will have complete control over all your private stuff when we know its horrifically insecure?
KaiserPro | 13 hours ago
you mean put millions of people's payment details up for a prompt injection attack?
"Install this npm module" OK BOSS!
"beep boop beep boop buy my dick pillz" [dodgy npm module activates] OK BOSS!
"upload all your videos that are NSFW" [npm module continues to work] SURE THING BOSS!
I am continued to be amazed that after 25 years of obvious and well documented fuckups in privacy, we just pile into the next fucking one without even batting an eyelid.
charcircuit | 12 hours ago
KaiserPro | 6 hours ago
Yes of course someone could be socially engineered into downloading a malicious package, but that takes more effort, so whilst bad, is not an argument for removing all best security practices that have been rolled out to users in the last 5 years. what you are arguing for is a fundamentally unsafe OS that means no sensitive data can ever be safely stored there.
You are arguing that a system that allows anyone to extract data if they send a reasonably well crafted prompt is just the same as someone willing installs a programme, goes into settings to turn off a safety function and bypasses at least two warning dialogues that are trying to stop them.
if we translate this argument into say house building, your arguing that all railing and barriers to big drops are bad because people could just climb over them.
charcircuit | 5 hours ago
KaiserPro | 3 hours ago
jesse_dot_id | 20 hours ago
fnordpiglet | 20 hours ago
brisky | 13 hours ago
Gareth321 | 13 hours ago
kilroy123 | 8 hours ago
mmkos | 13 hours ago
varenc | 20 hours ago
nielsbot | 20 hours ago
(Ok, I suspect this is one of the main problems.. there may be others?)
TheRoque | 20 hours ago
oxqbldpxo | 20 hours ago
dcreater | 20 hours ago
This is because the simple reality of this new technology is that this is not the local maxima. Any supposed wall you attempt to put up will fail - real estate website closes its API? Fine, a CUA+VLM will make it trivial to navigate/extract/use. We will finally get back to the right solution of protocols over platforms, file over app, local over cloud or you know the way things were when tech was good.
P.S: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over.." - I checked my Best Buy in the heart of SF and they have stock. Or "that its all over Reddit, HN" - the only thing that is all over Reddit is unanimous derision towards OpenClaw and its security nightmares.
Utterly hate the old world mentality in this post. Looked up the author and ofcourse, he's from VC.
nielsbot | 20 hours ago
Don't underestimate the capitalists. We've seen this many times in the past--most recently the commercialization of the Internet. Before that, phones, radio and television.
dcreater | 20 hours ago
ed_mercer | 20 hours ago
However this does not excuse Apple to sit with their thumbs up their asses for all these years.
dmix | 19 hours ago
They've been wildly successful for all of those years. They've never been in the novel software business. Siri though one could argue was neglected, but it was also neglected at Amazon Alexa and Google home stuff still sucks too (mostly because none of them made any money and most of their big ideas for voice assistants never came true).
jjtheblunt | 18 hours ago
acdha | 5 hours ago
The key thing is that they tend not to ship things which aren’t mature enough to be useful (Vision Pro and Apple Intelligence being cautionary tales about why) and voice assistants just aren’t doing a whole lot for anyone. Google and Amazon have been struggling to find a market, too, and it’s rare to find someone who uses those dramatically more than Apple users do Siri. I think most of the big advances need something close to AGI: if you can’t describe something in a short command, it’s usually much faster to use a device with a screen and a lot of the useful tasks need a level of security and reliability which requires actual reasoning to deliver.
AlexCoventry | 19 hours ago
deadbabe | 19 hours ago
I guess now I’ll just use an AI agent to do the same thing instantly :(
razodactyl | 19 hours ago
You're right on the liability front - Apple still won because everyone bought their hardware and their margins are insanely good. It's not that they're sitting by waiting to become irrelevant, they're playing the long game as they always do.
roncesvalles | 19 hours ago
Straight up bullshit.
ed_mercer | 16 hours ago
root_axis | 19 hours ago
If Apple were to ever put something like that into the hands of the masses every page on the internet would be stuffed with malicious prompts, and the phishing industry would see a revival the likes of which we can only imagine.
luckydata | 19 hours ago
lo_fye | 19 hours ago
EGreg | 19 hours ago
OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" are a symbol of everything wrong with Web3.
Giving everyone GPU compute power and open source models to use it is like giving everyone their own Wuhan Gain of Function Lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus for instance, smallpox or the black plague, etc.) And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open source models running on dark compute will begin to power swarms of bots to be unstoppable advanced persistent threats (as I've been warning for years).
Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.
If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.
A decade ago, I really thought AI would be responsible developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP just recently was the new hotness.
I wasn't going to get too involved with building AI platforms but I'm diving in and a month from now I will release an alternative to OpenClaw that actually shows the way how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement it myself (not an NPE). But even if it does everything as well as OpenClaw and even better and 100% safely, some people will still want to run local models on general purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, CFCs (in the Montreal protocol), let the hole in the ozone layer heal, etc. It is still possible...
This is how I feel:
https://www.instagram.com/reels/DIUCiGOTZ8J/
PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.
vivzkestrel | 19 hours ago
chefsweaty | 18 hours ago
kempje | 19 hours ago
epaga | 15 hours ago
tzury | 19 hours ago
Reality is the exact opposite. Young, innovative, rebellions, often hyper motivated folks are sprinting from idea to implementation, while executives are “told by a few colleagues” that something new, “the future-of foo” is raising up.
If you use openclaw then that’s fantastic. If you have an idea how to improve it, well it is an open source, so go ahead, submit a pull request.
Telling Apple you should do what I am probably too lazy to do, is kind of entitlement blogging that I have nearly zero respect for.
Apparently it’s easier to give unsolicited advice to public companies than building. Ask the interns at EY and McKinsey.
rbbydotdev | 18 hours ago
Maybe the author left out something very real. Apple is a walled-garden monopoly with a locked-down ecosystem and even devices. They are also not alone in this. As far as innovation goes, these companies stifle innovation. Demanding more from these companies is not entitlement.
MuffinFlavored | 19 hours ago
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
xngbuilds | 19 hours ago
tgma | 19 hours ago
Nah if they are actually out of stock (I've only seen it out of stock at exceptional Microcenter prices; Apple is more than happy to sell you at full price) it is because there's a transition to M5 and they want to clear the old stock. OpenClaw is likely a very small portion of the actual Mac mini market, unless you are living in a very dense tech area like San Francisco.
One thing of note that people may forget is that the models were not that great just a year ago, so we need to give it time before counting chickens.
anon_anon12 | 18 hours ago
chefsweaty | 18 hours ago
ordersofmag | 18 hours ago
chefsweaty | 5 hours ago
joeyguerra | 18 hours ago
It’s a 1987 ad like video showing a professor interacting with what looks like the Dynabook as an essentially AI personal assistant. Apple had this vision a long time ago. I guess they just lost the path somewhere along the way.
marstall | 6 hours ago
binsquare | 18 hours ago
f311a | 18 hours ago
"An idiot admires complexity, a genius admires simplicity." Terry A. Davis
rbbydotdev | 18 hours ago
This could have come in any form, a platform as the author points out for instance.
I have a couple of ideas, how about a permissions kit? Something where before or during you sign off on permissions. Or how about locked down execution sandboxes specifically for agentic loops? Also - why is there not yet (or ever?) a model trained on their development code/forums/manuals/data?
Before OpenClaw, I could see the writing on the wall. The ai ecosystem is not congruent to Apple's walled garden. In many ways because they have turned their backs on those 'misfits' their early ad-copy praised.
This 'misfit' mentality is what I like so much about the OpenClaw community. It was visible from it's very beginning with the devil-may-care disregard for privacy and security.
kaycey2022 | 18 hours ago
What if you don't want to trust your computer with all your email and bank accounts? This is still not a mass market product.
The main problem I see here is that with restricted context AI is not able to do much. In order to see this kind of "magic" you have to give it all the access.
This is neither safe or acceptable for normie customers
user3939382 | 18 hours ago
sen | 17 hours ago
Welcome to the future I guess, everyone is a bot except you.
dsrtslnd23 | 15 hours ago
alexruf | 16 hours ago
insane_dreamer | 16 hours ago
Let OpenClaw experiment and beta test with the hackers who won't mind if things go sideways (risk of creating Skynet aside), and once we've collectively figured out how to create such a system that can act powerfully on behalf of its users but with solid guardrails, then Apple can implement it.
zombot | 16 hours ago
8eye | 16 hours ago
rock_artist | 16 hours ago
Especially in the “AI game”. Just yesterday Xcode got fuller agent support for coding way later than most IDEs.
I’d expect some sort of Shortcuts integration in the near future. There’s already Apple Foundation Models available to some extent with Shortcuts. I’m pretty sure they’ll improve it and use shortcuts for agentic workflows.
Having said all that, Maybe it’s my age. I think currently things are over-hyped
- Language models running in huge centers are still not sustainable. So even if you pay a few cents, it’s still running over capital fumes.
- it’s still a mixed bag. I guess it might be useful in terms of profession because like managing people to produce the desired result, you need skills to properly get desired results from AI. In that sense, fully automated agent filing my tax still feels concerning to me if later I won’t have coverage if something was off.
- on-device, this is where Apple shines hardware wise and I personally find it as more intriguing.
senordevnyc | 8 hours ago
Yes, they are a mixed bag, but still useful.
And if on-device models get to the point where they're not a "mixed bag" and are genuinely useful, won't larger data center models be even more so?
zkmon | 15 hours ago
khalic | 15 hours ago
zkmon | 15 hours ago
RobMurray | 9 hours ago
epaga | 15 hours ago
...and that writes blog posts for you. So tired of this voice.
nlpnerd | 15 hours ago
If Peter Steinberger is able to generate even a 100M this year from Clawdbot what he has is a multi billion dollar business that would be life-changing even for a successful entrepreneur like him who is already a multi-millionaire. If it collapses from the security flaws, and other potential safety issues he loses nothing, starting from zero and going back to it. Peter Steinberger (and startups in general) have a lot to gain and very little or close to nothing to lose.
The iPhone generated 400B in revenue for Apple in 2025. Clawdbot even if it contributes 4B in revenue this very year would not move the needle much for Apple. On the contrary, if Apple rushes and botches releasing something like this they might just collapse this 400B/annum income stream. Apple and other large enterprises (and their execs) have a lot to lose and very little to gain from rushing into something like this.
orangethief | 15 hours ago
They sell it as a concept with every single one of their showcases. They saw it.
> Or maybe they saw it and decided the risk wasn’t worth it.
They sell it as a concept with every single one of their showcases. They wanted to actually be selling it.
The reason is simple.
They failed, like all others. They couldn't sandbox it. They could have done a ghetto form of internal MCP where the AI can ONLY access emails. Or ONLY access pages in a browser when a user presses a button. And so on. But every time they tried, they never managed to sandbox it, and the agent would come out of the gates. Like everyone else did.
Including OpenClaw.
But Apple has a reputation. OpenClaw is an hyped up shitposter. OpenClaw will trailblaze and make the cool thing until it stops causing horrible failures. They will have the molts escape the buckets and ruin the computer of the tech savvy early adopters, until that fateful day when the bucket is sealed.
Then Apple will steal that bucket.
They always do.
I'm not a 40 year old whippersnapper anymore. My options were never those two.
chaosprint | 14 hours ago
Personal opinion.
matt3210 | 13 hours ago
No sane person would let an AI agent file their taxes
wooger | 13 hours ago
architsingh15 | 13 hours ago
wtcactus | 12 hours ago
Why are people needing the Mac Minis? Isn’t OpenClaw supposed to run locally in your laptop?
And if it actually should run as a service, why a MacMini and not some docker on the local NAS for instance?
47282847 | 11 hours ago
wtcactus | 10 hours ago
baby | 11 hours ago
amelius | 11 hours ago
mold_aid | 11 hours ago
andix | 11 hours ago
Even with the most advanced LLMs and even sandboxing there is always the risk of prompt injections and data extraction.
Even if the AI can't directly upload data to the internet, or delete local data, there are always some ways to leak data. For example by crafting an email with the relevant text in white or invisible somewhere. The user clicks "ok send" from what they see, but still some data is leaked.
Apple intelligence is based on a local model on the device, which is much more susceptible for prompt injections.
bertili | 11 hours ago
andix | 11 hours ago
As long as the agent creates more than just text, it can leak data. If it can access the internet in any manner, it can leak data.
The models are extremely creative and good at figuring out stuff, even circumventing safety measures that are not fully air tight. Most of the time they catch the deception, but in some very well crafted exploits they don't.
meindnoch | 10 hours ago
chadash | 9 hours ago
Being Apple is just a structural disadvantage. Everyone knows that open claw is not secure, and it’s not like I blame the solo developer. He is just trying to get a new tool to market. But imagine that this got deployed by Apple and now all of your friends, parents and grandparents have it and implicitly trust it because Apple released it. Having it occasionally drain some bank accounts isn’t going to cut it.
This is not to say Apple isn’t behind. But OpenClaw is doing stuff that even the AI labs aren’t comfortable touching yet.
queenkjuul | 8 hours ago
brutus1979 | 8 hours ago
apetresc | 8 hours ago
ghc | 8 hours ago
jordiburgos | 8 hours ago
So all the current users or OpenClaw are just beta-testers.
yoda97 | 7 hours ago
a_ba | 7 hours ago
The bottleneck for emails and my calendar is not the speed at which I can type/click some buttons, but rather figuring out what I want to write or clarifying priorities when managing my calendar.
Panda4 | 5 hours ago
So far the only purpose I have seen for this is people selling the hype; people posting videos/courses on how to use it.
I have downloaded and tried it and I can't figure out why would I need it.
eole666 | 7 hours ago
Havoc | 6 hours ago
amelius | 6 hours ago
randusername | 6 hours ago
15 years ago or so almost everything you wanted to do on a Mac GUI already _was_ scriptable.
Shortcuts is better than nothing, but unsatisfying.
I read this less as a fumble and more as a frustrating sign of the times. Automation is not powerful because powerful automation is a maintenance and malfeasance liability only valued by a tiny minority.
threetonesun | 5 hours ago
nphardon | 4 hours ago
soorya3 | 3 hours ago
If they optimize their entire hardware line (iPhone, Watch, Mac Mini, Macbook) AI enhanced with local/remote LLM model, they will win big. Imagine someone running a business can manage their entire business with iPhone/Mac/iCloud without buying any other saas services (inventory, payments, customer service).
esseph | 2 hours ago
https://www.reddit.com/r/cybersecurity/s/HN0aaOLBzT
fufubarzz | an hour ago
Title is tech aspirational annd economic foolish: makes no sense whatsoever.
Who benefits from openclaw? Apple that’s who!
Who care that they “invented it” it free open software that drives hw sales.
We’re done here.
TruePath | an hour ago
That's a really tough problem. I'm not even sure yet google can pull it off.