Ads in AI should be banned right now. We need to learn from mistakes of the internet (crypto, facebook) and aggressively regulate early and often before this gets too institutionalized to remove.
I trust corporations far, far less than government or lawmakers (whom I also don’t trust). I know corporations will use ads in the most manipulative and destructive manner. Laws may be flawed but are worth the risk.
I'm not sure how anyone could reasonably argue that Alaska would be orders of magnitude better off if they reversed the implementation of their billboard-banning ballot measure and put up billboards everywhere.
Boomers in government would be clueless on how to properly regulate and create correct incentives. Hell, that is still a bold ask for tech and economist geniuses with the best of intentions.
Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don’t understand how to regulate a technology they don’t understand, but fucking by golly you’re going to be left behind if you don’t use it?
It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a ponzi-like fashion and regulate competition.
Why do you think Trump has some no-name crypto firm, or why Thiel has Vance as his whipping boy, and Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
True, we focused on hardware embodied AI assistants (smart speakers, smart glasses, etc) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than OpenClaw, which you intentionally interact with.
This. Kids already have tons of those gadgets on. Previously, I only really had to worry about a cell phone so even if someone was visiting, it was a simple case of plop all electronics here, but now with glasses I am not even sure how to reasonably approach this short of not allowing it period. Eh, brave new world.
Both are Pandora's boxes. Open Claw has access to your credit cards, social media accounts, etc. by default (i.e. if you have them saved in your browser on the account that Open Claw runs on, which most people do.)
With cloud based inference we agree, this being just one more benefit of doing everything with "edge" inference (on device inside the home) as we do with Juno.
Pretty sure a) it's not a matter of whether you agree and b) GDPR still considers always-on listening to be something the affected user has to actively consent to. Since someone in a household may not realize that another person's device is "always on" and may even lack the ability to consent - such as a child - you are probably going to find that it is patently illegal according to both the letter and the spirit of the law.
Is your argument that these affected parties are not users and that the GDPR does not require their consent?
Don't take this as hostility. I am 100% for local inference. But that is the way I understand the law, and I do think it benefits us to hold companies to a high standard. Because even such a device could theoretically be used against a person, or could have other unintended consequences.
This isn't a technology issue. Regulation is the only sane way to address the issue.
For once, we (as the technologists) have a free translator to layman's speak via the frontier LLMs, which can be an opportunity to educate the masses as to the exact world on the horizon.
> This isn't a technology issue. Regulation is the only sane way to address the issue.
It is actually both a technology and regulation/law issue.
What can be solved with the former should be. What is left, solved with the latter. With the best cases where both consistently/redundantly uphold our rights.
I want legal privacy protections, consistent with privacy preserving technology. Inconsistencies create technical and legal openings for nefarious or irresponsible powers.
Who would buy OpenAI's spy device? I think a lot of public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.
> ...exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.
I have little hope that is true. Don't expect privacy laws and boycott campaigns. That very same elite control the law via bribes to US politicians (and indirectly the laws of other counties via those politicians threats, see the ongoing watering down of EU laws). They also directly control public discourse via ownership of the media and mainstream communication platforms. What backlash can they really suffer?
This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there’s still a serious privacy issue: unwitting visitors having everything they say recorded.
Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.
Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help create a speech-to-text (STT) model that incorporates speaker identification and that tag will be weaved into the core of the memory architecture.
> The shared household memory pool creates privacy situations we’re still working through. The current design has everyone in the family sharing the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping because a kitchen device hears everyone equally. But “deliberately chose” doesn’t mean “solved.” We’re hoping our in-house STT will allow us to do per-person memory tagging and then we can experiment with scoping memories to certain people or groups of people in the household.
Yes! We see a lot of the same things that really should have been solved by the first wave of assistants. Your _Around The House_ reads similar to a lot of our goals though we would love the system to be much more pro-active than current assistants.
Feel free to reach out. Would love to swap notes and send you a prototype.
> I hope the memory crisis isn't hurting you too badly.
Oh man, we've had to really track our bill of materials (BOM) and average selling price (ASP) estimates to make sure everything stays feasible. Thankfully these models quantize well and the size-to-intelligence frontier is moving out all the time.
I wonder if the answer is that it's stored and processed in a way that a human can't access or read: somehow encrypted and unreadable, but tokenized so it can still be processed. I don't know how, but it feels possible.
It wouldn't matter if you did all that, because you could still ask the AI, "what would my friend Bob think about this?" And the AI, which heard Bob talking on his phone when he thought he was alone in the other room, could tell you.
Right but that’s where the controls could be, it would just pretend to not know about Bob due to consent controls etc, but of course this would limit the usefulness.
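The consent-control idea above (the assistant "pretending not to know about Bob") can be sketched as a filter over memory retrieval. This is a minimal illustration, not anything the product described here actually implements; the `Memory`/`ConsentFilter` names and the speaker-ID tagging are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    speakers: frozenset  # speaker IDs heard in the source audio

@dataclass
class ConsentFilter:
    consented: frozenset  # speaker IDs who have opted in

    def visible(self, memories):
        # A memory is retrievable only if every speaker in it consented;
        # otherwise the assistant must behave as if it never heard it.
        return [m for m in memories if m.speakers <= self.consented]

mems = [
    Memory("Bob thinks the plan is risky", frozenset({"bob"})),
    Memory("We are out of milk", frozenset({"alice"})),
]
f = ConsentFilter(consented=frozenset({"alice"}))
assert [m.text for m in f.visible(mems)] == ["We are out of milk"]
```

As the comment notes, this necessarily limits usefulness: anything involving a non-consenting speaker simply becomes unanswerable.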
This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always on, always listening AI device that inserts itself into your and your family’s private lives.
“Oh but they only run on local hardware…”
Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.
Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Have all your guests consented to this?
What happens when someone breaks in and steals the box?
What if the government wants to take a look at the data in there and serves a warrant?
What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad as the potential for what technology can do to assist us grows that this actually may be the limit on how much we can fully take advantage of it.
I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.
Agreed. While we've tried to think through this and build in protections, we can't pretend that there is a magical perfect solution. We do have strong conviction that doing this inside the walls of your home is much safer than doing it within any company's datacenter (I accept that some just don't want this to exist period and we won't be able to appease them).
Some of our decisions in this direction:
- Minimize how long we have "raw data" in memory
- Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
- Encrypt storage with hardware protected keys (we're building on top of the Nvidia Jetson SOM)
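The first decision (bounding how long raw data lives in memory) amounts to a fixed-size ring buffer that can never grow into a durable recording. A rough sketch, with all sizes and names hypothetical rather than taken from the actual implementation:

```python
import collections

class BoundedAudioBuffer:
    """Keep at most `max_ms` of raw audio frames in RAM; older frames
    are overwritten in place and are never written to disk (sketch)."""
    def __init__(self, max_ms=80, frame_ms=10):
        self.frames = collections.deque(maxlen=max_ms // frame_ms)

    def push(self, frame):
        self.frames.append(frame)  # oldest frame is silently dropped when full

buf = BoundedAudioBuffer(max_ms=80, frame_ms=10)
for i in range(20):              # feed 200 ms worth of frames
    buf.push(f"frame{i}")
assert len(buf.frames) == 8      # only the last 80 ms remain
assert buf.frames[0] == "frame12"
```

The point of the design is that the retention bound is structural (the buffer physically cannot hold more), not a policy that could be quietly changed server-side.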
We're always open to criticism on how to improve our implementation around this.
> Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it,
In the US it is not legal to compel you to turn over a password; that's a violation of your Fifth Amendment rights. In the UK you can be jailed until you turn over the password.
FYI -- Because of this, Apple made a feature where if you click the power button 5 times, your phone goes into "needs the passcode to unlock" mode.
Whenever I'm approaching a border crossing (e.g. in an airport), I'm sure to discreetly click power 5 times. You also get haptic feedback on the 5th click so you can be sure it worked even from within your pocket.
At Amazon, their travel trainings always recommended giving out your laptop password if asked by law enforcement or immigration, regardless of whether it was legal in the jurisdiction. Then you were to report the incident as soon as possible afterwards, and you'd have to change your password and possibly get your laptop replaced.
That kind of policy makes sense for the employee's safety, but it definitely had me thinking how they might approach other tradeoffs. What if the Department of Justice wants you to hand over some customer data that you can legally refuse, but you are simultaneously negotiating a multi-billion dollar cloud hosting deal with the same Department of Justice? What tradeoff does the company make? Totally hypothetical situation, of course.
There are many jurisdictions in the US (not all!) where you can't be compelled to turn over a password in a criminal case that's specifically against yourself. But that's a narrow exception to the general principle that a court can order you to give them whatever information they'd like.
It's a federal constitutional protection to not be compelled to turn over your password. If you think a jurisdiction can compel you, I would like references.
However, back when the constitution was amended the 5th amendment also applied to your own papers. (How is using something you wrote down not self-incrimination!?).
It only matters until, one year in the future, it becomes legal — at which point all that stored back data immediately becomes fair game.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80ms of actual audio is in memory and about 5 minutes of transcribed audio (text) is in memory (this is to help the STT model know the context of the audio for higher transcription accuracy).
Of these 5 minute transcripts, those that don't become memories are forgotten. So only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users to help them build confidence in the transcription accuracy) but we'll continue to iterate based on feedback if this is the correct decision.
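The pipeline described above (a short transcript context window, with only extracted memories surviving) can be sketched roughly as follows. The toy keyword extractor stands in for the real extraction model, and every name here is hypothetical:

```python
import collections

class RollingTranscript:
    """Keep ~5 minutes of transcribed text for STT context; run memory
    extraction on each segment; forget everything that isn't a memory."""
    WINDOW_S = 300

    def __init__(self, extract):
        self.segments = collections.deque()  # (timestamp, text)
        self.extract = extract               # text -> memory or None
        self.memories = []                   # the only durable store

    def add(self, text, now):
        self.segments.append((now, text))
        # Drop transcript segments older than the context window.
        while self.segments and now - self.segments[0][0] > self.WINDOW_S:
            self.segments.popleft()
        m = self.extract(text)
        if m is not None:
            self.memories.append(m)  # only extracted memories persist

def toy_extract(text):
    # Stand-in for the real model: only shopping-related lines persist.
    return text if "milk" in text else None

t = RollingTranscript(toy_extract)
t.add("we are out of milk", now=0)
t.add("nice weather today", now=400)  # first segment now aged out
assert t.memories == ["we are out of milk"]
assert len(t.segments) == 1
```

Storing the source transcript alongside each memory (as the prototype does) would just mean `extract` returning `(text, memory)` pairs; the privacy-relevant property is that everything else ages out of the window.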
It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this period).
The non privacy-conscious will just use Google/etc.
A good example of this is what one of my family members' partner said: "Isn't it creepy that you just talked about something and now you are seeing ads for it? Guess we just have to accept it."
My response was no I don't get any of that because I disable that technology since it is always listening and can never be trusted. There is no privacy in those services.
I used to be considered a weirdo and creep because I would answer the question of why don't I have WhatsApp with the answer "I do not accept their terms of service". Now people accept this answer.
I don't know what changed, but the general public is starting to figure out that they actually can disagree with large tech companies.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Typically not how these things work. Speech is processed using ASR (automatic speech recognition), and then run through a prompt that checks for appropriate tool calls.
I've been meaning to basically make this myself but I've been too lazy lately to bother.
I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.
Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.
Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I understand for people who aren't neurodiverse that the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps people remember important things can be dramatically life changing.
> Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.
> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
> I understand for people who aren't neurodiverse that the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps people remember important things can be dramatically life changing.
Those don't sound like things that you need AI for.
> > Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.
This would be its death sentence. Nuked from orbit:
sudo rm -rfv /
Or maybe if there's any slower, more painful way to kill an AI then I'll do that instead. I can only promise the most horrible demise I can possibly conjure is that clanker's certain end.
> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I push a button on the phone and then say them. I've been doing this for over twenty years. The problem is ever getting back to those voice notes.
I agree. I also don't really have an ambient assistant problem. My phone is always nearby and Siri picks up wake words well (or I just hold the powerbutton).
My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.
Some of the more magical moments we’ve had with Juno are automatic shopping list creation (saying “oh no, we are out of milk and eggs” out loud, without having to remember to tell Siri, becomes a shopping list) and event tracking around the kids (“Don’t forget next Thursday is early pickup”). A nice freebie is moving the wake word to the end: “What’s the weather today, Juno?” is much more natural than a prefixed wake word.
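An end-positioned wake word implies the device buffers a whole utterance and then decides, retroactively, whether to act on it. A minimal sketch of that control flow, assuming word-level transcripts (the `WAKE` constant and function names are illustrative, not Juno's actual API):

```python
WAKE = "juno"

def handle_utterance(text):
    """Buffer the complete utterance, then act on it only if the wake
    word appears anywhere in it; otherwise the utterance is discarded."""
    words = [w.strip("?,.!").lower() for w in text.split()]
    if WAKE not in words:
        return None  # no wake word: nothing is acted on
    # Strip the wake word; the remainder is treated as the query.
    return " ".join(w for w in words if w != WAKE)

assert handle_utterance("What's the weather today, Juno?") == "what's the weather today"
assert handle_utterance("We are out of milk") is None
```

The privacy trade-off is visible in the sketch: unlike a prefixed wake word, the full utterance must already be transcribed before the device knows whether it was addressed.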
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Is this somehow fundamentally different from having memories?
Because I thought about it, and decided that personally I do - with one important condition, though. I do because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?
But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate myself to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.
If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.
I understand the rationale, but don’t you see how this idea contradicts autonomy of decisions for able-minded people? Such good intentions tend to be a pavement on roads to bad places.
I’d rather suggest to inform about all the potential benefits and drawbacks, but leave decisions with the individual.
Especially given that it’s not something irreversibly permanent.
>That's why we have technology in the first place, to improve our lives, right?
No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state approved ones.
Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.
If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.
If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable to get to the market in the world we live in), I urge you to exercise your imagination. Surely it's possible to think of an optimistic scenario?
Even more so, if you think technology is here to unconditionally screw us up no matter what. Honestly - when the world is so gloomy, seek something nice, even if a fantasy.
Not only is it improbable, it's a complete fantasy. It's not going to happen. And personally, I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme, and that's not even factoring in the inevitable Facebook-esque monetisation.
>when the world is so gloomy, seek something nice, even if a fantasy
I don't need fantasy to do that. My something nice is being in nature. Walking in the forest. Looking at and listening to the ocean by a small campfire. An absence of stimulation. Letting your mind wander. In peace, away from technology. Which is a long winded way to say "touch grass", but - and I say this sincerely without any snark - try actually doing it. You realise the alleged gloom isn't even that bad. It's healing.
> I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme
Could that be because you're putting some extra substance in what you call an "AI"? Giving it some properties that it doesn't necessarily have?
Because when I'm thinking about "AI" all I'm giving to it is "a machine doing math at scale that allows us to have meaningful relation with human concepts as expressed in a natural language". I don't put anything extra in it, which allows me to say "AI can do good things while avoiding bad things". Surely, a machine can be made to crunch numbers and put words together in a way that helps me rather than harms me.
Oh, and if anything - I don't want "AI" to save me thinking. It cannot do that for me anyway, in principle. I want it to help me do things machines finally start to do acceptably well: remember and relate things together. This said, yeah, I guess I was generous with just a single requirement - now I can see that a personal "AI" also needs its classifications (interpretations) to match the individual user's expectations as closely as possible at all times.
> It's not going to happen.
I can wholeheartedly agree as far as "it is extremely unlikely to happen", but... to say "it is not going to happen", after the last five years of "that wasn't on my bingo list"? How can you be so sure? How do we know there won't be some more weird twists of history? Call me naive but I rather want to imagine something nice would happen for a change. And it's not beyond fathomable that something crashes and the resulting waves bring us towards a somewhat better world.
Touching grass is important, and it helps a lot, but as soon as you're back - nothing has gone anywhere in the meanwhile. The society with all the mess doesn't disappear while we stop looking. So seeking an optimistic possibility is also important, even if it may seem utterly unrealistic. I guess one just has to have something to believe in?
I can imagine a lot of ways we could be using the new tech advancements of the last decade or two in really great ways, but unfortunately I've seen things go in very bad directions almost every time, and I do not have faith that this trend will stop in the future.
I really hope that before I get old and frail, I will get my smart robotic house, with a (local!) AI assistant always listening to my wishes and then executing them.
I'd rather have that than the horror of being old and forgotten in half-hearted care, like most old people are right now. AI and robots can bring empowerment. And it is up to us whether we let ad companies serve them to us from the cloud, or run local models in the basement.
When I look at Google, I see a company that is fully funded by ads, but provides me a number of highly useful services that haven't really degraded over 20 years. Yes, the number of search results that are ads grew over the years, but by and large, Google search and Gmail are tools that serve rather benevolently. And if you're about to disagree with this ask yourself if you're using Gmail, and why?
Then I look at Meta or X, and I see a cesspool of content that's driven families apart and created massive societal divides.
It makes me think that Ads aren't the root of the problem, though maybe a "necessary but not sufficient" component.
Google is almost cartoonishly evil these days. I think that's pretty much an established fact at this point.
I'm not using Gmail, and I don't understand why anyone would voluntarily. It was the worst email client I'd ever used, until I had to use Outlook at my new job.
The only Google products I use are YouTube, because that's where the content is. And Android, because IOS is garbage and Apple is only marginally less evil than Google.
Memories are usually private. People can make them public via a blog.
AI feels more like an organized sniffing tool here.
> If a product passes those criteria, then it - by design - cannot violate others' privacy
A product can most assuredly violate privacy. Just look how Facebook gathered offline data to interconnect people to reallife data points, without their consent - and without them knowing. That's why I call it Spybook.
Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.
> AI feels more like an organized sniffing tool here.
"AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.
But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Maybe I missed it but I didn't see anything there that said it saved conversations. It sounds like it processes them as they happen and then takes actions that it thinks will help you achieve whatever goals of your it can infer from the conversation.
I’m 99% sure this article is AI generated. Regardless, people will gravitate to the tool that solves their problems. If their problem is finding a local plumber or a restaurant they like, advertising will be involved.
It's interesting to me that there seems to be an implicit line being drawn around what's acceptable and what's not between video and audio.
If there's a camera in an AI device (like Meta Ray Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper resistant.
But audio - this seems to be on the other side of the line. Passively listening ambient audio is being treated as something that doesn't need active consent, flashing lights or other privacy preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.
I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere will be presumed recorded? Is that OK?
> Passively listening ambient audio is being treated as something that doesn't need active consent
That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.
Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.
AI "recording" software has never been tested in court, so no one can say what the legality is. If we are having a conversation (in a two party consent state) and a secret AI in my pocket generates a text transcript of it in real time without storing the audio, is that illegal? What about if it just generates a summary? What about if it is just a list of TODOs that came out of the conversation?
Speech-to-text has gone through courts before. It's not a new technology. You're out of luck on sneaking the use of speech-to-text in 2-party consent states.
I'm not aware of many bluetooth headphones that blink an obvious light just because they are recording. You can get a pair of sunglasses with a microphone and record with it and it does nothing to alert anybody.
Whether it's actually legal or not, as you say, varies - but it's clear where device manufacturers think the line lies in terms of what tech they implement.
I think local inference is great for many things - but this stance seems to conflate that you can’t have privacy with server side inference, and you can’t have nefariousness with client side inference. A device that does 100% client side inference can still phone home unless it’s disconnected from internet. Most people will want internet-connected agents right? And server side inference can be private if engineered correctly (strong zero retention guarantees, maybe even homomorphic encryption)
I agree with the core premise that the big AI companies are fundamentally driven towards advertising revenue and other antagonistic but profit-generating functionality.
Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.
I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's The Truth of Fact short story[2].
My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.
Just as in Chiang's story spaces were 'invaded' by writing, AI will fill the world and those opting out will occupy the same marginal positions as those occupied by dumb phone users and people without home cameras or televisions.
It's always fascinating that the HN crowd seems to be blind to Apple's very obvious transgressions.
Even the article makes the mistake. They paint every company with a broad brush ("all AI companies are ad companies") but for Apple they are more sympathetic "We can quibble about Apple".
Apple's reality distortion field is so strong. People still think they are not in the ad business. People still think they stand up to governments, and folks choose to ignore hard evidence (Apple operates in China at the CCP's pleasure; Apple presents a gold plaque to President Trump to curry favor and removes the ICEBlock app). There's no pushback, there's no spine.
Every company is disgusting. Apple is hypocritical and disgusting.
This was the inevitable endpoint of the current AI unit economics. When inference costs are this high and open-source models are compressing SaaS margins to zero, companies can't survive on standard subscription models. They have to subsidize the compute by monetizing the user's context window. The real liability isn't just ads; it's what happens when autonomous agents start making financial decisions influenced by sponsored retrieval data.
> There needs to be a business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.
Genuine Q: Is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?
I guess it goes to show that the real value is in the broader market, to a certain extent; if they can't just sell people the power, they end up just earning a commission for helping someone else sell a product.
How long was web search objective, nice, and helpful? Maybe 10 years? Things are happening faster now, so we get at most 5 years total of AI prompts pretending that they want to help.
I mean why is it so difficult for such companies to understand the core thing: irrespective of whether the data related to our daily lives gets processed on their servers or ours, we DON'T want it stored beyond a few minutes at max.
Even if these folks were giving this device away 100% free, I still would not keep it inside my house.
Because storing, analyzing, and selling access to your data is massively profitable and they don’t care what the (not even vocal) privacy focused minority wants.
I really dislike the preorder page. The fact that it's a deposit is in a different color that fades into the background, and it refers to it as a "price" multiple times. I don't know if it was intentionally deceptive, but it made me dislike this company.
Just when you've asked if there are eggs, the doorbell rings. The neighbor stands there in disbelief: it told him to bring you eggs? "Give him the half bottle of vodka, it's going to expire soon and his son will make a surprise visit tonight." An argument arises and it participates by encouraging both parties with extra talking points.
But this was only the beginning: after gathering a few TB worth of micro-expressions, it starts to complete sentences so successfully that the conversation gradually dies out.
After a few days of silence... Narrator mode activated....
> The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything
Big Brother is watching you. Who knew it would be AI ...
The author is quite right. It will be an advertisement scam. I wonder whether people will accept that, though. Anyone remember uBlock Origin? Google killed it on Chrome. People are not going to forget that. (It still works fine on Firefox, but Google bribed Firefox into submission; all that Google ad money made Firefox weak.)
Recently I had to use Google search again. I was baffled at how useless it has become, not just in the raw results but in the whole UI. The first few entries are links to useless YouTube videos (YouTube is also owned by Google). I don't have time to watch a video; I want the text info so I can extract it quickly. The AI "summaries" are also useless. Google is just trying to waste my time compared to the "good old days". After those initial YouTube videos, I get about 6 results, three of which are articles some companies wrote so that people visit their boring websites. Then I get "other people searched for candy" and other useless links. I never understood why I would care what OTHER people search for when I want to search for something. Is this now group-search? Group-think, 1984? And then after that, I get some more YouTube videos.
Google is clearly building a watered-down private variant of the web. Same problem with AMP pages. Google is annoying us and has become a huge problem. (I am writing this on Thorium right now, which is also Chrome-based; Firefox does not allow me to play videos with audio since I don't have or use PulseAudio, whereas the Chrome-based browser does not care and my audio works fine. That shows you the level of incompetence at Mozilla. They don't WANT to compete against Google anymore, and haven't wanted to for decades. Ladybird unfortunately also is not going to change anything; after I criticized one of their decisions, they banned me. Well, that's a great way to build up an alternative: dealing with criticism via censorship, all before even leaving alpha or beta. Now imagine the amount of censorship you would get once millions of people were to use it... Something is fundamentally wrong with the whole modern web, and corporations have a lot to do with this; to a lesser extent people too, though of course not all of them.)
It would be really great if Google had a setting that allowed you to exclude certain domains from all searches by default. Like you said, a YouTube video (or a Facebook page, or an Instagram or Twitter post) is basically never what I am looking for.
First it's ads, then it's political agenda. We've seen this inconspicuous transition happen with social media and it will happen even more inconspicuously with LLMs.
Maybe I'm just getting old, but I don't understand the appeal of the always-on AI assistant at all. Even leaving privacy/security issues aside, and even if it gets super smart and capable, it feels like it would have a distancing effect from my own life, and undermine my own agency in shaping it.
I'm not against AI in general, and some assistant-like functionality that works on demand to search my digital footprint and handle necessary but annoying administrative tasks seems useful. But it feels like at some point it becomes a solution looking for a problem, and to squeeze out the last ounce of context-aware automation and efficiency you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, except it's not a human, it's a computer; and instead of maximizing the leverage of your attention as a captain of industry, it's just maintaining velocity on a personal rat race of your own making, with no especially wide impact, even on your own psyche.
It's the rat race. I gotta get my cheese, and fuck you, because you getting cheese means I go hungry. The kindergarten lesson on sharing got replaced by a lesson on intellectual property. Copyright, trademark, patents, and you.
Or we could opt out, and help everyone get ahead, on the rising tide lifts all boats theory, but from what I've seen, the trickle of trickle down economics is urine.
No matter how useful AI is and will become (I use AI daily; it is an amazing technology), so much of the discourse is indeed a solution looking for a problem. I have colleagues suggesting for absolutely everything, "can we put an MCP in it?", and they don't even know what the point of MCP is!
Totally agree. Sounds like some envision a kind of Downton Abbey without the humans as service personnel. A footman or maid in every room or corner to handle your requests at any given moment.
May I refer you to WALL-E. The contention between hard vs convenient in our daily lives always seems to slowly edge towards convenient. If not in this generation, the next gen will be more willing to offload more.
I think it has very little to do with the assistant factor and more to do with the loneliness factor (at least in the West, people are getting lonelier, not less lonely). In other words: sell it to them as a friendly companion/assistant, playing on emotions, while creating a sea of surveillance drones you can license back to the powers that be at a premium.
Not if you use open source. Not if you pay for services that contractually will not mine your data. Not if you support startups that commit to privacy and to banning ads.
I said on another thread recently that we need to kill Android, that we need a new Mobile Linux that gives us total control over what our devices do, our software does. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps, yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.
We have mobile Linux. It's only supported on less than a dozen handsets and runs like shit, but we have it already.
The reason nobody uses mobile Linux is that it has to compete with AOSP-derived OSes like LineageOS and GrapheneOS, which don't suck or run like shit. This is what it looks like when people vote with their dollars, people want the status-quo we have (despite the horrible economic damages).
The concern is real, but the local solution is not ready. The author does not seem to think about this from the perspective of an "average consumer". I have been running my personal AI assistant on a consumer-grade computer for almost a year now. It can do only one in a thousand of the tasks that cloud models can do, and even then at a much slower pace. A local AI assistant on consumer-grade hardware is at least a few years away, and "always-on" is much further than that, IMO.
Well, the consumers will decide. Some people will find it very useful, but others will not necessarily like this... Considering how many times I've heard people yelling "OK GOOGLE" for "the gate" to open, I'm not sure a continuous flow of heavily contextualized human conversation will be easier to decipher.
I know, guys, AI is magic and will solve everything, but I wouldn't be surprised if it ordered me eggs and butter when I mentioned out loud that I was out of them but was actually happy about it, because I was just about to go on vacation. My surprise when I'm back: melted butter and rotten eggs at my door...
The product that's being implicitly advertised here is supposed to ship at the end of this year, and there doesn't even appear to be a real photo of the thing. If that's an indicator of the quality of the product, then I must assume it is poor, and the people responsible also apparently do not have the money to hire a capable web designer. I'm sorry if this is harsh or unnecessary, but I never thought I would miss the generic Bootstrap or Tailwind or whatever bougie framework other companies use, because boy, the layout here does not elicit great expectations for their product either. And I'm worried that if it ever does ship, nefarious parties will intercept all the private communications of its unfortunate owners, and in an ironic sort of way their devices will become the first sort of reverse ad agent: one that does not transmit advertisements but receives them, in the form of the raw interests of their clients, fed to said nefarious parties and then laundered through more traditional channels.
> is supposed to ship at the end of this year and there doesn’t even appear to be a real photo
Given they're "still finalizing the design and materials" and are not based in China, I think it's a safe bet that the first run will either be delayed or be an alpha.
The first version will use small-batch production techniques like 3D printing and small-volume PCB manufacturing. For the photos, we thought it more appropriate to show a sketch rather than a pretty AI-generated photo that isn't true to anything yet but presents well.
Well. Color me a bit convinced. I took a little time to compare where you're at now to where Ring began with Doorbot. It's not improbable that this can take off.
I’m not a product guy. Or a tech guy for that matter. Do you have any preparations in mind for Apple’s progress with AI (viz. their partnership with Google)? I don’t even know if the actual implementation would satisfy your vision with regard to everything staying local though.
Starting with an iPad for prototyping made me wonder why this didn’t begin as just an app. Or why not just ship the speaker + the app as a product.
You don’t have sketches? Like ballpoint pen on dot grid paper? This is me trying to nudge you away from the impression I get that the website is largely AI-scented.
After making my initial remarks (a purposely absurd one that I was actually surprised got upvoted at all), I checked your resume and felt a disconnect between your qualifications and the legitimate doubt I described in my comment.
To be honest my impression was mostly led by the contents of the website itself, speculation about the quality/reliability of the actual product followed.
I don’t want to criticize you and your decisions in that direction but if this ambition is legitimate it deserves better presentation.
Do you have any human beings involved in communicating your vision?
One point I see less discussed, not related to the post, is this: "We never trained people to pay for software. If there existed a proper global payment mechanism for software companies, the whole trajectory would look different. People are OK paying $5 for a coffee but not for software which makes their lives easier."
We're getting closer to a world where every company is an ad company, period. It seems like there are more and more ads touting a dwindling number of actual products.
> They’re building a pocket-sized, screenless device with built-in cameras and microphones — “contextually aware,” designed to replace your phone.
"Contextually aware" means "complete surveillance".
Too many people speak of ads, not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.
Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's useful and/or makes them feel safe and happy.
> not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner
Everyone online is constantly talking about it. The truth is for most people it's fine.
Some folks are upset by it. But we by and large tend to just solve the problem at the smallest possible scale and then mollify ourselves with whining. (I don't have social media. I don't have cameras in or around my home. I've worked on privacy legislation, but honestly nobody called their representatives, and so nothing much happened. I no longer really bring up privacy issues when I speak to my electeds because I haven't seen evidence that the nihilism has passed.)
> I'm not convinced that there is a point in attempting explaining it
That encapsulates my point.
I've worked on various pieces of legislation, all privately. A few made it into state and federal law. Broadly speaking, the ones that make it are the ones whose supporters you can't get to stop calling in.
Privacy issues are notoriously shit at getting people to call their electeds. The exception is when you can find traction outside tech, or when the target is directly a tech company.
Pretty much this. Nobody really actually cares. People will cite 1984 twenty million times, but since they're very disconnected from 3rd order effects of cross-company data brokerage, it doesn't really matter. I used to care about it before as well, but life became much easier once I took the "normie stand" on some of the issues.
Already here. Even without flexible but dodgy LLM automation, entities like marketing companies have had access to extreme amounts of user data for a long time.
Perhaps I'm not totally clear on how this particular device works, but it doesn't seem like it lacks the ability to connect to the Internet.
Honestly, I'd say privacy is just as much about economics as it is technical architecture. If you've taken outside funding from institutional venture capitalists, it's only a matter of time before you're asked to make even more money™, and you may issue a quiet, boring change to your terms and conditions that you hope no one will read... Suddenly, you're removing mentions of your company's old "Don't Be Evil" slogan.
The explicit ads angle is only half the story. Even without paid placements, these models already have implicit recommendations baked in.
We ran queries across ChatGPT, Claude, and Perplexity asking for product recommendations in ~30 B2B categories. The overlap between what each model recommends is surprisingly low -- around 40% agreement on the top 5 picks for any given category. And the correlation with Google search rankings? About 0.08.
So we already have a world where which CRM or analytics tool gets recommended depends on which model someone happens to ask, and nobody -- not the models, not the brands, not the users -- has any transparency into why. That's arguably more dangerous than explicit ads, because at least with ads you know you're being sold to.
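The two measurements described above (top-5 overlap between models, and rank correlation against search results) are straightforward to sketch. This is a toy illustration with entirely made-up product lists, not the commenter's actual data or code:

```python
# Hypothetical sketch: how much do two models' top-5 recommendation lists
# agree, and how does one model's ranking correlate with a search ranking?
# All product names below are illustrative placeholders.

def top_k_overlap(a, b, k=5):
    """Fraction of the top-k picks shared by two ranked lists."""
    return len(set(a[:k]) & set(b[:k])) / k

def spearman(rank_a, rank_b):
    """Spearman rank correlation over items ranked by both sources."""
    common = [x for x in rank_a if x in rank_b]
    n = len(common)
    if n < 2:
        return 0.0
    d2 = sum((rank_a.index(x) - rank_b.index(x)) ** 2 for x in common)
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

chatgpt = ["HubSpot", "Salesforce", "Pipedrive", "Zoho", "Freshsales"]
claude  = ["Salesforce", "HubSpot", "Close", "Copper", "Monday"]
google  = ["Salesforce", "Zoho", "HubSpot", "Pipedrive", "Freshsales"]

print(top_k_overlap(chatgpt, claude))  # agreement between the two models
print(spearman(chatgpt, google))       # correlation with the search ranking
```

Run per category and averaged across many categories, these two numbers are the kind of statistics the parent comment is reporting.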
Contextual irony aside, this is a big reason why the proposal of leveraging AI agents for workflow processing in lieu of using them to develop fixed software to perform the same functions has always struck me as weird, and of late come across as completely nonsensical.
If you're paying someone else to run the inference for these models, or even to build these models, then you're ultimately relying on their specific preferences for which tools, brands, products, companies, and integrations they prefer, not necessarily what you need or want. If and when they deprecate the model your agentic workflow is built on, you now have to rebuild and re-validate it on whatever the new model is. Even if you go out of your way to run things entirely locally with expensive inference kit and a full security harness to keep things in check, you could spend a lot less just having it vomit up some slopcode that one of your human specialists can validate and massage into perpetual functionality before walling it off on a VM or container somewhere for the next twenty years.
The more you're outsourcing workflows wholesale to these bots, the more you're making yourself vulnerable to the business objectives of whoever hosts and builds those bots. If you're just using it as a slop machine to get you the software you want and that IT can support indefinitely, then you're going to be much better off in the long run.
It's the siren song of the myopically lazy. It's laziness today in exchange for harder work tomorrow, with the wager that tomorrow's harder work will be even lazier thanks to advances in technology.
Whereas I'd self-describe as "strategically lazy". It's building iterable code and repeatable processes today, so I can be lazy far into the future. It's engineering solutions today that are easier to support with lazier efforts tomorrow, regardless of whether things improve or get worse.
Building processes around agents predicated on a specific model is myopically lazy, because you'll be rebuilding and debugging that entire setup next year when your chosen agent is deprecated or retired. Those of us building documented code with agents today, will have an easier time debugging it in the future because the hard work is already done.
Incidentally, we'll also have gainful employment tomorrow by un-fucking agent-based workflows that didn't translate into software when tokens were cheap and subsidized by VCs for market capture purposes.
Is Anthropic using ads? Is Mistral using ads? Is DeepSeek using ads?
Google, Meta, and Amazon, sure, of course.
It's interesting that the "every company" part is really only OpenAI... They're now part of the "bad guys spying on you to display ads." At least it's a viable business model; maybe they can recoup capex and yearly losses in a couple of decades instead of a couple of centuries.
irishcoffee | a day ago
This is like a shitty Disney movie.
popalchemist | 22 hours ago
Is your argument that these affected parties are not users and that the GDPR does not require their consent?
Don't take this as hostility. I am 100% for local inference. But that is the way I understand the law, and I do think it benefits us to hold companies to a high standard. Because even such a device could theoretically be used against a person, or could have other unintended consequences.
NickJLange | a day ago
For once, we (as the technologists) have a free translator into layman's speak via the frontier LLMs, which can be an opportunity to educate the masses about the exact world on the horizon.
Nevermark | 21 hours ago
It is actually both a technology and regulation/law issue.
What can be solved with the former should be. What is left, solved with the latter. With the best cases where both consistently/redundantly uphold our rights.
I want legal privacy protections, consistent with privacy preserving technology. Inconsistencies create technical and legal openings for nefarious or irresponsible powers.
knallfrosch | 13 hours ago
(The article is an AI ad.)
janice1999 | 22 hours ago
I have little hope that that is true. Don't expect privacy laws and boycott campaigns. That very same elite controls the law via bribes to US politicians (and indirectly the laws of other countries via those politicians' threats; see the ongoing watering down of EU laws). They also directly control public discourse via ownership of the media and mainstream communication platforms. What backlash can they really suffer?
notatoad | 19 hours ago
If there's a market for a face camera that sends everything you see to Meta, there's probably a market for whatever device OpenAI launches.
BoxFour | a day ago
Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.
Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
[OP] ajuhasz | 23 hours ago
This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help us create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.
> The shared household memory pool creates privacy situations we’re still working through. The current design has everyone in the family shares the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping because a kitchen device hears everyone equally. But “deliberately chose” doesn’t mean “solved.” We’re hoping our in-house STT will allow us to do per-person memory tagging and then we can experiment with scoping memories to certain people or groups of people in the household.
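A per-person scoping scheme like the one described above might look roughly like this. This is a hypothetical sketch only; the field names, speaker tags, and scoping rules are made up for illustration, not the vendor's actual design:

```python
# Hypothetical sketch of speaker-scoped household memories: each memory
# carries a speaker tag (from diarization) and a visibility scope, and
# reads are filtered by who is asking. All names/fields are illustrative.

MEMORIES = [
    {"text": "Dad's surprise party is Saturday",
     "speakers": {"mom"}, "scope": {"mom", "alice"}},      # hidden from dad
    {"text": "We are out of milk",
     "speakers": {"alice"}, "scope": {"mom", "dad", "alice"}},  # household-wide
]

def visible_memories(member: str):
    """Return only the memories whose scope includes the requesting member."""
    return [m for m in MEMORIES if member in m["scope"]]
```

The household-wide behavior the parent describes is simply the degenerate case where every memory's scope is the whole household.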
com2kid | 22 hours ago
I wrote a blog post about this exact product space a year ago. https://meanderingthoughts.hashnode.dev/lets-do-some-actual-...
I hope y'all succeed! The potential use cases for locally hosted AI dwarf what can be done with SaaS.
I hope the memory crisis isn't hurting you too badly.
[OP] ajuhasz | 22 hours ago
Feel free to reach out. Would love to swap notes and send you a prototype.
> I hope the memory crisis isn't hurting you too badly.
Oh man, we've had to really track our bill of materials (BOM) and average selling price (ASP) estimates to make sure everything stays feasible. Thankfully these models quantize well and the size-to-intelligence frontier is moving out all the time.
paxys | a day ago
“Oh but they only run on local hardware…”
Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.
Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Have all your guests consented to this?
What happens when someone breaks in and steals the box?
What if the government wants to take a look at the data in there and serves a warrant?
What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
zmmmmm | 23 hours ago
I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.
[OP] ajuhasz | 23 hours ago
Some of our decisions in this direction:
We're always open to criticism on how to improve our implementation around this.
bossyTeacher | 8 hours ago
I believe you should allow people to set how long the raw data is stored, as well as dead man's switches.
HWR_14 | 19 hours ago
In the US you cannot legally be compelled to turn over a password; it's a violation of your Fifth Amendment rights. In the UK you can be jailed until you turn over the password.
schrodinger | 4 hours ago
Whenever I'm approaching a border crossing (e.g. in an airport), I'm sure to discreetly click power 5 times. You also get haptic feedback on the 5th click so you can be sure it worked even from within your pocket.
eel | 7 hours ago
That kind of policy makes sense for the employee's safety, but it definitely had me thinking how they might approach other tradeoffs. What if the Department of Justice wants you to hand over some customer data that you can legally refuse, but you are simultaneously negotiating a multi-billion dollar cloud hosting deal with the same Department of Justice? What tradeoff does the company make? Totally hypothetical situation, of course.
lesuorac | 3 hours ago
However, back when the Constitution was amended, the Fifth Amendment also applied to your own papers. (How is using something you wrote down not self-incrimination!?)
And it only matters until, one year in the future, it is legal, because then all that back data becomes immediately allowed.
Sharlin | 5 hours ago
I'm being a bit flippant here, but thermite typically works fine.
[OP] ajuhasz | 23 hours ago
One of our core architecture decisions was to use a streaming speech-to-text model. At any given time, about 80 ms of actual audio is in memory, along with about 5 minutes of transcribed audio (text); the transcript is kept to help the STT model know the context of the audio, for higher transcription accuracy.
Of these 5-minute transcripts, those that don't become memories are forgotten, so only selected, extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users, to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
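The retention policy described above can be sketched in a few lines. This is a minimal illustration of the idea (a short rolling window, durable storage only for promoted memories), not the vendor's actual code; the class and field names are invented:

```python
import time
from collections import deque

# Minimal sketch of the retention policy described above: keep a short
# rolling window of transcribed text, and durably store only segments
# that are explicitly promoted to "memories". Everything else is dropped.

class RollingTranscript:
    def __init__(self, window_seconds=300):  # ~5 minutes of context
        self.window = window_seconds
        self.segments = deque()  # (timestamp, text) pairs, oldest first
        self.memories = []       # the only durable storage

    def add_segment(self, text, now=None):
        now = time.time() if now is None else now
        self.segments.append((now, text))
        # Drop anything older than the window; it is gone for good.
        while self.segments and now - self.segments[0][0] > self.window:
            self.segments.popleft()

    def context(self):
        """Recent text, e.g. fed back to the STT model for accuracy."""
        return " ".join(text for _, text in self.segments)

    def promote_to_memory(self, summary):
        # Per the comment above, the current transcript is stored alongside
        # the memory so users can check transcription accuracy.
        self.memories.append({"summary": summary,
                              "transcript": self.context()})
```

The key property is that `segments` is the only place raw transcript text lives, and it self-expires; nothing persists unless `promote_to_memory` is called.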
BoxFour | 23 hours ago
The non privacy-conscious will just use Google/etc.
yndoendo | 20 hours ago
My response was no, I don't get any of that, because I disable that technology: it is always listening and can never be trusted. There is no privacy in those services.
They did not like that response.
dotancohen | 17 hours ago
I don't know what changed, but the general public is starting to figure out that they actually can disagree with large tech companies.
com2kid | 22 hours ago
That's typically not how these things work. Speech is processed using ASR (automatic speech recognition) and then run through a prompt that checks for appropriate tool calls.
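Concretely, that pipeline might look like the sketch below. In a real system the routing step would be a model prompted with a tool list; this stand-in uses keyword rules, and the tool names and trigger phrases are invented for illustration:

```python
# Sketch of the ASR -> routing flow described above. A real deployment
# would replace the keyword rules with an LLM prompt that selects tools;
# the tool names and trigger phrases here are hypothetical.

TOOLS = {
    "add_to_shopping_list": ["we're out of", "add to the list", "need more"],
    "set_reminder": ["remind me", "don't let me forget"],
}

def route_utterance(transcript: str):
    """Return (tool_name, transcript) if a tool applies, else None."""
    lowered = transcript.lower()
    for tool, triggers in TOOLS.items():
        if any(t in lowered for t in triggers):
            return (tool, transcript)
    return None  # ordinary speech: no action taken, nothing retained
```

The point of the comment stands either way: the speech itself isn't "understood" continuously; it's transcribed and then checked against a fixed set of possible actions.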
I've been meaning to basically make this myself but I've been too lazy lately to bother.
I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.
Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later, then moving the browser window to a different tab.
Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I understand for people who aren't neurodiverse that the idea of just forgetting to do something that is incredibly critical to ones health and well-being isn't something that happens (often) but for plenty of other people a device that just helps people remember important things can be dramatically life changing.
ramenbytes | 21 hours ago
> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
> I understand for people who aren't neurodiverse that the idea of just forgetting to do something that is incredibly critical to ones health and well-being isn't something that happens (often) but for plenty of other people a device that just helps people remember important things can be dramatically life changing.
Those don't sound like things that you need AI for.
jcgrillo | 19 hours ago
This would be its death sentence. Nuked from orbit:
Or maybe, if there's a slower, more painful way to kill an AI, then I'll do that instead. I can only promise that the most horrible demise I can possibly conjure is that clanker's certain end.
SkyPuncher | 21 hours ago
My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.
drdaeman | 18 hours ago
Is this somehow fundamentally different from having memories?
Because I thought about it, and decided that personally I do - with one important condition, though. I do because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?
But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate myself to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.
If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.
drdaeman | 3 hours ago
I’d rather suggest to inform about all the potential benefits and drawbacks, but leave decisions with the individual.
Especially given that it’s not something irreversibly permanent.
encom | 17 hours ago
No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state approved ones.
Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.
If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.
drdaeman | 16 hours ago
If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable it would get to market in the world we live in), I urge you to exercise your imagination. Surely it's possible to think of an optimistic scenario?
Even more so if you think technology is here to unconditionally screw us over no matter what. Honestly, when the world is so gloomy, seek something nice, even if it's a fantasy.
encom | 14 hours ago
>when the world is so gloomy, seek something nice, even if a fantasy
I don't need fantasy to do that. My something nice is being in nature. Walking in the forest. Looking at and listening to the ocean by a small campfire. An absence of stimulation. Letting your mind wander. In peace, away from technology. Which is a long-winded way to say "touch grass", but - and I say this sincerely, without any snark - try actually doing it. You realise the alleged gloom isn't even that bad. It's healing.
drdaeman | 12 hours ago
Could that be because you're putting some extra substance in what you call an "AI"? Giving it some properties that it doesn't necessarily have?
Because when I'm thinking about "AI", all I'm putting into it is "a machine doing math at scale that lets us work meaningfully with human concepts as expressed in natural language". I don't put anything extra in it, which allows me to say "AI can do good things while avoiding bad things". Surely, a machine can be made to crunch numbers and put words together in a way that helps me rather than harms me.
Oh, and if anything - I don't want "AI" to save me from thinking. It cannot do that for me anyway, in principle. I want it to help me with the things machines are finally starting to do acceptably well: remembering and relating things together. That said, yeah, I guess I was generous with just a single requirement - now I can see that a personal "AI" also needs its classifications (interpretations) to match the individual user's expectations as closely as possible, at all times.
> It's not going to happen.
I can wholeheartedly agree as far as "it is extremely unlikely to happen", but... to say "it is not going to happen", after the last five years of "that wasn't on my bingo card"? How can you be so sure? How do we know there won't be some more weird twists of history? Call me naive, but I'd rather imagine something nice happening for a change. And it's not beyond fathomable that something crashes and the resulting waves bring us toward a somewhat better world.
Touching grass is important, and it helps a lot, but as soon as you're back, nothing has gone anywhere in the meantime. The society, with all its mess, doesn't disappear while we stop looking. So seeking an optimistic possibility is also important, even if it may seem utterly unrealistic. I guess one just has to have something to believe in?
duskdozer | 7 hours ago
lukan | 10 hours ago
For me the horror is rather being old and forgotten in half-hearted care, like most old people are right now. AI and robots can bring empowerment. And it is up to us whether we let ad companies serve them to us from the cloud, or run local models in the basement.
schrodinger | 4 hours ago
When I look at Google, I see a company that is fully funded by ads but provides me a number of highly useful services that haven't really degraded over 20 years. Yes, the number of search results that are ads grew over the years, but by and large, Google Search and Gmail are tools that serve rather benevolently. And if you're about to disagree with this, ask yourself whether you're using Gmail, and why.
Then I look at Meta or X, and I see a cesspool of content that's driven families apart and created massive societal divides.
It makes me think that Ads aren't the root of the problem, though maybe a "necessary but not sufficient" component.
encom | 12 minutes ago
I'm not using Gmail, and I don't understand why anyone would voluntarily. It was the worst email client I'd ever used, until I had to use Outlook at my new job.
The only Google products I use are YouTube, because that's where the content is, and Android, because iOS is garbage and Apple is only marginally less evil than Google.
shevy-java | 16 hours ago
AI feels more like an organized sniffing tool here.
> If a product passes those criteria, then it - by design - cannot violate others' privacy
A product can most assuredly violate privacy. Just look at how Facebook gathered offline data to connect people to real-life data points, without their consent - and without them knowing. That's why I call it Spybook.
Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.
drdaeman | 16 hours ago
"AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.
But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.
> A product can most assuredly violate privacy.
That depends on the design and implementation.
beepbooptheory | 5 hours ago
https://en.wikipedia.org/wiki/Funes_the_Memorious
https://www.mathfiction.net/files/Mathfiction%20-%20Borges%2...
tzs | 18 hours ago
Maybe I missed it, but I didn't see anything there that said it saved conversations. It sounds like it processes them as they happen and then takes actions that it thinks will help you achieve whatever goals of yours it can infer from the conversation.
peyton | 9 hours ago
zmmmmm | 23 hours ago
If there's a camera in an AI device (like Meta Ray Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper resistant.
But audio - this seems to be on the other side of the line. Passively listening to ambient audio is being treated as something that doesn't need active consent, flashing lights, or other privacy-preserving measures. And it's true, it is fundamentally different: I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.
I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere will be presumed recorded? Is that OK?
BoxFour | 23 hours ago
That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.
Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.
paxys | 23 hours ago
pclmulqdq | 21 hours ago
1over137 | 19 hours ago
zmmmmm | 14 hours ago
I'm not aware of many Bluetooth headphones that blink an obvious light just because they are recording. You can get a pair of sunglasses with a microphone and record with it, and it does nothing to alert anybody.
Whether it's actually legal or not, as you say, varies - but it's clear where device manufacturers think the line lies in terms of what tech they implement.
ripped_britches | 22 hours ago
thundergolfer | 22 hours ago
Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.
I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's The Truth of Fact short story[2].
My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.
Just as in Chiang's story spaces were 'invaded' by writing, AI will fill the world and those opting out will occupy the same marginal positions as those occupied by dumb phone users and people without home cameras or televisions.
Interesting times ahead.
1. https://en.wikipedia.org/wiki/The_Entire_History_of_You 2. https://en.wikipedia.org/wiki/The_Truth_of_Fact,_the_Truth_o...
Animats | 21 hours ago
Apple? [1]
[1] https://www.apple.com/apple-intelligence/
kibwen | 21 hours ago
bitpush | 13 hours ago
Even the article makes this mistake. It paints every company with a broad brush ("all AI companies are ad companies"), but for Apple it is more sympathetic: "We can quibble about Apple".
Apple's reality distortion field is so strong. People still think they are not in the ad business. People still think they stand up to governments, and folks choose to ignore hard evidence (Apple operates in China at the CCP's pleasure; Apple presents a gold plaque to President Trump to curry favor and removes ICEBlock apps...). There's no pushback, there's no spine.
Every company is disgusting. Apple is hypocritical and disgusting.
HenryOsborn | 21 hours ago
danny_codes | 16 hours ago
nfgrep | 19 hours ago
Genuine Q: Is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?
luxuryballs | 19 hours ago
lifestyleguru | 19 hours ago
freakynit | 18 hours ago
Even if these folks were giving away this device 100% free, I still wouldn't keep it inside my house.
soared | 18 hours ago
sciencesama | 18 hours ago
HWR_14 | 18 hours ago
econ | 18 hours ago
But this was only the beginning: after gathering a few TB worth of micro-expressions, it starts to complete sentences so successfully that the conversation gradually dies out.
After a few days of silence... Narrator mode activated....
fwipsy | 17 hours ago
walterbell | 15 hours ago
Apple bought those for $2B.. coming to Siri.
halper | 8 hours ago
shevy-java | 16 hours ago
Big Brother is watching you. Who knew it would be AI ...
The author is quite right. It will be an advertisement scam. I wonder whether people will accept that, though. Anyone remember uBlock Origin? Google killed it on Chrome. People are not going to forget that. (It still works fine on Firefox, but Google bribed Firefox into submission; all that Google ad money made Firefox weak.)
Recently I had to use Google search again. I was baffled at how useless it has become - not just the raw results but the whole UI. The first few entries are links to useless YouTube videos (also owned by Google). I don't have time to watch a video; I want the text info so I can extract it quickly. The AI "summaries" are also useless - Google is just wasting my time compared to the "good old days". After those initial YouTube videos, I get about 6 results, three of which are from companies writing articles so people visit their boring websites. Then I get "other people searched for candy" and other useless links. I never understood why I would care what OTHER people search for when I want to search for something. Is this now group-search? Group-think 1984? And after that, I get some more YouTube videos.
Google is clearly building a watered-down private variant of the web. Same problem with AMP pages. Google is annoying us - and has become a huge problem. (I am writing this on Thorium right now, which is also Chrome-based; Firefox does not allow me to play videos with audio since I don't have or use PulseAudio, whereas the Chrome-based browser does not care and my audio works fine - that shows you the level of incompetence at Mozilla. They don't WANT to compete against Google anymore, and haven't wanted to for decades. Ladybird unfortunately is not going to change anything either; after I criticized one of their decisions, they banned me. Well, that's a great way to build up an alternative: dealing with criticism via censorship - all before even leaving alpha or beta. Now imagine the amount of censorship you would get once millions of people used it... Something is fundamentally wrong with the whole modern web, and corporations have a lot to do with this; to a lesser extent also people, but of course not all of them.)
FeteCommuniste | 12 hours ago
rrr_oh_man | 5 hours ago
emsign | 16 hours ago
dasil003 | 16 hours ago
I'm not against AI in general, and some assistant-like functionality that works on demand to search my digital footprint and handle necessary but annoying administrative tasks seems useful. But it feels like at some point it becomes a solution looking for a problem, and to squeeze out the last ounce of context-aware automation and efficiency you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, except it's not a human but a computer, and instead of maximizing the leverage of your attention as a captain of industry, it's just maintaining velocity on a personal rat race of your own making, with no especially wide impact, even on your own psyche.
fragmede | 16 hours ago
Or we could opt out, and help everyone get ahead, on the rising tide lifts all boats theory, but from what I've seen, the trickle of trickle down economics is urine.
kaffekaka | 16 hours ago
No matter how useful AI is and will become - I use AI daily, it is an amazing technology - so much of the discourse is indeed a solution looking for a problem. I have colleagues suggesting for absolutely everything, "can we put an MCP in it?", and they don't even know what the point of MCP is!
larusso | 16 hours ago
alansaber | 7 hours ago
rglover | 5 hours ago
It's a hell of a mousetrap.
Starts playing Somewhere Over the Rainbow.
michelsedgh | 16 hours ago
0xbadcafebee | 16 hours ago
Not if you use open source. Not if you pay for services that contractually will not mine your data. Not if you support start-ups that commit to privacy and to banning ads.
I said on another thread recently that we need to kill Android, that we need a new Mobile Linux that gives us total control over what our devices do, our software does. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps, yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.
Vote with your dollars.
bigyabai | 4 hours ago
The reason nobody uses mobile Linux is that it has to compete with AOSP-derived OSes like LineageOS and GrapheneOS, which don't suck or run like shit. This is what it looks like when people vote with their dollars: people want the status quo we have (despite the horrible economic damage).
ponector | an hour ago
Like a rooted Android phone, which is useless for regular folks because many critical apps don't work (like banking).
witnessme | 15 hours ago
Sparkyte | 15 hours ago
alansaber | 7 hours ago
jeandejean | 14 hours ago
Well, the consumers will decide. Some people will find it very useful, but others won't necessarily like this... Considering how many times I've heard people yelling "OK GOOGLE" to get "the gate" to open, I'm not sure a continuous flow of heavily contextualized human conversation will be any easier to decipher.
I know, guys, AI is magic and will solve everything, but I wouldn't be surprised if it ordered me eggs and butter when I mentioned out loud that I was out of them but was actually happy about it, because I was just about to go on vacation. My surprise when I'm back: melted butter and rotten eggs at my door...
aaron465 | 14 hours ago
tolerance | 14 hours ago
A man-in-the-middle-of-the-middle-man.
JumpCrisscross | 11 hours ago
Given they're "still finalizing the design and materials" and are not based in China, I think it's a safe bet that the first run will either be delayed or be an alpha.
tempodox | 11 hours ago
[OP] ajuhasz | 5 hours ago
We have some details here on how we’re doing the prototyping with some photos of the current prototype: https://juno-labs.com/blogs/how-we-validate-our-custom-ai-ha...
tolerance | 5 hours ago
I’m not a product guy. Or a tech guy for that matter. Do you have any preparations in mind for Apple’s progress with AI (viz. their partnership with Google)? I don’t even know if the actual implementation would satisfy your vision with regard to everything staying local though.
Starting with an iPad for prototyping made me wonder why this didn’t begin as just an app. Or why not just ship the speaker + the app as a product.
You don’t have sketches? Like ballpoint pen on dot grid paper? This is me trying to nudge you away from the impression I get that the website is largely AI-scented.
After making my initial remark (a purposely absurd one that I was actually surprised got upvoted at all), I checked your résumé and felt a disconnect between your qualifications and the legitimate doubt I described in my comment.
To be honest my impression was mostly led by the contents of the website itself, speculation about the quality/reliability of the actual product followed.
I don’t want to criticize you and your decisions in that direction but if this ambition is legitimate it deserves better presentation.
Do you have any human beings involved in communicating your vision?
bandrami | 13 hours ago
alansaber | 7 hours ago
ghywertelling | 13 hours ago
alfiedotwtf | 13 hours ago
s09dfhks | 6 hours ago
BrenBarn | 12 hours ago
5o1ecist | 12 hours ago
"Contextually aware" means "complete surveillance".
Too many people speak of ads, not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.
Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's useful and/or makes them feel safe and happy.
JumpCrisscross | 11 hours ago
Everyone online is constantly talking about it. The truth is for most people it's fine.
Some folks are upset by it. But by and large we tend to just solve the problem at the smallest possible scale and then mollify ourselves with whining. (I don't have social media. I don't have cameras in or around my home. I've worked on privacy legislation, but honestly nobody called their representatives, and so nothing much happened. I no longer really bring up privacy issues when I speak to my electeds, because I haven't seen evidence that the nihilism has passed.)
5o1ecist | 11 hours ago
I'll let you decide.
Thank you.
JumpCrisscross | 10 hours ago
That encapsulates my point.
I’ve worked on various pieces of legislation. All privately. A few made it into state and federal law. Broadly speaking, the ones that make it are the ones whose supporters you can't get to stop calling in.
Privacy issues are notoriously shit at getting people to call their electeds. The exception is when you can find traction outside tech, or when the target is directly a tech company.
tokioyoyo | 9 hours ago
alansaber | 7 hours ago
tempodox | 11 hours ago
emsign | 11 hours ago
ardeaver | 10 hours ago
Honestly, I'd say privacy is just as much about economics as it is about technical architecture. If you've taken outside funding from institutional venture capitalists, it's only a matter of time before you're asked to make even more money™, and you may issue a quiet, boring change to your terms and conditions that you hope no one will read... Suddenly, you're removing mentions of your company's old "Don't Be Evil" slogan.
13pixels | 7 hours ago
We ran queries across ChatGPT, Claude, and Perplexity asking for product recommendations in ~30 B2B categories. The overlap between what each model recommends is surprisingly low -- around 40% agreement on the top 5 picks for any given category. And the correlation with Google search rankings? About 0.08.
So we already have a world where which CRM or analytics tool gets recommended depends on which model someone happens to ask, and nobody -- not the models, not the brands, not the users -- has any transparency into why. That's arguably more dangerous than explicit ads, because at least with ads you know you're being sold to.
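For what it's worth, the kind of agreement figure cited above is straightforward to compute: take each model's top-5 list per category and average the pairwise overlap. A minimal sketch, using made-up model names and recommendation lists (none of this is the actual study data):

```python
# Hypothetical sketch of measuring top-5 agreement between models.
# All model names and product names below are illustrative.

from itertools import combinations

# Top-5 recommendations per model for one hypothetical B2B category
recs = {
    "model_a": ["AlphaCRM", "BetaCRM", "GammaCRM", "DeltaCRM", "EpsilonCRM"],
    "model_b": ["BetaCRM", "ZetaCRM", "AlphaCRM", "EtaCRM", "ThetaCRM"],
    "model_c": ["IotaCRM", "BetaCRM", "KappaCRM", "AlphaCRM", "LambdaCRM"],
}

def pairwise_overlap(lists):
    """Mean fraction of shared items across all pairs of top-5 lists."""
    pairs = list(combinations(lists, 2))
    scores = [len(set(a) & set(b)) / 5 for a, b in pairs]
    return sum(scores) / len(scores)

# Each pair above shares 2 of 5 picks, so mean agreement is 40%
print(f"mean top-5 agreement: {pairwise_overlap(recs.values()):.0%}")
```

In this toy data every pair of models shares only 2 of its 5 picks, matching the ballpark agreement described above; a rank correlation against search rankings would be a separate step (e.g. Spearman's rho over the union of recommended products).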
ACCount37 | 6 hours ago
Replace "LLMs" with "random schmucks online" and what changes exactly?
jayd16 | 5 hours ago
ACCount37 | 4 hours ago
stego-tech | 7 hours ago
If you're paying someone else to run the inference for these models, or even to build these models, then you're ultimately relying on their specific preferences for which tools, brands, products, companies, and integrations they prefer, not necessarily what you need or want. If and when they deprecate the model your agentic workflow is built on, you now have to rebuild and re-validate it on whatever the new model is. Even if you go out of your way to run things entirely locally with expensive inference kit and a full security harness to keep things in check, you could spend a lot less just having it vomit up some slopcode that one of your human specialists can validate and massage into perpetual functionality before walling it off on a VM or container somewhere for the next twenty years.
The more you're outsourcing workflows wholesale to these bots, the more you're making yourself vulnerable to the business objectives of whoever hosts and builds those bots. If you're just using it as a slop machine to get you the software you want and that IT can support indefinitely, then you're going to be much better off in the long run.
rrr_oh_man | 5 hours ago
stego-tech | 3 hours ago
Whereas I'd self-describe as "strategically lazy". It's building iterable code and repeatable processes today, so I can be lazy far into the future. It's engineering solutions today that are easier to support with lazier efforts tomorrow, regardless of whether things improve or get worse.
Building processes around agents predicated on a specific model is myopically lazy, because you'll be rebuilding and debugging that entire setup next year when your chosen agent is deprecated or retired. Those of us building documented code with agents today, will have an easier time debugging it in the future because the hard work is already done.
Incidentally, we'll also have gainful employment tomorrow by un-fucking agent-based workflows that didn't translate into software when tokens were cheap and subsidized by VCs for market capture purposes.
vivzkestrel | 6 hours ago
- put them inside a soundproof box, and they cannot hear anything outside
- the box even shows how long each day the device has been unable to snoop on you
schaefer | 6 hours ago
rrr_oh_man | 5 hours ago
phtrivier | 5 hours ago
Google, Meta, and Amazon, sure, of course.
It's interesting that the "every company" part is only OpenAI... They're now part of the "bad guys spying on you to display ads." At least it's a viable business model; maybe they can recoup capex and yearly losses in a couple of decades instead of a couple of centuries.