Signal leaders warn agentic AI is an insecure, unreliable surveillance risk

Source: coywolf.com
323 points by speckx 9 hours ago on hackernews | 96 comments

With agentic AI embedded at the OS level, databases that store entire digital lives left accessible to malware, task reliability that breaks down at each step, and users opted in without consent, Signal's leadership is sounding the alarm and urging the industry to pull back until these threats can be mitigated.

At the 39th Chaos Communication Congress (39C3) in Hamburg, Germany, Signal President Meredith Whittaker and VP of Strategy and Global Affairs Udbhav Tiwari gave a presentation titled "AI Agent, AI Spy." In it, they detailed the many vulnerabilities and concerns they have about how agentic AI is being implemented, the very real threat it poses to enterprise companies, and the changes they recommend the industry make to head off a disaster in the making.

“AI Agent, AI Spy” presented by Meredith Whittaker and Udbhav Tiwari at 39C3 (CC BY 4.0)

By design, AI agents must know enough about you, and have access to enough sensitive data, to take actions on your behalf autonomously, such as making purchases, scheduling events, and responding to messages. However, the way AI agents are being implemented makes them insecure, unreliable, and open to surveillance.

How AI agents are vulnerable to threats

Microsoft is trying to bring agentic AI to its Windows 11 users via Recall. Recall takes a screenshot of your screen every few seconds, OCRs the text, and performs semantic analysis of your context and actions. It then compiles a forensic dossier of everything you do into a single database on your computer. The database includes a precise timeline of actions, the full raw text recovered via OCR, and dwell time and focus on specific apps and actions. Additionally, it assigns topics to specific activities.
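To make the shape of that database concrete, here is a minimal, hypothetical sketch of a Recall-style capture loop. It is not Microsoft's implementation; the table layout, filename, and five-second interval are assumptions. It screenshots the display, OCRs it, and appends timestamped rows to a local SQLite file:

```python
import sqlite3
import time
from datetime import datetime, timezone

import pytesseract         # OCR via the Tesseract engine
from PIL import ImageGrab  # screenshots on Windows/macOS

# Hypothetical local activity store: one plain-text database of
# everything that has appeared on screen.
db = sqlite3.connect("activity.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS snapshots (
           captured_at TEXT,  -- precise timeline of actions
           ocr_text    TEXT   -- raw text recovered from the screen
       )"""
)

def capture_once() -> None:
    """Screenshot the display, OCR it, and persist the result."""
    image = ImageGrab.grab()
    text = pytesseract.image_to_string(image)
    db.execute(
        "INSERT INTO snapshots VALUES (?, ?)",
        (datetime.now(timezone.utc).isoformat(), text),
    )
    db.commit()

while True:
    capture_once()
    time.sleep(5)  # "every few seconds"
```

The sketch makes the threat model plain: any process that can open activity.db can read everything the user has seen, with no exploit more sophisticated than a file read.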

Tiwari says the problem with this approach is that it doesn't mitigate the threat of malware (delivered via online attacks) or of indirect (hidden) prompt injection, both of which can gain access to the database. Because the database holds screenshots and OCR text of decrypted conversations, these attacks effectively circumvent end-to-end encryption (E2EE). That prompted Signal to add a flag to its app that prevents its screen from being recorded, but Tiwari says that isn't a reliable or long-term solution.
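On Windows, the standard mechanism behind that kind of flag is the SetWindowDisplayAffinity API with WDA_EXCLUDEFROMCAPTURE, which asks the compositor to leave a window out of screenshots and recordings. The ctypes sketch below illustrates the mechanism only; it is not Signal's code, which sets this through its own app framework:

```python
import ctypes

# WDA_EXCLUDEFROMCAPTURE (Windows 10 2004+) renders the window
# normally on screen but omits it from screenshots and recordings.
WDA_EXCLUDEFROMCAPTURE = 0x00000011

user32 = ctypes.windll.user32

def exclude_from_capture(hwnd: int) -> bool:
    """Opt one window out of screen capture."""
    return bool(user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))

# Illustrative only: protect this process's foreground window.
# A real app would pass the HWND of its own top-level window.
if __name__ == "__main__":
    print("protected:", exclude_from_capture(user32.GetForegroundWindow()))
```

This also shows why Tiwari calls the flag unreliable: the protection is per-window and opt-in, so every privacy-sensitive app would have to know the recorder exists and defend itself.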

Why complex agentic tasks aren’t reliable

Whittaker emphasized that agentic AI isn't just intrusive and vulnerable to threats; it's also unreliable. She said AI agents are probabilistic, not deterministic, and that every step they take in a task compounds the chance of error, degrading the reliability of the final action.

She said that even if an AI agent could perform each step with 95% accuracy, which currently isn't possible, a 10-step task would succeed only about 59.9% of the time, and a 30-step task only about 21.4% of the time. At a more realistic per-step accuracy of 90%, the success rate of a 30-step task drops to about 4.2%. She added that the best agent models failed 70% of the time.
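The arithmetic behind those figures is simple compounding: assuming each step succeeds independently with probability p, an n-step task succeeds with probability p^n. A few lines of Python reproduce the numbers from the talk:

```python
def task_success_rate(p_step: float, n_steps: int) -> float:
    """Probability that all n independent steps succeed: p ** n."""
    return p_step ** n_steps

for p, n in [(0.95, 10), (0.95, 30), (0.90, 30)]:
    print(f"{p:.0%} per step, {n} steps -> {task_success_rate(p, n):.2%}")

# 95% per step, 10 steps -> 59.87%
# 95% per step, 30 steps -> 21.46%
# 90% per step, 30 steps -> 4.24%
```

Note that the independence assumption is generous to the agent; correlated errors, where an early mistake derails later steps, would only make the outlook worse.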

How to make AI agents private and secure

Whittaker said there currently isn't a solution that makes AI agents preserve privacy, security, and control; there is only triage. Still, she said companies can take steps now to limit the damage:

  1. Stop the reckless deployment of AI agents, so malware cannot reach plain-text databases of user activity.
  2. Make opted-out the default for users, with mandatory, explicit opt-ins from developers.
  3. AI companies must provide radical (or any) transparency about how everything works and make it auditable at a granular level.

If the industry doesn't heed Whittaker and Tiwari's warnings, the age of agentic AI could be in jeopardy, primarily because consumers may quickly lose trust in a technology that is already overhyped and over-invested in.

Jon Henshaw is the founder of Coywolf and an industry veteran with almost three decades of experience in SEO, digital marketing, and web technologies. Follow @jon@henshaw.social