Precisely! Google doesn't care one bit about civil society; it cares about power for itself, even if this means punching freedom and liberty in the face. Personally, I think it'll be a good thing if this restriction finally wakes people up to seeking alternatives to Google.
> For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app — actually malware — intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.
> While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale.
I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? Or requiring an expensive "extended validation" certificate for developers who choose not to register...?
Agreed on this middle path you point out. On the one hand, I do not want some apps to be distributed anonymously; I need to know who is behind an app in order to trust it. On the other hand, many apps are benign.
Something like Thunderbird might be an exception, but domain confusion exists too, so in the general case most likely not, because most users are susceptible to it.
Typo squatting is a thing, and so are Unicode homographs.
The permissions approach isn't bad. I may trust Thunderbird for some things, but permission to read SMS and notifications is permission to bypass SMS 2FA for every other account using that phone number. It deserves a special gate that's very hard for a scammer to pass. The exact nature of the gate can be reasonably debated.
They are, but this is the next-layer-up problem. Most people don't memorise and type URLs into their browser bar; they use a search engine result, browser history, or a bookmark.
It's therefore on their choice of search engine, or choice of app store, to lead them from "thunderbird" to "The app downloadable from https://thunderbird.net/", which can then be validated as signed by the verified owner of the same domain.
I'm not proposing changing the permissions system.
That's a search engine / reputation problem and it's also present even in Daddy Google's and Daddy Apple's walled gardens.
If you search any web search engine for "thunderbird", https://thunderbird.net/ is the top result. You can choose your preferred search engine, you should be able to choose your own app store, and your level of confidence stems from your own estimation of that entity's past competence.
If you do search Google Play for "thunderbird", you'll find it lists an app with internal name "net.thunderbird.android" as the top result (along with lots of other mail clients). What I'm proposing is that if your choice of search engine or app store shows you https://thunderbird.net/ as the place to download Thunderbird, and you do, PKI can then verify that the app was independently signed by the owner of the matching domain, and that the certificate was issued to them by a CA who regularly validates they control that domain.
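A minimal sketch of that PKI check, assuming the domain publishes signing-cert fingerprints in a Digital-Asset-Links-style JSON file; all names, bytes, and statements below are made up for illustration:

```python
import hashlib
import json

def domain_vouches_for_app(assetlinks_json: str, signing_cert: bytes) -> bool:
    """Does the domain's published statement list the SHA-256
    fingerprint of the certificate the app was signed with?"""
    digest = hashlib.sha256(signing_cert).hexdigest().upper()
    fingerprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
    for statement in json.loads(assetlinks_json):
        target = statement.get("target", {})
        if fingerprint in target.get("sha256_cert_fingerprints", []):
            return True
    return False

# Demo with a made-up cert: the domain publishes the matching
# fingerprint, so the real cert verifies and a different one doesn't.
cert = b"fake-der-encoded-cert"
digest = hashlib.sha256(cert).hexdigest().upper()
published = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
statement = json.dumps([{"target": {"sha256_cert_fingerprints": [published]}}])

assert domain_vouches_for_app(statement, cert)
assert not domain_vouches_for_app(statement, b"different-cert")
```

The real machinery (CA issuance, APK signature parsing) is elided; the point is just that the trust anchor is the domain, not a central registry.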
If you can "coach someone to ignore standard security warnings", you can coach them to give you the two-factor authentication codes, or any number of other approaches to phishing.
> Installing an app that silently intercepts SMS/MMS data is a persistent technical compromise. Once the app is there, the attacker has ongoing access.
The motivating example as described involves "giving the scammer everything they need to drain the account". Once they've drained the account, they don't need ongoing access.
Persistence gives the scammer free rein to attempt password recovery for every account the victim could possibly have: other banks, retirement accounts, the victim's email account.
Scammers that thrive are greedy, but not too greedy. It's easier to break into one type of account for 10 victims than into 10 different accounts of one victim. Persistence is risk.
When the victim's relatives send them money because they need to eat and pay rent after handing everything over to the scammer, the persistent backdoor lets that money be drained as well... You're underestimating the persistence and ruthlessness of the scammers.
This is still not a root-cause solution, it's just a mitigation, because sideloading isn't required to install malware. The Play Store and Apple App Store both contain malware, as well as apps which can be used for nefarious purposes, such as remote desktop tools.
A root-cause solution is proper sandboxing. Google and Apple will not do this, because they rely on applications having far too much access to make their money.
One of the fundamentals of security is that applications should use the minimum data and access they need to operate. Apple and Google break this with every piece of software they make. The disease is spreading from the inside out. Putting a shitty lotion on top won't fix this.
>The play store and apple app store both contain malware
Wow, that's a major claim. Which apps are malware, exactly?
>This is still not a root cause solution, it's just a mitigation.
Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a method for remuneration or prosecution.
> Wow, that a major claim. What apps are malware, exactly?
I don't understand how this is a major claim at all, it should be obvious. All repositories of large enough sizes contain malware because malware doesn't declare itself as malware.
This is exacerbated by the fact the Google Play Store and Apple App Store allow closed-source applications. It's much easier to validate behavior on things like the Debian repos, where maintainers can, and do, audit the source code.
Google does not have a magic "is this malware" algorithm; that doesn't exist. They rely on heuristics and on things like asking the authors "hey, is this malware?". As you can imagine, this isn't very effective. They don't even install and test the apps fully. Not that it matters much: obviously malware can easily change its behavior to avoid being detectable by the end-user just running the app.
> Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a method for remuneration or prosecution.
It doesn't, for three reasons:
1. Identifying an app doesn't magically make it not malware. I can tell you "hey, I made this app" and you still have zero idea whether it's malware. This is still a post-hoc mitigation: if we somehow already know an app is malware, we can find out who wrote it. It doesn't do the "is this malware" part, which is the most important part.
2. Bad actors typically have little allegiance to ethics, meaning they typically will not be honest about their identity. There are criminal organizations that operate in meatspace and fake their identities, which is 1000x harder than doing it online. Most malware will not have a legitimate identity attached to it.
3. Bad actors typically operate from countries that don't prosecute them as hard. So even if you find out that something is malware, and then find the actual people behind it, you typically can't prosecute them. Even large online services like the Silk Road lasted a long time, and successors most likely still exist, despite the literal US federal government trying to stop them.
A lot of what you said in the second portion isn't at all true (for instance, Google definitely doesn't just ask the author if what they are uploading is malware as a sole check if an app is malware). But I don't think we can even continue the discussion until you prove the "obvious" assertion that there are apps in the Play Store that are malware. So I am going to ask again: give a single name of an app currently in the Play Store that is malware. We are not talking about Apple, but I will extend it so that you can give an app in the Apple App Store that is malware as well.
Let me know when you can provide a single specific name.
This has been going on for years, Google knows about it, and intentionally leaves it unfixed.
> Out of 47 Indian apps I randomly analyzed, 31 of them used the "ACTION_MAIN" filter - giving them access to see all the apps on your phone without any disclosure. That's 2 out of 3 apps.
Of course there's hundreds of other variants of malware, this is just one of the most prevalent.
No they don't? The whole article is about the fact that they're using a loophole. I just checked Zomato's Play Store page, it doesn't say it collects "other installed apps", which is what it should be saying. For example, one of the other listed apps does have this. That's what it should be listing: "Installed apps".
I'm sorry, I gave you too much credit. Is your argument that the "ACTION_MAIN" intent filter somehow gives you access to all installed apps? Do you have any reasoning or Google API documentation to support this?
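For readers following along: the loophole under debate here concerns Android 11+'s package-visibility rules, where an app declares in its manifest which other packages it needs to see. Declaring a bare `ACTION_MAIN` intent in the `<queries>` element matches the entry-point activity that nearly every installed app exposes, without requesting the separately disclosed `QUERY_ALL_PACKAGES` permission. A sketch of what such a declaration would look like (package name hypothetical, interpretation hedged):

```xml
<!-- AndroidManifest.xml (sketch) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.hypothetical">
    <queries>
        <!-- A bare ACTION_MAIN filter matches the main activity that
             nearly every installed app declares, effectively restoring
             visibility into the full list of installed apps. -->
        <intent>
            <action android:name="android.intent.action.MAIN" />
        </intent>
    </queries>
</manifest>
```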
> A root-cause solution is proper sandboxing. Google and Apple will not do this, because they rely on applications having far too much access to make their money.
Oh, they do this quite well. Thing is, these sandboxes are meant to protect apps from you, not the other way around. That's why some apps - not just platform vendor apps but also select third-party apps - get special access and elevated privileges, while you can't even see what data they store in `/storage/emulated/0/Android/data`, even with ADB trickery.
PINs can still be phished. Just make the phishing site a live proxy resembling the real site.
A fundamental difference with e.g. FIDO2 (especially hardware-backed) is that the private credentials are keyed to the relying party ID, so it's not possible for a phishing site to intercept the challenge-response.
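A toy sketch of why that binding defeats a live proxy: the authenticator mixes the SHA-256 of the relying party ID into the data it signs, so a response produced for a lookalike domain can never verify against the real one. Domains below are made up, and the actual signing and attestation machinery is elided:

```python
import hashlib

def authenticator_sign(rp_id: str, challenge: bytes) -> bytes:
    # The authenticator signs over authenticator data whose first
    # 32 bytes are SHA-256(rp_id). We return just the "signed payload"
    # to show what gets bound; the actual signature is elided.
    rp_id_hash = hashlib.sha256(rp_id.encode()).digest()
    return rp_id_hash + challenge

# Legit site vs. a phishing proxy on a lookalike domain:
legit = authenticator_sign("bank.example", b"server-challenge")
phish = authenticator_sign("bank-example.evil", b"server-challenge")

# The server checks the rp_id hash inside the signed data, so a
# response generated for the phishing origin never validates:
expected = hashlib.sha256(b"bank.example").digest()
assert legit[:32] == expected
assert phish[:32] != expected
```

The proxy can relay everything else in real time, but it cannot make the victim's authenticator sign for a relying party ID it doesn't control.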
You'll then get more warnings if you want to give the sideloaded app additional permissions. And if they want to make the sideloading warnings more dire, that wouldn't be nearly as unreasonable.
The never-ending worm approach is to get remote control via some method on Android or iOS, then scam the victim's other contacts.
It’s built into FaceTime; Android needs third-party apps.
The phisher’s app or login would be from a completely new device though.
Passkeys are also an active avenue for defeating phishing, as long as the device is not compromised. To the extent there is attestation, passkeys also raise very critical questions about locking down devices.
Given what I see in scams, I think too much is put on the user as it is. Anti-phishing training and the like try to push blame downward in the hierarchy instead of fixing the systems. For example, spear-phishing scams against home down payments or business accounts work because banks in the US don't tie account numbers to payee identity. The real issue is that the US payment system is utterly backward, without confirmation of payee (i.e. showing the human-readable actual name of the recipient account in the banking app). For wire transfers or ACH credit in the US, commercial customers are basically expected to play detective to make sure new account numbers are legit.
As I understand it, sideloading apps can overcome that payee legal name display in other countries. So the question for both sideloading and passkeys is if we want banks liable for correctly showing the actual payee for such transfers. To the extent they are liable, they will need to trust the app’s environment and the passkey.
This reeks of "think of the children^Wscammed". I mean, following this principle the only solution is to completely remove any form of sideloading and have just one single Google approved store because security.
> A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...?
It doesn't work like that. What they mean by "mandatory developer registration" is what Google already does if you want to start as a developer in the Play Store: pay a $25 one-time fee with a credit card and upload your passport copy to some (third-party?) ID verification service. [1]
In contrast with F-Droid, where you just need a GitLab account to open a merge request in the fdroiddata repository and submit your app, which they scan for malware and compile from source on their build servers.
[1] but I guess there are plenty of ways to fool Google anyway even with that, if you are a real scammer.
I agree with Epic. It should be like on windows or macOS where you can register, get notarized, and then distribute without scare screens. I don’t see why phones are inherently different than computers.
>You can also cut yourself with a kitchen knife but nobody proposes banning kitchen knives.
oh nice, i love this game.
you cant carry a kitchen knife that is too long, you cant carry your kitchen knife into a school, you cant brandish your kitchen knife at police, you cant let a small child run around with a kitchen knife...
literally most of what "the state" does is be a "nanny"
(not agreeing or disagreeing with google here, i have no horse in this particular race. but this little knife quip is silly when you think about it for more than 5 seconds)
sorry, should say "carry", not "buy". most states have a maximum length you can carry (4-5.5 inches is common).
although, i would imagine at some length, it becomes a "sword" (even if marketed as a knife) and falls under some other "nanny"-ing. i have not googled that.
As kevin_thibedeau points out elsewhere in the thread, he's not necessarily wrong. In many states and foreign countries it's illegal to carry a large knife in public without a reason and I'm sure purchases are restricted in some places as well. Most people are more or less OK with that, it seems, so there historically hasn't been a lot of pushback.
I think it's important to consider the intent of those laws, too. They are primarily or even exclusively to prevent you from hurting others with knives. They are not really intended to protect you from cutting yourself in your own home. So I think the parent's comment still holds weight.
In this example we still don't require you to register with anyone to buy a knife, get the blessing of some institution to sell knives, or, as in this case, get a certification before you can start making knives.
its crazy that different things, like knives and app stores, have different rules. maybe thats why the quip about the knife sounded super cool but fell apart as an analogy for this scenario when thought about for more than 5 seconds?
the point of my comment was that the state does implement a lot of rules (read: "is a nanny"), despite the claim otherwise.
I don’t want to be too flippant, but I think there is a real trade off across many aspects of life between “freedom” and “safety”.
There is a point at which people have to think critically about what they are doing. We, as a society, should do our best to protect the vulnerable (elderly, mentally disabled, etc) but we must draw the line somewhere.
It’s the same thing in the outside world too - otherwise we could make compelling arguments about removing the right to drive cars, for example, due to all the traffic accidents (instead we add measures like seatbelts as a compromise, knowing it will never totally solve the issue).
> protect the vulnerable (elderly, mentally disabled, etc)
Yes, one could imagine some kind of mental test and if you fail you don't get to use your bank online, you have to walk to the physical location to make transactions. But this can obviously be abused to shut out people from banking based on political and other aspects. Generally democracies are wary of declaring too broad sets of people as incapable of acting independently without some guardian. Obviously beyond a certain threshold of mental incapacitation, dementia etc. it kicks in, but just imagine declaring that you're too easy to influence and scam and we can't let you handle your money,... But somehow we can rely on you using sane judgment when voting in elections. Or should we strip election rights too?
We rely on polite fictions around the abilities of the average person. The contradictions sometimes surface but there is no simple way to resolve it without revising some assumptions.
I think there's room to raise the bar of required tech competency without registration.
Manually installing an app might be close to the limit of what grandma can be coached through by an impatient scammer.
Multiple steps over adb, challenges that can't be copy and pasted in a script, etc. It can be done but it won't provide as much control over end user devices.
Because I hope you realize that clamping down on “sideloading” (read: installing unsigned software) on PCs is the next logical step. TPMs are already present on a large chunk of consumer PCs - they just need to be used.
Of course it extends to PCs. It'd suck for us, but end users, software vendors, content providers, and service providers all benefit from a more restricted platform that can provide certain guarantees against malware, fraud, piracy, and so forth. It's pathologically programmer-brained to assume that the good old days of being able to run arbitrary code on a networked computing device would last forever. That freedom must be balanced against the interests of the rest of society to avoid risk from certain kinds of harm which can easily proliferate in an environment where any program can run with the full authority of the owner and malware spreads willy-nilly.
The "programmer-brained" assumption is that I will be able to write any program and run it on my machine, that this ability isn't reserved for only me or some limited class of people, and that I can share what I write with others. One big plus of the current style of AI will be that "end users" will be able to write simple programs and will value this ability, thus helping protect general-purpose computing from this bit of evil for a while longer.
Users get way more out of it when the device is free. Even if they don't use this option, it makes it easier to set up competing services. This includes ones that would never be allowed in an official store because they're DRM-free alternatives to big streaming services but still offer all the same content. The existence of such alternatives, if they are easy to use, can force the big services to become more user-friendly. Just as happened back then with Napster.
Also every user is free to simply not use the option of installing things outside of the store.
> This includes ones that would never be allowed in an official store because they're DRM-free alternatives to big streaming services but still offer all the same content.
Do you know anyone who works in a professional creative field that doesn't involve writing code? If so, ask them how they'd feel about their work being out there on the internet, free to all takers - what the implications would be for their ability to feed their children and pay their mortgage doing the things they love.
This is what I mean by "programmer-brained." Of all creative workers, only programmers seem okay with abolishing IP laws, I guess because they figure they'll be okay living out of an office at MIT, or even worse out of an office at some YC startup that turns the user into the product. But artists, musicians, writers, filmmakers, etc. all put food on the table because of those IP laws programmers hate so much. Taking that protection for the fruit of your labor away would be at least as disruptive as AI has been.
> That freedom must be balanced against the interests of the rest of society to avoid risk from certain kinds of harm which can easily proliferate in an environment where any program can run with the full authority of the owner and malware spreads willy-nilly.
No, no, a thousand times no. This is an argument for authoritarian clampdown on general computing and must be opposed by all means necessary. I have the right to run whatever code I wish on my own damn property without the permission of arbitrary authorities or whatever subset of society you favor, and if you or they have a problem with this, you or they can proceed to pound sand.
Right, but this same problem (scamming) exists on PCs.
Would it make sense to then argue that enforcing TPM-backed measured boot and binary signature verification is a legitimate way to address the problem?
Their point, applied to that situation, would be that if someone does argue for enforcing TPM-backed measured boot yadda yadda to address scamming, trying to counter it by dismissing scamming as not a real problem is useless.
I get it dude, but my wider point is that we need to question where this line of argumentation leads to.
Are we saying that, because scamming exists and we haven’t proposed an alternative, it means that clamping down on software installation methods is a legitimate solution to the problem?
Scamming is a real problem, but that does not imply this solution is the right way to go about it. There are other means to address this problem; they may not scale as well, but they also don't sacrifice computing freedoms.
Developer registration doesn't prevent this problem. Stolen ID can be found for a lot less money than what a day in a scam farm's operation will bring in. A criminal with access to Google can sign and deploy a new version of their scam app every hour of the day if they wish.
The problem lies in (technical) literacy, to some extent people's natural tendency to trust what others are telling them, the incompetence of investigative powers, and the unwillingness of certain countries to shut down scam farms and human trafficking.
My bank's app refuses to operate when I'm on a phone call. It also refuses to operate when anything is remotely controlling the phone. There's nothing a banking app can do against vulnerable phones rooted by malware (other than refusing to operate on phones deemed too vulnerable by whatever threshold you decide on, so there's nothing left to root), but I feel like the countries where banks and police are putting the blame on Google are taking the easy way out.
Scammers will find a way around these restrictions in days and everyone else is left worse off.
> Stolen ID can be found for a lot less money than what a day in a scam farm's operation will bring in.
Well, in that case, Google has an easy escalation path that they already use for Google Business Listings: They send you a physical card, in the mail, with a code, to the address listed. If this turns out to be a real problem at scale, the patch is barely an inconvenience.
So they'll have a lead time building up a set of verified developers. These scams are pulled by organized crime syndicates, using human trafficking and beatings to keep their call centers manned with complicit workers.
Now they'll need to pay off a local mailman to give them all of Google's letters with an address in an area they control so they can register a town's worth of addresses, big whoop. It'll cost them a bit more than the registration fee, but I doubt it'll be enough to solve the problem.
> Now they'll need to pay off a local mailman to give them all of Google's letters with an address in an area they control so they can register a town's worth of addresses, big whoop. It'll cost them a bit more than the registration fee, but I doubt it'll be enough to solve the problem.
Yeah, this is a huge amount more work than, like, nothing.
> Someone will manufacture and sell bulk identities
How? You've now moved the level of sophistication required from "someone runs some bots on the facebook website" to "someone is now committing complex fraud against a government".
If the only people who can run scams are state sponsored, that's still vastly better than the status quo.
Amazon has a huge problem with packages being sent to fake people at different addresses. It’s part of review scams. This won’t be much different. Just send the verification to empty houses and apartments.
You now need to have a variety of fake addresses you can use, since scammed addresses will get banned. You also need fake IDs. So again, the bar has now been raised from "run a bot to make fake Facebook accounts" to "I have a large number of physical addresses and the ability to create arbitrary fake government IDs".
> Amazon has a huge problem with packages being sent to fake people at different addresses.
This usually involves those people getting weird packages and not doing anything with them, it doesn't require attacker-controlled addresses.
I still think it’s doable. Fake IDs aren’t exactly hard to come by. You could also pay randos a $30 gift card to sign up for a developer account and share access. Enough people will do it. I guess this does raise the cost a little though.
> You could also pay randos a $30 gift card to sign up for a developer account and share access. Enough people will do it.
This could work, but the issue here is that a lot of these scams rely on the "zero cost"-ness of turnup and use that as an asymmetry. If it costs you nothing to spin up new scam accounts, and it costs me something to investigate and remove them, you win. If it costs you $10 to create a new scam account, then as long as I can get the EV of a scam account below $10, the scam isn't worthwhile.
Laundering millions is a huge amount of work already. You need to hide your criminal activity from banks investigating fraud. Presuming the banks are doing their jobs right, at least, but if they don't, then that'd be the place to start solving this problem.
People are already effectively faking addresses for something as stupid as Amazon reviews. Apparently it's that cheap to fake an address, because those crapware spam stores that rotate their name/products/listings aren't exactly the size of the mob.
What this will probably do is raise the bar for scams a little so that dumb "mom-and-pop" criminals can no longer get started with a guide and a software kit they buy on Telegram, clearing the field for "professionals" while at the same time making identity fraud, address fraud, and (money) mules more lucrative.
All of that to shift away the blame from banks, public institutions, education, and to some extent people's personal financial responsibilities.
My guess is that Android 17 will show the registered name of the developer of the app you're trying to install. With stolen IDs you can only get accounts for individual developers not for organisations.
When a scammer pretending to be your bank tells you to install an app for verification and it says "This app was created by John Smith" even grandma will get suspicious and ask why it doesn't show the bank's name.
When someone is getting scammed by "special agent John Smith of the Federal Banking Enforcement Commission", the name "John Smith" won't cause any suspicion.
This trick only works if the general public is aware of what the app developer label does, what it is used for, what it protects against, and what it's supposed to say. However, if that's the case, you already have all the info you need to deduce that you shouldn't be installing APKs sent by a guy over the phone anyway.
There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot. If you want to disallow certain apps then you have to make decisions about what apps (stores) are "blessed" and what criteria are used to make those distinctions, necessarily restricting what users can do with their own devices.
You can go the softer route of requiring some complicated mechanism for "unlocking" your phone before you can install unverified apps - but by definition that mechanism needs to be more complicated than even a scammer-guided, normal, non-technical user can manage. So you've essentially made it impossible for normies to install non-Play-Store apps, and thus also made all other app stores mostly irrelevant.
The scamming issue is real, but the proposed solutions seem worse than the disease, at least to me.
I'm going to assume you're referring to auth codes, especially the ones sent via SMS? In which case yes, banks should definitely stop using those but that alone doesn't solve the overarching issue.
The next step is simply that the scammer modifies the official bank app, adds a backdoor to it, and convinces the victim to install that app and log in with it. No hardware-bound credentials are going to help you with that; the only fix is attestation, which brings you back to the aforementioned issue of blessed apps.
I'm not sure if you understand what makes passkeys phishing-resistant?
The backdoored version of the app would need to have a different app ID, since the attacker does not have the legitimate publisher's signing keys. So the OS shouldn't let it access the legitimate app's credentials.
Correction: nothing prevents the attacker from using the app's legit package ID other than requiring the uninstall of the existing app.
The spoofed app can't request passkeys for the legit app because the legit app's domain is associated with the legit app's signing key fingerprint via .well-known/assetlinks.json, and the CredentialManager service checks that association.
If the sideloaded app does not have permission to use the passkeys and cannot somehow get the user to approve passkey access for the new app, that would be a good alternative to still allow custom apps.
I don't think you understand. This exists _today_, regardless of how you install apps, because attackers can't spoof app signatures. If I don't have Bank of America's private signing key, I cannot make an app that requests passkeys for bankofamerica.com, because bankofamerica.com publishes a file [0] that says "only apps signed with this key fingerprint are allowed to request passkeys for bankofamerica.com" and Android's credential service checks that file.
No need for locking down the app ecosystem, no need to verify developers. Just don't use phishable credentials and you are not vulnerable to malware trying to phish credentials.
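For reference, the association file in question lives at `https://<domain>/.well-known/assetlinks.json`. A minimal statement granting credential access to one signing key looks roughly like this (package name and fingerprint are placeholders):

```json
[
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.bank.example",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
      ]
    }
  }
]
```

An attacker's repackaged app is signed with a different key, so its fingerprint never appears in this list.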
I understand how passkeys work. You don't need the legitimate app's credentials, we're talking about phishing attacks, you're trying to bring the victim to giving you access/control to their account without them realizing that that's what is happening.
A simple scenario adapted from the one given in the Android blog post: the attacker calls the victim and convinces them that their banking account is compromised and they need to act now to secure it. The scammer tells the victim that their account got compromised because they're using an outdated version of the banking app that's no longer supported. He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.
You can prevent this with attestation of course, essentially giving the bank's backend the ability to verify that the credentials are actually tied to their app, and not some backdoored version. But now you have a "blessed" key that's in the hands of Google or Apple or whomever, and everyone who wants to run other operating systems or even just patched versions of official apps is out of luck.
> I understand how passkeys work. You don't need the legitimate app's credentials, we're talking about phishing attacks, you're trying to bring the victim to giving you access/control to their account without them realizing that that's what is happening.
That doesn't work, because the scammer's app will be signed with a different key, so the relying party ID is different and the secure element (or whatever hardware backing you use) refuses to do the challenge-response.
> He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.
This is where the scheme breaks down: the new passkey credential can never be associated with the legitimate RP. The attacker will not be able to use the credential to sign in to the legitimate app/site and steal money.
The attacker controls the fake/backdoored app, but they do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association. You don't even need attestation to prevent this scenario.
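To make the mechanics concrete: in WebAuthn, every assertion carries authenticator data whose first 32 bytes are the SHA-256 hash of the relying party ID, and the server rejects anything that doesn't match. A minimal sketch of that server-side check (the domain names used below are hypothetical examples):

```python
import hashlib

def rp_id_matches(authenticator_data: bytes, expected_rp_id: str) -> bool:
    # Per the WebAuthn spec, bytes 0..31 of the authenticator data are
    # SHA-256(rpId). A credential minted for a scammer's domain can never
    # produce a matching hash for the real bank's RP ID.
    return authenticator_data[:32] == hashlib.sha256(expected_rp_id.encode()).digest()
```

On Android, which app is allowed to speak for a given RP ID is in turn pinned by the Digital Asset Links file mentioned upthread, checked by the system credential service against the app's signing key.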
> do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association.
You're assuming the attacker must go through the credential manager and the backing hardware, but that is only the case with attestation. Without it, the attacker can simply generate their own passkey in software, because the backend on the bank's side would have no way of telling where the passkey came from.
With banks, typically a combination of your account number, pin and some confirmation code sent via email or SMS. And of course unregistering your previous device. Not sure where you're going with this though?
I never said that passkeys can be phished, I said they don't solve this problem, but yeah. Locking the front door while leaving the back door wide open, as they say. But unless you can convince people to go into the bank counter every time they change their phone, that's life.
The solution would be a "noob mode" that disables sideloading and other security-critical features, which can be chosen when the device is first turned on and requires a factory reset to deactivate. People who still choose expert mode even though they are beginners would then only have themselves to blame.
This is just a variant of the "complicated unlocking mechanism" I was talking about. It still screws over everything not coming from the Play Store, because the installation process for those apps becomes a huge hassle, one that even involves factory resetting the device, which most people won't want to deal with.
> There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot.
This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.
> This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.
This is true, but if this goes through, I imagine that the next step for safety fascists will be to require developer licensing and insurance like general contractors have. And after that, expensive audits, etc, until independent developers are shut out completely.
I never mentioned building critical software like medical diagnosis software, software for industrial equipment, etc.
If I write a trash library for a random project and someone else starts using it to run their nuke plant, that isn’t my fault. Read the license. NO WARRANTY.
>I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
OK, so instead of educating stupid (or overly naive) people, we implement "protections" that prevent everyone from doing useful things with their devices? And as a "side effect" force them to use "our" app store only? Something doesn't smell that good here …
How about a less drastic measure, like imposing a serious delay on "sideloading"? Let's say I'd have to tell my phone that I want to install F-Droid and then would have to wait some hours before the installation is possible, while using the device as usual, of course.
The countdown could be combined with optional tutorials that teach people to contact their bank by phone in the meantime. Or whatever small printed tips might appear suitable.
> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
Why would the community give a different response? Everything is fine as it is. Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world. At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom. And that is not sufficient reason to disallow installing applications on the devices they own, any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
Right, like someone who can only afford a $100 phone can just buy the cheapest iPhone, which is 5x more expensive.
This is about like the geeks who hate the idea of ad supported services and think that everyone should just pay for every service they use.
FWIW: I do exclusively buy Apple devices, pay for streaming services ad free tier, the Stratechery podcast bundle, ATP and the Downstream podcasts and Slate. I also pay for ChatGPT and refuse to use any ad supported app or game.
It's not like there's much of an alternative, but that's irrelevant anyway. Android is becoming more like an iPhone, and as long as the OS is able and willing to reliably report to anyone asking just how tightly it is locked down, we have zero choice in the matter, because increasingly many important apps (like bank and government apps) plainly refuse to work if the device is locked down any less than it could be.
> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
The world does not consist of all rational actors, and this opens the door to all kinds of exploitation. The attacks today are very sophisticated, and I don't trust my 80-year-old dad to be able to detect them, nor many of my non-tech-savvy friends.
> any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
It's not a false equivalence at all. Both situations are taking away someone's control of something that they own, borne from a paternalistic desire to protect that person from themselves. If one is acceptable, the other should be. Conversely if one is unacceptable, the other should be unacceptable as well. Either paternalistic refusal to let people do as they wish is ok, or it isn't.
Maybe not, but I think that overextending any idea like that, in the direction opposite to whatever point you are trying to make, devolves into a "slippery slope" argument. For instance, is your point that all security on phones that impedes the freedom of the user (for instance, HTTPS, a forced password on initial startup, not allowing apps to access certain parts of the phone without user permission, verifying boot image signatures) should be removed as well?
No, that's not my point at all. Measures such as that are a tool which is in the hands of the user. There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs. What Google is proposing takes control out of the user's hands and makes Google the sole arbiter of what is and is not allowed on the device.
None of the measures I mentioned are changeable by the user, except possibly sideloading an HTTPS certificate. That's the only way any of those measures even work; if they weren't set as invariants by the OS, they would be bypassable.
>There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs.
But this is what the other guy's point is. You are defining "good enough for most cases" in a way that he is not, then making the argument that what he says is equivalent to not allowing an alcoholic to buy beer. Why can you set what level is an acceptable amount of restriction, but he can't?
Protecting from scams isn't protection from the victim themselves. That should be obvious from the fact that very intelligent and technologically literate people can fall for phishing attacks too. Tell me, for example: how many people in your life know how a bank would ACTUALLY contact you about a suspected hijacking, and what the process should look like? And how about any of the dozens of other cover stories used? Not to mention the situations where the scammers can use literally the same method of first contact as the real thing (e.g. spoofed caller IDs).
...And the fact that for example email clients do their best to help them by obscuring the email address and only showing the display name, because that's obviously a good idea.
> Protecting from scams isn't protection from the victim themselves.
That is where we differ. It is, ultimately, the victim of a scam who makes the choice of "yes, this person is trustworthy and I will do what they say". The only way to prevent that is to block the user from having the power to make that decision, which is to say protecting them from themselves.
But the proposal here, requiring developers to register their identities, doesn't actually impact consumers at all. They still have the ability to make the decision about whether or not to trust someone.
None of these things requires "locking down phones." Every single thing you've mentioned can be done in a smarter way that doesn't involve "individuals aren't allowed to modify the devices they purchase."
The alcoholic knows the bad outcomes and chooses to ignore them. The hapless Android user does not understand the negative consequences of sideloading. I think this makes for a substantial difference between the two.
> The hapless Android user does not understand the negative consequences of sideloading.
Then make sideloading disabled by default but enable it when the user taps 7 times on some settings item. At that point, explain those "negative consequences" to them, explain them real good, don't spare anything, and if they still hit "Yes, continue to enable sideloading", do that immediately instead of increasing their haplessness with other made-up excuses.
There is some world where somebody scammed through sideloading loses their life savings, and every country is politically fine with the customer, not the bank, taking the losses.
But for regular people, that is not really the world they want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves or the tax authority, people politically want the bank to reimburse.
Then the question becomes not if the user trusts the phone’s software, but if the bank trusts the software on the user’s phone. Should the bank not be able to trust the environment that can approve transfers, then the bank would be in the right to no longer offer such transfers.
If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want and it's the world as it should be.
If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.
The legal infrastructure for banking and securities ownership has long had defaults for liability assignment.
For securities, if I own stock outright, the company has to indemnify if they do a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker. MSGs thereby require a lengthy banking relationship and probably showing up in person.
For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.
As far as I know, these liabilities are never waived. Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
These defaults are probably unknown for most people, even those with large amounts of securities. The system is expected to work since it has been set up this way.
Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.
> Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
But that is expensive, so my impression is that for non-sizable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to try and outsource as much liability as possible onto end users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted.
b) is how we end up with developer registries and remote attestation. And the sad thing is, it scales well - if device and OS vendors cooperate (like they do today), they can enable "endpoint security" for everyone who seeks to externalize liability.
Why do banks go through all the know-your-customer (KYC) process if not to identify the beneficial owner of every account? If they receive a transfer via fraud, then they either get it clawed back, have to pay it back, and/or get identified to law enforcement. If the last bank in the chain doesn't want to play by the rules, then other banks shouldn't transfer into them, or that bank itself should be held liable.
This is more or less how people expect things to work today ....
In the case of a knowing or unknowing money mule in the chain, or at the end of it, the intermediary or final banks may not be at fault. The bank could have followed KYC procedures, in that somebody with that name actually existed who controlled the account.
The money mule themselves is almost certainly insolvent and unable to pay the damages. The money mule can also change currencies (either to a different fiat currency or to crypto), putting the ultimate link completely out of reach of the originating country.
If intermediary banks are deputized and become liable in a no-fault sense, then legitimate transfers out become very difficult. How does a bank prove a negative for where the funds come from? De-banking has already been a problem for a process-based AML regime.
Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right?
Absolutely!
Now, with all that money and power -- they -- whoever THEY are, need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices.
We have protections like this elsewhere - even when they have some "ownership." The bank kinda owns my house, but they still can't come in whenever they want.
The point is "a warning" is not enough to communicate to people the gravity of what they are doing.
It is not enough to write "be careful" on a bag you get from a pharmacy... certain medications require you to both have a prescription, and also to have a conversation with a pharmacist because of how dangerous the decisions the consumer makes can be.
Normal human beings can be very dumb. It's entirely reasonable to expect society to try to protect them at some level.
OK so make the warning more annoying. Have a security quiz. Cooldown period of one day to enable. Require unlock via adb connected to laptop.
There are alternative solutions if the true goal is maintaining user freedom while protecting dumb users. But that is not the true goal of the upcoming changes.
- Don't reset it every 5 days / 5 hours / 5 dBm blip in Wi-Fi strength, because this pretty much defeats end-user automation, whether persistent or event-driven. This is the current situation with "Wireless Debugging", an otherwise cool trick for "rootless root", if only it didn't require being connected to Wi-Fi (and not just any Wi-Fi, but the same AP, breaking when the device roams in multi-AP networks).
- Don't announce the fact that this is on to everyone. Many commercial vendors, including those who shouldn't and those who have no business caring, are very interested in knowing whether your device is running with debugging features enabled, and if so, deny service.
Unfortunately, in a SaaS world it's the service providers that have all the leverage - if they don't like your device, they can always refuse service. Increasingly many do.
I do? It's a trivially comparable thing? I'm not even talking about ALL prescription drugs. I'm talking about the fact that some have interactions that can kill you. Having "life savings gone" consequences from a random app install is that level of danger.
A non-trivial number of people should probably have to go see a specialist before being able to unlock sideloading in my opinion... which means we probably all would have to. It's annoying, but I actually care about other people.
I have a hard time with this because it's the world we've lived in forever. Everyone knows installing an "app" installs an executable.
Doesn't Android require a specific user-accepted permission for an installed app to read notifications? I think it's separate from the POST_NOTIFICATIONS permission.
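It does, roughly: reading notifications requires a dedicated NotificationListenerService, and the user must explicitly grant "Notification access" in system settings, separately from the POST_NOTIFICATIONS permission an app needs just to show its own notifications. A hedged sketch of the manifest declaration (the service class name is hypothetical):

```xml
<!-- Hypothetical listener service. The android:permission attribute ensures
     only the system can bind to it, and the user must still grant
     "Notification access" in system settings before it receives anything. -->
<service
    android:name=".CaptureListenerService"
    android:permission="android.permission.BIND_NOTIFICATION_LISTENER_SERVICE"
    android:exported="true">
    <intent-filter>
        <action android:name="android.service.notification.NotificationListenerService" />
    </intent-filter>
</service>
```

Which is why the scam described upthread has to coach the victim through granting that setting by hand.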
This seems to be an issue of user literacy. If so, doesn't it make more sense for a user to have the option to opt into "I'm tech illiterate, please protect me" than destroy open computing as we know it?
You can add 5 layers of "are you sure you want to do this unsafe thing" and it just adds 5 easy steps to the scam where they say "agree to the annoying popup"
You could even make this an installation-time option. If you want to enable the switch afterwards, you have to do a factory reset. Then, the attackers convincing the victims would get nothing.
Or make sideloading available only 24 hours after enabling it. I would enable it on my new devices and wait 24 hours before installing F-Droid and other apps. Not a problem. Scammers might wait one day too, but it decreases the chances of success because friends and family members can interfere.
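A sketch of how such a cooldown gate might work (the 24-hour constant and function name are hypothetical illustrations, not any real Android API):

```python
COOLDOWN_SECONDS = 24 * 60 * 60  # hypothetical 24-hour waiting period

def sideloading_allowed(enabled_at: float, now: float) -> bool:
    # Installs from unknown sources stay blocked until the full cooldown
    # since the user flipped the toggle has elapsed, which defeats the
    # scammer's "act right now" urgency on the phone call.
    return now - enabled_at >= COOLDOWN_SECONDS
```

The point of the delay is social, not cryptographic: it buys time for the victim to talk to someone before the trap closes.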
But I'm afraid that this is security theater and the true goal is to protect revenue by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).
> But I'm afraid that this is security theater and the true goal is to protect revenue by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).
It's not just them. Every other SaaS, from banks to media providers to E2EE[0] chat clients to random apps whose makers feel insecure, or are obsessed with security [theater] best practices, just salivate at the thought of being able to check if you're a deviant running with root or debugging privileges, all because ${complex web of excuses that often sound plausible if you don't look too closely}. There's a huge demand for device attestation, remote or otherwise.
In the case of most of those businesses it's only because they must tick checkboxes on a regulatory compliance sheet and/or deflect blame onto someone else. The problem is that this is a never-ending spiral of regulation after regulation and new ways to deflect blame, so after device attestation fails to solve all of their problems they'll end up pushing something else.
That's... brilliant. Enough work to not be able to talk it through over the phone to someone non-technical. A sane default for people who don't know about security. And a simple enough procedure for the technically minded and brave.
It solves the 'smartest bear / dumbest human' overlap design concern in this situation.
The reality in Southeast Asia doesn't support that. You're assuming that potential victims are either able to use an alternative to Android, or willing and able to educate themselves about scams. The reality in these countries is that neither is the case in practice. Daily lives depend a lot on smartphones, and they play a big role in cashless financial transactions. Network effects play a big role here. Android devices are the only category that is both widely available and affordable.
Education is also not that effective. Spreading warnings about scams is hard and warnings don't reach many people for a whole laundry list of reasons.
The status quo is decidedly not fine. Society must act to protect those that can't protect themselves. The only remaining question is the how.
Google has an approach that would work, but at a high cost. Is there an alternative change that has the same effects on scammers, but with fewer issues for other scenarios?
The status quo may not be perfect but it is the best we can do. We try to educate people about scams. We give them warnings that what they are doing can be dangerous if misused. If they choose to ignore those things and proceed anyway, the only further step society could take is to take away the person's freedom to choose. And that is an unacceptable solution.
The original post lays out why it's not possible to do well: privacy apps, sanctioned countries, apps made by people for themselves to avoid clouds and third parties, etc.
Simple example: I have a FOSS VPN app running on my phone to avoid censorship and surveillance in some countries I visit. While using this app is no problem, non-anonymous development might carry consequences for the developer in some dictatorial jurisdictions (of which there are plenty). I'm not sure all devs of such systems would be willing to give their IDs.
Another example is that this way the US can cut countries and people they don't like off from mobile usage (which is basically equal to modern social life). Look at the sanctioned judges of the International Criminal Court, because the US protects war criminals.
Society takes away individual's freedom to choose all the time. You can't choose not to pay your taxes. You can't choose to board a passenger plane without passing a security check. You can't just get a loan without any guarantees to the bank etc.
Education isn't really working at this global scale. It doesn't reach people the way you seem to believe it does. Many, if not most, people are generally disinterested in learning new things, and this gets amplified when it involves technology.
> Life is not safe, nor can it be made safe without taking away freedom.
So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance. They actually have tons and tons of safety regulations that save tons and tons of lives, even if from your point of view that means not "treating people as adults". You have to wear a seatbelt, even if you feel like you're not being treated like an adult. Because it's not just your own life you're putting at risk, but your passengers' as well.
You're taking the most extreme libertarian stance possible. Thank goodness that's an extremely minority view, and that the vast, vast majority of voters do actually think safety is important.
Your post is addressing a strawman, not what I said. But to answer the words you so ungraciously put in my mouth:
> So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
This is harm to others and is very obviously something we should enforce. There are unreasonable laws about food (banning the sale of raw milk cheese for example, which most of the world enjoys with perfect safety), but by and large they are unobjectionable.
> You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance.
I never said I was opposed to striking a balance. Of course we can strike a balance. Indeed we already have when it comes to installing apps on Android. But these measures are being advanced as if safety were the only consideration, which it isn't.
> You're taking the most extreme libertarian stance possible.
No, that is what you have projected onto me. That's not actually what my stance is.
> Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world... Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom.
That sounds pretty black and white extreme to me, when you talk about things like "life is not safe" and a "fundamental truth". I don't see any appreciation of balance there.
Maybe it's not what you meant to write, but your comment continues to absolutely come across as extremist and anti-balance to me. It seems like I was mischaracterizing what you actually believe (now that you've elaborated), but I don't think I mischaracterized what you wrote.
Your analogy is terrible because it doesn't do a proper accounting of "harm" and "risk."
Food and seatbelts, that's literal health and life-and-death; very immediate and visible.
"Cybersecurity" rarely is; and even when it is, the problem is that the centralized established authorities (like Google) aren't at all provably good at this.
If those bad decisions have a lot of higher order effects and they turn out to be very costly for society, then limiting freedom seems worth it.
And it seems Google thinks society is beginning to unravel in SEA due to scammers. Trust breaks down, people stop using phones to do important things, GDP can shrink, banks go back to cheques, trees will be cut down!!
It's bad to let people go and catch the zombie virus and then come back and spread it, right?
...
I don't like it, but the obvious decision is to set up a parallel authority that can issue certificates to developers (for sideloading), so we don't have to trust Google. Let the developer community manage this. And if we can't, then Google can revoke the intermediate CA. And of course Google and other manufacturers could sell development devices that are unlocked, etc.
This is a terrible response coming from a software developer, by the way. You can use it to ignore any security concern.
It signals that you don't care much about security, and that you don't care about non-technical users, and don't even have the capacity to see how they view a system.
Sure, you can analyze domain names effectively, you can distinguish between an organic post and an ad, you know the difference between Read and Write permissions to system files, etc...
But can you put yourself in the shoes of a user that doesn't? If not, you are rightfully not in a position to be a steward of such users, and Google is.
Cars worked fine without seatbelts too. Just because the world goes on doesn't mean we can't do better.
Taking a step back though, I suspect there are cultural differences in approach here. Growing up in Europe, the idea of a regulation to make everyone safer is perfectly acceptable to me, whereas I get the impression that many folks who grew up in the US would feel differently. That's fine! But we also have to recognise these differences and recognise that the platforms in question here are global platforms with global impact and reach.
OTOH the controlling way modern software behaves is a US artifact, so the differences are not necessarily as clear-cut as this.
I grew up and live in Europe. I support the general idea of "regulation to make everyone safer" being an acceptable choice. At the same time, I vehemently oppose third-party interests reaching into my computing device and dictating what I can vs. cannot do with it.
But as you say, "global platforms with global impact and reach" - and so I can't set up my phone to conditionally read out text and voice messages aloud, because somewhere on the other side of the world, someone might get scammed into installing malware, therefore let's lock everything down and add remote attestation on top.
Unfortunately, the problem is political, not technological, and this here is but one facet of it. Ultimately, what SaaS does is give away all leverage: as users, it doesn't matter if we fully own the endpoints, or have a user-friendly vendor: any SaaS can ultimately decide not to serve a client that doesn't give the service a user-proof beachhead.
It might not "solve" the problem, but I'd expect it to significantly address the problem no?
I've heard much criticism of it being too heavy-handed, but I don't think I understand criticism that it won't improve security. Could you expand on that?
No. You seem to be implicitly arguing that unsigned apps are inherently less trustworthy than Play Store apps. That's a claim that needs to be proven first. And based on the huge amount of documented data exfiltration performed by Google-approved apps, I'm going to say that claim is false.
the problem is that in developing countries smartphones are a massive technology jump for people who lack the education to even have a clue what's going on. treating people as adults does not work if they don't have the education needed for that.
these people aren't gullible. they are ignorant (in the uneducated sense). they are not making bad decisions. they are not even aware that there is a decision to be made.
and worst of all, this problem affects the majority of those populations. if more than half of our population was alcoholic then we absolutely would restrict the access to alcohol through whatever means possible.
it's a pandemic. and we all know what restrictions that required.
To add to that, I think it's important to point out that the problem of people not understanding how to safely use their devices is in big part caused by technology companies racing to get the widest adoption everywhere, both in terms of location and in terms of industries. I'm not against "intuitive UX design" in general, but at its extreme, it just fuels incompetence. We shouldn't now let them pick the most convenient option, the option that just happens to also increase their power over the users, as a way to "fix" the problem.
> how is a UI designed that doesn't fuel incompetence?
I'm specifically talking about UX ("how a user interacts with and experiences a product, system, or service"), not necessarily UI.
> how does it do that? (i am not getting hung up on "intuitive", i just mean you argue that the currently used design fuels incompetence)
tl;dr We have a product, we want to make money, we need people to use the product. One of the things that stands in the way is people not understanding how to use our product. We will make sure they can get started as fast as possible, and not mention how they may hurt themselves with the product; that would scare them away. Hurting yourself with our product is in the broad "don't do stupid things" category. We will never explain the "framework" (in the case of an OS I mean apps: that apps can interact with each other and your data, and how you can or cannot control that), even in broad terms. Just click this button and get your solution.
It started with PCs and people not understanding how to not lose their documents. Now that every device is connected to the internet, the problem became worse.
You can now say that "sideloading" is stupid anyway, but this is not the only problem. Another thing that people still usually learn by painful experience is backups. There are fake apps, on both stores. Another thing, in-band signaling. You cannot trust email, phones, whatsapp, messenger... Even if your friend you often chat with is messaging you, they could've just been hacked.
Try to explain that you also cannot trust websites and that even technical people don't have a good way of telling if an email or a website is real.
But at least enrollment is fast and adoption metrics are growing. Since we are already in "move fast and break things" mindset, we will think about fixing such issues when it actually becomes a problem.
To be clear, I'm not saying that making technology easy is always bad, or that you should always expose the user to "the elements" and expect them to pipe commands in the shell. But I think that often the focus is only on making enrollment fast. "Get started"
What if we actually expected people to understand something about technologies they want to use?
> What if we actually expected people to understand something about technologies they want to use?
but that's what we have now, and it's not working.
the implied question is: what if we don't allow people to use technology unless they can demonstrate that they understand it?
is that really something we want to do? this sounds like gatekeeping, elitism, and anti-innovation, because if fewer people are going to use a technology, then there is less motivation to build it.
remember, i think it was someone at IBM that said that the potential for computers is some small number? and then it grew beyond anyone's wildest expectations?
do you think that would have happened if we had required understanding before we let anyone buy a home computer?
besides education, i don't know how to approach this issue.
> but that's what we have now, and it's not working.
My entire point is that education is the opposite of what we have now. That users are not expected to understand or know anything about IT technologies they use. Not the case with cars, recreational and prescription drugs...
> the implied question is: what if we don't allow people to use technology unless they can demonstrate that they understand it?
It's not exactly my point, but in extreme cases, maybe. I genuinely think that nobody has even tried to educate people about computers. Like, have you seen IT classes in schools? Assuming you are lucky enough for the classes to have any content, you will probably get some lessons in Word and Excel. Maybe some programming. Maybe Paint. But actually using the computer? Dangers of the internet, importance of backups, trusting websites, applications and emails? The concept of application and difference between applications and websites? And those technologies are not "developing" like they were 20 years ago, they are probably here to stay.
> is that really something we want to do? this sounds like gatekeeping, elitism, and anti-innovation, because if fewer people are going to use a technology, then there is less motivation to build it.
And the alternative Google and Apple present is giving them paternalizing control over the most popular computing device: a say over what people can do with their devices. After they made sure that these devices are embedded into our lives.
I would much rather we slowed down with innovation for a second and resolved such issues first, because the way I see it, it's literally manipulation (also see: dark patterns).
As for the gatekeeping and elitism - assuming we want a "computing license" (not necessarily what I'm arguing for), is a "driving license" also gatekeeping and elitism? Or maybe some amount of gatekeeping is good?
As for anti-innovation - I genuinely think we might have had just enough innovation in the field and it may be time to slow down a little, take a step back and evaluate the results. And I honestly don't see much innovation in apps/computers/web space besides maybe AI, and governments are already working on regulating that.
> do you think that would have happened if we had required understanding before we let anyone buy a home computer?
Home computers were very harmless before the internet, but that's an aside. Assuming the tech is actually useful, not just slightly more convenient than "traditional" alternatives, then yes, I'm sure it would have still grown to sizes it has grown to today. Maybe a bit slower.
> besides education, i don't know how to approach this issue.
Same, I generally do think this whole situation needs more consideration.
> Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.
this is not about moral busybodies. it's not even a moral issue. it's an existential issue. this is about demands from the population to be safe from scams. those scammers ruin lives. do you think those people really prefer to be scammed and lose their life savings?
the correct solution is of course education, but education takes time. we can educate today's children so that they can protect themselves in the future. but that's the next generation. for the current generation that kind of education is too late.
the proposed solution is a stopgap measure. do you have a better idea how to solve the problem? (maybe putting more effort into prosecution, but that costs money. or making banks responsible for covering the loss. but then you'll get banks demanding the protection. tyranny of the banks then? is that any better? that's actually happening in europe now.)
not doing anything will hurt a lot of people and make them unhappy. as a government you really don't want that either.
> but I think the community needs a better response
The community does not need to do that. Installing software on my device should not require identification to be uploaded to a third party beforehand.
We're getting into dystopian levels of compliance here because grandma and grandpa are incapable of detecting a scam. I sympathize, not everyone is in their peak mental state at all times, but this seems like a problem for the bank to solve, not Android.
> Why the hell does App A need access to data or notifications from App B.
Advertising networks. Just like how you see crap like a metronome app have a laundry list of permissions that it doesn’t need. Some cases they are just scammy data harvesters, but in other cases it’s the ad networks that are actually demanding those permissions.
Google won’t sandbox properly because it’s against their direct business interest for them to do so. Google’s Android is adware, and that is the fundamental problem.
> the malware captures their two-factor authentication codes
Aren't we supposed to have sandboxing to prevent this kind of thing? If the malware relies on exploiting n-days on unpatched OSes, they could bypass the sideloading restrictions too.
Codes arrive via SMS, which is available to all apps with the READ_SMS permission. This isn't an OS vuln. It is a property of the fact that SMS messages are delivered to a phone number and not an app.
On the Play store there is a bunch of annoying checking for apps that request READ_SMS to prevent this very thing. Off Play such defense is impossible.
There are about a half dozen permissions that are regularly abused by malware. These permissions are also extremely useful for a ton of completely legitimate features.
I am pretty confident that if Google had enabled this policy only for apps which use these permissions that the community would still be upset.
I use an app[0] to do scheduled exports of my SMS (which I rsync to my IMAP server and import into my mailbox for a "single pane of glass" view of my communication). I certainly don't want to lose this functionality.
> There are about a half dozen permissions that are regularly abused by malware. These permissions are also extremely useful for a ton of completely legitimate features.
> I am pretty confident that if Google had enabled this policy only for apps which use these permissions that the community would still be upset.
I like the idea of requiring extra work to get notification access. But really, what all these scams prey on is time sensitivity; take that away and you solve the problem in many ways. For example, your bank shouldn't let you drain your account without either being in person or having a mandatory 24hr waiting period. The same could be done with sideloaded apps getting notifications: if it's sideloaded and wants to read notifications, then it needs to wait 24 hrs. Mostly it won't ever matter.
Alternatively, reading notifications could be opt-in per app, so the reading app needs permission to read your SMS app's notifications, or your bank's notifications. That would not be as foolproof, as it requires some tech literacy to understand.
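The 24-hour waiting period proposed above could be sketched as a simple policy check. Purely illustrative: the install-source labels, the grace period, and the function itself are invented for this comment, not any real Android API.

```python
from datetime import datetime, timedelta

# Hypothetical policy: sideloaded apps requesting notification access
# only get it after a 24-hour cooling-off period; store-installed apps
# get it immediately.
GRACE_PERIOD = timedelta(hours=24)

def notification_access_granted(install_source: str,
                                requested_at: datetime,
                                now: datetime) -> bool:
    """Return True if the app may read notifications at `now`."""
    if install_source == "play_store":
        return True  # trusted channel: no delay
    # sideloaded: enforce the waiting period proposed above
    return now - requested_at >= GRACE_PERIOD

req = datetime(2025, 1, 1, 12, 0)
print(notification_access_granted("sideload", req, req))                        # False
print(notification_access_granted("sideload", req, req + timedelta(hours=25)))  # True
print(notification_access_granted("play_store", req, req))                      # True
```

The point being: a scammer coaching someone on the phone needs the victim to act right now, so even a coarse delay like this defeats the time-pressure playbook without blocking anything permanently.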
I am the author of the letter and the coordinator of the signatories. We aren't saying "nuh uh, everything's fine as it is." Rather, we are pointing out that Android has progressively been enhanced over the years to make it more secure and to address emerging new threat models.
For example, the "Restricted Settings"¹ feature (introduced in Android 13 and expanded in Android 14) addresses the specific scam technique of coaching someone over the phone to allow the installation of a downloaded APK. "Enhanced Confirmation Mode"², introduced in Android 15, adds further protection against potentially malicious apps modifying system settings. These were all designed and rolled out with specified threat models in mind, and all evidence points to them working fairly well.
For Google to suddenly abandon these iterative security improvements and unilaterally decide to lock down Android wholesale is a jarring disconnect from their work to date. Malware has always been with us, and always will be: both inside the Play Store and outside it. Google has presented no evidence to indicate that something has suddenly changed to justify this extreme measure. That's what we mean by "Existing Measures Are Sufficient".
The App Store does contain malware, although arguably less than the Play Store. Apple devices would be much more secure without the App Store. Apple should remove the App Store.
There could be many other factors, like abysmal patch policies. Many vendors still only do Android Security Bulletins (which are only vulnerabilities marked as high and critical), do them late (despite a three month embargo for patches), very delayed device firmware updates, and sometimes only for two or three years.
Many Android phones still do not have a separate secure element.
Also, the Play Store itself regularly contains malware.
In the end it is mostly about control, dressed up as protecting users. If it was about security, Google would support GrapheneOS remote attestation for Google Pay (for being the most secure Android variant) and cut off many existing phones with deplorable security.
Not OP, but my experience was most of the malware-like apps on App Store were top ads of apps with names similar to the original ones: such as Whatsapp or Office.
I guess it's too late now, but I think "sufficient" is much too strong a word to use for that position, and puts Google in a position where they can disregard you because they "know" that existing measures aren't "sufficient."
Like you said, for years now they have added more and more restrictions to address various scams. So far none of them has had any effect, other than annoying users of legitimate apps, because all the new restrictions were on the user side. This new approach restricts developers, but is actually a complete non-issue for most, since the vast majority of apps are distributed via Google Play already.
In the section "Existing Measures Are Sufficient." your letter also mentions
> Developer signing certificates that establish software provenance
without any explanation of how that would be the case. With the current system, yes, every app has to be signed. But that's it. There's no certificate chain required, no CA-checks are performed and self-signed certificates are accepted without issue. How is that supposed to establish any form of provenance?
If you really think there is a better solution to this, I would suggest you propose some viable alternative. So far all I've heard for the opponents of this change is, either "everything is fine" or "this is not the way", while conveniently ignoring the fact that there is an actual problem that needs a solution.
That said, I do generally agree with you that mandatory verification for *all* apps would be overkill. But that is not what Google has announced in their latest blog posts. Yes, the flow to disable verification and the exemptions for hobbyists and students are just vague promises for now. But the public timeline (https://developer.android.com/developer-verification#timelin...) states developer verification will be generally available in March 2026. Why publish this letter now and not wait a few weeks so we can see what Google actually is planning before getting everybody outraged about it?
Because without this early resistance, there wouldn't even be vague promises of hobbyist/student exemptions. I think it's important to make community objection to the entire idea known loud and clear, especially when changes like these are absolutely ratcheting.
Starting from their first announcement of this, Google has explicitly asked for comments and feedback from affected developers. They have a Google Form for exactly that linked on all the announcement pages.
The exemptions for students/hobbyists were always promised, but the "advanced flow" came later based on this feedback. AFAICT Google has, so far, only made things better after the initial announcement. I don't see why we shouldn't give them the benefit of the doubt, at least until we have some specifics.
Pushing this open letter out just days/weeks before Google promised the next major update just seems off.
- From April 2025 there's https://blog.google/company-news/inside-google/around-the-gl... a blog post from a “VP, Government Affairs & Public Policy”, which mentions “people in Asia Pacific feel it acutely, having lost an estimated $688 billion in 2024” (I think this may be across all scams?) and ends with “Combatting evolving online fraud in Asia-Pacific is critical” after listing a bunch of random things (unrelated to Android) Google is/was doing. This suggests to me that Google was under some criticism/pressure from governments for enabling scams, and eager to say “see, we're doing something”.
> In early discussions about this initiative, we've been encouraged by the supportive initial feedback we've received. In Brazil, the Brazilian Federation of Banks (FEBRABAN) sees it as a “significant advancement in protecting users and encouraging accountability.” This support extends to governments as well, with Indonesia's Ministry of Communications and Digital Affairs praising it for providing a “balanced approach” that protects users while keeping Android open. Similarly, Thailand’s Ministry of Digital Economy and Society sees it as a “positive and proactive measure” that aligns with their national digital safety policies.
This shows that it was a negotiation with the governments/agencies in Brazil, Indonesia, Thailand that were breathing down on Google to do something.
- And the most recent https://android-developers.googleblog.com/2025/11/android-de... from November 2025 (which promised the “students and hobbyists” account type and the “experienced users” flow “in the coming months”) also has a “Why verification is important” section that mentions the “consistently acted to keep our ecosystem safe” and “common attack we track in Southeast Asia” and “While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly”.
The overall picture I get is less of “Google to suddenly abandon these iterative security improvements” but more like: under pressure from governments to stop scams, Google has been doing various things like the things you mentioned, and scammers have also been evolving and finding new ways to carry out scams at scale (like “impersonating developers”), and the latest upcoming change requiring developer verification on “certified Android devices” is simply the next step of the iteration. It sucks and feels like a wholesale lock-down, yes, but it does not seem a jarring disconnect from the previous steps in the progression of locking things down.
Google's announcement is just trolling, there's an order of magnitude more scams on the Play store and they don't call for its closure.
Right now when I search for "ChatGPT", the top app is a counterfeit app with a fake logo, is it really this store which is supposed to help us fight scams?
> Right now when I search for "ChatGPT", the top app is a counterfeit app with a fake logo, is it really this store which is supposed to help us fight scams?
Just did a Play search for "ChatGPT" and the top two results were for OpenAI's app (one sponsored by OpenAI, one a regular search result). So anecdotally your results may vary.
Make the warning a full screen overlay with a button to call local police then.
(Seriously)
"but local police won't treat that seriously..." "the victim will be coached to ignore even that..." well no shit then you have a bigger problem which isn't for google to fix.
Maybe we should take away peoples' phone calls, ability to use knives, walking on the street, swimming in water, drinking liquids of any kinds, alcohol, trains, while we are at it.
> I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
People choosing between the smartphone ecosystems already have a choice between the safety of a walled garden and the freedom to do anything you like, including shooting yourself in the foot.
You don't spend a decade driving other "user freedom" focused ecosystems out of the marketplace, only to yank those supposed freedoms away from the userbase that intentionally chose freedom over safety.
There will _always_ be a need to balance between safety and the cost of adding more safety. There is no point at which safety is complete; there is always more that can be done, but the cost gets higher and higher.
So yes, "it's fine the way it is" _is_ valid; but the meaning is "we're at a good point in the balance; any more cost is too much given the gains it generates."
That attack vector is just a symptom. It’s unfathomably foolish to use two-factor authentication via something as easy to intercept as SMS. Two-factor authentication should be done using a separate hardware token that generates time-based one-time codes. Anything else is basically security theater.
One time codes are still vulnerable to phishing by a site that proxies the bank's authentication challenge. You need something like FIDO2 where a challenge-response only works when the relying party ID is correct.
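To make the phishing-proxy point concrete, here is a minimal Python sketch. All keys and names are invented, and HMAC with a shared secret stands in for real cryptography: actual FIDO2 uses per-origin asymmetric credentials. The idea is that a TOTP code contains nothing naming the site the user typed it into, so a proxy can relay it to the real bank, while a signature that covers the relying-party ID fails when it was produced for the wrong origin.

```python
import hmac, hashlib, struct

SECRET = b"device-secret"  # illustrative stand-in key

def totp(secret: bytes, t: int) -> str:
    """RFC 6238-style code: depends only on the secret and the time."""
    msg = struct.pack(">Q", t // 30)
    d = hmac.new(secret, msg, hashlib.sha1).digest()
    o = d[-1] & 0x0F
    code = (struct.unpack(">I", d[o:o + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"

# Nothing in the code identifies the site it was entered on, so a
# phishing proxy can simply relay it to the real bank in real time.
t = 1_700_000_000
assert totp(SECRET, t) == totp(SECRET, t)  # same code wherever it's typed

def sign_assertion(secret: bytes, challenge: bytes, rp_id: str) -> bytes:
    """FIDO2-style: the signature covers the relying-party ID."""
    return hmac.new(secret, challenge + rp_id.encode(), hashlib.sha256).digest()

challenge = b"bank-nonce-123"
good = sign_assertion(SECRET, challenge, "bank.example")          # real origin
phished = sign_assertion(SECRET, challenge, "bank-evil.example")  # proxy's origin
print(good == sign_assertion(SECRET, challenge, "bank.example"))  # True: verifies
print(phished == good)                                            # False: proxy fails
```

In real FIDO2 the browser, not the page, tells the authenticator which origin it is talking to, which is exactly why the proxy cannot obtain a valid assertion for the bank's domain.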
>A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? Or requiring an expensive "extended validation" certificate for developers who choose not to register...?
I think my overriding concern is not nuking F-Droid. I actually think that's a great solution and, interestingly, F-Droid apps already don't use significant permissions (often no permissions at all!) so that might work. Also it would be good if perhaps F-Droid itself could earn a trusted distributor status, if there's a way to do that.
Or a marriage of the two, F-Droid can jump through some hoops to be a trusted distributor of apps that don't use certain critical permissions.
I think there have to be ways of creatively addressing the issue that don't involve nuking a non-evil app distribution option.
> In Google's announcement in Nov 2025, they articulated a pretty clear attack vector.
If you can be convinced by this, you can be convinced by anything. What if the scammer uses "fear and urgency" to make the person log onto their bank account and transfer the funds to the scammer?
If you can convince people to install new apps through "fear and urgency," especially with how annoying it often is to do outside of the blessed google-owned flow (and they're free to make it more annoying without taking this step), that person can be convinced of anything.
> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
There's no other "solution" other than control by an authority that you totally trust if your "threat" is that a user will be able to install arbitrary apps.
The manufacturer, service provider, and google, of course, won't be held to any standard or regulations; they just get trusted because they own your device and its OS and you're already getting covertly screwed and surveilled by them. Google is a scammer constantly trying to exfiltrate information from my phone and my life in order to make money. The funny thing is that they are only pretending to defend me from their competition - they're not threatened by those small-timers - they're actually "defending" me from apps that I can use to replace their own backdoors. Their threat is that they might not know my location at all times, or all of my contacts, or be able to tax anyone who wants access to me.
I wonder if putting this choice on the user would be most appropriate?
People fearful about being scammed should buy a phone with a hardware lock to prevent it from ever accepting sideloads--no option to go to dev mode, ever. You could even charge more for the extra security.
People who want the freedom to sideload can choose to buy a phone without the extra hardware security feature.
I have a radical solution - it should not be possible to contact someone unsolicited.
All phone calls, SMS, emails, and instant messages should be blocked unless the other party is in my contacts or I have reached out to them first (plus opt-in contact from contacts of contacts, etc). Ideally, cryptographically verified.
I would argue this is the real solution to spam and scamming - why on earth are random people allowed to contact me without my consent? Phone numbers or email addresses being all you need to contact me should be an artifact of an earlier time, just like treating social security numbers as secret.
I realize this isn't super practical to transition existing systems to (though spam warnings on email and calls helps, I suppose, and maybe it could be made opt-in). I dearly hope the next major form of communication works this way, and we eventually leave behind the old methods.
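A minimal sketch of this allow-list idea (all names invented; HMAC with a shared key stands in for real public-key verification): a message is delivered only if the sender is a known contact and the message authenticates under a key exchanged when the contact was first added.

```python
import hmac, hashlib

# Hypothetical contact list mapping sender -> key exchanged at add time.
contacts = {"alice": b"key-exchanged-when-alice-was-added"}

def tag(key: bytes, body: str) -> str:
    """Authentication tag a contact attaches to each message."""
    return hmac.new(key, body.encode(), hashlib.sha256).hexdigest()

def accept(sender: str, body: str, sig: str) -> bool:
    """Deliver only verified messages from known contacts."""
    key = contacts.get(sender)
    if key is None:
        return False  # unsolicited: dropped outright
    return hmac.compare_digest(sig, tag(key, body))  # forged: dropped too

msg = "lunch tomorrow?"
print(accept("alice", msg, tag(contacts["alice"], msg)))  # True
print(accept("stranger", "you won a prize", "whatever"))  # False
print(accept("alice", msg, "forged-signature"))           # False
```

The last case is the "your friend could've just been hacked" scenario from upthread: even a known sender's message is dropped if it doesn't verify, which is the part plain caller-ID-style allow-lists can't give you.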
I have an even more radical solution. The real root of the problem is that we use this "money" concept to represent value. If money didn't exist there wouldn't be any reason to steal, hack, or scam.
What do we replace it with? Haha, idk man. How about water? More difficult to hoard in ridiculous quantities, better spend it before it evaporates, and it occasionally falls from the sky (UBI). That's what I call a liquid asset!
Ah this explains why so many banks are making their own 2FA apps with warnings to never share the codes. Well a lot of people are very annoyed to install them because they perceive it as a technological downgrade when it's the opposite. I can only imagine asking them to use passkeys or hardware keys would be difficult, especially if there is some FUD (or truth?!) about how $boogeyman has your keys if you use them.
Are you not aware of cases where marks physically went to the bank, withdrew all cash and dropped it off to the criminals, also taking out loans and yelling at bank employees when they were trying to stop them? No app involved.
You'll always find individual cases where people do extremely dumb stuff, but using that as a justification is also dumb. If you want to significantly curtail that freedoms of a large group, it's on you to come up with a good evaluation of tradeoffs, so
> the community needs a better response to this problem than "nuh uh, everything's fine as it is."
They already have, but you choose to use a fake simplification as a representative.
It matters to me because I'm reading it now and feel more informed about this problem. Throwing the towel in and saying it's all pointless isn't helpful.
It's not throwing in the towel, it's about doing things that we the people can actually do.
One thing, we the people can do, is pressure our politicians to break up Google along with the rest of big tech.
There are many primary challengers this cycle that are running anti-monopoly platforms. Help their cause, signing pointless petitions is just West Wing style fantasy that is extremely childish.
It's something apps that will soon break can point their users to so they know to blame Google and a bunch of incompetent governments.
Google will not change their minds, they're too busy buying goodwill from governments by playing along. There aren't any real alternatives to Android that are less closed off and they know it.
Because the company either has to address it, or stop pretending it's "listening to concerns" or whatever. Even if it doesn't change the outcome, it makes it clearer that the company is engaging in bad faith.
For me this change is a problem not just because of the ID upload to Google but mainly because it's another nail in the coffin of native software solutions. It increases friction and anything that increases friction is bad.
Concretely, my original plan was to provide an .apk for manual installation first and tackle all this app store madness later. I already have enough on my plate dealing with macOS, Windows, and Linux distribution. With the change, delaying this is no longer viable, so Android is not only one among five platforms with their own requirements, signing, uploading, rules, reviews, and what not, it is one more platform I need to deal with right from the start because users expect software to be multiplatform nowadays.
Quite frankly, it appears to me as if dealing with app stores and arbitrary and ever changing corporate requirements takes away more time than developing the actual software, to the detriment of the end users.
It's sad to watch the decline of personal computing.
When there were many different app stores to choose from, nobody would be forced to use an unmoderated app store. What happened to individual freedom and responsibility?
I would need to see a widely used and trusted 3rd party store before leaving Google Play became a consideration. I'm interested, but not an early adopter. It's also unclear if any store that reaches this point doesn't institute similar moderation techniques. Scale incentivizes bad actors, which in turn requires good moderation.
That's the status quo, though. Apple's App Store and Google's Play Store are essentially unmoderated. The sheer scale of them and both platforms' technical architectures prohibit either company from properly validating their stores' contents - they can't even catch the easy cases, like all the apps that impersonate ChatGPT. The main thing they manage to do is inconvenience innocent indie devs once in a while.
The result is unwarranted trust from users in stores that are full of scams.
Apple and Google effectively built malware pipelines under the guise of security.
F-Droid does not contain malware. There were cases of maintainers going rogue, such as the Simple apps being bought by an adware firm, which resulted in a timely takedown directing users to a maintained fork, Fossify. Like a distro repository, user safety comes not from reactive moderation but from active curation.
Meanwhile my parents are getting hammered by inescapable malvertisements from Google, a TTS voice ordering them to install a "cleaner" app or have their phone die, no matter how many you report or what knobs you touch under ad personalization. Facebook knew 20% of their yearly revenue was scams and intentionally deferred moderator action to keep that business. All this "trust" is so overwhelming, the only way to make our computing more trusted is if OEM auto-installed the malware themselves. Oh wait, Samsung does that!
Isn't the obvious solution to use an AOSP fork that does not have to comply with the registration requirements? Distributions like Graphene and Lineage are completely unaffected.
No, because many apps refuse to run on third-party distros due to misguided notions of them being insecure. It's easy to say "just don't use those apps" but in reality, people are rightly unwilling to put up with any friction and so will simply continue to use Google's version of the OS.
Does the web app for the bank actually selectively block mobile phones? I just checked and Chase here in the US lets me log in on Brave Mobile on iOS. Perhaps your bank lets you log on in the browser.
My understanding (I'm in the US too) is that banks in many other countries don't even have a web app equivalent. If you want your money, you need an authentic Android phone and a closed-source app. Or, you can buy a plane ticket somewhere else.
Is using a cheap Android device (the cheapest Android phones are less than $100 on Amazon) an option? The idea is to use that phone for 2FA or whatever the app is necessary for, and use a degoogled device for your other day-to-day activities. It's not ideal because you need to spend some extra money, but it buys you a lot of privacy.
Linux based phones are starting to become viable as daily drivers. [0] They even ship with Android in a VM in case an application is needed that does not have a Linux equivalent.
I am interested in how Google's gatekeeper tactics are going to affect Android like platforms such as /e/os and GrapheneOS. [1]
> No luck needed. Linux based phones are starting to become viable as daily drivers.
Then please tell me, which non-Android Linux-based phone can I buy here in Brazil (one of the first places where Android would have these new restrictions)? I'd love to know (not sarcasm, I'm being sincere). Keep in mind that only phones with ANATEL certification can be imported, non-certified phones will be stopped by customs and sent back.
My condolences, that sucks that you’re stuck in such an authoritarian country. If you look at the PostmarketOS site, you may be able to find a legal phone (weird to type that phrase) that can be reflashed. Or you could buy one while on vacation, my guess is they don’t check models at the border if it looks like a personal device.
Illegal in Brazil per the Digital Child and Adolescent Statute. Operating systems are legally required to provide age verification functionality in a manner approved by the government.
Edit: apparently if it isn’t a “marketable product” then the law may not apply. So far they haven’t enforced it against Linux distros, likely because of this exception. However, IANAL (and definitely not a Brazilian lawyer).
Indeed, and since Brazil now has mandatory age checking in the OS, it's illegal to own or operate such phones in the country, thus they will never be certified by ANATEL.
Only way is to get the laws to change by electing other officials or civil disobedience.
I do not know all international laws. Nor do I respect countries and politicians that force such restrictive laws, which prevent the reuse of good devices that are now unsupported by the original manufacturer.
Secondly, if that law were enacted in the US ... I would buy a product with a known bug that allows loading a custom OS. In court I would push for jury nullification too.
Authoritarian governments suck on all fronts ... not just phone restrictions.
Would you mind pointing me to the ANATEL certification process? I am wondering whether the law is worded to prevent competition ... sounds like something Google would have helped push through.
Are you allowed old school non-smart phones? That is how I would do it. Laptop and dumb phone.
I would if there was a viable mobile phone OS I could switch to. iOS isn't any better. Linux phones, sadly, aren't very practical for daily use. AOSP based projects also have many limitations, and are still dependent on Google.
What phone are you considering? Sailfish still doesn't seem very successful and mobile Linux barely boots on anything that performs better than a fifteen year old budget device.
I'm kind of hoping Qualcomm's open-sourcing work will also improve the ability to run mainline Linux on Android devices, but a Linux OS that covers even the bare basics looks to be a decade away.
Something like 7 iOS phones are sold every second of the day and there are even more Android phones sold. The number of people who care about this issue is far too few for any kind of boycott to be noticed by the handset makers. The only option is to appeal to Google's sense of what's right.
In the time it took you to read this comment, 200 phones were sold.
Highly technically knowledgeable people are more influential in this sphere than the average consumer. If developers hate your device and love your competitor, that's a real problem.
The real issue is that mandatory registration doesn't actually stop scammers. It stops hobbyist developers and small open source projects.
Scammers will use stolen identities or shell companies. They already do this on the Play Store itself. The $25 fee and passport upload haven't prevented the flood of scam apps there.
Meanwhile F-Droid's model (build from source, scan for trackers/malware) actually provides stronger guarantees about what the app does. No identity check needed because the code speaks for itself.
The permission-based approach someone mentioned above makes way more sense. If your app wants to read SMS or intercept notifications, sure, require extra scrutiny. But a simple calculator app or a notes tool? That's just adding friction for no security benefit.
The permission problem also affects normal apps. Things like KDE Connect quickly become useless without advanced permissions, for instance.
No permission system can work as well as a proper solution (such as banks and governments getting their shit together and investing in basic digital skills for their citizens).
Friction does matter. Yes, criminals will create fake accounts with stolen IDs and stolen credit cards. But creating 1,000s of these is hard. Creating polymorphic banking trojans is simple.
I don't know if this trade off is worth it, but the idea that it won't affect this abuse at all is false.
If you can convince someone over the phone to install malware through a million "don't do this" screens, you can convince them to just give you their login credentials. Which is easier, cheaper, and, I imagine, more effective.
The thing that everyone here ignores is that the friction isn't just for safety. It's by design. For some reason, everyone is giving Google as much benefit of the doubt as possible. But no, they want to drive out small developers in general, and this is just one piece of the puzzle. Google has already put up unrelated barriers to publishing apps on Google Play, required every app developer to dox themselves to every user (meanwhile Apple is far more permissive and allows an opt-out for non-commercial apps), they downrank apps by small developers, use alternate UX that disincentivizes installing lesser known apps, put up big scary warnings like "This app isn't installed often" or "Fewer people engage with this app" on the pages of those apps. The only explanation is that they want more money and less upkeep and moderation with the pesky small developers, and the real money-makers are the big corporate apps. They're recreating "the rich get richer" in their microcosm.
The problem with mandatory developer registration, is that it gives Google and Governments the ability to veto apps.
It would not be surprising for a government to tell Google it must block any VPN apps from being installed on devices, with Google using the developer requirements to carry out the ban.
No judgement whatsoever, but for almost everyone they too will think, no big deal you only install software through stores right? Nothing changes for them, in fact they can't conceive of an alternative anymore.
How can you judge if Google's plan is a good one? Add up the harms caused by the new rules and weigh that against the reduction in harm and see where the balance is?
I have a hard time believing the net outcome for the overall Android community would be negative.
It's worse than that. Google will be able to track who's using a particular app because it has to be installed the official way. This means for example that anyone who has installed an ICE Tracking app will be reported to the government and perhaps added to a terrorist list.
No, you can still install APKs offline, but they have to be signed (likely enforced by Google Play Services). Not to mention you can still install unsigned APKs like before with adb. Which doesn't make this any better, of course.
Nice strawman. People want the ability to decide for themselves whether or not to install some APK, they are not saying every APK under the sun is trustworthy.
If you want to make the decision to install Hay Day, the user should be able to know that it is the Hay Day from Supercell or from Sketchy McMalwareson.
99.9% of apps should have no issue with their name being associated with their work. If you genuinely need to use an anonymously published app, you will still be able to do that as a user.
Android already tells users when they're installing software from outside the Play Store and shows big scary warnings if Play Protect is turned off. What else do you want? If I want to install something from Sketchy McMalwareson after all that, that's my phone and my business.
Sure thing, as long as it doesn't require any permissions. I have installed multiple apks on my phone from unknown people. Note that Google's requirement is also for completely permissionless apps like games.
As far as I know, it's implemented in the proprietary part of Android (Google Mobile Services, GMS), so it won't affect LineageOS users as long as they don't install the GMS.
Just here to register my disapproval of this, and to remind everyone that you should support Linux phones if you’re against it. Or Graphene OS, at the very least, even though this still supports Google due to the requirement for a Pixel phone.
Also, I’m going to coin a new term for the recurring names that I see promoting this kind of thing here: “safety fascists.” Safety fascists won’t sleep until there is a camera watching every home, a government bug in every phone, a 24/7 minder for every citizen. For your safety, of course.
I think I may hate safety fascists more than I hate garden variety fascists. That’s an accomplishment!
I'd rather have a more robust and distributed app store system that figures out how to police these edge cases of fraud than one vendor (Apple or Google) whose monopolies push developers into subscriptionware across the board. Something more akin to how InterNIC moved from one domain name registrar to what we have today, chock-full of competition and new top-level domains.
It feels like independent development on devices has slowed in recent years. More stores appealing to different developer models/tools and monetization strategies please.
Many people online and in person telling me "Google backed down" or "Google has an advanced flow" are typically referring to these two statements from Google staff:
> Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. [0]
> Advanced users will be able to "Install without verifying," but expect a high-friction flow designed to help users understand the risks. [1]
Firstly - I have yet to see "ongoing conversations with the community" from Google. Either before this blog post or in the substantial time since this blog post. "The community" has no insight into whether any such "advanced flow" is fit for purpose.
Secondly - I as an experienced engineer may be able to work around a "high-friction flow". But I am not fighting this fight for me, I am fighting it for the billions of humans for whom smart phones are an integral part of their daily lives. They deserve the right to be able to install software using free, open, transparent app stores that don't require signing up with Google/Samsung/Amazon for the privilege of: Installing software on a device they own.
One example of a "high friction flow" which I would find unacceptable if implemented for app installation on Android is the way in which browsers treat invalid SSL certificates. If I as a web developer set up a valid cert, and the client then receives an invalid cert, it means that the browser (which is - typically - working on behalf of the customer) is unable to guarantee that it is talking to the right server. This is a specific and real threat model, which the browser addresses by showing [2]:
* "Your connection is not private"
* "Attackers might be trying to steal your information (for example, passwords, messages or credit cards)"
* "Advanced" button (not "Back to safety")
* "Proceed (unsafe)" link
* "Not secure" shown in address bar forever
In this threat model, the web dev asked the browser to ensure communication is encrypted, and it is encrypted with their private key. The browser cannot confirm this to be the case, so there is a risk that a MITM attack is taking place.
This is proportionate to the threat, and very "high friction". I don't know of many non-tech people who will click through these warnings.
When the developer uses HSTS, it is even more "high friction". The user is presented with all the warnings above, but no Advanced button. Instead, on Chromium-based browsers they need to type "thisisunsafe" - not into a text box, just randomly type it while viewing the page. On Firefox, there is no recourse. I know very few software engineers who know how to bypass HSTS certificate issues when presented with them, e.g. in a non-prod environment with corporate certs where they still want to bypass it to test something.
If these "high friction" flows were applied to certified Android devices each time a user wanted to install an app from F-Droid - it would kill F-Droid and similar projects for almost all non-tech users. All users, not just tech users, deserve the right to install software on their smart phone without having to sign up for an "app store" experience that games your attention and tries to get you to install scammy attention seeking games that harvest your personal information and flood you with advertisements
Hence, I don't want to tell people "Just install [insert non-certified AOSP based project here]". I want Android to remain a viable alternative for billions of people.
Banning app installation outside the Play Store will be a disaster for power-ish users and will start a fight between Google and the community. I abandoned rooting my devices because I could achieve all I wanted through apps (mostly ad- and nag-freedom; it's impossible to be online without ad blocking). But all of these were downloaded as APKs. I cannot imagine what the first day without them will be like.
The judge told Google that Apple is not anti-competitive because Apple has no competitors on its platform (this all stemming from the Epic lawsuits).
Google listened.
Blame the judge for one of the worst legal calls in recent history. Google is a monopoly and Apple is not. Simple fix for Google...
Same comment I made a few days ago, I feel it bears repeating as much as possible until it's really driven home how detrimental and uninformed that decision was.
It is a nonsensical ruling. But IIRC the reason was basically that while Apple and Google did basically the same shit, only Google kept a written record of their monopolistic behaviour, so only Google was found guilty.
However, there is a relevant court case here. The one about Samsung's "Auto Blocker" (https://arstechnica.com/gadgets/2025/07/samsung-and-epic-gam...). Epic Games sued because Samsung made it too hard to install apps from "untrusted" sources. This may be a reason why Google is now trying to make the process more difficult on the developer side instead.
the Samsung case is very interesting, haven't bumped into that one before.
... as far as I understand the really nasty part of "contemporary" jurisprudence of antitrust enforcement is that the standard is to show that things would be cheaper for the consumers
(though I don't know why developers are not considered consumers of the app marketplace services, after all for them bringing their own payments and whatnot would be much more cost effective... well, anyway, unfortunately the courts are mostly locked to this very inefficient path-dependent way of regulating anything through super expensive arguments, which is an obvious (?) dysfunction of legislation)
There were parallel anti-competitive behavior cases brought against Apple and Google.
Apple was deemed not to be anticompetitive in app stores because there was no existing market of app stores on iOS. Google was more open in allowing other app stores, but deemed anticompetitive by discouraging their use relative to the Play store.
The irony is the more open player was deemed more anticompetitive. OP is saying Google is “fixing” their anticompetitive behavior by eliminating alternative app stores entirely.
Like many things in the US, this should be settled by congress not judges.
Things that everyone relies on for life are generally regulated by law. Telecom platforms for instance. I’d say the mandatory software platform I need for my bank, drivers license, daily communication, etc should be in this bucket.
The EU declaring both Apple and Google gateway platforms is a much better approach. Congress is abdicating its responsibility to craft the legal frameworks for equal access in the modern age.
"Like many things in the US, this should be settled by congress"
The US government is by design supposed to be as minimal as possible, and the laws affecting you kept as local as possible. We're not supposed to have a "the government" that's the same as EU governments. "The federal government should make laws" should be an absolute last resort. When you say "congress is abdicating its responsibility", I'd like you to point to where in the constitution it says that congress has such responsibilities.
The federal government regulates interstate commerce. Apple and Google fit that definition. There is really no constitutional ambiguity here. Congress is 100% capable of acting if it wanted to.
> Disproportionate impact on marginalized communities and controversial but legal applications
applies more to the elderly in third-world countries who are constantly scammed through fraudulent side-loaded apps than it does to hackers who want to install whatever software they want but do not want to use a non-Google AOSP distribution.
To be honest, if both Android and iOS were walled gardens, I'd choose iOS every time. I choose Android specifically because of its openness. But if that weren't the case, I'd prefer the smoother UX and stronger Apple ecosystem.
I think we're about to see an explosion in "mini apps". It's taken 10+ years for us to catch up to WeChat and China, but this regulation and other issues are going to block a lot of innovation. We're better off surfacing tiny PWA- or SPA-like apps that get loaded in native apps, or doing away with that entirely. The time has come.
Elon's vision for the X "everything" app. It's great for them, now every single thing you do has the full gamut of privacy permissions. Playing a "mini-game"? Full accurate GPS coordinates available to it because you also have the ride-hailing "mini-app".
If I may advocate for the non HN partisan position here.
Let's consider that Google's Android was and is a huge improvement in security in terms of OS design (even if inspired by iOS) over the previous incumbent (let's call that Windows). That difference in security still exists today, probably due to Windows' prioritization of backwards compatibility, and its later positioning in the market as a cheap powertool (cheap compared to iOS, a powertool compared to Android).
That security advantage, by the way, was not just the result of initial design, but it required a lot of maintenance, in the form of the 'Play Store' App Store equivalent (at no cost to the user no less).
All this to say that let's consider this context, and consider what alternatives are proposed.
1- The windows 'install whatever you want model' (Now with OS approved certificates): As mentioned, worse, with almost no sandboxing.
2- Linux package managers + install whatever you want: Valid model for powerusers and programmers, not really relevant for massive personal computing.
3- Keeping the old Android system: This would imply simply ignoring the problem of growing professional and untouchable malicious actors that seem to be growing in power with the advent of anonymous financial tech. Is this the actual proposal? Do nothing about the problem? Pretend there is no problem?
I don't think the problem is necessarily malware, but to take a specific example, suppose a casino from the Isle of Man is allowing underage users and users from jurisdictions where it is illegal. Regardless of whether you think this is OK, debatable, or circumstance-dependent: isn't the ask to identify the developer rather trivial? Just a little bit of paperwork. You want to be a developer? Ship code that someone else will use? Put your name on it; have skin in the game.
I think there's also a contradiction between the need for developer privacy and user privacy. Most HN users are privacy-sensitive. Well, I propose there's a tradeoff between the privacy of the consumer and the producer. In order to provide privacy and rights to the user, the producer needs to come forward. There's no way to have your cake and eat it too: if both producer and consumer are shy, they will never find each other; if both stay anonymous, they won't trust each other, and neither gives the other party any guarantee that they won't go rogue.
You know this if you've tried to start a business, you can either put your face, your name, register with the state, put your actual address. Or you can use an anonymous brand, a Registered Agent Address, etc... The latter is a harder sell than the former, and you only don't notice it if you are completely absorbed in your own world and cannot put yourself in the shoes of your customer.
tl;dr: Google has an impeccable data security track record. And User/Developer privacy is a tradeoff. Google is right to protect user privacy and not developer privacy.
"Don't be evil" → "Don't be evil without registering first and uploading your government ID."
The most telling detail is the sequencing. Google spent years in court arguing Android is open to fend off antitrust regulators, won key battles on that basis, and is now quietly closing the door they swore under oath was permanently propped open. The antitrust defense was the product roadmap's cover story.
And framing this as security is particularly rich from the company whose own Play Store routinely hosts malware that passes their review. The problem they're solving isn't "unverified developers distribute harmful apps" — it's "unverified developers distribute apps we can't monetize or control."
Can someone explain to me why Google's plans don't collide with the EU DMA? They're locking down the platform, that's what the DMA is supposed to prevent, I thought.
Google's concern about security rings hollow to me. I believe it is strictly to exercise more control over the platform.
The appeals to people in Southeast Asia being scammed remind me of a blog post by Cory Doctorow last year: Every complex ecosystem has parasites [1]
The gist of it is that technology can be useful, but that usefulness comes with a price: sometimes bad actors are going to commit fraud or other undesirable actions.
As an example, you can reduce the amount of banking app scams to 0% by simply denying any banking apps on phones. But because of banking apps' usefulness we're not going to do that, so there will be some non-zero risk that you will get scammed.
As a technical user I chose Android for its usefulness, accepting that there may be a (minute) chance that I get scammed, but it is a risk I am willing to take, and Google will unilaterally take this choice away from me.
Still, I don't believe Google's security concerns are sincere, so I think I just wasted my time typing all of this
pmdr | a day ago
OutOfHere | a day ago
dfabulich | a day ago
In Google's announcement in Nov 2025, they articulated a pretty clear attack vector. https://android-developers.googleblog.com/2025/11/android-de...
> For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app — actually malware — intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.
> While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale.
I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? Or requiring an expensive "extended validation" certificate for developers who choose not to register...?
verdverm | a day ago
Permissions are a great way to distinguish.
amiga386 | a day ago
Or would you be OK knowing that the Thunderbird you downloaded from https://thunderbird.net/ is signed by the thunderbird.net certificate owner?
verdverm | a day ago
jyoung8607 | a day ago
The permissions approach isn't bad. I may trust Thunderbird for some things, but permission to read SMS and notifications is permission to bypass SMS 2FA for every other account using that phone number. It deserves a special gate that's very hard for a scammer to pass. The exact nature of the gate can be reasonably debated.
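As a toy sketch of this permission-gated idea: installation only demands a verified developer identity when the app requests one of a small set of 2FA-bypassing capabilities. The policy set here is hypothetical (the strings loosely follow real Android permission names, but which permissions would gate is exactly the debatable part):

```python
# Hypothetical policy: only apps requesting 2FA-relevant access
# need a verified developer identity behind them.
SENSITIVE = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.BIND_NOTIFICATION_LISTENER_SERVICE",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
}

def requires_verified_identity(requested_permissions):
    """True if this install should hit the extra verification gate."""
    return any(p in SENSITIVE for p in requested_permissions)

# A calculator with no permissions installs friction-free:
assert not requires_verified_identity([])
# An app that wants to read SMS/notifications hits the gate:
assert requires_verified_identity(["android.permission.READ_SMS"])
```

The hard part, as the parent says, is agreeing on what belongs in that sensitive set and how strong the gate should be.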
amiga386 | 12 hours ago
It's therefore on their choice of search engine, or choice of app store, to lead them from "thunderbird" to "The app downloadable from https://thunderbird.net/", which can then be validated as signed by the verified owner of the same domain.
I'm not proposing changing the permissions system.
joshuamorton | a day ago
amiga386 | 11 hours ago
If you search any web search engine for "thunderbird", https://thunderbird.net/ is the top result. You can choose your preferred search engine, you should be able to choose your own app store, and your level of confidence stems from your own estimation of that entity's past competence.
If you do search Google Play for "thunderbird", you'll find it lists an app with internal name "net.thunderbird.android" as the top result (along with lots of other mail clients). What I'm proposing is that if your choice of search engine or app store shows you https://thunderbird.net/ as the place to download Thunderbird, and you do, PKI can then verify that the app was independently signed by the owner of the matching domain, and that the certificate was issued to them by a CA who regularly validates they control that domain.
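A minimal model of that proposal, assuming a verifier (OS or store) that pins a signer's public-key fingerprint to the download domain; the domain table, key bytes, and function names here are all hypothetical:

```python
# Toy model of domain-bound app signing: an APK is accepted only if
# the fingerprint of its signing key matches the fingerprint that a
# CA-style verifier has published for the domain it came from.
import hashlib

# Fingerprints the hypothetical verifier has validated per domain:
DOMAIN_SIGNERS = {
    "thunderbird.net": hashlib.sha256(b"thunderbird-release-key").hexdigest(),
}

def signer_matches_domain(domain: str, apk_signing_key: bytes) -> bool:
    expected = DOMAIN_SIGNERS.get(domain)
    actual = hashlib.sha256(apk_signing_key).hexdigest()
    return expected is not None and expected == actual

# The genuine key validates against its domain:
assert signer_matches_domain("thunderbird.net", b"thunderbird-release-key")
# An impostor's key does not:
assert not signer_matches_domain("thunderbird.net", b"sketchy-mcmalwareson-key")
```

The real scheme would use actual certificate chains rather than a lookup table, but the trust anchor is the same: control of the domain, not a registration with Google.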
JoshTriplett | a day ago
harikb | a day ago
In contrast, convincing someone to read an OTP over the phone is a one-time manual bypass. To use your logic:
An installed app: like a hidden camera in a room.
Social engineering over the phone: like convincing someone to leave the door unlocked once.
JoshTriplett | a day ago
The motivating example as described involves "giving the scammer everything they need to drain the account". Once they've drained the account, they don't need ongoing access.
jyoung8607 | a day ago
TeMPOraL | 22 hours ago
sdenton4 | a day ago
hulitu | a day ago
Why would an app silently intercept SMS/MMS data? Why does an app need network access?
Running untrusted code in your browser is also "a persistent technical compromise" but nobody seems to care.
array_key_first | a day ago
A root-cause solution is proper sandboxing. Google and Apple will not do this, because they rely on applications having far too much access to make their money.
One of the fundamentals of security is that applications should use the minimum data and access they need to operate. Apple and Google break this with every piece of software they make. The disease is spreading from the inside out. Putting a shitty lotion on top won't fix this.
NewsaHackO | a day ago
Wow, that's a major claim. What apps are malware, exactly?
>This is still not a root cause solution, it's just a mitigation.
Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a method for remuneration or prosecution.
array_key_first | a day ago
I don't understand how this is a major claim at all; it should be obvious. All repositories of large enough size contain malware, because malware doesn't declare itself as malware.
This is exacerbated by the fact the Google Play Store and Apple App Store allow closed-source applications. It's much easier to validate behavior on things like the Debian repos, where maintainers can, and do, audit the source code.
Google does not have a magic "is this malware" algorithm; that doesn't exist. They rely on heuristics and on asking the authors "hey, is this malware?". As you can imagine, this isn't very effective. They don't even install and test the apps fully. Not that it matters much: malware can easily change its behavior so it isn't detectable by an end user just running the app.
> Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a method for remuneration or prosecution.
It doesn't, for three reasons:
1. Identifying an app doesn't magically make it not malware. I can tell you "hey, I made this app" and you still have zero idea whether it's malware. This is still a post-hoc mitigation: if we somehow know an app is malware, we can find out who wrote it. It doesn't do the "is this malware" part, which is the most important part.
2. Bad actors typically have little allegiance to ethics, meaning they typically will not be honest about their identity. There are criminal organizations which operate in meatspace and fake their identities, which is 1000x harder than doing it online. Most malware will not have a legitimate identity tacked to it.
3. Bad actors typically come from countries which don't prosecute them as hard. So, even if you find out if something is malware, and then find out the actual people behind it, you typically can't prosecute them. Even large online services like the Silk Road lasted for a long time, and most likely still do exist, even despite the literal US federal government trying to stop them.
NewsaHackO | 22 hours ago
Let me know when you can provide a single specific name.
deaux | 15 hours ago
This has been going on for years, Google knows about it, and intentionally leaves it unfixed.
> Out of 47 Indian apps I randomly analyzed, 31 of them used the "ACTION_MAIN" filter - giving them access to see all the apps on your phone without any disclosure. That's 2 out of 3 apps.
Of course there's hundreds of other variants of malware, this is just one of the most prevalent.
NewsaHackO | 9 hours ago
That is not true: those apps declare that they collect app activity data on their Play Store page.
deaux | 6 hours ago
NewsaHackO | 5 hours ago
TeMPOraL | 22 hours ago
Oh, they do this quite well. Thing is, these sandboxes are meant to protect apps from you, not the other way around. That's why some apps - not just platform vendor apps but also select third-party apps - get special access and elevated privileges, while you can't even see what data they store in `/storage/emulated/0/Android/data`, even with ADB trickery.
nine_k | a day ago
The sideloading warning is much much milder, something like "are you sure you want to install this?".
thefounder | a day ago
RobotToaster | a day ago
microtonal | a day ago
A fundamental difference with e.g. FIDO2 (especially hardware-backed) is that the private credentials are keyed to the relying party ID, so it's not possible for a phishing site to intercept the challenge-response.
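In miniature, that binding works because the authenticator scopes each credential to a hash of the relying party ID (in WebAuthn, `rpIdHash` is the SHA-256 of the RP ID), so a credential registered at the real site is never valid for a look-alike domain. The domain names below are made up for illustration:

```python
# Sketch of FIDO2/WebAuthn relying-party binding: a credential is
# scoped to SHA-256(rp_id), so a phishing domain can never produce
# the hash that matches the real site's credential.
import hashlib

def rp_id_hash(rp_id: str) -> bytes:
    return hashlib.sha256(rp_id.encode("ascii")).digest()

# Credential created at the real bank is scoped to its RP ID:
credential_scope = rp_id_hash("bank.example")

# A phishing look-alike presents its own origin; the hashes differ,
# so the authenticator refuses to use the credential there:
assert rp_id_hash("bank-login.example") != credential_scope
assert rp_id_hash("bank.example") == credential_scope
```

Contrast with SMS or TOTP codes, which carry no notion of where they are being typed: anything the user can read, a scammer can ask for.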
thefounder | 23 hours ago
JoshTriplett | a day ago
hollow-moe | a day ago
> Please enter the code we sent you in the app.
lol, lmao even
instagib | a day ago
mwwaters | a day ago
Passkeys also actively defeat phishing, as long as the device is not compromised. To the extent there is attestation, passkeys also draw very critical posts about locking down devices.
Given what I see in scams, I think too much is put on the user as it is. Anti-phishing training and the like try to blame somebody downward in the hierarchy instead of fixing the systems. For example, spear-phishing scams targeting home down payments or business accounts work because banks in the US don't tie account numbers to payee identity. The real issue is that the US payment system is utterly backward, without confirmation of payee (i.e. showing the human-readable actual name of the recipient account in the banking app). For wire transfers or ACH credit in the US, commercial customers are basically expected to play detective to make sure new account numbers are legit.
As I understand it, sideloading apps can overcome that payee legal name display in other countries. So the question for both sideloading and passkeys is if we want banks liable for correctly showing the actual payee for such transfers. To the extent they are liable, they will need to trust the app’s environment and the passkey.
thousand_nights | a day ago
darkwater | a day ago
This reeks of "think of the children^Wscammed". I mean, following this principle the only solution is to completely remove any form of sideloading and have just one single Google approved store because security.
> A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...?
It doesn't work like that. What they mean by "mandatory developer registration" is what Google already does if you want to start as a developer in the Play Store: pay a $25 one-time fee with a credit card and upload your passport copy to some (3rd-party?) ID verification service. [1] In contrast, with F-Droid you just need a GitLab account to open a merge request in the fdroid-data repository and submit your app, which they scan for malware and compile from source on their build server.
[1] but I guess there are plenty of ways to fool Google anyway even with that, if you are a real scammer.
kotaKat | a day ago
Just like they went after Samsung for adding friction to the sideload workflow to warn people against scams.
https://www.macrumors.com/2024/09/30/epic-games-sues-samsung...
daveidol | a day ago
cherryteastain | a day ago
You can also cut yourself with a kitchen knife but nobody proposes banning kitchen knives. Google and the state are not your nannies.
john_strinlai | a day ago
oh nice, i love this game.
you can't carry a kitchen knife that is too long, you can't carry your kitchen knife into a school, you can't brandish your kitchen knife at police, you can't let a small child run around with a kitchen knife...
literally most of what "the state" does is be a "nanny"
(not agreeing or disagreeing with google here, i have no horse in this particular race. but this little knife quip is silly when you think about it for more than 5 seconds)
CamperBob2 | a day ago
What?
john_strinlai | a day ago
although, i would imagine at some length, it becomes a "sword" (even if marketed as a knife) and falls under some other "nanny"-ing. i have not googled that.
mikestew | a day ago
CamperBob2 | a day ago
So, having been given the proverbial inch (or centimeter), those obsessed with banning potentially-dangerous tools are trying to take the next mile (or kilometer): https://theconversation.com/why-stopping-knife-crime-needs-t...
Cyph0n | a day ago
john_strinlai | a day ago
mikestew | a day ago
kevin_thibedeau | a day ago
InsideOutSanta | a day ago
aclindsa | a day ago
plorg | a day ago
john_strinlai | a day ago
the point of my comment was that the state does implement a lot of rules (read: "is a nanny"), despite the claim otherwise.
plorg | 22 hours ago
ranger_danger | 22 hours ago
daveidol | a day ago
There is a point at which people have to think critically about what they are doing. We, as a society, should do our best to protect the vulnerable (elderly, mentally disabled, etc) but we must draw the line somewhere.
It’s the same thing in the outside world too - otherwise we could make compelling arguments about removing the right to drive cars, for example, due to all the traffic accidents (instead we add measures like seatbelts as a compromise, knowing it will never totally solve the issue).
bonoboTP | 6 hours ago
Yes, one could imagine some kind of mental test, and if you fail you don't get to use your bank online; you have to walk to the physical location to make transactions. But this can obviously be abused to shut people out of banking based on political and other aspects. Generally, democracies are wary of declaring too broad a set of people incapable of acting independently without some guardian. Obviously beyond a certain threshold of mental incapacitation, dementia, etc. it kicks in, but just imagine declaring that you're too easy to influence and scam, so we can't let you handle your money... but somehow we can rely on you using sane judgment when voting in elections. Or should we strip election rights too?
We rely on polite fictions around the abilities of the average person. The contradictions sometimes surface but there is no simple way to resolve it without revising some assumptions.
MSFT_Edging | a day ago
Manually installing an app might be close to the limit of what grandma can be coached through by an impatient scammer.
Multiple steps over adb, challenges that can't be copy and pasted in a script, etc. It can be done but it won't provide as much control over end user devices.
Cyph0n | a day ago
Because I hope you realize that clamping down on “sideloading” (read: installing unsigned software) on PCs is the next logical step. TPMs are already present on a large chunk of consumer PCs - they just need to be used.
bitwize | a day ago
Cyph0n | a day ago
kps | a day ago
bitwize | a day ago
eikenberry | a day ago
iamnothere | a day ago
That’s enough for me to distribute a few freedom devices to friends and neighbors, and still have extras to account for normal failures.
I also hoard source code, and will happily distribute that with the computers! Maybe that’s “programmer brained,” if so then fine by me!
RandomGerm4n | a day ago
Also every user is free to simply not use the option of installing things outside of the store.
bitwize | a day ago
Do you know anyone who works in a professional creative field that doesn't involve writing code? If so, ask them how they'd feel about their work being out there on the internet free to all takers. What the implications would be for their ability to feed their children and pay their mortgage doing the things they love.
This is what I mean by "programmer-brained." Of all creative workers, only programmers seem okay with abolishing IP laws, I guess because they figure they'll be okay living out of an office at MIT, or even worse out of an office at some YC startup that turns the user into the product. But artists, musicians, writers, filmmakers, etc. all put food on the table because of those IP laws programmers hate so much. Taking that protection for the fruit of your labor away would be at least as disruptive as AI has been.
nmeagent | a day ago
No, no, a thousand times no. This is an argument for authoritarian clampdown on general computing and must be opposed by all means necessary. I have the right to run whatever code I wish on my own damn property without the permission of arbitrary authorities or whatever subset of society you favor, and if you or they have a problem with this, you or they can proceed to pound sand.
iamnothere | a day ago
It’s a good time to buy a pallet of old SFF computers, just in case.
tzs | a day ago
They are saying that claiming the underlying problem is not real or not big enough to need addressing is an ineffective way to argue.
Cyph0n | a day ago
Would it make sense to then argue that enforcing TPM-backed measured boot and binary signature verification is a legitimate way to address the problem?
tzs | a day ago
Cyph0n | a day ago
Are we saying that, because scamming exists and we haven’t proposed an alternative, it means that clamping down on software installation methods is a legitimate solution to the problem?
TeMPOraL | 23 hours ago
jeroenhd | a day ago
The problem lies in (technical) literacy, to some extent people's natural tendency to trust what others are telling them, the incompetence of investigative powers, and the unwillingness of certain countries to shut down scam farms and human trafficking.
My bank's app refuses to operate when I'm on the phone. It also refuses to operate when anything is remotely controlling the phone. There's nothing a banking app can do against vulnerable phones rooted by malware (other than refusing to operate on phones it deems too vulnerable by whatever threshold you decide on, so there's nothing to root), but I feel like the countries where banks and police are putting the blame on Google are taking the easy way out.
Scammers will find a way around these restrictions in days and everyone else is left worse off.
gjsman-1000 | a day ago
Well, in that case, Google has an easy escalation path that they already use for Google Business Listings: They send you a physical card, in the mail, with a code, to the address listed. If this turns out to be a real problem at scale, the patch is barely an inconvenience.
jeroenhd | a day ago
Now they'll need to pay off a local mailman to give them all of Google's letters with an address in an area they control so they can register a town's worth of addresses, big whoop. It'll cost them a bit more than the registration fee, but I doubt it'll be enough to solve the problem.
joshuamorton | a day ago
Yeah, this is a huge amount more work than, like, nothing.
iamnothere | a day ago
joshuamorton | a day ago
How? You've now moved the level of sophistication required from "someone runs some bots on the facebook website" to "someone is now committing complex fraud against a government".
If the only people who can run scams are state sponsored, that's still vastly better than the status quo.
iamnothere | 23 hours ago
joshuamorton | 23 hours ago
> Amazon has a huge problem with packages being sent to fake people at different addresses.
This usually involves those people getting weird packages and not doing anything with them, it doesn't require attacker-controlled addresses.
iamnothere | 23 hours ago
joshuamorton | 22 hours ago
This could work, but the issue here is that a lot of these scams rely on the "zero cost"-ness of turn-up and use that as an asymmetry. If it costs you nothing to turn up new scam accounts, and it costs me something to investigate and remove them, you win. If it costs you $10 to create new scam accounts, then as long as I can get the EV of a scam account below $10, the scam isn't worthwhile.
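That asymmetry can be put in toy expected-value terms. Every number below is invented purely for illustration; the point is only that any positive per-account cost creates a profitability threshold that a zero-cost turn-up never faces.

```python
# Toy model of the point above: a scam account is worth creating only if
# its expected value exceeds the cost of turning it up. All parameters
# are made-up illustrative numbers, not real scam economics.

def worth_creating(setup_cost: float, victims_reached: int,
                   hit_rate: float, payout_per_hit: float) -> bool:
    expected_value = victims_reached * hit_rate * payout_per_hit
    return expected_value > setup_cost

# Zero-cost turn-up: even a terrible scam is "profitable" to attempt.
assert worth_creating(0.0, victims_reached=100, hit_rate=0.0001, payout_per_hit=500)

# The same low-yield scam (EV = $5 per account) stops making sense once
# creating an account costs $10.
assert not worth_creating(10.0, victims_reached=100, hit_rate=0.0001, payout_per_hit=500)
```

Nothing here needs the fee to stop all fraud; it only needs to push the marginal scam account below break-even.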
jeroenhd | 13 hours ago
People are already effectively faking addresses for something as stupid as Amazon reviews. Apparently it's that cheap to fake an address, because those crapware spam stores that rotate their name/products/listings aren't exactly the size of the mob.
What this will probably do is raise the bar for scams a little so that dumb "mom-and-pop" criminals can no longer get started with a guide and a software kit they buy on Telegram, clearing the field for "professionals" while at the same time making identity fraud, address fraud, and (money) mules more lucrative.
All of that to shift away the blame from banks, public institutions, education, and to some extent people's personal financial responsibilities.
kodebach | a day ago
When a scammer pretending to be your bank tells you to install an app for verification and it says "This app was created by John Smith" even grandma will get suspicious and ask why it doesn't show the bank's name.
jeroenhd | 13 hours ago
This trick only works if the general public is aware of what the app developer label does, what it is used for, what it protects against, and what it's supposed to say. However, if that's the case, you already have all the info you need to deduce that you shouldn't be installing APKs sent by a guy over the phone anyway.
Tharre | a day ago
You can go a softer route of requiring some complicated mechanism of "unlocking" your phone before you can install unverified apps - but by definition that mechanism needs to be more complicated than even a guided (by a scammer) normal non-technical user can manage. So you've essentially made it impossible for normies to install non-Play-Store apps and thus also made all other app stores irrelevant for the most part.
The scamming issue is real, but the proposed solutions seem worse than the disease, at least to me.
Retr0id | a day ago
Tharre | a day ago
The next step is simply that the scammer modifies the official bank app, adds a backdoor to it, and convinces the victim to install that app and log in with it. No hardware-bound credentials are going to help you with that; the only fix is attestation, which brings you back to the aforementioned issue of blessed apps.
Retr0id | a day ago
Tharre | a day ago
Retr0id | a day ago
The backdoored version of the app would need to have a different app ID, since the attacker does not have the legitimate publisher's signing keys. So the OS shouldn't let it access the legitimate app's credentials.
tadfisher | a day ago
The spoofed app can't request passkeys for the legit app because the legit app's domain is associated with the legit app's signing key fingerprint via .well-known/assetlinks.json, and the CredentialManager service checks that association.
mwwaters | a day ago
tadfisher | a day ago
No need for locking down the app ecosystem, no need to verify developers. Just don't use phishable credentials and you are not vulnerable to malware trying to phish credentials.
0: https://www.bankofamerica.com/.well-known/assetlinks.json
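For reference, a Digital Asset Links file of the kind linked in [0] looks roughly like the fragment below. The package name and fingerprint are placeholders, and my reading is that `delegate_permission/common.get_login_creds` is the relation used for credential sharing:

```json
[
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.bank",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:...placeholder..."
      ]
    }
  }
]
```

Because the listed fingerprint is that of the legitimate signing certificate, a re-signed backdoored APK fails the check and never receives the domain's credentials.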
Tharre | a day ago
A simple scenario adapted from the one given in the Android blog post: the attacker calls the victim and convinces them that their banking account is compromised and they need to act now to secure it. The scammer tells the victim that their account got compromised because they're using an outdated version of the banking app that's no longer supported. He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.
You can prevent this with attestation of course, essentially giving the bank's backend the ability to verify that the credentials are actually tied to their app, and not some backdoored version. But now you have a "blessed" key that's in the hands of Google or Apple or whomever, and everyone who wants to run other operating systems or even just patched versions of official apps is out of luck.
microtonal | a day ago
That doesn't work, because the scammer's app will be signed with a different key, so the relying party ID is different and the secure element (or whatever hardware backing you use), refuses to do the challenge-response.
tadfisher | a day ago
This is where the scheme breaks down: the new passkey credential can never be associated with the legitimate RP. The attacker will not be able to use the credential to sign in to the legitimate app/site and steal money.
The attacker controls the fake/backdoored app, but they do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association. You don't even need attestation to prevent this scenario.
Tharre | a day ago
You're assuming the attacker must go through the credential manager and the backing hardware, but that is only the case with attestation. Without it, the attacker can simply generate their own passkey in software, because the backend on the banks side would have no way of telling where the passkey came from.
tadfisher | 23 hours ago
Tharre | 23 hours ago
tadfisher | 22 hours ago
Tharre | 21 hours ago
RandomGerm4n | a day ago
jrm4 | a day ago
Tharre | 23 hours ago
singpolyma3 | a day ago
This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.
iamnothere | a day ago
This is true, but if this goes through, I imagine that the next step for safety fascists will be to require developer licensing and insurance like general contractors have. And after that, expensive audits, etc, until independent developers are shut out completely.
simonask | 22 hours ago
Why do drug companies deserve justice for developing and pushing heroin-analogues, but not tech companies?
Our work has real consequences.
iamnothere | 21 hours ago
simonask | 15 hours ago
The stakes aren't any lower for us.
iamnothere | 8 hours ago
simonask | 8 hours ago
iamnothere | 7 hours ago
If I write a trash library for a random project and someone else starts using it to run their nuke plant, that isn’t my fault. Read the license. NO WARRANTY.
jcynix | a day ago
OK, so instead of educating stupid (or overly naive) people, we implement "protections" to limit any and all people to do useful things with their devices? And as a "side effect" force them to use "our" app store only? Something doesn't smell that good here …
How about a less drastic measure, like imposing a serious delay for "side loading" … let's say I'd have to tell my phone that I want to install F-Droid and then would have to wait for some hours before the installation is possible? While using the device as usual, of course.
The countdown could be combined with optional tutorials teaching people to contact their bank by phone meanwhile. Or whatever small printed tips might appear suitable.
warkdarrior | a day ago
bigstrat2003 | a day ago
Why would the community give a different response? Everything is fine as it is. Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world. At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom. And that is not sufficient reason to disallow installing applications on the devices they own, any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
gretch | a day ago
That's right, it's your decision to use Android. If you choose to do so, that's on you.
zarzavat | a day ago
raw_anon_1111 | a day ago
This is about like the geeks who hate the idea of ad supported services and think that everyone should just pay for every service they use.
FWIW: I do exclusively buy Apple devices, pay for streaming services ad free tier, the Stratechery podcast bundle, ATP and the Downstream podcasts and Slate. I also pay for ChatGPT and refuse to use any ad supported app or game.
jmholla | a day ago
sschueller | a day ago
TeMPOraL | 23 hours ago
zeroxfe | a day ago
The world does not consist of all rational actors, and this opens the door to all kinds of exploitation. The attacks today are very sophisticated, and I don't trust my 80-yr old dad to be able to detect them, nor many of my non-tech-savvy friends.
> any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
This is a false equivalence.
bigstrat2003 | a day ago
NewsaHackO | a day ago
bigstrat2003 | a day ago
NewsaHackO | a day ago
>There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs.
But this is what the other guy's point is. You are defining "good enough for most cases" in a way that he is not, then making the argument that what he says is equivalent to not allowing an alcoholic to buy beer. Why can you set what level is an acceptable amount of restriction, but he can't?
array_key_first | a day ago
sheiyei | a day ago
bigstrat2003 | a day ago
That is where we differ. It is, ultimately, the victim of a scam who makes the choice of "yes, this person is trustworthy and I will do what they say". The only way to prevent that is to block the user from having the power to make that decision, which is to say protecting them from themselves.
joshuamorton | a day ago
jrm4 | a day ago
NewsaHackO | a day ago
h3lp | a day ago
bigbadfeline | 21 hours ago
Then make sideloading disabled by default but enable it when the users tap 7 times on whatever settings item. At that time, explain those "negative consequences" to them, explain them real good, don't spare anything and if they still hit "Yes, continue to enable sideloading" you do that immediately in order to avoid increasing their haplessness with other made-up excuses.
Simple.
mwwaters | a day ago
But for regular people, that is not really the world they want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves or the tax authority, people politically want the bank to reimburse.
Then the question becomes not if the user trusts the phone’s software, but if the bank trusts the software on the user’s phone. Should the bank not be able to trust the environment that can approve transfers, then the bank would be in the right to no longer offer such transfers.
jibal | a day ago
Hizonner | a day ago
If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.
mwwaters | a day ago
For securities, if I own stock outright, the company has to indemnify if they do a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker. MSGs thereby require a lengthy banking relationship and probably showing up in person.
For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.
As far as I know, these liabilities are never waived. Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
These defaults are probably unknown for most people, even those with large amounts of securities. The system is expected to work since it has been set up this way.
Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.
TeMPOraL | 23 hours ago
But that is expensive, so my impression is that for non-sizable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to try and outsource as much liability as possible onto end users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted.
b) is how we end up with developer registries and remote attestation. And the sad thing is, it scales well - if device and OS vendors cooperate (like they do today), they can enable "endpoint security" for everyone who seeks to externalize liability.
jasonjayr | a day ago
This is more or less how people expect things to work today ....
mwwaters | a day ago
The money mule themselves is almost certainly insolvent to pay the damages. Currencies can also change by the money mule (either to a different fiat currency or crypto), putting the ultimate link completely out of reach of the originating country.
If intermediary banks are deputized and become liable in a no-fault sense, then legitimate transfers out become very difficult. How does a bank prove a negative for where the funds come from? De-banking has already been a problem for a process-based AML regime.
jrm4 | a day ago
Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right?
Absolutely!
Now, with all that money and power -- they -- whoever THEY are, need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices.
We have protections like this elsewhere - even when they have some "ownership." The bank kinda owns my house, but they still can't come in whenever they want.
kovek | a day ago
post-it | a day ago
scoofy | a day ago
It is not enough to write "be careful" on a bag you get from a pharmacy... certain medications require you to both have a prescription, and also to have a conversation with a pharmacist because of how dangerous the decisions the consumer makes can be.
Normal human beings can be very dumb. It's entirely reasonable to expect society to try to protect them at some level.
progbits | 23 hours ago
There are alternative solutions if the true goal is maintaining user freedom while protecting dumb users. But that is not the true goal of the upcoming changes.
TeMPOraL | 23 hours ago
Fine, just:
- Don't reset it every 5 days / 5 hours / 5dBm blip in Wi-Fi strength, because this pretty much defeats end-user automation, whether persistent or event-driven. This is the current situation with "Wireless Debugging", an otherwise cool trick for "rootless root", if only it didn't require being connected to Wi-Fi (and not just any Wi-Fi, but the same AP, breaking when the device roams in multi-AP networks).
- Don't announce the fact that this is on to everyone. Many commercial vendors, including those who shouldn't and those who have no business caring, are very interested in knowing whether your device is running with debugging features enabled, and if so, deny service.
Unfortunately, in a SaaS world it's the service providers that have all the leverage - if they don't like your device, they can always refuse service. Increasingly many do.
plst | 21 hours ago
scoofy | 21 hours ago
A non-trivial number of people should probably have to go see a specialist before being able to unlock sideloading in my opinion... which means we probably all would have to. It's annoying, but I actually care about other people.
hellojesus | 7 hours ago
Doesn't Android require a specific permission to be user-accepted for an installed app to read notifications? I think it's separate from the post-notifications permission.
This seems to be an issue of user literacy. If so, doesn't it make more sense for a user to have the option to opt into "I'm tech illiterate, please protect me" than destroy open computing as we know it?
hbn | a day ago
pas | a day ago
relatively easy for devs, but hard to scale for scammers
giancarlostoro | a day ago
yjftsjthsd-h | 6 hours ago
mormegil | a day ago
pmontra | 23 hours ago
But I'm afraid that this is security theater and the true goal is to protect revenue by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).
TeMPOraL | 23 hours ago
It's not just them. Every other SaaS, from banks to media providers to E2EE[0] chat clients to random apps whose makers feel insecure, or are obsessed with security [theater] best practices, just salivate at the thought of being able to check if you're a deviant running with root or debugging privileges, all because ${complex web of excuses that often sound plausible if you don't look too closely}. There's a huge demand for device attestation, remote or otherwise.
--
[0] - End-to-end Enshittified.
pmontra | 7 hours ago
altruios | 23 hours ago
It solves the 'smartest bear / dumbest human' overlap design concern in this situation.
201984 | 23 hours ago
plst | 21 hours ago
But I guess not reading the TOS is another wide problem, also fueled by companies like Google.
gmueckl | a day ago
Education is also not that effective. Spreading warnings about scams is hard and warnings don't reach many people for a whole laundry list of reasons.
The status quo is decidedly not fine. Society must act to protect those that can't protect themselves. The only remaining question is the how.
Google has an approach that would work, but at a high cost. Is there an alternative change that has the same effects on scammers, but with fewer issues for other scenarios?
bigstrat2003 | a day ago
philistine | a day ago
Nope. We could, for example, ask developers to register with their legal identity to release apps.
bigstrat2003 | a day ago
pas | a day ago
Play store can be fast and verification based and the F/OSS stores can be slower, reputation and review based.
...
But fundamentally the easiest thing is to ask people to pay to unlock the phone's security barriers, this makes it harder and costlier for scammers.
hellojesus | 5 hours ago
dmantis | 16 hours ago
Simple example: I have a FOSS VPN app running on my phone to avoid censorship and surveillance in some countries I visit. While using this app is no problem, non-anonymous development might carry consequences for the developer in some dictatorial jurisdictions (of which there are plenty). I'm not sure all devs of such systems would be willing to give their IDs.
Another example is that this way the US can cut countries and people they don't like out of mobile usage (which is basically equal to modern social life). Look at the sanctioned judges of the international court, because the US protects war criminals.
gmueckl | a day ago
Education isn't really working at this global scale. It doesn't reach people the way you seem to believe it does. Many, if not most, people are generally disinterested in learning new things, and this gets amplified when it involves technology.
crazygringo | a day ago
So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance. They actually have tons and tons of safety regulations that save tons and tons of lives, even if from your point of view that means not "treating people as adults". You have to wear a seatbelt, even if you feel like you're not being treated like an adult. Because it's also not just your own life you're putting at risk, but your passengers' as well.
You're taking the most extreme libertarian stance possible. Thank goodness that's an extremely minority view, and that the vast, vast majority of voters do actually think safety is important.
iamnothere | a day ago
If they make FOSS illegal, guess I’ll be a criminal. Come and take it.
bigstrat2003 | a day ago
> So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
This is harm to others and is very obviously something we should enforce. There are unreasonable laws about food (banning the sale of raw milk cheese for example, which most of the world enjoys with perfect safety), but by and large they are unobjectionable.
> You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance.
I never said I was opposed to striking a balance. Of course we can strike a balance. Indeed we already have when it comes to installing apps on Android. But these measures are being advanced as if safety were the only consideration, which it isn't.
> You're taking the most extreme libertarian stance possible.
No, that is what you have projected onto me. That's not actually what my stance is.
crazygringo | 18 hours ago
> Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world... Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom.
That sounds pretty black and white extreme to me, when you talk about things like "life is not safe" and a "fundamental truth". I don't see any appreciation of balance there.
Maybe it's not what you meant to write, but your comment continues to absolutely come across as extremist and anti-balance to me. It seems like I was mischaracterizing what you actually believe (now that you've elaborated), but I don't think I mischaracterized what you wrote.
jrm4 | a day ago
Food and seatbelts, that's literal health and life-and-death; very immediate and visible.
"Cybersecurity" rarely is; and even when it is, the problem is that the centralized established authorities (like google) aren't at all provably good at this.
simonask | 23 hours ago
pas | a day ago
And it seems Google thinks society is beginning to unravel in SEA due to scammers. Trust breaks down, people stop using phones to do important things, GDP can shrink, banks go back to cheques, trees will be cut down!!
It's bad to let people go and catch the zombie virus and then come back and spread it, right?
...
I don't like it, but the obvious decision is to set up a parallel authority that can issue certificates to developers (for side loading), so we don't have to trust Google. Let the developer community manage this. And if we can't then Google can revoke the intermediary CA. And of course Google and other manufacturers could sell development devices that are unlocked, etc.
TZubiri | a day ago
It signals that you don't care much about security, and that you don't care about non-technical users, and don't even have the capacity to see how they view a system.
Sure, you can analyze domain names effectively, you can distinguish between an organic post and an ad, you know the difference between Read and Write permissions to system files, etc...
But can you put yourself in the shoes of a user who doesn't? If not, you are rightfully not in a position to act as a steward of such users, and Google is.
danpalmer | 23 hours ago
Taking a step back though, I suspect there are cultural differences in approach here. Growing up in Europe, the idea of a regulation to make everyone safer is perfectly acceptable to me, whereas I get the impression that many folks who grew up in the US would feel differently. That's fine! But we also have to recognise these differences and recognise that the platforms in question here are global platforms with global impact and reach.
TeMPOraL | 23 hours ago
I grew up and live in Europe. I support the general idea of "regulation to make everyone safer" being an acceptable choice. At the same time, I vehemently oppose third-party interests reaching into my computing device and dictating what I can vs. cannot do with it.
But as you say, "global platforms with global impact and reach" - and so I can't set up my phone to conditionally read out text and voice messages aloud, because somewhere on the other side of the world, someone might get scammed into installing malware, therefore let's lock everything down and add remote attestation on top.
Unfortunately, the problem is political, not technological, and this here is but one facet of it. Ultimately, what SaaS does is give away all leverage: as users, it doesn't matter if we fully own the endpoints, or have a user-friendly vendor: any SaaS can ultimately decide not to serve a client that doesn't give the service a user-proof beachhead.
plst | 21 hours ago
And it's also not actual regulation, just new TOS from a company many are basically forced to interact with.
danpalmer | 19 hours ago
I've heard much criticism of it being too heavy-handed, but I don't think I understand criticism that it won't improve security. Could you expand on that?
tremon | 8 hours ago
em-bee | 22 hours ago
these people aren't gullible. they are ignorant (in the uneducated sense). they are not making bad decisions. they are not even aware that there is a decision to be made.
and worst of all, this problem affects the majority of those populations. if more than half of our population was alcoholic then we absolutely would restrict the access to alcohol through whatever means possible.
it's a pandemic. and we all know what restrictions that required.
plst | 22 hours ago
em-bee | 21 hours ago
how does it do that? (i am not getting hung up on "intuitive", i just mean you argue that the currently used design fuels incompetence)
how is a UI designed that doesn't fuel incompetence?
i have a hard time imagining what design aspects matter here, and how to improve upon them.
plst | 9 hours ago
I'm specifically talking about UX ("how a user interacts with and experiences a product, system, or service"), not necessarily UI.
> how does it do that? (i am not getting hung up on "intuitive", i just mean you argue that the currently used design fuels incompetence)
tl;dr We have a product, we want to make money, we need people to use the product. One of the things that stands in the way is people not understanding how to use our product. We will make sure they can get started as fast as possible, and not mention how they may hurt themselves with the product; that would scare them away. Hurting yourself with our product falls in the broad "don't do stupid things" category. We will never explain the "framework" (in the case of an OS I mean apps: that apps can interact with each other and with your data, and how you can, or cannot, control that), even in broad terms. Just click this button and get your solution.
It started with PCs and people not understanding how to not lose their documents. Now that every device is connected to the internet, the problem became worse.
You can now say that "sideloading" is stupid anyway, but this is not the only problem. Another thing that people still usually learn by painful experience is backups. There are fake apps on both stores. Another thing: in-band signaling. You cannot trust email, phones, WhatsApp, Messenger... Even if a friend you often chat with is messaging you, they could've just been hacked. Try explaining that you also cannot trust websites, and that even technical people don't have a good way of telling if an email or a website is real.
But at least enrollment is fast and adoption metrics are growing. Since we are already in "move fast and break things" mindset, we will think about fixing such issues when it actually becomes a problem.
To be clear, I'm not saying that making technology easy is always bad, or that you should always expose the user to "the elements" and expect them to pipe commands in the shell. But I think that often the focus is only on making enrollment fast. "Get started"
What if we actually expected people to understand something about technologies they want to use?
em-bee | 7 hours ago
but that's what we have now, and it's not working.
the implied question is: what if we don't allow people to use technology unless they can demonstrate that they understand it?
is that really something we want to do? this sounds like gatekeeping, elitism, and anti-innovation, because if fewer people are going to use a technology, then there is less motivation to build it.
remember, i think it was someone at IBM that said that the potential for computers is some small number? and then it grew beyond anyone's wildest expectations?
do you think that would have happened if we had required understanding before we let anyone buy a home computer?
besides education, i don't know how to approach this issue.
plst | 6 hours ago
My entire point is that education is the opposite of what we have now. That users are not expected to understand or know anything about IT technologies they use. Not the case with cars, recreational and prescription drugs...
> the implied question is: what if we don't allow people to use technology unless they can demonstrate that they understand it?
It's not exactly my point, but in extreme cases, maybe. I genuinely think that nobody has even tried to educate people about computers. Like, have you seen IT classes in schools? Assuming you are lucky enough for the classes to have any content, you will probably get some lessons in Word and Excel. Maybe some programming. Maybe Paint. But actually using the computer? Dangers of the internet, importance of backups, trusting websites, applications and emails? The concept of application and difference between applications and websites? And those technologies are not "developing" like they were 20 years ago, they are probably here to stay.
> is that really something we want to do? this sounds like gatekeeping, elitism, and anti-innovation because if if less people are going to use a technology, then there is less motivation to build it.
And the alternative Google and Apple present is giving them paternalistic control over the most popular computing device: the say over what people can do with their devices. After they made sure that these devices are embedded into our lives. I would much rather we slowed down with innovation for a second and resolved such issues first, because the way I see it, it's literally manipulation (also see: dark patterns).
As for the gatekeeping and elitism: assuming we want a "computing license" (not necessarily what I'm arguing for), is a "driving license" also gatekeeping and elitism? Or maybe some amount of gatekeeping is good?
As for anti-innovation - I genuinely think we might have had just enough innovation in the field and it may be time to slow down a little, take a step back and evaluate the results. And I honestly don't see much innovation in apps/computers/web space besides maybe AI, and governments are already working on regulating that.
> do you think that would have happened if we had required understanding before we let anyone buy a home computer?
Home computers were very harmless before the internet, but that's an aside. Assuming the tech is actually useful, not just slightly more convenient than "traditional" alternatives, then yes, I'm sure it would have still grown to sizes it has grown to today. Maybe a bit slower.
> besides education, i don't know how to approach this issue.
Same, I generally do think this whole situation needs more consideration.
tremon | 9 hours ago
-- C.S. Lewis
em-bee | 7 hours ago
the correct solution is of course education, but education takes time. we can educate today's children so that they can protect themselves in the future. but that's the next generation. for the current generation that kind of education is too late.
the proposed solution is a stopgap measure. do you have a better idea how to solve the problem? (maybe putting more effort into prosecution, but that costs money. or making banks responsible for covering the loss. but then you'll get banks demanding the protection. tyranny of the banks then? is that any better? that's actually happening in europe now.)
not doing anything will hurt a lot of people and make them unhappy. as a government you really don't want that either.
hellojesus | 7 hours ago
acac10 | 22 hours ago
Then we will see how you will react.
hypeatei | a day ago
The community does not need to do that. Installing software on my device should not require identification to be uploaded to a third party beforehand.
We're getting into dystopian levels of compliance here because grandma and grandpa are incapable of detecting a scam. I sympathize, not everyone is in their peak mental state at all times, but this seems like a problem for the bank to solve, not Android.
iamnothere | a day ago
999900000999 | a day ago
"I am responsible for my own actions" mode.
You click that, and the phone switches into a separate user space. SafetyNet (Google's device integrity check, now Play Integrity) is disabled, which is what most financial apps rely on.
Then you can install all the fun stuff you want.
This is really a matter of Google not sandboxing things right. Why the hell does App A need access to data or notifications from App B?
AAAAaccountAAAA | a day ago
thewebguyd | a day ago
Advertising networks. Just like how you see crap like a metronome app have a laundry list of permissions that it doesn’t need. Some cases they are just scammy data harvesters, but in other cases it’s the ad networks that are actually demanding those permissions.
Google won’t sandbox properly because it’s against their direct business interest for them to do so. Google’s Android is adware, and that is the fundamental problem.
renewiltord | a day ago
Retr0id | a day ago
Aren't we supposed to have sandboxing to prevent this kind of thing? If the malware relies on exploiting n-days on unpatched OSes, they could bypass the sideloading restrictions too.
UncleMeat | a day ago
On the Play store there is a bunch of annoying checking for apps that request READ_SMS to prevent this very thing. Off Play such defense is impossible.
Retr0id | a day ago
warkdarrior | a day ago
Retr0id | 22 hours ago
(I'm being facetious here but this is massively preferable to disabling sideloading altogether)
deaux | 15 hours ago
If you care about the topic, which you seemingly do, stop using this doubleplusgood term.
UncleMeat | 22 hours ago
I am pretty confident that if Google had enabled this policy only for apps which use these permissions that the community would still be upset.
EvanAnderson | 5 hours ago
[0] https://github.com/tmo1/sms-ie
jhasse | 23 hours ago
UncleMeat | 22 hours ago
I am pretty confident that if Google had enabled this policy only for apps which use these permissions that the community would still be upset.
hahn-kev | a day ago
Alternatively, reading notifications could be opt-in per app, so the reading app needs permission to read your SMS app's notifications, or your bank's. That would not be as foolproof, since it requires some tech literacy to understand.
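The per-source grant model being proposed here can be sketched as a small access-control list. This is a toy illustration of the idea, not Android's actual notification API; all package names are made up:

```python
# Toy model of per-source notification access: instead of one global
# "read all notifications" grant, the user approves individual
# (reader app, source app) pairs.

class NotificationACL:
    def __init__(self):
        self.grants = set()  # approved (reader, source) pairs

    def grant(self, reader: str, source: str) -> None:
        """User explicitly allows `reader` to see `source`'s notifications."""
        self.grants.add((reader, source))

    def can_read(self, reader: str, source: str) -> bool:
        return (reader, source) in self.grants

acl = NotificationACL()
acl.grant("com.example.wearbridge", "com.example.sms")

# The smartwatch bridge the user set up may mirror SMS notifications...
assert acl.can_read("com.example.wearbridge", "com.example.sms")
# ...but a freshly installed "verification app" sees nothing by default,
# including the bank's 2FA notifications from the scam scenario above.
assert not acl.can_read("com.evil.verifier", "com.example.bank")
```

The design trade-off is exactly the one noted above: every new (reader, source) pair is a prompt the user has to understand, which is where the tech-literacy requirement creeps back in.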
marcprux | a day ago
For example, the "Restricted Settings"¹ feature (introduced in Android 13 and expanded in Android 14) addresses the specific scam technique of coaching someone over the phone to allow the installation of a downloaded APK. "Enhanced Confirmation Mode"², introduced in Android 15, adds further protection against potentially malicious apps modifying system settings. These were all designed and rolled out with specific threat models in mind, and all evidence points to them working fairly well.
For Google to suddenly abandon these iterative security improvements and unilaterally decide to lock down Android wholesale is a jarring disconnect from their work to date. Malware has always been with us, and always will be: both inside the Play Store and outside it. Google has presented no evidence to indicate that something has suddenly changed to justify this extreme measure. That's what we mean by "Existing Measures Are Sufficient".
[^1]: https://support.google.com/android/answer/12623953
[^2]: https://android.googlesource.com/platform/prebuilts/fullsdk/...
mirekrusin | a day ago
tadfisher | a day ago
In other news, a new study shows that cutting off your feet is 100% effective against athlete's foot.
mirekrusin | a day ago
array_key_first | a day ago
microtonal | a day ago
Many Android phones still do not have a separate secure element.
Also, the Play Store itself regularly contains malware.
In the end it is mostly about control, dressed up as protecting users. If it was about security, Google would support GrapheneOS remote attestation for Google Pay (for being the most secure Android variant) and cut off many existing phones with deplorable security.
workfromspace | a day ago
dfabulich | a day ago
"Existing measures are working," perhaps?
kodebach | a day ago
In the section "Existing Measures Are Sufficient", your letter also mentions
> Developer signing certificates that establish software provenance
without any explanation of how that would be the case. With the current system, yes, every app has to be signed. But that's it. There's no certificate chain required, no CA checks are performed, and self-signed certificates are accepted without issue. How is that supposed to establish any form of provenance?
If you really think there is a better solution to this, I would suggest you propose some viable alternative. So far all I've heard from the opponents of this change is either "everything is fine" or "this is not the way", while conveniently ignoring the fact that there is an actual problem that needs a solution.
That said, I do generally agree with you that mandatory verification for *all* apps would be overkill. But that is not what Google has announced in their latest blog posts. Yes, the flow to disable verification and the exemptions for hobbyists and students are just vague promises for now. But the public timeline (https://developer.android.com/developer-verification#timelin...) states developer verification will be generally available in March 2026. Why publish this letter now and not wait a few weeks so we can see what Google is actually planning before getting everybody outraged about it?
Dusseldorf | 23 hours ago
kodebach | 22 hours ago
The exceptions for students/hobbyist were always promised, but the "advanced flow" came later based on this feedback. AFAICT Google has, so far, only made things better after the initial announcement. I don't see why we shouldn't give them the benefit of doubt, at least until we have some specifics.
Pushing this open letter out just days/weeks before Google promised the next major update just seems off.
renewiltord | a day ago
What is this evidence? Please share it.
svat | 7 hours ago
- From Dec 2024 there's https://www.bangkokpost.com/business/general/2915570/state-g... and https://theinvestor.vn/thai-govt-collaborates-with-google-to... which list some efforts done in “collaboration between the Digital Economy and Society (DES) Ministry [of Thailand] and Google”. It mentions “The initiative started in April, providing the Google Play Protect feature”, which “blocked attempts by criminals to install apps more than 4.8 million times on more than 1 million Android devices”. And https://www.nationthailand.com/blogs/business/tech/40036973 is from earlier (Apr 2024), about the introduction of the Google Play Protect feature.
- From April 2025 there's https://blog.google/company-news/inside-google/around-the-gl... a blog post from a “VP, Government Affairs & Public Policy”, which mentions “people in Asia Pacific feel it acutely, having lost an estimated $688 billion in 2024” (I think this may be across all scams?) and ends with “Combatting evolving online fraud in Asia-Pacific is critical” after listing a bunch of random things (unrelated to Android) Google is/was doing. This suggests to me that Google was under some criticism/pressure from governments for enabling scams, and eager to say “see, we're doing something”.
- The developer verification announcement came four months later in August 2025: https://android-developers.googleblog.com/2025/08/elevating-...
> In early discussions about this initiative, we've been encouraged by the supportive initial feedback we've received. In Brazil, the Brazilian Federation of Banks (FEBRABAN) sees it as a “significant advancement in protecting users and encouraging accountability.” This support extends to governments as well, with Indonesia's Ministry of Communications and Digital Affairs praising it for providing a “balanced approach” that protects users while keeping Android open. Similarly, Thailand’s Ministry of Digital Economy and Society sees it as a “positive and proactive measure” that aligns with their national digital safety policies.
This suggests that it was a negotiation with the governments/agencies in Brazil, Indonesia, and Thailand that were breathing down Google's neck to do something.
- The fourth country where this developer verification is rolling out first is Singapore, and https://www.channelnewsasia.com/singapore/android-malware-sc... is from Sep 2023 while https://www.channelnewsasia.com/singapore/google-android-dev... is from Feb 2024 which mentions that a certain upgrade to Google Play Protect (blocking apps if they “demands suspicious permissions such as access to restricted data like SMSes and phone notifications”) was first rolling out in Singapore.
- And the most recent https://android-developers.googleblog.com/2025/11/android-de... from November 2025 (which promised the “students and hobbyists” account type and the “experienced users” flow “in the coming months”) also has a “Why verification is important” section that mentions the “consistently acted to keep our ecosystem safe” and “common attack we track in Southeast Asia” and “While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly”.
The overall picture I get is less of “Google to suddenly abandon these iterative security improvements” but more like: under pressure from governments to stop scams, Google has been doing various things like the things you mentioned, and scammers have also been evolving and finding new ways to carry out scams at scale (like “impersonating developers”), and the latest upcoming change requiring developer verification on “certified Android devices” is simply the next step of the iteration. It sucks and feels like a wholesale lock-down, yes, but it does not seem a jarring disconnect from the previous steps in the progression of locking things down.
realusername | a day ago
Right now when I search for "ChatGPT", the top app is a counterfeit app with a fake logo, is it really this store which is supposed to help us fight scams?
warkdarrior | a day ago
Just did a Play search for "ChatGPT" and the top two results were both OpenAI's app (one sponsored by OpenAI, one from Google's organic results). So anecdotally your results may vary.
realusername | 15 hours ago
So maybe before talking about anything about direct installs, they could fix the big scams on the Play Store.
raincole | a day ago
Make the warning a full screen overlay with a button to call local police then.
(Seriously)
"but local police won't treat that seriously..." "the victim will be coached to ignore even that..." well no shit then you have a bigger problem which isn't for google to fix.
a456463 | a day ago
GeekyBear | a day ago
People choosing between the smartphone ecosystems already have a choice between the safety of a walled garden and the freedom to do anything you like, including shooting yourself in the foot.
You don't spend a decade driving other "user freedom" focused ecosystems out of the marketplace, only to yank those supposed freedoms away from the userbase that intentionally chose freedom over safety.
chopin | a day ago
Only immutable devices should be allowed as second factor.
RHSeeger | a day ago
So yes, "it's fine the way it is" _is_ valid; but the meaning is "we're at a good point in the balance; any more cost is too much given the gains it generates"
shaky-carrousel | a day ago
microtonal | a day ago
glenstein | a day ago
I think my overriding concern is not nuking F-Droid. I actually think that's a great solution and, interestingly, F-Droid apps already don't use significant permissions (often no permissions at all!), so that might work. Also it would be good if F-Droid itself could earn a trusted distributor status, if there's a way to do that.
Or a marriage of the two, F-Droid can jump through some hoops to be a trusted distributor of apps that don't use certain critical permissions.
I think there have to be ways of creatively addressing the issue that don't involve nuking a non-evil app distribution option.
pessimizer | a day ago
If you can be convinced by this, you can be convinced by anything. What if the scammer uses "fear and urgency" to make the person log onto their bank account and transfer the funds to the scammer?
If you can convince people to install new apps through "fear and urgency," especially with how annoying it often is to do outside of the blessed google-owned flow (and they're free to make it more annoying without taking this step), that person can be convinced of anything.
> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
There's no other "solution" other than control by an authority that you totally trust if your "threat" is that a user will be able to install arbitrary apps.
The manufacturer, service provider, and google, of course, won't be held to any standard or regulations; they just get trusted because they own your device and its OS and you're already getting covertly screwed and surveilled by them. Google is a scammer constantly trying to exfiltrate information from my phone and my life in order to make money. The funny thing is that they are only pretending to defend me from their competition - they're not threatened by those small-timers - they're actually "defending" me from apps that I can use to replace their own backdoors. Their threat is that they might not know my location at all times, or all of my contacts, or be able to tax anyone who wants access to me.
rogerallen | 23 hours ago
People fearful about being scammed should buy a phone with a hardware lock to prevent it from ever accepting sideloads--no option to go to dev mode, ever. You could even charge more for the extra security.
People who want the freedom to sideload can choose to buy a phone without the extra hardware security feature.
miloignis | 23 hours ago
All phone calls, SMS, emails, and instant messages should be blocked unless the other party is in my contacts or I have reached out to them first (plus opt-in contact from contacts of contacts, etc). Ideally, cryptographically verified.
I would argue this is the real solution to spam and scamming - why on earth are random people allowed to contact me without my consent? Phone numbers or email addresses being all you need to contact me should be an artifact of an earlier time, just like treating social security numbers as secret.
I realize this isn't super practical to transition existing systems to (though spam warnings on email and calls helps, I suppose, and maybe it could be made opt-in). I dearly hope the next major form of communication works this way, and we eventually leave behind the old methods.
Also, SMS shouldn't be used for 2FA anyway.
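The consent-gated delivery described above can be sketched in a few lines. This is a toy model, not a real protocol: HMAC with a shared secret stands in for whatever signature scheme a real system would use, and the names are invented:

```python
import hashlib
import hmac

# Toy sketch of consent-based messaging: a message is delivered only if
# the sender is in the recipient's contacts AND it authenticates against
# a key exchanged when the contact was added. Everything is illustrative.

contacts = {"alice": b"key-exchanged-when-alice-was-added"}

def sign(key: bytes, body: str) -> str:
    return hmac.new(key, body.encode(), hashlib.sha256).hexdigest()

def deliver(sender: str, body: str, tag: str) -> str:
    key = contacts.get(sender)
    if key is None:
        # Unknown senders never reach the user -- no ring, no notification.
        return "blocked: not a contact"
    if not hmac.compare_digest(tag, sign(key, body)):
        # Knowing the address isn't enough; impersonation fails too.
        return "blocked: bad signature"
    return "delivered"

assert deliver("alice", "lunch?", sign(contacts["alice"], "lunch?")) == "delivered"
assert deliver("scammer", "your account is compromised!", "deadbeef") == "blocked: not a contact"
```

The point of the sketch is that a phone number or email address stops being a capability to interrupt you; delivery requires a relationship established out of band, which is exactly what defeats the cold-call scam in the quoted example.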
wilsonnb3 | 21 hours ago
cjmoran | 18 hours ago
What do we replace it with? Haha, idk man. How about water? More difficult to hoard in ridiculous quantities, better spend it before it evaporates, and it occasionally falls from the sky (UBI). That's what I call a liquid asset!
cyberrock | 21 hours ago
eviks | 17 hours ago
You'll always find individual cases where people do extremely dumb stuff, but using that as a justification is also dumb. If you want to significantly curtail the freedoms of a large group, it's on you to come up with a good evaluation of the tradeoffs, so
> the community needs a better response to this problem than "nuh uh, everything's fine as it is."
They already have, but you chose to use a false simplification as representative.
kelp6063 | a day ago
gleenn | a day ago
shimman | a day ago
One thing, we the people can do, is pressure our politicians to break up Google along with the rest of big tech.
There are many primary challengers this cycle that are running anti-monopoly platforms. Help their cause, signing pointless petitions is just West Wing style fantasy that is extremely childish.
jhasse | 23 hours ago
jeroenhd | a day ago
Google will not change their minds, they're too busy buying goodwill from governments by playing along. There aren't any real alternatives to Android that are less closed off and they know it.
Retr0id | a day ago
jonathanstrange | a day ago
Concretely, my original plan was to provide an .apk for manual installation first and tackle all this app store madness later. I already have enough on my plate dealing with macOS, Windows, and Linux distribution. With the change, delaying this is no longer viable, so Android is not only one among five platforms with their own requirements, signing, uploading, rules, reviews, and what not, it is one more platform I need to deal with right from the start because users expect software to be multiplatform nowadays.
Quite frankly, it appears to me as if dealing with app stores and arbitrary and ever changing corporate requirements takes away more time than developing the actual software, to the detriment of the end users.
It's sad to watch the decline of personal computing.
verdverm | a day ago
jonathanstrange | a day ago
verdverm | a day ago
InsideOutSanta | a day ago
The result is unwarranted trust from users in stores that are full of scams.
Apple and Google effectively built malware pipelines under the guise of security.
verdverm | a day ago
pona-a | 15 hours ago
Meanwhile my parents are getting hammered by inescapable malvertisements from Google, a TTS voice ordering them to install a "cleaner" app or have their phone die, no matter how many you report or what knobs you touch under ad personalization. Facebook knew 20% of their yearly revenue was scams and intentionally deferred moderator action to keep that business. All this "trust" is so overwhelming, the only way to make our computing more trusted is if OEM auto-installed the malware themselves. Oh wait, Samsung does that!
jhasse | 23 hours ago
boje | a day ago
drnick1 | a day ago
turblety | a day ago
jhasse | 23 hours ago
wackget | a day ago
microtonal | a day ago
https://privsec.dev/posts/android/banking-applications-compa...
jamesnorden | a day ago
arjie | a day ago
array_key_first | a day ago
microtonal | a day ago
drnick1 | a day ago
jhasse | 23 hours ago
dvh | a day ago
hollandheese | a day ago
fsflover | a day ago
yndoendo | a day ago
Linux based phones are starting to become viable as daily drivers. [0] They even ship with Android in a VM in case an application is needed that has no Linux equivalent.
I am interested in how Google's gatekeeper tactics are going to affect Android like platforms such as /e/os and GrapheneOS. [1]
[0] http://furilabs.com/
[1] https://murena.com/america/products/smartphones/
cesarb | a day ago
> No luck needed. Linux based phones are starting to become viable as daily drivers.
Then please tell me, which non-Android Linux-based phone can I buy here in Brazil (one of the first places where Android would have these new restrictions)? I'd love to know (not sarcasm, I'm being sincere). Keep in mind that only phones with ANATEL certification can be imported, non-certified phones will be stopped by customs and sent back.
iamnothere | a day ago
bitwize | a day ago
iamnothere | a day ago
Edit: apparently if it isn’t a “marketable product” then the law may not apply. So far they haven’t enforced it against Linux distros, likely because of this exception. However, IANAL (and definitely not a Brazilian lawyer).
bitwize | a day ago
yndoendo | 7 hours ago
I do not know all international laws. Nor do I respect countries and politicians that force such restrictive laws that prevent reuse of good devices that are now unsupported by the original manufacturer.
Secondly if that law was enacted in the US ... I would buy a product that has a known bug to allow for loading a custom OS. In court I would push for jury-nullification too.
Authoritative governments suck at all fronts ... not just phone restrictions.
Would you mind pointing me to the ANATEL certification process? I am wondering if the letter of the law is worded to prevent competition ... sounds like something Google would have helped push through.
Are you allowed old school non-smart phones? That is how I would do it. Laptop and dumb phone.
thayne | a day ago
jeroenhd | a day ago
I'm kind of hoping Qualcomm's open sourcing work will also affect the ability to run mainline Linux on Android devices, but it's looking like a Linux OS that covers the bare basics seems to be a decade away.
jhasse | 23 hours ago
criddell | a day ago
In the time it took you to read this comment, 200 phones were sold.
sdsd | a day ago
criddell | a day ago
I've mostly owned Android devices but for my family I've always recommended iOS devices because they are more locked down.
shimman | a day ago
I'm sorry but people that think this way tend to also think having money is some morality signal and not one of a massive personality defect (greed).
jrm4 | a day ago
Do BOTH, when possible.
octoclaw | a day ago
Scammers will use stolen identities or shell companies. They already do this on the Play Store itself. The $25 fee and passport upload haven't prevented the flood of scam apps there.
Meanwhile F-Droid's model (build from source, scan for trackers/malware) actually provides stronger guarantees about what the app does. No identity check needed because the code speaks for itself.
The permission-based approach someone mentioned above makes way more sense. If your app wants to read SMS or intercept notifications, sure, require extra scrutiny. But a simple calculator app or a notes tool? That's just adding friction for no security benefit.
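The "code speaks for itself" point above rests on reproducible builds: anyone can build the app from source and check that the result matches the APK being distributed. A minimal sketch of that idea in Python (the function names and the use of a bare SHA-256 digest are illustrative assumptions, not F-Droid's actual tooling, which also involves signature copying and per-file comparison):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large APKs need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_build(local_build: Path, published_digest: str) -> bool:
    """True if our own from-source build hashes to the published digest,
    i.e. the distributed binary really corresponds to the public source."""
    return sha256_of(local_build) == published_digest.lower()
```

If the digest of your own build matches the one published for the distributed APK, no identity check is needed to know what the binary contains; the source repository is the ground truth.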
jeroenhd | a day ago
No permission system can work as well as a proper solution (such as banks and governments getting their shit together and investing in basic digital skills for their citizens).
rm30 | a day ago
This conflates identity verification with criminal deterrence, they're not the same thing.
nickorlow | a day ago
UncleMeat | a day ago
I don't know if this trade off is worth it, but the idea that it won't affect this abuse at all is false.
array_key_first | a day ago
UncleMeat | 22 hours ago
tavavex | a day ago
EmbarrassedHelp | a day ago
It would not be surprising for a government to tell Google it must block any VPN apps from being installed on devices, with Google using the developer requirements to carry out the ban.
criddell | a day ago
Don't they already have that power?
nickorlow | a day ago
scoofy | 23 hours ago
nickorlow | 17 hours ago
mhitza | a day ago
aftergibson | a day ago
criddell | a day ago
How can you judge if Google's plan is a good one? Add up the harms caused by the new rules and weigh that against the reduction in harm and see where the balance is?
I have a hard time believing the net outcome for the overall Android community would be negative.
OutOfHere | a day ago
sunaookami | a day ago
dsl | a day ago
I have an APK I would like you to install on your personal phones. No, I won't tell you who I am.
Please let me know when you are comfortable with this.
bigstrat2003 | a day ago
dsl | a day ago
If you want to make the decision to install Hay Day, the user should be able to know that it is the Hay Day from Supercell or from Sketchy McMalwareson.
99.9% of developers should have no issue with their name being associated with their work. If you genuinely need to use an anonymously published app, you will still be able to do that as a user.
nickorlow | a day ago
I'm pretty sure the goal of Google's changes is to make it so you can't.
NicuCalcea | a day ago
nickorlow | a day ago
mixologic | 21 hours ago
nickorlow | 17 hours ago
pona-a | 14 hours ago
exe34 | a day ago
zem | a day ago
pona-a | 14 hours ago
jhasse | 23 hours ago
exe34 | a day ago
jech | a day ago
iamnothere | a day ago
Also, I’m going to coin a new term for the recurring names that I see promoting this kind of thing here: “safety fascists.” Safety fascists won’t sleep until there is a camera watching every home, a government bug in every phone, a 24/7 minder for every citizen. For your safety, of course.
I think I may hate safety fascists more than I hate garden variety fascists. That’s an accomplishment!
btreesOfSpring | a day ago
It feels like independent development on devices has slowed in recent years. More stores appealing to different developer models/tools and monetization strategies please.
pserwylo | a day ago
> Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. [0]
> Advanced users will be able to "Install without verifying," but expect a high-friction flow designed to help users understand the risks. [1]
Firstly - I have yet to see any "ongoing conversations with the community" from Google, either before this blog post or in the substantial time since. "The community" has no insight into whether any such "advanced flow" is fit for purpose.
Secondly - I, as an experienced engineer, may be able to work around a "high-friction flow". But I am not fighting this fight for me; I am fighting it for the billions of humans for whom smartphones are an integral part of daily life. They deserve the right to install software using free, open, transparent app stores that don't require signing up with Google/Samsung/Amazon for the privilege of installing software on a device they own.
One example of a "high-friction flow" which I would find unacceptable if implemented for app installation on Android is the way browsers treat invalid SSL certificates. If I as a web developer set up a valid cert, and the client then receives an invalid cert, the browser (which is - typically - working on behalf of the customer) cannot guarantee that it is talking to the right server. This is a specific and real threat model, which the browser addresses by showing [2]:
* "Your connection is not private"
* "Attackers might be trying to steal your information (for example, passwords, messages or credit cards)"
* "Advanced" button (not "Back to safety")
* "Proceed (unsafe)" link
* "Not secure" shown in address bar forever
In this threat model, the web dev asked the browser to ensure communication is encrypted, and it is encrypted with their private key. The browser cannot confirm this to be the case, so there is a risk that a MITM attack is taking place.
This is proportionate to the threat, and very "high friction". I don't know of many non-tech people who will click through these warnings.
When the developer uses HSTS, the flow is even higher friction. The user is presented with all the warnings above, but no "Advanced" button. Instead, on Chromium-based browsers they need to type "thisisunsafe" - not into a text box, just typed blindly while viewing the page. On Firefox, there is no recourse at all. I know of very few software engineers who can bypass an HSTS certificate error when presented with one, e.g. in a non-prod environment with corporate certs where they still want to bypass it to test something.
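The interstitial behavior described above can be summarized as a tiny decision table. A toy Python model of it (this is my paraphrase of observed browser behavior, not actual browser source; the names are made up):

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()           # certificate valid: load the page normally
    WARN_WITH_BYPASS = auto()  # full-page warning; "Advanced" -> "Proceed (unsafe)"
    WARN_NO_BYPASS = auto()    # HSTS host: warning with no clickable bypass

def tls_interstitial(cert_valid: bool, hsts_host: bool) -> Action:
    """Rough model of a browser's invalid-certificate decision."""
    if cert_valid:
        return Action.PROCEED
    if hsts_host:
        # Chromium still has the hidden typed "thisisunsafe" escape hatch;
        # Firefox offers no recourse at all for HSTS hosts.
        return Action.WARN_NO_BYPASS
    return Action.WARN_WITH_BYPASS
```

The point is how aggressively friction scales with risk: an ordinary invalid cert leaves a buried escape hatch, while HSTS removes even that from the visible UI.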
If these "high-friction" flows were applied to certified Android devices each time a user wanted to install an app from F-Droid, it would kill F-Droid and similar projects for almost all non-tech users. All users, not just tech users, deserve the right to install software on their smartphone without having to sign up for an "app store" experience that games their attention and tries to get them to install scammy, attention-seeking games that harvest their personal information and flood them with advertisements.
Hence, I don't want to tell people "Just install [insert non-certified AOSP based project here]". I want Android to remain a viable alternative for billions of people.
[0] - https://android-developers.googleblog.com/2025/11/android-de...
[1] - https://x.com/matt_w_forsythe/status/2012293577854930948
[2] - https://wrong.host.badssl.com/
cyanydeez | a day ago
tsoukase | a day ago
WarmWash | a day ago
Google listened.
Blame the judge for one of the worst legal calls in recent history. Google is a monopoly and Apple is not. Simple fix for Google...
Same comment I made a few days ago, I feel it bears repeating as much as possible until it's really driven home how detrimental and uninformed that decision was.
pas | a day ago
kodebach | a day ago
However, there is a relevant court case here. The one about Samsung's "Auto Blocker" (https://arstechnica.com/gadgets/2025/07/samsung-and-epic-gam...). Epic Games sued because Samsung made it too hard to install apps from "untrusted" sources. This may be a reason why Google is now trying to make the process more difficult on the developer side instead.
pas | 13 hours ago
the Samsung case is very interesting, haven't bumped into that one before.
... as far as I understand, the really nasty part of "contemporary" antitrust jurisprudence is that the standard of enforcement is to show that things would be cheaper for consumers.
(Though I don't know why developers are not considered consumers of the app marketplace services; after all, bringing their own payments and whatnot would be much more cost effective for them. Unfortunately, the courts are mostly locked into this very inefficient, path-dependent way of regulating anything through super expensive arguments, which is an obvious (?) dysfunction of legislation.)
andyferris | a day ago
Apple was deemed not to be anticompetitive in app stores because there was no existing market of app stores on iOS. Google was more open in allowing other app stores, but deemed anticompetitive by discouraging their use relative to the Play store.
The irony is the more open player was deemed more anticompetitive. OP is saying Google is “fixing” their anticompetitive behavior by eliminating alternative app stores entirely.
andyferris | a day ago
Things that everyone relies on for life are generally regulated by law. Telecom platforms for instance. I’d say the mandatory software platform I need for my bank, drivers license, daily communication, etc should be in this bucket.
The EU declaring both Apple and Google gateway platforms is a much better approach. Congress is abdicating its responsibility to craft the legal frameworks for equal access in the modern age.
thegrim33 | a day ago
The US government is by design supposed to be as minimal as possible, and the laws affecting you kept as local as possible. We're not supposed to have a "the government" that's the same as EU governments. "The federal government should make laws" should be an absolute last resort. When you say "congress is abdicating its responsibility", I'd like you to point to where in the constitution it says that congress has such responsibilities.
paxys | 22 hours ago
redbell | 14 hours ago
This! I was about to reply that you have already posted this comment four days ago: https://news.ycombinator.com/item?id=47092480
arjie | a day ago
> Disproportionate impact on marginalized communities and controversial but legal applications
applies more to the elderly in third-world countries who are constantly scammed through fraudulent side-loaded apps than it does to hackers who want to install whatever software they want but do not want to use a non-Google AOSP distribution.
jdlyga | a day ago
singpolyma3 | a day ago
asim | a day ago
TheJoeMan | a day ago
TZubiri | a day ago
Let's consider that Google's Android was and is a huge improvement in security in terms of OS design (even if inspired by iOS) over the previous incumbent (let's call it Windows). That security difference still exists today, probably due to Windows' prioritization of backwards compatibility and its later positioning in the market as a cheap power tool (cheap compared to iOS, a power tool compared to Android).
That security advantage, by the way, was not just the result of initial design, but it required a lot of maintenance, in the form of the 'Play Store' App Store equivalent (at no cost to the user no less).
All this to say that let's consider this context, and consider what alternatives are proposed.
All this to say: let's consider this context and the alternatives being proposed.
1. The Windows "install whatever you want" model (now with OS-approved certificates): as mentioned, worse, with almost no sandboxing.
2. Linux package managers + install whatever you want: a valid model for power users and programmers, not really relevant for mass-market personal computing.
3. Keeping the old Android system: this would mean simply ignoring the problem of growing, professional, untouchable malicious actors who seem to be gaining power with the advent of anonymous financial tech. Is this the actual proposal? Do nothing about the problem? Pretend there is no problem?
I don't think the problem is necessarily malware. To take a specific example, suppose a casino from the Isle of Man is admitting underage users and users from jurisdictions where it is illegal. Regardless of whether you think this is OK, debatable, or circumstance-dependent, isn't the ask to identify the developer rather trivial? Just a little bit of paperwork: you want to be a developer? Install code that someone else will use? Put your name on it; have skin in the game.
I think there's also a contradiction between the need for developer privacy and user privacy. Most HN users are privacy-sensitive, and I propose there's a tradeoff between the privacy of the consumer and that of the producer. In order to provide privacy and rights to the user, the producer needs to come forward. There's no way to have the cake and eat it too: if both producer and consumer are shy, they will never find each other, and if both stay anonymous, they cannot trust each other or give any guarantee to the other party that they won't go rogue.
You know this if you've tried to start a business, you can either put your face, your name, register with the state, put your actual address. Or you can use an anonymous brand, a Registered Agent Address, etc... The latter is a harder sell than the former, and you only don't notice it if you are completely absorbed in your own world and cannot put yourself in the shoes of your customer.
tl;dr: Google has an impeccable data security track record. And User/Developer privacy is a tradeoff. Google is right to protect user privacy and not developer privacy.
atlgator | 23 hours ago
The most telling detail is the sequencing. Google spent years in court arguing Android is open to fend off antitrust regulators, won key battles on that basis, and is now quietly closing the door they swore under oath was permanently propped open. The antitrust defense was the product roadmap's cover story. And framing this as security is particularly rich from the company whose own Play Store routinely hosts malware that passes their review. The problem they're solving isn't "unverified developers distribute harmful apps" — it's "unverified developers distribute apps we can't monetize or control."
eqvinox | 21 hours ago
schmorptron | 10 hours ago
ChoGGi | 8 hours ago
wernsey | 6 hours ago
The appeals to people in Southeast Asia being scammed remind me of a blog post by Cory Doctorow last year: Every complex ecosystem has parasites [1]
The gist of it is that technology can be useful, but that usefulness comes with a price: sometimes bad actors are going to commit fraud or other undesirable actions.
As an example, you can reduce the amount of banking app scams to 0% by simply denying any banking apps on phones. But because of banking apps' usefulness we're not going to do that, so there will be some non-zero risk that you will get scammed.
As a technical user I chose Android for its usefulness, accepting that there may be a (minute) chance that I get scammed, but it is a risk I am willing to take, and Google will unilaterally take this choice away from me.
Still, I don't believe Google's security concerns are sincere, so I think I just wasted my time typing all of this.
[1] https://pluralistic.net/2025/04/24/hermit-kingdom/