The title of the post, which the submitter dutifully copied, is IMHO unfortunate since the post seeks to answer the following question:
What I need is to understand why it is designed this way, and to see concrete examples of use cases that motivate the design
It's not "just another" explanation of how OAuth works, which was my immediate guess when reading the title.
However, I'm glad I opted to give it a chance; it's likely especially illuminating for the younger crowd who didn't get to experience the joys of the early web 2.0 days.
OAuth has always been quite hard to grasp, even though I use it every day. One day I'll write an implementation to properly understand how it works from the bottom up and go through each of the standards that have evolved over time.
I did this for OAuth and OAuth2 in Unison. It was a headache to be sure I did everything procedurally correctly. The hash token is based on certain KVPs from a dictionary of various bits of data, and you sort it in a certain order before hashing, and certain steps require certain bits of data, and sometimes it's URL encoded and sometimes it's not, and all of this dramatically changes the hash.
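For anyone curious what "sort it in a certain order before hashing" looks like in practice, here is a rough Python sketch of the OAuth 1.0a HMAC-SHA1 signing steps (per RFC 5849). Real providers had extra wrinkles, so treat this as an illustration rather than a drop-in implementation:

```python
import base64
import hashlib
import hmac
import urllib.parse


def pct(s: str) -> str:
    # RFC 3986 percent-encoding; OAuth 1.0 requires "~" to stay unescaped
    # and everything else (including "/") to be encoded
    return urllib.parse.quote(s, safe="~")


def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    # 1. Percent-encode every key and value, then sort the pairs
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Signature base string: METHOD & encoded-URL & encoded-param-string
    base = "&".join([method.upper(), pct(url), pct(param_string)])
    # 3. Signing key: consumer secret + "&" + token secret (may be empty)
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Because the parameters are sorted after encoding, the same request signed with its parameters in any order yields the same signature; get one encoding step wrong anywhere and the hash changes completely.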
I remember how stoked I was to finally get it working. It was a massive pain, but luckily there were websites that would walk through the process procedurally, showing how everything worked, one step at a time.
The thing about OAuth is that it’s really very simple. You just have to grasp a lot of very complicated details (that nobody explains) first before it becomes simple.
I remember building oauth logins back when “login with your twitter” was a brand new revolutionary idea, before there were libraries to handle the details.
Still have scars from building directly based off the blogposts Twitter and Facebook engineers wrote about how to integrate with this. Think it wasn’t even a standard yet.
I credit that painful experience with now feeling like OAuth is really quite simple. V2 cleaned it up a lot
It doesn’t seem that way on the surface. But once you’re finished with out-of-band callback validation, localhost, refresh tokens, and PKCE, you realize what a monster OAuth 2 actually is.
For me, it really helped to read the Microsoft pages[1] on OAuth 2.0 which has some nice illustrative flow charts, and then go back to the RFCs.
That said, there's a lot of details that are non-trivial, especially since in many cases you actually have to deal with OIDC[2] which builds on OAuth 2.0, and so then you're suddenly dealing with JWKs and whatnot in addition.
For OAuth I'd like to borrow what I would humbly describe as a better analogy. It comes from Douglas Crockford; adapting it from his comment on monads in functional programming, it goes something like this:
"OAuth is a simple idea, but with a curse: once you understand it, you lose the ability to explain it."
I think the reason a lot of people struggle is that they start with OAuth from a consumer perspective: they are the third party requesting data, and their OAuth implementation is imposed by the resource holder, so they have to jump through a lot of hoops that don't have a clear reason for being.
If you start with OAuth from the perspective of a Service Provider/resource holder, it will all become clear.
Web security is often like that as well: most people facing stuff like CORS or HTTPS aren't trying to solve a security issue; it's an upstream provider forcing them to raise their security standards in order to be trusted with their users' data.
If you go to most Fortune 500 companies they will have a whole team of people dedicated to running an IdP and doing integrations. Most people on these teams cannot explain oauth, oidc, or saml even though they work with it every single day. It’s that bad.
that is because oauth, oidc, and saml fall under the category "webshit" that doesn't matter, there are also thousands of C++ programmers who cannot explain the latest reactular .js and other bullshit the script kiddies continue to pump from their collective anus
I've been meaning to set up some nginx-level oauth. I have some self-hosted apps I want to share with friends / family but forcing them to remember a user / pass (basic auth) or run a vpn is a bit too much friction.
I don’t know whether the free version of Nginx has a Relying Party Implementation, but I have used this plugin for Apache2 and OIDC in the past: https://github.com/OpenIDC/mod_auth_openidc
I know it’s not just OAuth, but OIDC has pretty decent provider support, and I could even self-host a Keycloak instance. It was annoying to set up but worked okay in practice: I could define my own users, get a decent login page when needed, and otherwise just got into the sites I wanted.
Personally though, it felt a bit overkill when compared to basicauth for anything not run in public or for a lot of users.
I've been happily using oauth2-proxy[1] with nginx as an extra layer of authentication to prevent situations where e.g. home-assistant had an unauthenticated RCE.
It's pretty neat since you can have one oauth instance for all virtual hosts, e.g.:
A has an account at B, A has another account at C, A wants to allow C to access data at B (or to send data to B on A's behalf).
How can B be sure that C is acting on A's behalf? Can A only allow C to access certain data (or send only certain data) in order to reduce risk?
A protocol that allows for that three way negotiation is OAuth.
Like with most specs, a lot of the complexity is added in the later years, by companies that have thousands of users and complex edge cases and necessities, and they are the ones dominating the council, and their needs are the ones that push forward newer versions.
So with most specs, the best way to start learning it is by learning from the oldest specs to the newest ones, so if you start by reading or using OAuth2, you will be bombarded with a lot of extra complexities, not even the current experts started like that.
If you need to catch up, always start with the oldest specs/versions.
That'd be RFC (checks notes) 1945 for HTTP/1.0 and later RFC (checks notes again) 2616 for HTTP/1.1. I think there's HTTP/0.9 but I went directly for 1.0
Fwiw it's entirely possible to build a web server by listening on port 80, reading the text stream, and writing to the output stream: no libraries, no frameworks, no Apache, no nginx. And I don't mean you need to rebuild a general-purpose Apache-like server; for a landing page you can just serve a static page, and you will be implementing a very small subset of HTTP.
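A minimal sketch of that idea in Python, binding to an ephemeral port instead of 80 (which needs root) and serving a single static page:

```python
import socket
import threading


def serve_one(sock):
    # Accept a single connection, read the raw request, answer with a static page
    conn, _ = sock.accept()
    request = conn.recv(4096).decode("latin-1")  # e.g. "GET / HTTP/1.1\r\nHost: ..."
    # A real server would parse the request line; a static page can ignore it
    body = "<h1>hello</h1>"
    response = (
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        + body
    )
    conn.sendall(response.encode())
    conn.close()


sock = socket.socket()
sock.bind(("127.0.0.1", 0))  # port 0 = pick any free port, for demo purposes
sock.listen(1)
port = sock.getsockname()[1]
threading.Thread(target=serve_one, args=(sock,), daemon=True).start()
```

Any browser (or `http.client`) pointed at that port gets a valid response; that is the entire "very small subset of HTTP" needed for a landing page.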
I have implemented OAuth both as a client and a server. The most complicated part is the scattered documentation, and little gotchas from different providers. In itself, the whole thing is not complex.
OAuth says nothing about authentication other than you have to be redirected back to the client once authentication is complete, by unspecified means, before the client can proceed with authorization and get a token proving they are now authorized to do something. There is no slippery slope.
I alluded to the usage of being hijacked for the same reason. From what I have seen, the nuance around OAuth 1 vs OAuth 2 vs OAuth 2.1 vs OIDC is just something that most people use without understanding the details, just in order to achieve the end goal. On top you can add PKCE, client credentials, and password credentials, and now we are talking about something that's not comprehensible anymore. I am not a purist by any means, but it still pains me when people do things without understanding them.
I’m one of the few I guess that have implemented OAuth at scale and enjoy it more than other forms of auth. Remember Windows Login Auth? Or each system having to be sync’ed to some sort of schedule so that passwords were the same? Yeah, no, that sucks.
OAuth is just a process framework for doing authentication and authorization such that a system doesn’t need to create those mechanisms themselves but can ask a server for it. More recently, in the form of a JWT token with those permissions encoded within.
It all boils down to how long your login token (some hash), or in the case of OAuth, your refresh token, can request an access token (timeboxed access to the system). “Tokens” in this case are just cryptographic signatures or hmac hashes of something. Preferably with a nonce or client_id as a salt.
Traditional login with username and password gives you a cookie (or session, or both) that you use to identify that login, same thing for refresh tokens. Only, refresh tokens are authentication, you need access so you request an access token from a resource server or api, now you have your time boxed access token that allows you to “Authentication: Bearer <token>” at your resource.
From a server perspective, your resource server could have just proxied the username and password auth to get the refresh token or (what I like to do) check the signature of the refresh token to verify that it came from me. If it did, I trust it, so long as the user isn’t in the ban table. If it’s good and they aren’t banned, issue them a time boxed access token for 24h.
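As a sketch of that "check the signature of the refresh token to verify that it came from me" idea (the secret and claim names here are made up, and this is roughly what a JWT with HS256 boils down to):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-signing-key"  # hypothetical; in practice loaded from config/KMS


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(sub: str, ttl_seconds: int) -> str:
    # Payload plus an HMAC tag over it; only the holder of SECRET can mint these
    payload = b64url(json.dumps({"sub": sub, "exp": int(time.time()) + ttl_seconds}).encode())
    tag = b64url(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{tag}"


def verify_token(token: str):
    payload, tag = token.split(".")
    expected = b64url(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(tag, expected):  # "did it come from me?"
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None
```

The ban-table check described above would slot in after `verify_token` succeeds; only then would the server mint the 24h access token.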
If you fail to grasp the JWT aspect, I suggest you learn more about RSA/PKI/SHA and HMAC encryption libraries in your programming language of choice. Knowing how to bcrypt is one thing, knowing how to encrypt/sign/verify/decrypt is The Way.
(Sorry to the grey beards in the back and they know this already and probably wrote that RFC).
I guess it says something about OAuth when you implement it "at scale" and still have multiple misconceptions (all very common though).
Most importantly, OAuth is an authorization framework, OIDC is an authentication extension built on top.
Refresh tokens are part of authorization, not authentication.
HTTP header is Authorization: Bearer..., not Authentication.
There's no such thing as "HMAC encryption", it's a message authentication code. RSA in OAuth is also typically used for signing, not encryption. Not much "encryption" encryption going on in OAuth overall TBH.
Nonce and client IDs are not "salts", but ok that's nitpicking :)
Baby steps my guy, baby steps. Yes, I don’t even mention OIDC, but I think the way I explained it was the middle schoolers version we all can understand (even if there are some minor mistakes in nomenclature).
The point I was trying to make at 2am is that it’s not scary or super advanced stuff and that you can get away with OAuth-like (as so many do). But yes, OAuth is authorization, OIDC is authentication. The refresh token is an authorization but it makes sense to people who have never done it to think of it as a “post-login marker”.
Terrible explanation of what OAuth is. But the insight at the end of the article is great: UX should always be the driving factor.
I've seen so many integrations use Oauth where it wasn't a good fit or where the spec was not followed. It always results in an abomination and insecure mess.
Maybe it's a know-the-rules-before-you-can-break-them thing, but I've found that designing custom auth integrations from a UX-first perspective results in amazing features. It's rare that both parties are willing to put the effort in, though. Usually people try to shoehorn the use case into an existing OAuth platform.
The main selling point of Oauth is to scale auth and authz to thousands of clients and use cases.
OAuth didn't make a lot of sense to me until I learned about RFC7517. JSON Web Keys allow for participants to effectively say "all keys at this URL are valid, please check here if not sure". The biggest advantage being that we can now rotate out certificates without notifying or relying on other parties. We can also onboard with new trusted parties by simply providing them a URL. There is no manual certificate exchange if this is done all the way.
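A toy illustration of that lookup (the JWKS document and `kid` values below are invented, and the RSA parameters are elided, but the shape matches RFC 7517): the token's header names the key that signed it, and the verifier fetches the published set and selects by `kid`, which is what makes silent rotation possible.

```python
import base64
import json

# A JWKS document as it might be served from a well-known URL
# (illustrative entries; real keys carry full RSA moduli or EC points)
jwks = {
    "keys": [
        {"kty": "RSA", "kid": "2024-key", "use": "sig", "n": "...", "e": "AQAB"},
        {"kty": "RSA", "kid": "2025-key", "use": "sig", "n": "...", "e": "AQAB"},
    ]
}


def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def pick_key(jwks: dict, jwt: str) -> dict:
    # The JWT header says which published key signed it
    header = json.loads(b64url_decode(jwt.split(".")[0]))
    return next(k for k in jwks["keys"] if k["kid"] == header["kid"])


# A dummy (unsigned) token header, just to demonstrate the lookup
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "2025-key"}).encode()
).decode().rstrip("=")
token = header + ".payload.signature"
```

To rotate, the issuer publishes the new key alongside the old one, starts signing with the new `kid`, and later drops the old entry; relying parties never need to be told.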
I am seeing many fintech vendors move in this direction. The mutual clients want more granular control over access. Resource tokens are only valid for a few minutes in these new schemes. In most cases we're coming from a world where the same username and password was used to access things like bank cores for over a decade.
The central paragraph of this is still really hard to parse:
"At its core, OAuth for delegation is a standard way to do the following:
The first half exists to send, with consent, a multi-use secret to a known delegate.
The other half of OAuth details how the delegate can use that secret to make subsequent requests on behalf of the person that gave the consent"
This paragraph has a bunch of words that need defining (the word delegate does not appear on the page until then) and a confusing use of 'first half', 'second half' .... First half of what?
The sentence is correct and accurately describes OAuth.
Delegate is just a standard word; look it up in a dictionary. If anything, its internal technical definition is precisely in that sentence.
>"First half of what"
The spec, the standard, half of it deals with X, the other half of it deals with Y. Namely one half being how user grants permission to a third party, and the other half being how the third party makes requests to the main data holder.
If you need another definition: OAuth is a three way protocol between users, a service provider, and a third party. A user gives specific permissions to the third party, so that the service provider can share specific resources with the third party, who acts on the user's behalf.
OP didn't say that it's incorrect. I read the article myself as well and wasn't much smarter than before. For better or worse, LLMs did a much better job explaining it on a high level and then giving technical details if I still wanted to know more.
I thought I knew what OAuth was (we actually use it in several projects) until I read this “explanation”. If this was supposed to clarify anything, well, it didn’t.
No offense, but it looks like the reason behind OAuth confusion is the author. I had to read halfway through to get to a definition, which was a poor explanation. Sometimes certain topics are difficult to understand because the initial person behind them wasn't good at communicating the information.
In my homelab, I push myself to use proxy (header) authentication. I know I'm piling many responsibilities onto the reverse proxy (TLS, IP blocking, authentication), but it seems I can handle that complexity better than an OAuth setup.
> There are very credible arguments that the-set-of-IETF-standards-that-describe-OAuth are less a standard than a framework. I'm not sure that's a bad thing, though.
This is the only thing you need to know about OAuth. FYI, Eran Hammer is the author of OAuth 1.0 and was the original editor of the OAuth 2.0 spec.
[1] "...Eran Hammer resigned from his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions". In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level and added expiring tokens (because tokens could not be revoked) while complicating the processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide."
David Recordon later also removed his name from the specifications for unspecified reasons. Dick Hardt took over the editor role, and the framework was published in October 2012.
David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", requiring developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.), and to register specifically with them."
> IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions".
At the end of a talk about OAuth 2.0 at some indie or fediverse conference during lockdown, Aaron Parecki, who was then and still is employed at Okta, was asked if it might not be worth isolating the parts of the protocol/flow that actually require a service (i.e. a protocol-aware server in the loop) from those that don't, so that you could still get limited authentication/identity-tagging if your "provider" is your personal domain where you're just hosting a static site. He immediately acted like he was addressing the dumbest person in the virtual room (it was a remote conference), telegraphing through his response that he might actually be on the verge of physical pain having to deal with such an imbecilic question.
Meanwhile, in cryptography engineering circles, I recall the general sentiment as being "at least they stripped all the weird attempted cryptography out of it, so it's just same-origin/TLS security now".
It’s almost like the documentation of OAuth itself, except that it’s worded in such a way that you ask yourself if you’re just too stupid to have parsed it correctly. The entire OAuth documentation feels like reading absent-mindedly or when you’re tired, where your eyes just wander until you snap back to reality and you have to start again from the beginning.
As someone who has had to deal with OAuth quite a bit at work, I like it for the most part, but it's just so dang big and complicated.
Almost everyone thinks of OAuth as the "three legged redirect" flow, where one site sends you to another, you say "Yes, I authorize" and it sends you back, and now that site can act as you on that other site.
But that's just the tip of the iceberg! That's called the "authorization code" grant, because implementation wise the one site gives a special one time authorization code to the other one, which it can then exchange server-to-server to get the actual sensitive access credential.
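In Python terms, the two halves of that flow are roughly the following (the endpoints and client registration are hypothetical; real providers add their own parameters):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical provider endpoints and client registration, for illustration only
AUTHORIZE_URL = "https://provider.example/oauth/authorize"
TOKEN_URL = "https://provider.example/oauth/token"
CLIENT_ID = "my-app"
REDIRECT_URI = "https://my-app.example/callback"


def build_authorize_url(state: str, scope: str) -> str:
    # Step 1: send the user's browser to the provider's consent page
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",   # this is what makes it the "authorization code" grant
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,            # CSRF protection: echoed back on the redirect
    })


def token_request_body(code: str, client_secret: str) -> dict:
    # Step 2: after the redirect back, exchange the one-time code
    # server-to-server (POST to TOKEN_URL) for the real access token
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": client_secret,
    }
```

The sensitive credential never passes through the browser: only the short-lived `code` does, and it's useless without the client secret held server-side.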
What about if the human isn't in the loop at that moment? Well you have the "client credentials" grant. Or what if it's limited input like a TV or something, well then you have the "device" grant.
And what if the client can't securely store a client secret, because it's a single page web app or a mobile application? Then it has to be a public client, and you can use PKCE or PAR for flow integrity.
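PKCE itself is tiny: the client invents a secret, sends only its hash up front, and proves possession of the secret later. A sketch following RFC 7636:

```python
import base64
import hashlib
import secrets


def make_pkce_pair():
    # code_verifier: 43..128 chars of unreserved characters (RFC 7636)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256 code_challenge: BASE64URL(SHA256(verifier)), without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The client sends `code_challenge` (plus `code_challenge_method=S256`) on the authorization request and the bare `code_verifier` on the token request; the server recomputes the hash, so an intercepted authorization code is useless without the verifier that never left the client.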
What if you can't establish the clients up front? There's DCR, which is all important now with MCP. But then that either lets unauthenticated requests in to create resources in your data stores, or needs to be bootstrapped by some other form of authentication.
It's all just a sprawling behemoth of a framework, because it tries to do everything.
> It's all just a sprawling behemoth of a framework, because it tries to do everything.
Yeah, I mean it can be, but it doesn't have to be; it depends entirely on what you need. And if you do need those things, like machine-to-machine authentication or clients you can't establish up front, you have to solve them somehow anyway, so why not go with something others might already know?
> It's all just a sprawling behemoth of a framework, because it tries to do everything.
I also interact with OAuth quite a bit at work. I also have dealt with SAML.
I'd pick OAuth over SAML any day of the week, and not just because OAuth (v2 at least) is 7 years younger.
It's also because OAuth, for all its sprawl, lets you pick and choose different pieces to focus on, and has evolved over time. The overall framework tries to meet everyone's needs, but accomplishes this via different specs/RFCs.
SAML, on the other hand, is an 800 page behemoth spec frozen in time. It tried to be everything to everyone using the tools available at the time (XML, for one). Even though the spec isn't evolving (and the WG is shut down) it's never going to go away--it's too embedded as a solution for so many existing systems.
I also don't know what could replace OAuth. I looked at GNAP but haven't seen anything else comparable to OAuth.
>It's all just a sprawling behemoth of a framework, because it tries to do everything.
it is, but at the same time, that's kind of great. it handles all the things. but you don't have to use them all. for me, the point of oauth is that there's this whole complicated mess of stuff that happens in the auth layer, but the end result is a bearer token and maybe a refresh token.
you can build that mess of auth yourself, or you can swap in any of a bunch of different third-party providers to do it for you, because they all just give you the same bearer and refresh token. then you can build your app in a way that doesn't care about auth, because it all gets handled inside that oauth box. are you currently serving a request that came from a client credentials grant, or an authorization code grant? was it a pkce client? it doesn't matter outside the client.
Good description of OAuth from one of the folks there at the beginning. I think the author doesn't do a great job of answering the question in the concrete, though. This sibling comment does a lot better[0].
I'm partial to this piece[1], which I helped write. It covers the various common modalities of OAuth/OIDC. (It's really hard to separate them, to be honest; they're often conflated.) Was discussed previously on HN[2].
The oauth2-proxy suggestion above is probably the easiest path for this. The main thing to watch out with nginx-level oauth is token expiry. If you set short-lived tokens (which you should), you need the proxy layer to handle refresh silently or your friends will keep getting kicked back to the login screen mid-session. If you just need Google or GitHub login for a few people, oauth2-proxy with an email allowlist is way less overhead than running a full identity provider.
[1]: https://learn.microsoft.com/en-us/entra/identity-platform/v2...
[2]: https://openid.net/developers/how-connect-works/
This page might have something, but I can’t read it myself on mobile cause it shows up broken: https://openid.net/certification/certified-openid-relying-pa...
[1] https://github.com/oauth2-proxy/oauth2-proxy
Thanks, it did not.
OAuth and OpenID Connect are a denial of service attack on the brains of the humans who have to work with them.
I do not understand what I am doing and trust the docs, but it has never been a particularly difficult setup.
I would argue that then you do not "have to work with them", you are merely using products built with them.
So thanks!
I'll start reading the oldest HTTP spec for funzies.
Meanwhile https://www.couchbase.com/blog/wp-content/uploads/2021/05/oa...
The PGP packet has entered the chat.
[0]: https://youtu.be/996OiexHze0
Surely it can be explained better than that?
TZubiri | 10 hours ago
Delegate is just a standard word; look it up in a dictionary. If anything, its internal technical definition is given precisely in that sentence.
>"First half of what"
The spec, the standard, half of it deals with X, the other half of it deals with Y. Namely one half being how user grants permission to a third party, and the other half being how the third party makes requests to the main data holder.
If you need another definition: OAuth is a three way protocol between users, a service provider, and a third party. A user gives specific permissions to the third party, so that the service provider can share specific resources with the third party, who acts on the user's behalf.
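The two halves described above can be sketched as a toy in-memory model. This is purely illustrative, not a real OAuth implementation: the names (`Provider`, `issue_code`, `exchange_code`) and the single-class design are made up here to show the shape of the exchange, assuming an authorization-code-style flow.

```python
import secrets

class Provider:
    """Toy service provider: issues one-time codes, then multi-use tokens."""

    def __init__(self):
        self._codes = {}   # one-time authorization codes
        self._tokens = {}  # multi-use access tokens

    def issue_code(self, user, client_id, scope):
        # First half: the user consents, and the provider hands the
        # delegate (third party) a short-lived, single-use code.
        code = secrets.token_urlsafe(16)
        self._codes[code] = (user, client_id, scope)
        return code

    def exchange_code(self, code, client_id):
        # Second half begins: the delegate trades the code, server-to-server,
        # for the actual multi-use secret. The code is consumed here.
        user, expected_client, scope = self._codes.pop(code)
        assert client_id == expected_client
        token = secrets.token_urlsafe(24)
        self._tokens[token] = (user, scope)
        return token

    def resource(self, token):
        # The delegate uses the token to act on the user's behalf.
        user, scope = self._tokens[token]
        return f"{scope} data for {user}"

provider = Provider()
code = provider.issue_code("alice", "photo-app", "contacts:read")
token = provider.exchange_code(code, "photo-app")
print(provider.resource(token))  # contacts:read data for alice
```

The point of the split is visible even in the toy: the code crosses the user's browser and so is single-use and bound to a client, while the token never leaves the back channel.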
scandox | 12 hours ago
A classic explainer from almost a decade ago. This explains it from the point of view of the original problem it was designed to solve.
rustybolt | 9 hours ago
Spoiler alert: it is.
Betelbuddy | 8 hours ago
[1] "...Eran Hammer resigned from his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions". In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level and added expiring tokens (because tokens could not be revoked) while complicating the processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide."
David Recordon later also removed his name from the specifications for unspecified reasons. Dick Hardt took over the editor role, and the framework was published in October 2012.
David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", requiring developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.), and to register specifically with them."
[1] https://en.wikipedia.org/wiki/OAuth
pwdisswordfishs | 6 hours ago
At the end of a talk about OAuth 2.0 at some indie or fediverse conference during lockdown, Aaron Parecki, who was then and still is employed at Okta, was asked whether it might be worth isolating the parts of the protocol/flow that actually require a service (i.e. a protocol-aware server in the loop) from those that don't, so that you could still get limited authentication/identity-tagging if your "provider" is a personal domain where you're just hosting a static site. He immediately acted like he was addressing the dumbest person in the virtual room (it was a remote conference), telegraphing through his response that he might actually be on the verge of physical pain at having to deal with such an imbecilic question.
tdiff | 8 hours ago
Author managed to simultaneously praise the question and avoid answering it at all.
losvedir | 8 hours ago
Almost everyone thinks of OAuth as the "three legged redirect" flow, where one site sends you to another, you say "Yes, I authorize" and it sends you back, and now that site can act as you on that other site.
But that's just the tip of the iceberg! That's called the "authorization code" grant because, implementation-wise, the one site gives a special one-time authorization code to the other, which it can then exchange server-to-server for the actual sensitive access credential.
What if the human isn't in the loop at that moment? Then you have the "client credentials" grant. Or what if it's a limited-input device like a TV? Then you have the "device" grant.
And what if the client can't securely store a client secret, because it's a single page web app or a mobile application? Then it has to be a public client, and you can use PKCE or PAR for flow integrity.
What if you can't establish the clients up front? There's DCR, which is all-important now with MCP. But then that either lets unauthenticated requests in to create resources in your data stores, or needs to be bootstrapped by some other form of authentication.
It's all just a sprawling behemoth of a framework, because it tries to do everything.
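The PKCE piece mentioned above is small enough to sketch. This is a hedged illustration of the verifier/challenge mechanism from RFC 7636 (S256 method), not a complete client; the function names are invented here.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # The public client generates a random verifier and sends only its
    # SHA-256 hash (the "challenge") with the initial authorization request.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_check(stored_challenge, presented_verifier):
    # At token exchange, the server re-derives the challenge from the
    # verifier the client presents. An attacker who intercepted only the
    # authorization code cannot produce the matching verifier.
    digest = hashlib.sha256(presented_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == stored_challenge

verifier, challenge = make_pkce_pair()
print(server_check(challenge, verifier))        # True
print(server_check(challenge, "stolen-guess"))  # False
```

This is why PKCE lets a client without a storable secret still bind the code exchange to itself: the secret is ephemeral and exists only for the duration of one flow.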
embedding-shape | 8 hours ago
Yeah, I mean it can be, but it doesn't have to be; it depends entirely on what you need. And if you need those things, like machine-to-machine authentication where you can't establish clients up front, you have to do something about it anyway, so why not go with something others might already know?
mooreds | 7 hours ago
I also interact with OAuth quite a bit at work. I also have dealt with SAML.
I'd pick OAuth over SAML any day of the week, and not just because OAuth (v2 at least) is 7 years younger.
It's also because OAuth, for all its sprawl, lets you pick and choose different pieces to focus on, and has evolved over time. The overall framework tries to meet everyone's needs, but accomplishes this via different specs/RFCs.
SAML, on the other hand, is an 800-page behemoth of a spec frozen in time. It tried to be everything to everyone using the tools available at the time (XML, for one). Even though the spec isn't evolving (and the WG is shut down) it's never going to go away--it's too embedded as a solution for so many existing systems.
I also don't know what could replace OAuth. I looked at GNAP but haven't seen anything else comparable to OAuth.
notatoad | 6 hours ago
it is, but at the same time, that's kind of great. it handles all the things. but you don't have to use them all. for me, the point of oauth is that there's this whole complicated mess of stuff that happens in the auth layer, but the end result is a bearer token and maybe a refresh token.
you can build that mess of auth yourself, or you can swap in any of a bunch of different third-party providers to do it for you, because they all just give you the same bearer and refresh token. then you can build your app in a way that doesn't care about auth, because it all gets handled inside that oauth box. are you currently serving a request that came from a client credentials grant, or an authorization code grant? was it a pkce client? it doesn't matter outside the client.
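That "the output is just a bearer token and maybe a refresh token" point can be made concrete. A hedged sketch, assuming the standard RFC 6750 header form and the RFC 6749 refresh-token grant parameters; the helper names are invented here:

```python
def authorized_request_headers(access_token):
    # Whatever grant produced it (authorization code, client credentials,
    # device, PKCE...), the app layer attaches the token the same way.
    return {"Authorization": f"Bearer {access_token}"}

def refresh_request_body(client_id, refresh_token):
    # Standard refresh-token grant parameters (RFC 6749 section 6),
    # POSTed to the token endpoint when the access token expires.
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }

print(authorized_request_headers("abc123"))
```

Everything above this interface is grant-specific machinery; everything below it only ever sees these two shapes.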
mooreds | 7 hours ago
I'm partial to this piece[1], which I helped write. It covers the various common modalities of OAuth/OIDC. (It's really hard to separate them, to be honest; they're often conflated.) Was discussed previously on HN[2].
1: https://fusionauth.io/articles/oauth/modern-guide-to-oauth
2: https://news.ycombinator.com/item?id=29752918
jwr | 5 hours ago