I think this is solving a real operational pain point, definitely one that I've experienced. My biggest hesitation is the direct exposure of the managing account's identity. It's not that I need to protect the account's key material; I already have to do that.
While "usernames" are not generally protected to the same degree as credentials, they do matter: they act as an important gate an attacker has to learn before a real attack can commence. Exposing them also makes it possible to associate randomly found credentials back to the sites you can now issue certificates for, if they're using the same account. This is free scope expansion for any breach that occurs.
I guarantee sites like Shodan will start indexing these IDs on all domains they look at to provide those reverse lookup services.
Exactly. They should provide the user with a list of UUIDs (or any other random-ish IDs tied to the actual account) that can be used in the accounturi URL for these operations.
I think the previous post is talking about a search that will find the sibling domain names that have obtained certificates with the same account ID. That is a strong indication that those domains are in the same certificate renewal pipeline, most likely on the same physical/virtual server.
Run ACME inside a Docker container, one instance (and credentials) for each domain name. It doesn't consume many resources. The real problem is IP addresses anyway; CT logs "thankfully" feed information to every bad actor in real time, which makes data mining trivially easy.
CAA records including an accounturi already expose the account identity in the same manner, so I feel like that ship has already sailed somewhat (and I would prefer that the CAA and persist record formats match).
I think the difference is that using the existing DNS method listing the account is entirely optional. I have left it out on domains that I don't want correlated for that very reason.
Years ago, I had a really fubar shell script for generating the DNS-01 records on my own (non-cloud) run authoritative nameserver. It "worked," but its reliability was highly questionable.
I like that DNS-PERSIST fixes that.
But I don't understand why they chose to include the account as a plain-text string in the DNS record. Seems they could have just as easily used a randomly generated key that wouldn't mean anything to anyone outside Let's Encrypt, and without exposing my account to every privacy-invasive bot and hacker.
Those who choose to use DNS-PERSIST-01 should fully commit to automation and create one LetsEncrypt account per FQDN (or at least per loadbalancer), using a UUID as username.
There is no username in ACME besides the account URI, so the UUID you’re suggesting isn’t needed. The account URIs themselves just contain a number (a database primary key).
If you’re worried about correlating between domains, then yes just make multiple accounts.
There is an email field in ACME account registration but we don’t persist that since we dropped sending expiry emails.
2. It’s consistent across an account, making it easier to set up new domains without needing to make any API calls.
3. It doesn’t pin a user’s key, so they can rotate it without needing to update DNS records; this method assumes DNS updates are nontrivial, otherwise you’d use the classic DNS validation method.
> they could have just as easily used a randomly generated key
Isn't that pretty much what an accounturi is in the context of ACME? Who goes around manually creating Let's Encrypt accounts and re-using them on every server they manage?
We then can just staple the Persist DNS key to the certificate itself.
And then we just need to cut out the middleman and add a new IETF standard for browsers to directly validate the certificates, as long as they confirm the DNS response using DNSSEC.
This decreases the salience of DANE/DNSSEC by taking DNS queries off the per-issuance critical path. Attackers targeting multitenant platforms get only a small number of bites at the apple in this model.
Sure. It's yet another advantage of doing True DANE. But it still requires DNS to be reliable for the certificate issuance to work, there's no way around it.
So why not cut out the middleman?
(And the answer right now is "legacy compatibility")
When you shoot yourself in the foot with DNSSEC, you typically end up with a non-working setup.
The biggest problem is that DNS replies are often cached, so fixes for the mistakes can take a while to propagate. With Let's Encrypt you typically can fix stuff right away if something fails.
When you shoot yourself in the foot with DNSSEC, your entire domain falls off the Internet, as if it had never existed in the first place. It's basically the worst possible failure case, and it's happened to multiple large shops; Slack being the most notorious recent example.
Yes, and it'd be great if DNSSEC added an "advisory" signature level, so it could be deployed without a leap of faith.
But let's not pretend that WebPKI is perfect. More than one large service failed at some point because of a forgotten TLS certificate renewal. And more than one service was pwned because a signing key leaked. Or a wildcard certificate turned out to be more wildcard than expected.
I understand the failures of DNSSEC and DNS in general. And we need to do something about it because it's really showing signs of its age as we continue to pile on functionality onto it.
I don't have an idea for a good solution for everything, but I just can't imagine us piling EVERYTHING onto WebPKI either.
I don't really understand most of this comment but you opened up this subthread with "Come on. It's not dangerous", and, as you're acknowledging here, it clearly is quite dangerous.
I'm mostly thinking about dangerous from the security point of view. I agree that it might not be the best from the operational point of view. DNSSEC in its current state makes DNS updates even more risky than they are, I agree with that.
In order for an attacker to reduce a site's Availability via DNS they must alter the records received by resolvers.
If they can do that, they can just refuse to send the records at all (or mangle them such that they are ignored). DNSSEC makes the situation no worse.
It does, however, increase Integrity.
For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable. It does not stand for Areliability or Asimplicity.
Care to explain what you think is correct, if that is incorrect?
CIA is about security. It's not about some kind of operational best practices.
Supporting example: creating a system where someone failing to enter their password correctly one time locks them out for a day is problematic, because that system can be made unavailable by an attacker. This is not an Available system, and thus not as secure as one that has a more flexible lockout policy.
Supporting example: creating a system where an application is only available from one IP address is problematic, because an attacker can take out one ISP and knock that IP address off the Internet. Making the system more Available by allowing users to access it from other IPs improves the overall security posture.
You're commenting on a post about LetsEncrypt working with other entities in the industry to make improvements to WebPKI. It's safe to say that nobody's claiming it's perfect.
But you can't go from ~"WebPKI isn't perfect" and ~"DNSSEC/DANE exist" and draw a magic path where using DNSSEC or DANE is actually a good thing for people to roll out. They'd need to be actually a good fit, and for DANE we have direct evidence that it isn't: a rollout was attempted and it was walked back due to multiple issues.
DNS queries are still part of the critical path, as Let's Encrypt needs to check that the account is still allowed to receive a cert before each issuance.
I wonder why they switched from a super-secure-super-complex (in terms of operations) way of doing DNS auth to a super-simple-no-cryptography-involved method that just relies on the account id.
Why not use some public/private key auth, where the DNS contains a public key and the requesting server uses the private key to sign the cert request? This would decouple the authorization from the actual account. It would not reveal the account's identity, and it could be used with multiple accounts (useful for a wildcard on the DNS plus several independent systems requesting certs for subdomains).
The most common vector for DNS-based attacks on issuance is compromised registrar accounts, and no matter how complicated you make the cryptography, if you're layering it onto the DNS, those attacks will preempt the cryptography.
Because LE keeps a mapping of account ids to emails and public keys. You have to have the private key to the ACME account to issue a cert. The cryptography is still there but the dance is done by certbot behind the scenes.
Prior to this accounts were nearly pointless as proof of control was checked every time so people (rightfully) just threw away the account key LE generated for them. Now if you use PERSIST you have to keep it around and deploy it to servers you want to be able to issue certs.
Using the account identifier in the record, with LE mapping the identifier to a public key internally, enables key rotation without touching the record again.
This adds a new validation method that people can use if they want. The existing validation methods (https://letsencrypt.org/docs/challenge-types/) aren't going away, so your current setup will keep working.
And to elaborate, the reasons you might want to use a DNS challenge are to acquire wildcard certificates, or to acquire regular certificates on a machine or domain which isn't directly internet-facing. If neither of those apply to you then the regular HTTP/TLS methods are fine.
OK I was sort of thinking that might be the case but wanted to make sure in case I had to start prepping now, thanks. We use no wildcard domains today, maybe down the road.
Wildcard domains are a great way to get certs for all your "internal systems" while only exposing one name (or a bit of one, in DNS) to the Internet at large.
This is going to greatly simplify some of my scripts.
I'm surprised the ballot passed, unanimously even! I get that storing the DNS credentials in the certificate renewal pipeline is risky, but many DNS providers have granular API access controls, so it is already possible to limit the surface area in case the keys get leaked. Plus, you can revoke the keys easily.
The ACME account credentials are also accessible by the same renewal pipelines that have the DNS API credentials, so this does not provide any new isolation.
~It's also not quite clear how to revoke this challenge, and how domain expiration deal with this. The DNS record contents should have been at least the HMAC of the account key, the FQDN, and something that will invalidate if the domain is transferred somewhere else. The leaf DNSSEC key would have been perfect, but DNSSEC key rotation is also quite broken, so it wouldn't play nice.~
Is there a way to limit the challenge types with CAA records? You can limit it by an account number, and I believe that is the tightest control you have so far.
---
Edit: thanks to the replies to this comment, I learned that this would provide invalidation simply by removing the DNS record, and that the DNS records are checked at renewal time with a much shorter validation TTL.
> but many DNS providers have granular API access controls
And many providers don't. (Even big ones that are supposedly competent like Cloudflare.)
And basically everyone who uses granular API keys is storing a cleartext key, which is no better, and possibly worse, than storing a credential for an ACME account.
> It's also not quite clear how to revoke this challenge, and how domain expiration deal with this
CAs can cache the record lookup for no longer than 10 days. After 10 days, they have to check it again. If the record is gone, which would be expected if the domain has expired or been transferred, then the authorization is no longer valid.
(I would have preferred a much shorter limit, like 8 hours, but 10 days is a lot better than the current 398 day limit for the original ACME DNS validation method.)
Yes, you can limit both challenge types and account URIs in CAA records.
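For illustration, using the parameter syntax from RFC 8657 (the account ID is a placeholder, and `dns-01` here stands in for whichever method token the new challenge ends up registering), such a CAA record could look like:

```
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"
```

With both parameters present, a conforming CA will only issue for this domain to that one account, and only via the listed validation methods.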
To revoke the record, delete it from DNS. Let’s Encrypt queries authoritative nameservers with caches capped at 1 minute. Authorizations that have succeeded will soon be capped at 7 hours, though that’s independent of this challenge.
This wasn’t the first version of the ballot, so there was substantial work to get consensus on a ballot before the vote.
CAs were already doing something like this (CNAME to a dns server controlled by the CA), so there was interest from everyone involved to standardize and decide on what the rules should be.
I use AWS Route53 and you can get incredibly granular with API permissions
The relevant condition keys for this purpose include:
route53:ChangeResourceRecordSetsActions: Limits actions to CREATE, UPSERT, or DELETE.
route53:ChangeResourceRecordSetsRecordTypes: Limits actions to specific DNS record types (e.g., A, CNAME, TXT).
route53:ChangeResourceRecordSetsRecordValues: Limits actions based on the specific value of the DNS record.
route53:ChangeResourceRecordSetsResourceRecords: For more complex scenarios, this can be used to control access based on the full record set details.
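A sketch of a policy statement combining some of those keys (the hosted zone ID is a placeholder; `ForAllValues:StringEquals` is the operator AWS documents for these multivalued keys), limiting a renewal pipeline to touching only TXT records:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z0EXAMPLE12345",
    "Condition": {
      "ForAllValues:StringEquals": {
        "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
        "route53:ChangeResourceRecordSetsActions": ["CREATE", "DELETE"]
      }
    }
  }]
}
```

A key scoped like this can create and clean up ACME challenge records but cannot repoint A or CNAME records if it leaks.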
"Support for the draft specification is available now in Pebble, a miniature version of Boulder, our production CA software. Work is also in progress on a lego-cli client implementation to make it easier for subscribers to experiment with and adopt. Staging rollout is planned for late Q1 2026, with a production rollout targeted for some time in Q2 2026."
To get a Let's Encrypt wildcard cert, I ended up running my own DNS server with dnsmasq and delegating the _acme-challenge subdomain to it.
Pasting a challenge string once and letting its continued presence prove continued ownership of a domain is a great step forward. But I agree with others that there is absolutely no reason to expose account numbers; it should be a random ID associated with the account in Let's Encrypt's database.
As a workaround, you should probably make a new account for each domain.
You bothered to manage your LE accounts? I only say because when using the other two challenge types with most deployment scenarios you were generating a new account per cert so your account ID was just a string of random numbers.
The ACME account URI does not appear in issued certificates. X.509 certs contain the subject, issuer, SANs, validity period, SCTs, etc., but no ACME account identifier. You can verify this by inspecting any Let's Encrypt certificate. What CT logs do reveal is which CA issued certs for which domain(s), and multi-domain certs group SANs together, so some correlation is possible. But the account URI itself is not exposed — dns-persist-01 records in DNS would be a new exposure surface for that identifier. That's a real tradeoff, which is why the draft supports using separate accounts per domain if isolation matters to you.
Interesting. Think a lot of the security headaches went away for me when I discovered providers like CF can restrict the scope of tokens to a single domain and lock it to my IP.
In the meantime, if you use bind as your authoritative nameserver, you can limit an hmac-secret to one TXT record, so each webserver that uses rfc2136 for certificate renewals is only capable of updating its specific record:
key "bob.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah";
};

key "joe.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah2";
};

zone "example.com" IN {
    type master;
    file "/var/lib/bind/example.com.zone";
    update-policy {
        grant bob.acme. name _acme-challenge.bob.acme.example.com. TXT;
        grant joe.acme. name _acme-challenge.joe.acme.example.com. TXT;
    };
    key-directory "/var/lib/bind/keys-acme.example.com";
    dnssec-policy "acme";
    inline-signing yes;
};
I like this because it means an attacker who compromises "bob" can only get certs for "bob". The bind config above is the server part.
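On the client side, each webserver then only needs its own TSIG key. A sketch using certbot's dns-rfc2136 plugin (the server IP and file path are placeholders; the key name, secret, and algorithm would match the bind config above):

```ini
; /etc/letsencrypt/rfc2136.ini -- keep this file mode 0600
dns_rfc2136_server = 192.0.2.1
dns_rfc2136_name = bob.acme.
dns_rfc2136_secret = blahblahblah
dns_rfc2136_algorithm = HMAC-SHA512
```

Then something like `certbot certonly --dns-rfc2136 --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini -d bob.acme.example.com` should be all each host needs, and a leaked credentials file only exposes that host's one TXT record.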
If not using something like bind, but willing to run a dedicated dns server for acme challenges, acmedns offers something similar.
When you generate a new account, it gets given a unique subdomain. You then cname the challenge domain to the acmedns subdomain and the account can only affect the associated subdomain.
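The resulting one-time setup is a single CNAME; with a hypothetical registered subdomain and acme-dns host it might look like:

```
_acme-challenge.example.com.  IN  CNAME  d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.acme-dns.example.
```

From then on, the acme-dns account credentials can only write TXT records under that one registered subdomain.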
This might be the first time in ten years that a certificate proposal intends to make issuing certificates more reasonable and not less. More of this, less of 7-day-lifetime stupidity.
Here, certbot runs in Docker in the intranet, and on a VPS I have a custom-built nameserver to which all the _acme-challenge are redirected to via NS records.
The system in the intranet starts certbot, gets the token-domain pairs from Let's Encrypt, and sends those pairs to the nameserver, which attaches each token to a TXT record for its domain, so that the DNS reply can present it to Let's Encrypt when they query for it.
All that will be gone and I thank you for that! You add as much value to the internet as Wikipedia or OpenStreetMap.
I'm really excited for this. We moved 120+ hand renewed certs to ACME, but still manually validate the domains annually. Many of them are on private/internal load balancers (no HTTP-01 challenge possible), and our DNS host doesn't support automation (no DNS-01 challenges either). While manually renewing the DCV for ~30 domains once a year isn't too bad, when the lifetime of that validity shrinks, ultimately to 9 days, it'd become a full time job. I just hope Sectigo implements this as quickly as LE.
No. Cloudflare will give a key scoped to an entire administrative domain in the Cloudflare sense like “a.com”. They will not give you a key scoped to a single entry within that domain. (That entry would be a domain in the RFC 9499 sense, but do you really expect anyone to agree on the terminology?)
In particular, there is no support for getting a key scoped to _acme-challenge.a.b.c or, even better, to a particular RR.
Maybe if you have an enterprise plan you can very awkwardly fudge it using lots of CNAMEs and subdomains.
Some DNS hosts that support old-school dynamic DNS can do this. dns.he.net is an example, but their login system is very much stuck in the nineties.
If it's for a business, I would contact them to see if they have a commercial offering, but I think the Hurricane Electric Free DNS might actually fit.
What open source DNS servers have an API? (I saw someone elsewhere in the thread talking about doing this with dnsmasq, but it sounded like they'd cobbled something together, rather than the software handling it.)
I personally wouldn't use dnsmasq for this (it's far more suited as a recursive server and DHCP provider with some basic authoritative records than as an authoritative-only server), but every open source authoritative DNS server worth using has RFC 2136 support.
PowerDNS has an API which works pretty well; I've been using it to generate ACME certificates for a few years now, and I also built a DNS hosting service around it.
Note that you can delegate the _acme-challenge subdomain to a validation-specific server or zone, so a different server that supports automation if you can't / don't want to change your main DNS provider.
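Such a delegation is just an NS record on the challenge label (the delegated nameserver name here is a placeholder):

```
_acme-challenge.example.com.  IN  NS  ns1.acme-validation.example.net.
```

Queries for the challenge name then go to the delegated server, while the rest of the zone stays with your existing provider.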
I've changed my mind about the short lived cert stuff after seeing what is enabled by IP address certificates with the HTTP-01 verification method. I don't even bother writing the cert to disk anymore. There is a background thread that checks to see if the current instance of the cert is null or older than 24h. The cert selector on aspnetcore just looks at this reference and blocks until its not null.
Being able to distribute self-hostable software to users that can be deployed onto a VM and made operational literally within 5 minutes is a big selling point. Domain registration & DNS are a massive pain to deal with at the novice end of the spectrum. You can combine this with things like https://checkip.amazonaws.com to build properly turnkey solutions.
There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That means whenever you request a cert and there already is an issued certificate for the same set of identities.
Technically, because Let's Encrypt always publishes all requested certificates to the logs (this isn't mandatory, it's just easier for most people, so Let's Encrypt always does this), your tool can go look in the logs to get the certificate. You do need to know your private key; nobody else ever knew that, so if you don't have it then you're done.
While they do not have direct SLAs, they still have to comply with rules enforced by browser vendors, as they will remove you from CT checks and you'll be marked retired/untrusted (you can find some in the above list).
This means a 99% uptime on a 90 day rolling average, a 1 minute update frequency for new entries (24 hours on an older RFC). No split views, strict append-only, sharding by year, etc.
X.509 certificates published in CT logs are "pre-certificates". They contain a poison extension, so you won't be able to use them with your private key.
The final certificate (without poison and with SCT proof) is usually not published in any CT logs but you can submit it yourself if you wish.
The OP's idea won't work unless they submit the final certificate to CT logs themselves.
Is it possible to create an ACME account without requesting a certificate? AFAICT it is not, so you cannot use this method unless you have first requested a certificate with some other method. I hope I am wrong!
An account needs to be created before you can request a certificate. Some ACME clients might create the account for you implicitly when you request the first certificate, but in the background it still needs to start by registering an account.
`certbot register` followed by `certbot show_account` is how you'd do this with certbot.
Yeessss! This should finally make certificates for internal only web services actually easier to orchestrate than before ACME. This closes probably the biggest operational pain point I've had with letsencrypt/modern web certificates.
There's a missing part here, and that's validating your ACME account ownership.
I think most users depend on automation that creates their accounts, so they never have to deal with it. But now, you need to propagate some credential to validate your account ownership to the ACME provider. I would have liked to see some conversation about that in this announcement.
I'm not familiar with Let's Encrypt's authentication model, but if they don't have token creation that can be limited by target domain, I expect you'll need to create separate accounts for each of your target domains; otherwise anything with that secret can create a cert for any domain your account controls.
> There's a missing part here, and that's validating your ACME account ownership.
Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.
If either your ACME credentials leak out or, even worse, someone manages to place DNS records or hijack your .well-known endpoint, you got far bigger problems at hand than someone being able to mis-issue SSL certificates under your domain name.
> Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.
That describes the previous models. In this case, DNS-Persist-01, the record is permanent and never changes. So to prove that your request is valid, they need to authenticate it in some other manner; otherwise, once you create that persistent record, anybody could request a cert for your domain.
> The timestamp is expressed as UTC seconds since 1970-01-01
That should be TAI, right? Is that really correct, or do they actually mean Unix timestamps (those shift with leap seconds, unlike TAI, which really is just the number of seconds that have passed since 1970-01-01T00:00:00Z)?
Do leap seconds even matter here? Doing anything involving DNS or certificates in a way that requires clock synchronization down to the second would seem to be asking for trouble.
Probably yeah, seconds don't really matter here. You would have to work hard for the 27 second difference to be material. But precision is nice.
Unix time is almost certainly what is meant by the standard, but it is not the count of UTC seconds since 1970; Unix time is the number of seconds since 1970 as if all days had 86400 seconds. UTC, TAI, and GPS seconds are all the same length, and the same number of them have happened since 1970, but TAI appears 37 seconds ahead of UTC because TAI has days of exactly 86400 seconds, while UTC has some days with 86401 seconds, and TAI was already 10 seconds ahead of UTC in 1970. Unix time and UTC stay in sync because Unix time allows some days to encompass 86401 UTC seconds while only counting 86400 of them.
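The "as if all days had 86400 seconds" property is easy to check from the standard library: the Unix-time gap between any two UTC midnights is exactly the calendar day count times 86400, with the intervening leap seconds nowhere to be seen. A quick sketch:

```python
import calendar
import datetime

# Unix timestamps for two UTC midnights, 45 years apart.
t0 = calendar.timegm((1972, 1, 1, 0, 0, 0))
t1 = calendar.timegm((2017, 1, 1, 0, 0, 0))

# Calendar days between the same two dates.
days = (datetime.date(2017, 1, 1) - datetime.date(1972, 1, 1)).days

# Exactly days * 86400: the leap seconds inserted between
# 1972 and 2017 are simply not counted in Unix time.
assert t1 - t0 == days * 86400
print(t1 - t0)  # 1420156800
```

Real elapsed (TAI) seconds over that span would be 27 more than this, which is exactly the discrepancy the comment above describes.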
Abolition of the Leap Second is basically a done deal. So, the differences caused by leap seconds will become frozen as arbitrary offsets, GPS time versus UTC for example.
Basically when it was invented leap seconds seemed like a good idea because we assumed the inconvenience versus value was a good trade, but in practice we've discovered the value is negligible and the inconvenience more than we expected, so, bye bye leap seconds.
The body responsible has formal treaty promises to make UTC track the Earth's spin, and replacing those treaties is a huge pain. So the "hack" proposed is to imagine into existence a leap minute or even a leap hour that could correct for the spin. In practice those will never be used either, because they're even less convenient than a leap second; but by the time anyone is asked to set a date for these hypothetical changes, the signatory countries likely won't exist and their successors can just sign a revised treaty. Countries only tend to last a few hundred years; look at the poor US, which is preparing 250th anniversary celebrations while also approaching civil war.
I tried to think about it, and leap seconds on their own don't seem to be a real problem. The problem is that leap seconds, minutes, hours, days, years, etc. are a human interface concept and therefore only make sense to humans, but we've decided to force machines to deal with these human interface concepts as the primary way of handling time, when only the presentation layer should even know what a leap second is.
Leap seconds are not a human interface concept. Humans don't care. People who haven't thought very hard about this tend to believe humans care but they don't.
If humans cared the existing systems couldn't exist. For more than a century we've all lived with time "zones" which are thousands of seconds wide and we're not bothered by that. Many of us have civil time systems which shift twice per year by 3600 seconds for really no good reason, and while that's annoying it's barely worth a brief mention on TV news or in small talk. Leap seconds are 3600 times smaller and happen way less often, they're entirely negligible.
They existed because we thought we cared, and we actually don't care, and we thought it was pretty easy to do, and it actually wasn't very easy after all.
After having to deal with VM hosts that do GeoIP blocking, which unintentionally blocks Let's Encrypt and others from properly verifying domains via http-01/tls-alpn-01, I settled on a DIY solution that uses CNAME redirects and a custom, minimal DNS server for handling the redirected dns-01 challenges. It's essentially a greatly simplified version of the acme-dns project tailored to my project's needs (and written in node.js instead of Go).
Unfortunately with dns-persist-01 including account information in the DNS record itself, that's a bit of a show stopper for me. If/when account information changes, that means DNS records need changing and getting clients to update their DNS records (for any reason) has long been a pain.
Key rotation doesn't change the account URI — ACME key rollover (RFC 8555 §7.3.5) replaces the key pair but keeps the same account URL, which is one of the reasons the draft uses account URI rather than a public key. Your DNS record stays unchanged through key rotations.
The only case that requires a DNS update is creating an entirely new account, and that's deliberate — the record binds a specific account to the domain so a stolen record can't be used by someone else.
For your setup with CNAME delegation to a custom DNS server, this should actually be simpler than dns-01. You would point _validation-persist instead of _acme-challenge, and the target record is static. No per-issuance dynamic updates at all.
This is going to make it way easier to get publicly trusted certs for LAN servers that aren't internet facing.
I'm looking forward to every admin UI out there being able to generate a string you can just paste into a DNS record to instantly get a Let's Encrypt cert.
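Going by the `_validation-persist` label mentioned elsewhere in this thread, the paste-once record would be a TXT record shaped roughly like this (illustrative only: the exact value syntax is defined by the draft, and the account ID is a placeholder):

```
_validation-persist.example.com.  IN  TXT  "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
```

The admin UI only needs to know its own account URI to generate that string, which is what makes the copy-paste workflow possible.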
Just experienced this with my heavily networked off openclaw setup. I gave up and will do manual renewals until I have more time to figure out a good way of doing it. I was trying to get a cert for some headscale magic dns setups, but I think that's way more complicated than I thought it would be.
Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676
Am I just stupidly missing something or does this in theory allow anyone who controls a DNS server for my domain or anyone who controls traffic between LE and the DNS server for my domain to get a TLS certificate they can use to impersonate my domain?
I suppose the same is true for DNS-01 but this would make it even easier because the attacker can just put up their LE account instead of mine into the DNS response and get a certificate.
At this point why not just put my public cert into a DNS record and be done with it?
That’s fair but I also have to trust every provider between my DNS server and LE’s servers to not intercept DNS responses. Since DNS isn’t encrypted anyone anywhere between them can modify the traffic and get a certificate if I understand correctly.
DNSSEC prevents any modification of records, but isn’t widely deployed.
We query authoritative nameservers directly from at least four places, over a diverse set of network connections, from multiple parts of the world. This (called MPIC) makes interception more difficult.
We are also working on DNS over secure transports to authoritative nameservers, for cases where DNSSEC isn’t or won’t be deployed.
This is why the big names pay MarkMonitor $250-$1000 per domain with a minimum $10,000/yr spend.
They have a good reputation, lock down the domain technically at all levels, and have the connections and people/social skills to take care of any domain issues involving person-to-person contact.
Which is not easy, I recall spending months like a decade ago on email/phone attempting (successfully) to get my personal domain out of expiry hell (made more complicated by wrong records).
To mitigate the threat from an attacker who controls the network between the cert issuer and the DNS server, CAs will check the DNS records from multiple vantage points.
Let's Encrypt has been doing this for several years, and it's a requirement for all CAs as of 2024.
My LE experience (post HTTP-01 and now DNS-01) - it's a bit of a palaver. I don't have to open port 80, which is nice for ... security audits, but gains zero security benefit.
I have a PowerDNS server running locally with a static IPv4 address via NAT and I have created a DNS domain and enabled dynamic DNS updates from certain IPv4 addresses with a pre-shared key.
For each cert you need a DNS CNAME pointing to my DNS domain in a specific format. Then we have to get to grips with software to do the deed. acme.sh is superb for !Windows. simple-acme is fine for Windows. I still setup each one by hand instead of ansible/Zenworks/whatever because I'm a sucker for punishment and still small enough for now.
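The delegation described above might look roughly like this in zone-file form (all names here are placeholders, not the actual domains involved):

```
; Public zone: the challenge label is a CNAME into the dynamically-updatable
; PowerDNS zone, so dynamic updates only ever touch the delegated zone.
_acme-challenge.www.example.com. 300 IN CNAME www.acme.dyn.example.net.

; Delegated zone: the ACME client rewrites this TXT record per issuance.
www.acme.dyn.example.net.        60  IN TXT   "<per-issuance challenge token>"
```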
DNS-Persist-01 is not something I think I will ever need but clearly someone does.
For local services, I don't see the benefit of using DNS challenges and a Let's Encrypt certificate over running my own CA and generating my own certificates. It's not that much work to trust my root certificate on each device, and then I don't need an internet connection to verify local service certificates.
> It's not that much work to trust my root certificate on each device
Sure, but is trusting your homebrewed CA on all your devices for essentially everything really a good idea?
When your homebrewed CA somehow gets compromised, all your devices are effectively compromised and not only for local connections, but everything that uses PKIX.
Make sure all the TLS clients you use have support for name constraints. When I evaluated this in 2023, Chrome was in the process of adding support. I'd love to see a caniuse-style analysis of TLS features; people assume they work, but support varies.
I can either add a Cloudflare API key and Certbot on my NAS, or I could generate a root certificate and add it to my desktop computers, laptop, tablet, phones, Apple TV, etc.
Doesn't seem that tough of a choice. I guess in the future I could even forego the Cloudflare API key and just have the persistent DNS record there once.
I'm surprised that this doesn't require DNSSEC or at the very least actively encourage configuring DNSSEC. While I used to be fully in the camp that DNSSEC was way more trouble than it was worth, in particular when access was de-facto secured by trusted CA certificates, more and more DNS record types (CAA, CERT, SSHFP, these TXT records) are responsible for storing information that can be manipulated in MITM attacks to seize control of a root of trust.
Of course, this has little applicability to anyone who is small enough not to have nation-state level actors in their threat model. But when I look behind the curtain of even Fortune 100 companies that really ought to have nation-state level actors in their threat model, too often you find people who are just not operating at that level or are swamped with unrelated work. So I'm starting to become of the opinion that guidance should change here and at the very least be documented recommendations - if it's not encouraged down the organizational size scale, too often it's not applied further up where it's needed.
The RFC wording is a little weird. If the zone has DNSSEC configured, then the wording should be stronger and use MUST, not imply that CAs will be compliant if they choose to avoid verifying signatures despite the presence of signatures. Likewise, these TXT records for dns-persist-01 ideally "SHOULD NOT" be deployed when DNSSEC is not configured.
An open PR on the draft (#35) adds exactly this language: if a CA performs DNSSEC validation and it fails (expired signatures, broken chain of trust), the CA MUST treat it as a challenge failure and MUST NOT use the record. The rationale is that dns-persist-01 records are long-lived, so a DNSSEC failure has more severe consequences than it would for a transient challenge.
DNS has always been a single-point-of-failure for TLS cert issuance. The threat is real, but not at all unique to this validation method.
(For example, an attacker with control of DNS could switch the A record to their server and use that to pass HTTP-01 or TLS-ALPN-01 validation, or update the _acme-challenge TXT record and use that to pass DNS-01.)
I'm one of the draft authors. Several questions here touch real design tradeoffs — addressing the main threads:
Why account URI instead of a public key in the record? (micw, 9dev, csense)
Three reasons:
1. Key rotation without DNS changes. dns-persist-01 exists because DNS updates are expensive. Embedding a public key forces a DNS update on every key rotation — the exact problem this method solves. The account URI survives key rotation (RFC 8555 §7.3.5).
2. CAA alignment. The accounturi parameter matches CAA record syntax (RFC 8657 §3). Domain owners use the same identifier in validation and policy records.
3. Simplicity. Matching uses simple string comparison — no key encoding, no signature verification, no algorithm negotiation. The cryptographic binding between account URI and key pair happens inside ACME, where it belongs.
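The matching in point 3 might be sketched like this (the exact TXT value syntax here is my assumption for illustration, not quoted from the draft):

```python
# Hypothetical sketch of the CA-side match: plain string comparison of the
# accounturi in the TXT record against the requesting account's URI.
def record_matches_account(txt_value: str, account_uri: str) -> bool:
    prefix = "accounturi="
    if not txt_value.startswith(prefix):
        return False
    return txt_value[len(prefix):] == account_uri

print(record_matches_account(
    "accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456789",
    "https://acme-v02.api.letsencrypt.org/acme/acct/123456789",
))  # True
```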
The account URI is opaque — a URL containing a database key, like https://acme-v02.api.letsencrypt.org/acme/acct/123456789. No email, no name. The privacy exposure is modest: it reveals which CA account controls the domain, similar to what CT logs already show about CA-domain relationships, but with explicit account-level grouping. If you want isolation between domains, use separate accounts.
The accounturi binds validation to a specific account so a stolen DNS record can't be used by a different subscriber. An open PR (#35) adds accounturi to the challenge object so clients can verify it before provisioning.
10-day reuse limit (agwa)
The 10-day maximum comes from the CA/Browser Forum ballot (SC-088), not the IETF draft. The draft defers reuse period to CA policy and caps it at the DNS TTL (see "Validation Data Reuse and TTL Handling" in the Security Considerations). Let's Encrypt is migrating to 7 hours. The TTL cap lets domain owners enforce shorter windows directly.
Mandatory DNSSEC (rmoriz)
Requiring DNSSEC would exclude most domains and block adoption. The current draft specifies DNSSEC validation as SHOULD. An open PR (#35) tightens this: if a CA performs DNSSEC validation and it fails — expired signatures, broken chain of trust — the CA MUST reject the record. This is stricter than general ACME guidance because dns-persist-01 records are long-lived. MPIC (multi-perspective validation) provides the primary defense against on-path attacks regardless of DNSSEC.
Unencrypted DNS queries (1vuio0pswjnm7)
Yes, standard DNS queries are unencrypted. The threat model relies on MPIC — querying from multiple vantage points — not transport encryption. DNSSEC adds an integrity layer where available.
CAA interaction (Ayesh)
Yes. A CAA record with validationmethods=dns-persist-01 combined with accounturi restricts who can validate using this method.
Name-constrained intermediate CAs (infogulch)
Separate problem. dns-persist-01 reduces operational cost of leaf cert issuance by eliminating per-issuance DNS interaction. Delegated intermediates shift the trust model. Both could coexist.
> Requiring DNSSEC would exclude most domains and block adoption.
I think this is a good call. For the web, the CAB sets CA requirements and they could choose to require DNSSEC at a later date. It would be a breaking change, but the CAB can, and has, made breaking changes to the BR. The IETF draft seems flexible enough that we could end up with a DNSSEC MUST for the web, in practice, based on the CAB's discretion.
Thank you, this draft is literally perfect and I wish we had this years ago. Most people don't know about acmev2 account rekeying either. It is great you decided to use account uri instead of public key thumbprint.
Recently I wrote a simple acmev2 tool specifically for manual upfront account creation, rekeying, and printing TXT records on stdout for dns-persist-01:
This is a blessing for us Dynamic DNS folks whose DNS providers demand a static IP for changes to come from (e.g., Namecheap). In theory, it means we can set this up once (or on a schedule that works for our needs) and trust that renewals will happen without continued maintenance or involvement.
Eager to give this a try as I modernize the homelab.
TrueDuality | a day ago
While "usernames" are not generally protected to the same degree as credentials, they do matter and act as an important gate to even know about before a real attack can commence. This also provides the ability to associate random found credentials back to the sites you can now issue certificates for if they're using the same account. This is free scope expansion for any breach that occurs.
I guarantee sites like Shodan will start indexing these IDs on all domains they look at to provide those reverse lookup services.
krunck | a day ago
gsich | a day ago
Ayesh | a day ago
mschuster91 | a day ago
cortesoft | 22 hours ago
mschuster91 | 22 hours ago
liambigelow | a day ago
Bender | 3 hours ago
mmh0000 | a day ago
Years ago, I had a really fubar shell script for generating the DNS-01 records on my own (non-cloud) run authoritative nameserver. It "worked," but its reliability was highly questionable.
I like that DNS-Persist-01 fixes that.
But I don't understand why they chose to include the account as a plain-text string in the DNS record. Seems they could have just as easily used a randomly generated key that wouldn't mean anything to anyone outside Let's Encrypt, and without exposing my account to every privacy-invasive bot and hacker.
ragall | a day ago
mcpherrinm | a day ago
If you’re worried about correlating between domains, then yes just make multiple accounts.
There is an email field in ACME account registration but we don’t persist that since we dropped sending expiry emails.
9dev | 23 hours ago
mcpherrinm | 21 hours ago
1. It matches what the CAA accounturi field has
2. It's consistent across an account, making it easier to set up new domains without needing to make any API calls
3. It doesn't pin a user's key, so they can rotate it without needing to update DNS records - which this method assumes is nontrivial, otherwise you'd use the classic DNS validation method
glzone1 | 23 hours ago
I didn't realize the email field wasn't persisted. I assumed it could be used in some type of account recovery scenario.
Ajedi32 | 23 hours ago
Isn't that pretty much what an accounturi is in the context of ACME? Who goes around manually creating Let's Encrypt accounts and re-using them on every server they manage?
bflesch | 21 hours ago
Simple: it's for tracking. Someone paid for that.
cyberax | a day ago
We then can just staple the Persist DNS key to the certificate itself.
And then we just need to cut out the middleman and add a new IETF standard for browsers to directly validate the certificates, as long as they confirm the DNS response using DNSSEC.
tptacek | a day ago
cyberax | a day ago
So why not cut out the middleman?
(And the answer right now is "legacy compatibility")
tptacek | a day ago
cyberax | a day ago
akerl_ | a day ago
cyberax | 23 hours ago
The biggest problem is that DNS replies are often cached, so fixes for the mistakes can take a while to propagate. With Let's Encrypt you typically can fix stuff right away if something fails.
tptacek | 23 hours ago
cyberax | 22 hours ago
But let's not pretend that WebPKI is perfect. More than one large service failed at some point because of a forgotten TLS certificate renewal. And more than one service was pwned because a signing key leaked. Or a wildcard certificate turned out to be more wildcard than expected.
I understand the failures of DNSSEC and DNS in general. And we need to do something about it because it's really showing signs of its age as we continue to pile on functionality onto it.
I don't have an idea for a good solution for everything, but I just can't imagine us piling EVERYTHING onto WebPKI either.
tptacek | 21 hours ago
cyberax | 19 hours ago
It's also more secure, compared to ACME. An on-path attacker can impersonate the site operator and get credentials. DNSSEC is immune to that.
tptacek | 19 hours ago
cyberax | 15 hours ago
tptacek | 15 hours ago
Borealid | 14 hours ago
If they can do that, they can just refuse to send the records at all (or mangle them such that they are ignored). DNSSEC makes the situation no worse.
It does, however, increase Integrity.
For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable. It does not stand for Areliability or Asimplicity.
akerl_ | 8 hours ago
That’s pretty clearly not correct.
Borealid | 2 hours ago
CIA is about security. It's not about some kind of operational best practices.
Supporting example: creating a system where someone failing to enter their password correctly one time locks them out for a day is problematic, because that system can be made unavailable by an attacker. This is not an Available system, and thus not as secure as one that has a more flexible lockout policy.
Supporting example: creating a system where an application is only available from one IP address is problematic, because an attacker can take out one ISP and knock that IP address off the Internet. Making the system more Available by allowing users to access it from other IPs improves the overall security posture.
akerl_ | 21 hours ago
You're commenting on a post about LetsEncrypt working with other entities in the industry to make improvements to WebPKI. It's safe to say that nobody's claiming it's perfect.
But you can't go from ~"WebPKI isn't perfect" and ~"DNSSEC/DANE exist" and draw a magic path where using DNSSEC or DANE is actually a good thing for people to roll out. They'd need to be actually a good fit, and for DANE we have direct evidence that it isn't: a rollout was attempted and it was walked back due to multiple issues.
NoahZuniga | a day ago
micw | a day ago
Why not use some public/private key auth where the DNS contains a public key and the requesting server uses the private key to sign the cert request? This would decouple the authorization from the actual account. It would not reveal the account's identity. It could be used with multiple accounts (useful for a wildcard on the DNS plus several independent systems requesting certs for subdomains).
tptacek | a day ago
Spivak | a day ago
Prior to this, accounts were nearly pointless, as proof of control was checked every time, so people (rightfully) just threw away the account key LE generated for them. Now, if you use PERSIST, you have to keep it around and deploy it to servers you want to be able to issue certs.
raquuk | 14 hours ago
newsoftheday | a day ago
/usr/bin/letsencrypt renew -n --agree-tos --email me@example.com --keep-until-expiring
Will I need to change that? Will I need to manually add custom DNS entries to all my domains?
PS To add, compared to dealing with some paid certificate services, LetsEncrypt has been a dream.
dextercd | a day ago
jsheard | a day ago
newsoftheday | a day ago
bombcar | 22 hours ago
This is going to greatly simplify some of my scripts.
newsoftheday | a day ago
Ayesh | a day ago
The ACME account credentials are also accessible by the same renewal pipelines that have the DNS API credentials, so this does not provide any new isolation.
~It's also not quite clear how to revoke this challenge, and how domain expiration deal with this. The DNS record contents should have been at least the HMAC of the account key, the FQDN, and something that will invalidate if the domain is transferred somewhere else. The leaf DNSSEC key would have been perfect, but DNSSEC key rotation is also quite broken, so it wouldn't play nice.~
Is there a way to limit the challenge types with CAA records? You can limit it by an account number, and I believe that is the tightest control you have so far.
---
Edit: thanks to the replies to this comment, I learned that this would provide invalidation simply by removing the DNS record, and that the DNS records are checked at renewal time with a much shorter validation TTL.
amluto | a day ago
And many providers don't. (Even big ones that are supposedly competent like Cloudflare.)
And basically everyone who uses granular API keys is storing a cleartext key, which is no better, and possibly worse, than storing a credential for an ACME account.
agwa | a day ago
CAs can cache the record lookup for no longer than 10 days. After 10 days, they have to check it again. If the record is gone, which would be expected if the domain has expired or been transferred, then the authorization is no longer valid.
(I would have preferred a much shorter limit, like 8 hours, but 10 days is a lot better than the current 398 day limit for the original ACME DNS validation method.)
mcpherrinm | a day ago
mcpherrinm | a day ago
To revoke the record, delete it from DNS. Let’s Encrypt queries authoritative nameservers with caches capped at 1 minute. Authorizations that have succeeded will soon be capped at 7 hours, though that’s independent of this challenge.
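As a toy illustration of that cap (my sketch, not Let's Encrypt code): the effective cache lifetime is the record's TTL clamped to the CA's maximum.

```python
# Sketch: a CA honors the record TTL but never caches longer than its own cap
# (1 minute, per the comment above).
def effective_cache_seconds(record_ttl: int, ca_cap: int = 60) -> int:
    return min(record_ttl, ca_cap)

print(effective_cache_seconds(3600))  # 60: a 1-hour TTL is still capped
print(effective_cache_seconds(30))   # 30: TTLs below the cap are honored
```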
mcpherrinm | a day ago
CAs were already doing something like this (CNAME to a DNS server controlled by the CA), so there was interest from everyone involved in standardizing and deciding what the rules should be.
UltraSane | 20 hours ago
Key condition keys for this purpose include:
CqtGLRGcukpy | a day ago
csense | a day ago
Pasting a challenge string once and letting its continued presence prove continued ownership of a domain is a great step forward. But I agree with others that there is absolutely no reason to expose account numbers; it should be a random ID associated with the account in Let's Encrypt's database.
As a workaround, you should probably make a new account for each domain.
Spivak | a day ago
bombcar | 22 hours ago
pepdar | 4 hours ago
Havoc | a day ago
amluto | a day ago
cube00 | a day ago
jcalvinowens | a day ago
In the meantime, if you use bind as your authoritative nameserver, you can limit an hmac-secret to one TXT record, so each webserver that uses rfc2136 for certificate renewals is only capable of updating its specific record:
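The setup described above might look something like this sketch in BIND 9's named.conf (key name, zone, and secret are placeholders; the commenter's actual config wasn't included in the thread):

```
// One TSIG key per host; the update-policy grants it write access to
// exactly one challenge name, and only TXT records there.
key "bob-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";
};

zone "example.com" {
    type primary;
    file "example.com.zone";
    update-policy {
        grant bob-key name _acme-challenge.bob.example.com. TXT;
    };
};
```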
I like this because it means an attacker who compromises "bob" can only get certs for "bob". The server part looks like this:
xav0989 | 10 hours ago
ocdtrekkie | a day ago
qwertox | a day ago
Here, certbot runs in Docker in the intranet, and on a VPS I have a custom-built nameserver to which all the _acme-challenge names are delegated via NS records.
The system in the intranet starts certbot, which hands it the token-domain pairs from Let's Encrypt; it then sends those pairs to the nameserver, which attaches each token to a TXT record for its domain, so the DNS reply can serve it to Let's Encrypt when they request it.
All that will be gone and I thank you for that! You add as much value to the internet as Wikipedia or OpenStreetMap.
itintheory | a day ago
9dev | 23 hours ago
amluto | 22 hours ago
(There might well be a nice one, but I haven’t found it yet.)
nfredericks | 22 hours ago
amluto | 22 hours ago
In particular, there is no support for getting a key scoped to _acme-challenge.a.b.c or, even better, to a particular RR.
Maybe if you have an enterprise plan you can very awkwardly fudge it using lots of CNAMEs and subdomains.
Some DNS hosts that support old-school dynamic DNS can do this. dns.he.net is an example, but they have a login system that is very much stuck in the nineties.
dboreham | 20 hours ago
radiator | 22 hours ago
toast0 | 22 hours ago
https://dns.he.net/
amluto | 18 hours ago
zufallsheld | 14 hours ago
Hetzner_OL | 11 hours ago
amluto | 7 hours ago
jcgl | 8 hours ago
eichin | 19 hours ago
skinner927 | 17 hours ago
https://datatracker.ietf.org/doc/html/rfc2136
aragilar | 11 hours ago
quicksilver03 | 11 hours ago
arccy | 11 hours ago
https://letsencrypt.org/docs/challenge-types/#:~:text=This%2...
bob1029 | a day ago
Being able to distribute self-hostable software to users that can be deployed onto a VM and made operational literally within 5 minutes is a big selling point. Domain registration & DNS are a massive pain to deal with at the novice end of the spectrum. You can combine this with things like https://checkip.amazonaws.com to build properly turnkey solutions.
cube00 | a day ago
muvlon | 23 hours ago
There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That means whenever you request a cert and there already is an issued certificate for the same set of identities.
dextercd | 22 hours ago
LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.
Indeed, some rate limits don't apply for renewals but some still do.
inahga | 23 hours ago
tialaramex | 21 hours ago
xyzzy_plugh | 19 hours ago
pests | 14 hours ago
While they do not have direct SLAs, they still have to comply with rules enforced by browser vendors, as they will remove you from CT checks and you'll be marked retired/untrusted (you can find some in the above list).
This means a 99% uptime on a 90 day rolling average, a 1 minute update frequency for new entries (24 hours on an older RFC). No split views, strict append-only, sharding by year, etc.
I think OP's original idea would work.
plagiat0r | 5 hours ago
The final certificate (without poison and with SCT proof) is usually not published in any CT logs but you can submit it yourself if you wish.
OP's idea won't work unless OP submits the final certificate to the CT logs himself.
chaz6 | a day ago
dextercd | a day ago
`certbot register` followed by `certbot show_account` is how you'd do this with certbot.
chaz6 | 23 hours ago
plagiat0r | 4 hours ago
That is precisely why I wrote this: https://github.com/pawlakus/acmecli
This small tool will allow you to just create, rekey and deactivate your acmev2 account(s).
aaomidi | a day ago
zamadatix | a day ago
Thank you so much to all involved!
jmholla | a day ago
I think most users depend on automation that creates their accounts, so they never have to deal with it. But now, you need to propagate some credential to validate your account ownership to the ACME provider. I would have liked to see some conversation about that in this announcement.
I'm not familiar with Let's Encrypt's authentication model. If they don't have token creation that can be limited by target domain, I expect you'll need to create separate accounts for each of your target domains, or else anything with that secret can create a cert for any domain your account controls.
mschuster91 | a day ago
Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.
If either your ACME credentials leak out or, even worse, someone manages to place DNS records or hijack your .well-known endpoint, you got far bigger problems at hand than someone being able to mis-issue SSL certificates under your domain name.
jmholla | 15 hours ago
That describes the previous models. With DNS-Persist-01, the record is permanent and never changes, so to prove that your request is valid, they need to authenticate in some other manner. Otherwise, once you create that persistent record, anybody could request a cert for your domain.
Edit: Spivak explains the flow differences better in their comment: https://news.ycombinator.com/item?id=47065821
basilikum | a day ago
That should be TAI, right? Is that really correct or do they actually mean unix timestamps (those shift with leap seconds, unlike TAI, which is actually just the number of seconds that have passed since 1970-01-01T00:00:00Z)?
wtallis | a day ago
toast0 | 21 hours ago
unixtime is almost certainly what is meant by the standard, but it is not the count of UTC seconds since 1970; unix time is the number of seconds since 1970 as if all days had 86400 seconds. UTC, TAI, and GPS seconds are all the same length, and the same number of them have elapsed since 1970, but TAI appears 37 seconds ahead of UTC: TAI days all have exactly 86400 seconds, while UTC has had some days with 86401 seconds, and TAI was already 10 seconds ahead of UTC when leap seconds began in 1972. unixtime stays in sync with UTC because a unixtime day can absorb an 86401-second UTC day while still counting only 86400 seconds.
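A quick way to see this with Python's datetime, which implements POSIX (unix) time:

```python
from datetime import datetime, timezone

# 2016-12-31 ended with a leap second (23:59:60 UTC), so that UTC day was
# 86401 seconds long, but unix time counts every day as exactly 86400.
a = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
b = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(b - a)  # 86400.0
```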
tialaramex | 21 hours ago
Basically when it was invented leap seconds seemed like a good idea because we assumed the inconvenience versus value was a good trade, but in practice we've discovered the value is negligible and the inconvenience more than we expected, so, bye bye leap seconds.
The body responsible has formal treaty promises to make UTC track the Earth's spin, and replacing those treaties is a huge pain, so the "hack" proposed is to imagine into existence a leap minute or even a leap hour that could correct for the spin. In practice those will never be used either, because they'd be even less convenient than a leap second. But by the time anyone has to set a date for these hypothetical changes, the signatory countries likely won't exist and their successors can just sign a revised treaty; countries only tend to last a few hundred years (look at the poor US, which is preparing 250th anniversary celebrations while also approaching civil war).
imtringued | 12 hours ago
tialaramex | 5 hours ago
If humans cared the existing systems couldn't exist. For more than a century we've all lived with time "zones" which are thousands of seconds wide and we're not bothered by that. Many of us have civil time systems which shift twice per year by 3600 seconds for really no good reason, and while that's annoying it's barely worth a brief mention on TV news or in small talk. Leap seconds are 3600 times smaller and happen way less often, they're entirely negligible.
They existed because we thought we cared, and we actually don't care, and we thought it was pretty easy to do, and it actually wasn't very easy after all.
aragilar | 11 hours ago
mscdex | a day ago
Unfortunately, with dns-persist-01 including account information in the DNS record itself, that's a bit of a show-stopper for me. If/when account information changes, DNS records need changing, and getting clients to update their DNS records (for any reason) has long been a pain.
pepdar | an hour ago
The only case that requires a DNS update is creating an entirely new account, and that's deliberate — the record binds a specific account to the domain so a stolen record can't be used by someone else.
For your setup with CNAME delegation to a custom DNS server, this should actually be simpler than dns-01. You would point _validation-persist instead of _acme-challenge, and the target record is static. No per-issuance dynamic updates at all.
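Sketched in zone-file form (placeholder names; the TXT value syntax is my assumption, not quoted from the draft):

```
; Both records are static: set once, no per-issuance updates.
_validation-persist.example.com. IN CNAME persist.acme-dns.example.net.
persist.acme-dns.example.net.    IN TXT   "accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456789"
```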
Ajedi32 | 23 hours ago
I'm looking forward to every admin UI out there being able to generate a string you can just paste into a DNS record to instantly get a Let's Encrypt cert.
kami23 | 21 hours ago
infogulch | 23 hours ago
Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676
IgorPartola | 22 hours ago
bombcar | 22 hours ago
Try to figure out a way to block me from getting a TLS certificate if I can modify your DNS.
IgorPartola | 21 hours ago
mcpherrinm | 21 hours ago
DNSSEC prevents any modification of records, but isn’t widely deployed.
We query authoritative nameservers directly from at least four places, over a diverse set of network connections, from multiple parts of the world. This (called MPIC) makes interception more difficult.
We are also working on DNS over secure transports to authoritative nameservers, for cases where DNSSEC isn’t or won’t be deployed.
IgorPartola | 20 hours ago
gurjeet | 22 hours ago
If someone can perform a MITM attack between Let's Encrypt and a DNS server, we've got bigger problems than just certificate issuance.
pests | 14 hours ago
echoangle | 21 hours ago
msmith | 20 hours ago
Let's Encrypt has been doing this for several years, and it's a requirement for all CAs as of 2024.
[1] https://cabforum.org/2024/08/05/ballot-sc067v3-require-domai...
tkel | 18 hours ago
https://www.sidn.nl/en/modern-internet-standards/e-mail-secu...
CaliforniaKarl | 21 hours ago
dangoodmanUT | 20 hours ago
1vuio0pswjnm7 | 19 hours ago
1vuio0pswjnm7 | 5 hours ago
https://news.ycombinator.com/item?id=47073054
rmoriz | 19 hours ago
gerdesj | 19 hours ago
tripdout | 18 hours ago
sebiw | 17 hours ago
NewJazz | 17 hours ago
https://systemoverlord.com/2020/06/14/private-ca-with-x-509-...
8organicbits | 7 hours ago
Hamuko | 15 hours ago
blahgeek | 17 hours ago
solatic | 13 hours ago
ajnin | 11 hours ago
solatic | 10 hours ago
pepdar | 2 hours ago
paulnpace | 7 hours ago
Ajedi32 | 6 hours ago
pepdar | 9 hours ago
Why account URI instead of a public key in the record? (micw, 9dev, csense)
Three reasons:
1. Key rotation without DNS changes. dns-persist-01 exists because DNS updates are expensive. Embedding a public key forces a DNS update on every key rotation — the exact problem this method solves. The account URI survives key rotation (RFC 8555 §7.3.5).
2. CAA alignment. The accounturi parameter matches CAA record syntax (RFC 8657 §3). Domain owners use the same identifier in validation and policy records.
3. Simplicity. Matching uses simple string comparison — no key encoding, no signature verification, no algorithm negotiation. The cryptographic binding between account URI and key pair happens inside ACME, where it belongs.
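A sketch of what point 3 amounts to in code (function and variable names are mine, not from the draft):

```python
def accounturi_matches(record_accounturi: str, requesting_account_url: str) -> bool:
    # Matching is exact string comparison of the account URI; no key
    # material is decoded or verified at this step -- the URI-to-key
    # binding lives inside the ACME account itself.
    return record_accounturi == requesting_account_url

# Hypothetical Let's Encrypt-style account URL:
acct = "https://acme-v02.api.letsencrypt.org/acme/acct/123456789"
assert accounturi_matches(acct, acct)
assert not accounturi_matches(acct, acct + "0")
```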
"Exposing account numbers" / privacy (csense, mmh0000, bflesch)
The account URI is opaque — a URL containing a database key, like https://acme-v02.api.letsencrypt.org/acme/acct/123456789. No email, no name. The privacy exposure is modest: it reveals which CA account controls the domain, similar to what CT logs already show about CA-domain relationships, but with explicit account-level grouping. If you want isolation between domains, use separate accounts.
The accounturi binds validation to a specific account so a stolen DNS record can't be used by a different subscriber. An open PR (#35) adds accounturi to the challenge object so clients can verify it before provisioning.
10-day reuse limit (agwa)
The 10-day maximum comes from the CA/Browser Forum ballot (SC-088), not the IETF draft. The draft defers reuse period to CA policy and caps it at the DNS TTL (see "Validation Data Reuse and TTL Handling" in the Security Considerations). Let's Encrypt is migrating to 7 hours. The TTL cap lets domain owners enforce shorter windows directly.
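How those limits compose, as a sketch (the function is illustrative; the 10-day cap and Let's Encrypt's 7 hours are the figures mentioned above):

```python
def effective_reuse_window(ca_policy_s: int, record_ttl_s: int,
                           ballot_cap_s: int = 10 * 86400) -> int:
    """Reuse window: the CA's policy period, capped by both the
    CA/B Forum ballot maximum and the DNS record's TTL."""
    return min(ca_policy_s, ballot_cap_s, record_ttl_s)

# Let's Encrypt's 7-hour policy, with a 1-hour TTL on the record:
print(effective_reuse_window(7 * 3600, 3600))  # 3600
```

Setting a short TTL is how a domain owner enforces a tighter window than the CA's default.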
Mandatory DNSSEC (rmoriz)
Requiring DNSSEC would exclude most domains and block adoption. The current draft specifies DNSSEC validation as SHOULD. An open PR (#35) tightens this: if a CA performs DNSSEC validation and it fails — expired signatures, broken chain of trust — the CA MUST reject the record. This is stricter than general ACME guidance because dns-persist-01 records are long-lived. MPIC (multi-perspective issuance corroboration) provides the primary defense against on-path attacks regardless of DNSSEC.
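The policy described boils down to treating an unsigned zone differently from a failed validation; a sketch (names are illustrative):

```python
from enum import Enum

class DnssecStatus(Enum):
    SECURE = "secure"      # chain of trust validated
    INSECURE = "insecure"  # zone is not signed at all
    BOGUS = "bogus"        # validation was attempted and failed

def accept_dns_persist_record(status: DnssecStatus) -> bool:
    # Unsigned zones remain acceptable (DNSSEC is a SHOULD), but a
    # failed validation -- expired signatures, broken chain -- is a
    # hard reject under the tightened rule.
    return status is not DnssecStatus.BOGUS
```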
Unencrypted DNS queries (1vuio0pswjnm7)
Yes, standard DNS queries are unencrypted. The threat model relies on MPIC — querying from multiple vantage points — not transport encryption. DNSSEC adds an integrity layer where available.
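A simplified sketch of that multi-vantage-point check (real CA/B quorum requirements are more involved than a bare majority count):

```python
def mpic_agrees(observations: list[str], quorum: int) -> bool:
    """Accept a DNS answer only if at least `quorum` vantage points
    observed the same record value, so an on-path attacker near one
    resolver can't win alone."""
    if not observations:
        return False
    most_common = max(set(observations), key=observations.count)
    return observations.count(most_common) >= quorum
```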
CAA interaction (Ayesh)
Yes. A CAA record with validationmethods=dns-persist-01 combined with accounturi restricts who can validate using this method.
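Concretely, such a record's value could look like the following (a sketch; the builder function is hypothetical, the parameter syntax follows RFC 8657):

```python
def caa_issue_value(ca_domain: str, accounturi: str) -> str:
    """Build an RFC 8657-style CAA `issue` property value that pins
    issuance to one ACME account and one validation method."""
    return (f"{ca_domain}; accounturi={accounturi}; "
            f"validationmethods=dns-persist-01")

print(caa_issue_value(
    "letsencrypt.org",
    "https://acme-v02.api.letsencrypt.org/acme/acct/123456789"))
```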
Name-constrained intermediate CAs (infogulch)
Separate problem. dns-persist-01 reduces operational cost of leaf cert issuance by eliminating per-issuance DNS interaction. Delegated intermediates shift the trust model. Both could coexist.
Draft: <https://github.com/ietf-wg-acme/draft-ietf-acme-dns-persist> (PR #35 is an open pull request on the draft with several of the improvements mentioned above.)
8organicbits | 7 hours ago
I think this is a good call. For the web, the CA/Browser Forum sets CA requirements, and it could choose to require DNSSEC at a later date. That would be a breaking change, but the forum can, and has, made breaking changes to the BRs. The IETF draft seems flexible enough that we could end up with a de facto DNSSEC MUST for the web, based on the forum's discretion.
plagiat0r | 5 hours ago
Recently I wrote a simple ACMEv2 tool specifically for manual upfront ACMEv2 account creation, rekeying, and printing the TXT records on stdout for dns-persist-01:
https://github.com/pawlakus/acmecli
It also helps with stateless http-01 by printing the account key thumbprint...
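For reference, that thumbprint is presumably the RFC 7638 JWK thumbprint used in ACME key authorizations; a minimal sketch (not code from the linked tool):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638: SHA-256 over the JSON encoding of only the required
    members, keys sorted lexicographically, no whitespace."""
    required = {"EC": ("crv", "kty", "x", "y"),
                "RSA": ("e", "kty", "n"),
                "OKP": ("crv", "kty", "x")}[jwk["kty"]]
    canonical = json.dumps({k: jwk[k] for k in required},
                           sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode("utf-8")).digest())
```

Because the thumbprint depends only on the public key, a web server can answer http-01 challenges statelessly by echoing `<token>.<thumbprint>`.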
stego-tech | 6 hours ago
Eager to give this a try as I modernize the homelab.