$ unbound-host -t A www.denic.de
www.denic.de has address 81.91.170.12
This does not:
$ unbound-host -D -t A www.denic.de
www.denic.de has address 81.91.170.12
validation failure <www.denic.de. A IN>: signature crypto failed from 194.246.96.1 for DS denic.de. while building chain of trust
So it does seem DNSSEC-related.
EDIT My explanation was wrong, this is not how keytags work. The published keytag data is consistent:
de. 3600 IN DNSKEY 256 3 8 AwEAAfRLmzuIXVf7x5A0+U7hke0dS+GEJG0EdPhnOthCCLhy0t0WqLyoXJOhnfsTJ8vQX5fd9qOJc9gyr3SWJZkXAhPm3yPSC7FWWHF70WZTKKM9CekmKdqwMwq6ZCjMSUcecCuSF4Sbt1MRszV7rFmfGVklA1l5UzNbqwD+Dr5vfcLn ;{id = 33834 (zsk), size = 1024b}
de. 3600 IN DNSKEY 257 3 8 AwEAAbWUSd/QN9Ae543xzdiacY6qbjwtZ21QfmdgxRdm4Z7bjjHWy249uqxCyjjjoS4LDoRDKmj7ElffMKvTWKE1qFKu0p8TUy4wyhX0M+m5FUjvQ3CiZMi+qY7GSHA5B+Zd73cidmnTeb3e8lso6jEsXg05/VZ2AyAqWF6FexEIFxIqiwwLk4UP0BwZ17Ur3q1qx9VSbPMyHgQ9d6nHUN1EEJsTDA2v0vKumsUyp74ZanRZ/bB/6IzpaaZyr5BLF5pSCNdbRNjVmkwYD0993vm79LueyOeibsoHRc16jhALrIJou1PFjdq7YQsYN0KtqRiJtaAfPprDBREpeamPuW/MnW0= ;{id = 26755 (ksk), size = 2048b}
de. 3600 IN DNSKEY 256 3 8 AwEAAbTe1PJi8EgIudNGb+KRTxBL2aCu5rXkZ+aIe/TC88pwRdrXYeXODp1ihZWFop5CrbWRBLrk/YUPBE8aBc6oJP+58dSkdMLYkjSkmvdvYx+zXnRLWlF2bapxvZxshATJDfGjGbCiWxKEOoyRx3UhICtHC+cUSddsEvzfacUcBb6n ;{id = 32911 (zsk), size = 1024b}
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519030655 20260505013655 26755 de. ke56T5GZt/X6zMBAF+ouyCTnAd7RY7MsnDcfa9jyyOwSouRXhvzim/V13JDTMBAnpAHxWQXoruXrAZ6A6re5N+8Pp2utVkAEKTWs0r4UOLNKoZ2+zMwNplKjNNnY5PJIbHfa5myyziLiIsi//qDIgQEACFk+pZcHXrRdqRoXPCL3UtfaXjk3+duDQdlPnYsJys5UshjVpkALSMChW7J0anzr0sG+f9ytstBneymMwFYOUC3NqbejbLPZsXGPZBQKPAoVJuV5q3znopbcqrDFfjI7bmX3QPYNvOaiT1ElBfi2piJVpDzMaMAmm2jCmvrf5VeTOBccMroh8sBtDPsaEg== ;{id = 26755}
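For reference, the advertised key tags can be checked directly; dig's +multi output annotates each DNSKEY with its computed key id (a sketch, any machine with dig will do):

$ dig +dnssec +multi DNSKEY de. @f.nic.de
# each DNSKEY is printed with a trailing comment such as
# "; ZSK; alg = RSASHA256 ; key id = 33834",
# which should match the ids shown above (33834, 26755, 32911)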
The signature on the SOA record still does not verify:
de. 86400 IN SOA f.nic.de. dns-operations.denic.de. 1778014672 7200 7200 3600000 7200
de. 86400 IN RRSIG SOA 8 1 86400 20260519205754 20260505192754 33834 de. aZoiAJ+PaHUDVSHNXfV/R26ZK3GpFB7ek2Z46VnZdmPEDaTww+a7PkiQ98W83xohUunXYSvQCMeGYfUre5UT76eBKThdxW2a6ImX9/x/oEzQ9x/69Y/NSeTckOv9m3HCLBOug01op1koiHOIAVEvonOmXEHHqo1P4sR/fNbcVg4= ;{id = 33834}
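One way to reproduce the failure end to end is delv, which ships with BIND 9 and validates against the root trust anchor by default (a sketch; once the zone is fixed, the same query should print the record followed by "; fully validated"):

$ delv de. SOA @8.8.8.8
# expected while the broken signature is being served:
# ;; resolution failed: SERVFAIL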
So glad I found someone mentioning this. Amazon.de and SPIEGEL.de are down. Highly prominent sites unreachable. I wonder how long this will last and how big of a thing this ends up being once people talk about it :o Feels big to me.
dig manages to dig out IPs for heise.de and tagesschau.de, but not spiegel.de, amazon.de, and google.de. However, dig @8.8.8.8 still has amazon.de cached, unlike 1.1.1.1, so perhaps Google to the rescue?
[Edit] After playing around with it, Google seems to have at least some pages cached. After setting DNS to 8.8.8.8, amazon.de and spiegel.de work again; my blog does not.
Looks like a DNSSEC issue, not a nameserver outage. Validating resolvers SERVFAIL on every .de name with EDE:
RRSIG with malformed signature found for a0d5d1p51kijsevll74k523htmq406bk.de/nsec3 (keytag=33834)
dig +cd amazon.de @8.8.8.8 works, dig amazon.de @a.nic.de works. Zone data is intact, DENIC just published an RRSIG over an NSEC3 record that doesn't validate against ZSK 33834. Every validating resolver therefore refuses to answer.
Intermittency fits anycast: some [a-n].nic.de instances still serve the previous (good) signatures, so retries occasionally land on a healthy auth. Per DENIC's FAQ the .de ZSK rotates every 5 weeks via pre-publish, so this smells like a botched rollover.
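Spelled out, that triage is just three queries (the same commands as above, plus the plain validated one):

$ dig amazon.de @8.8.8.8       # SERVFAIL: the resolver validates and rejects the bad RRSIG
$ dig +cd amazon.de @8.8.8.8   # works: +cd (checking disabled) skips DNSSEC validation
$ dig amazon.de @a.nic.de      # works: authoritative answer, no validation involved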
So a single configuration mistake in a single place wiped out external reachability of a major economy. It happened in the evening local time and should be fixable, modulo cache TTLs, by morning. This will limit the blast radius somewhat.
Still, at this level, brittle infrastructure is a political risk. The internet's famous "routing around damage" isn't quite working here. Should make for an interesting post mortem.
It looks like a failed key replacement during a scheduled maintenance event. Normally this sort of thing is thoroughly tested and has multiple sets of eyes on it for detailed review and planning before changes get committed, but obviously something got missed.
Would be interesting to know how something could get missed. You'd think the system was set up so that new keys could not be published without being verified as working in a staging system.
Fail-closed protocols have introduced some brittleness. An HTTP/1.0 server from 1999 can probably still serve visitors today. An HTTPS server with TLS 1.0 from the same year couldn't.
I think I see the point you're making here and I agree.
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there's designing something fail-closed because it needs to be secure from an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, the complexity has been increased to the point where significant financial and technical investment must be made to even just participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way is in the context of "availability" and "accessibility" we've compromised a lot of both availability and accessibility in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of that security hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
DNS is a centralization risk, yes. Somehow we've decided this is fine. DNSSEC isn't the only issue - your TLD's nameservers could also be offline, or censored in your country.
BGP, but the names in question are limited to 128 bits, of which at most 48 will be looked up, and you don't get to choose which 48 bits are assigned to you.
GNS is the obvious response here, in addition to the various blockchain based solutions. Nothing that enjoys widespread support or mindshare unfortunately.
Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
GP said it was a risk (and it is), not that there are better alternatives. Not all risks can be eliminated easily but you should still be aware of them.
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
Presumably if LetsEncrypt goes down and stays down for a week, the sites that go down are the ones that see that their CA went down and at no point in the week take the option to get certs from a different CA?
Normally it should not have been, with cache and all, but that was the past...
Think about what would happen the day that letsencrypt is broken, for whatever reason, technical or like having a retarded US leader and being located in the wrong country. Taking into account the push by letsencrypt and major web browsers to restrict certificate validity to short periods, like only a few days...
I haven't followed this closely, but have there been any... shall we say plain outages longer than six hours? That's not an outrageous TTL. Or a day.
I have a bad feeling that the impact will be quite severe for some services, as monitoring, performance, and security services might get disrupted, and just cleaning up is a big mess. Worst case, some of it will experience outages and/or damage. But maybe I am just overestimating the severity of this.
I am reminded of the warning that zonemaster gives about putting your domain name servers on a single AS, as is common practice for many larger providers. A lot of people do not want others to see this as a problem since a single AS is a convenient configuration for routing, but it has the downside of being a single point of failure.
Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
As the CPU/RAM resources to run an authoritative-only slave nameserver for a few domains are extremely minimal (mine run at a unix load of 0.01), it's a very wise idea to put your ns3 or something at a totally different service provider on another continent. It costs less than a cup of coffee per month.
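For anyone wanting to try this, a minimal secondary-zone stanza in BIND 9 looks roughly like the following (a sketch; the zone name and primary address are placeholders, and older BIND versions spell these options "type slave" and "masters"):

zone "example.de" {
    type secondary;             // pull the zone from the primary via AXFR/IXFR
    primaries { 192.0.2.1; };   // the primary nameserver's IP
    file "secondary/example.de.zone";
};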
For a very long time, the computer club I was in operated a DNS server on a Pentium 75 MHz, and after the last major hardware upgrade it had a total of 110 MB of RAM and 2 GB of disk space. It worked great, except that before the upgrade it tended to run out of RAM whenever there was a Linux kernel update, a problem we solved forever by populating all the RAM slots with the maximum the motherboard could handle, to that nice 110 MB.
This makes sense for larger providers, but just for a small/personal website there are literally zero advantages to having distributed authoritative DNS servers when the webserver is on a single host.
Ironically, denic still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards, lol), a requirement that all other registries I use have dropped or never had, because enforcing such a policy at the registry level makes zero sense.
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
Every FAANG company has their own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
It's not like the long-haul fiber not owned by FAANG is a public utility, at least not in most places.
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, then fiber belonging to some ISP they have a deal with, etc.
All those ISPs are being paid to provide service, they can invest in their own networks.
To be fair, advanced real-world knowledge of public/private key PKIs (x.509 or other), things like root CAs, is a fairly esoteric and very specialized field of study. There are people whose regular day jobs are nothing but doing stuff with PKI infrastructure, and their depth of knowledge on many other non-PKI subjects is probably surface level only.
As is the overlap between DNSSEC and DNS itself, to be honest.
I once worked at the level of administering DNSSEC for 300+ TLDs. It's its own world. When that company was winding down, I tried to continue in the field but the most common response (outside of no response, of course), was 'we already have a DNS team/vendor/guy.'
And well, then things like this happen. I won't throw stones though, it's a lot to learn and can be incredibly brittle.
Is that actually true, though? Even though it's not really my job, I find myself debugging certificates and keys at least once a month, and that's after automating as much as possible with certbot and cloud certificates. PKI always seems to demand attention.
In my initial comment, I meant more in terms of complexity and planning from the perspective of the people who are running the public/private key infrastructure on the other side/upstream of what you're doing as a letsencrypt end user.
Broadly similar general concept to the team responsible for the DNSSEC signing keys for an entire ccTLD.
Yeah, an x509 PKI / root CA is a very different thing than DNSSEC, but they have a number of general logical similarities in that the chain of trust ultimately comes down to a "do not fuck this up" single point of failure.
It's not made easier by the fact that a lot of cryptography is either very old and arcane or it's one hell of a mess of code that doesn't make sense without reading standards.
I had the misfortune of having to dig deep into constructing ASN.1 payloads by hand [1] because that's the only thing Java speaks, and oh holy hell is this A MESS because OF COURSE there's two ways to encode a bunch of bytes (BIT STRING vs OCTET STRING) and encoding ed25519 keys uses BOTH [2].
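The two encodings are easy to see side by side with openssl, for what it's worth (a sketch; needs OpenSSL 1.1.1 or newer for ed25519):

$ openssl genpkey -algorithm ed25519 -out key.pem
$ openssl asn1parse -in key.pem
# the private key seed sits in an OCTET STRING (RFC 8410 PrivateKeyInfo)
$ openssl pkey -in key.pem -pubout | openssl asn1parse
# the public key in SubjectPublicKeyInfo sits in a BIT STRING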
And ed25519 is a mess in itself. The more-or-less standard implementation by orlp [3] is almost completely lacking any comments explaining what is going on where, and reading the relevant RFCs alone doesn't help; it's probably only understandable by reading a 500-page math paper.
It's almost as if cryptographers have zero interest in letting interested random people join the field.
End of rant.
[1] https://github.com/msmuenchen/meshcore-packets-java/blob/mai...
[2] https://datatracker.ietf.org/doc/html/rfc8410#appendix-A
[3] https://github.com/orlp/ed25519/tree/master
> because that's the only thing Java speaks
No, it most definitely is not. You can just construct a private key directly in BouncyCastle: https://downloads.bouncycastle.org/java/docs/bcprov-jdk18on-...
I'm 100% certain that you can also do that with raw java.security. I did that about 15 years ago with raw RSA/EC keys. You can just directly specify the private exponent for RSA (as a bigint!) or the curve point for EC.
Ditto for ed25519, you can just take the canonical implementation from DJB. And you really really shouldn't do that anyway, please just use OpenSSL or another similar major crypto library.
> I'm 100% certain that you also can do that with raw java.security.
I tried that; the problem is Meshcore-specific - they do their own weird shit with private and public keys [1]. Haven't figured out how to do the private key import either, because in the C source code (or in Python re-implementations) Meshcore just calls directly into the raw ed25519 library to do their custom math... it's a mess.
[1] https://jacksbrain.com/2026/01/a-hitchhiker-s-guide-to-meshc...
The trick to asn.1 is to generate both the parser and serializer from the spec. Elliptic curve math, on the other hand, is... yeah, you need to know the math and also know the tricks to code that implements it. Both of those have a steep learning curve, but it's hardly because it's a mess or it's old.
> Both of those have steep learning curve, but it's hardly because it's a mess or it's old.
Bitpacking structures used to be important in the 60s. That time has passed; unless you're dealing with LoRa, NFC or other cases of highly constrained bandwidth, there are way better options to serialize and deserialize information. It's time to move on, and the complexity of all the legacy garbage in crypto has been the cause of many a security vulnerability in the past.
As for the code, it might be personal preference but I'd love to have at least some comments referring back to a specification or original research paper in the code.
ASN.1 is not used because of just bitpacking. There are other benefits to ASN.1 and it's probably one of the least problematic parts there.
People who have thought they can do better have made things like PGP. It's one of the worst cryptographic solutions out there. You're free to try as well though.
I think you misunderstand the problem asn.1 solves and the constraints it works within (both 30 years ago and now). We sure could have a better one now, once we've learned all the lessons and know what good parts to keep, but this critique of bitpacking is misplaced.
The problem with ASN.1 is that it is big and complicated, you only need a fraction of it for cryptography, and it isn't really used for anything outside of PKI anymore.
It wouldn't be as bad if asn.1 had caught on more as a general-purpose serialization format and there were ubiquitous decent libraries for dealing with it. But that didn't happen. Probably partly because there are so many different representations of asn.1.
A bespoke serialization specifically for certificates might actually have aged better, if it was well designed.
Assuming there are some libraries for it, would this make a pretty good case for LLM-generated ports of these existing libraries into other languages or onto other OSs/platforms?
One implementation could be treated as "the spec".
I've been in IT 30+ years, been running DNS, web servers, etc. since at least 1994. I haven't bothered with DNSSEC due to perceived operational complexity. The penalty for a screw up, a total outage, just doesn't seem worth the security it provides.
How simple sysadmin was in 1994 with no cryptography on any protocol. Everything could be easily MITM'd. Your credit card number would get jacked left and right in the 90s.
Not in '94, sure. But a couple of years later it was common and SSL was still uncommon, for a bunch of reasons, and also everyone was storing the card numbers in plaintext on their servers too.
Telnet was sniffed. IRC was being sniffed and logged.
And your mailman can also just open your letters. So what, it mostly doesn't happen in developed countries. Not everything needs an airtight technical solution, we have way less costly ways to deal with unwanted behavior.
That was my experience too, until I decided that having just run email systems for 30-odd years, when HN says that is unnatural, piqued my weird side or something!
I ran up three new VMs on three different sites. I linked all three systems via a private Wireguard mesh. MariaDB on each VM is bound to the wg IP, with stock replication from the "primary". PowerDNS runs across that lot. One of the VMs is not available from the internet and has no identity within the DNS. The idea is that if the Eye of Sauron bears down on me, I can bring another DNS server online quite quickly and fiddle the records to bring it online. It also serves as a third authority for replication.
I also deployed https://github.com/PowerDNS-Admin/PowerDNS-Admin which is getting on a bit and will be replaced eventually but works beautifully.
Now I have DNS with DNSSEC and dynamic DNS and all the rest.
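The signing step itself is presumably something like this with pdnsutil (a sketch; the zone name is a placeholder), and PowerDNS will look after everything else:

$ pdnsutil secure-zone example.de   # generates keys and signs; PowerDNS maintains the RRSIGs from here on
$ pdnsutil show-zone example.de     # prints the DS records to hand to the parent/registrar

Grab a test zone and work it all out first, it will cost you not a lot, and then go for "production". My home systems are DNSSEC signed.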
It's gonna be all .de domains once caches dry out; anything that still works right now is bound to eventually fail until the underlying issue is resolved.
Almost certainly /s.
"Danke Merkel" ("Thanks Merkel") was once a sincere criticism from conservatives regarding her policies (esp. during 2015 refugee crisis), but it quickly evolved into a sarcastic, deadpan joke used to blame her for literally anything that goes wrong in daily Germany - even years after she left. Interesting phenomenon...
If using an open resolver, i.e., a shared DNS cache, e.g., a third-party DNS service such as Google, Cloudflare, etc., then it might fail, or it might not. It depends on the third-party DNS provider.
https://datatracker.ietf.org/meeting/118/materials/slides-11...
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
Zones I may care about:
- Amazon.com: unsigned
- My banks: unsigned
- Hacker News: unsigned
- Email that I do not host: unsigned
- My power companies' billing: unsigned
- I found some! id.me and irs.gov are signed.
[1] - https://technologychecker.io/blog/dnssec-adoption
https://dnssecmenot.fly.dev/
That's cool, ty for that. The only one I put credentials into is Amazon, and it is unsigned. [1] There probably needs to be a DNSSECv2/bis that reduces risk somehow to get more adoption.
[1] - https://dnssec-analyzer.verisignlabs.com/amazon.com
I was STRESSING tf out because I wasn't able to connect to my services & apps through my domains like at all .. they only work when using my phone data ? .. thank god it's not my fault this time
I work with a few people specialised in IT security, and some of them take their jobs too seriously and will "lock down" everything to the point that it becomes a very real risk that they lock out everyone including themselves.
Fundamentally, security is a solution to an availability problem: The desire of the users is for a system to remain available despite external attack.
Systems that become unavailable to everyone fail this requirement.
A door with its keyhole welded shut is not "secure", it's broken.
Security is not just a solution to availability. It is also to keep sensitive data (PII, or business secrets, or passwords, or cryptographic private keys, and so on) away from the hands of bad actors.
If I'm unable to use Amazon for 24 hours it doesn't really matter. If a photocopy of my passport is leaked, that's worries and potential troubles for years.
Or alternatively:
Security = (exclude unauth'd reads) + (exclude unauth'd writes) + (include auth'd reads and auth'd writes)
Gotta satisfy all parts in order to have security.
If you squint at it, you can convert all three to just availability.
Confidentiality = available to us, but nobody else.
Integrity = available to us in a pristine condition.
It's a bit reductive, I'll admit, but it can be a useful exercise, in the same way that everything in an economy can be reduced to units of either "human time", "money" or "energy". Roughly speaking they're interchangeable.
E.g.: What's the benefit to you if your data is so confidential that you can't read it either? This is a real problem with some health information systems, where I can't access my own health records! Ditto with many government bureaucracies that keep my records safe and secure from me.
maybe? I'm using PiHole and 8.8.8.8/1.1.1.1 as upstream, and both options show "DNSSEC" next to their options in settings, so I assumed DNSSEC was enabled (unless I have to enable this somewhere else as well?)
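A quick way to test whether validation is actually happening (a sketch; dnssec-failed.org is a deliberately broken test zone, and the Pi-hole address is a placeholder):

$ dig dnssec-failed.org @192.168.1.2
# a validating resolver answers SERVFAIL for this intentionally broken zone;
# if an A record comes back, something in the chain is not validating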
Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability.
The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.
Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available.
DENIC asks all affected parties for their understanding.
For further enquiries, DENIC can be contacted via the usual channels.
Kind-of. But there are worse things than outages when it's PKIs we're talking about. DNSSEC is also extremely opaque and unmonitored. Any compromise will not be noticed. Nor will anyone have any recourse against misbehaving roots.
Fun fact: CloudFlare has used the same KSK for the zones it serves for more than a decade now.
Which is fine. Not because KSK rollover is supposedly complicated, but if you can't manage to keep your private keys and PKI safe in the first place then key rotation is just a security circus trick. But if you do know how to keep them safe, then...
It is not fine. Keeping key material safe is not a boolean between "permanently safe" and "leaks immediately".
Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
> Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
Nope. Key material rotation is just circus when it's done for the sake of rotation.
> For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
The point of rotation for these kinds of keys is that it limits the blast radius of what happens if an employee compromises such a key. This is sort of like how there are one or two die-hard PGP advocates who have come up with a whole Cinematic Universe where authenticated encryption is problematic ("it breaks error recovery! it's usually not what you want!") because mainstream PGP doesn't do it. Except here, it's that key rotation is bad, because of how often DNSSEC has failed to successfully pull off coordinated key rotations.
I can see periodic rotations being used as a way to keep up operational experience. This is indeed a valid reason, although it needs to be weighed against the increased risk of compromise due to the rotation procedure itself.
I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.
And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.
I'm glad we agree about DNSSEC, but the rationale I'm giving you for key rotation is the same reason we use short-lived secrets everywhere in modern cryptosystems. It's not controversial (except among Unix systems administrators).
Oh, I never disagreed about the state of DNSSEC. It's horrible. Along with the rest of the DNS infrastructure (I just had the reason to remember the DNS haiku again today, unrelated to .de). My disagreement is that I believe that DNSSEC should be fixed, rather than abandoned. And I believe that this does not actually require all that much work.
And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).
> Nope. Key material rotation is just circus when it's done for the sake of rotation.
I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.
On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.
On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."
The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.
[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.
Let's Encrypt going down isn't equivalent to a rant about how encryption was a terrible idea from the very beginning and we should all just use unencrypted traffic.
I host my blog on HTTP/1.1 only. But I also have an amateur radio station and I listen occasionally to (unencrypted!) air traffic frequencies around nearby airport.
That said, the last few dnssec posts that got traction, tptacek tends to be at least 20% of the comments alone (ex, 55/259), ignoring word count. Today seems calm
I've considered hard-coding some addresses into firmware as a fallback for a DNS outage (which is more likely than not just misconfigured local DNS). Events like this help justify this approach to the unconcerned.
The global and distributed system relies on the system actually returning valid responses. If the root servers are broken, whether it's a problem with RRSIG records or A records, the TLD is broken.
If my domains' DNS servers start pointing at localhost, that doesn't mean DNS is a broken protocol.
On a slightly unrelated note, I was setting nameservers for two .de domains a few weeks ago and thought my provider was being crazily strict, because they kept getting rejected. Turns out you can't point to a nameserver until that nameserver has a zone for the domain, and you can't use nameservers from two providers unless those two providers are both in the NS records at both ends.
Common pain point with DNSSEC. It's brutal in the domain industry, because when you buy a name with DNSSEC enabled it oftentimes can't be set up to resolve due to these sorts of issues. Typically the seller needs to deactivate first.
Crazy. I can't remember an incident like this ever happening before, and it's still not fixed? .de is probably the most important unrestricted domain after .com from an economic perspective. Millions of businesses are "down".
> For instance, the name "www.nytimes.com" corresponds to nine different computers that answer requests for The New York Times on the Web, one of which is 199.181.172.242
How is 1/10th the size of number 1 and 2 COMBINED small? In what world is that a small number? Especially as those two are 1.8 billion people vs 0.08 billion for Germany
I have never used DNSSEC and never really bothered implementing it, but do I understand it correctly that we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it which now breaks because the central organisation managing this certificate has an outage taking basically all domains with them?
What you see here is decentralisation working. The issue is with the operator of the de TLD, and as such only that TLD is affected.
DNS is not decentralised in such a way that multiple organisations run the infrastructure of a TLD; a TLD is always run by a single entity (.com and .net are operated by Verisign).
So the specific issue the operator has does not change the impact.
Resolvers are free to cache each TLD's keys. There's a finite, well-known list of TLDs and their keys - you can download all the root zone data from IANA: https://www.iana.org/domains/root/files (it's a few megabytes in uncompressed text form)
The world might be a little bit better with more decentralization of the root zone.
> which now breaks because the central organisation managing this certificate has an outage
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
> DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understood the TTL for those keys is different and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
[0]:
$ dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
> but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down?
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
$ dig +nocmd +nocomments +nostats +dnssec @any.ca-servers.ca. maxchernoff.ca. DS
;maxchernoff.ca. IN DS
maxchernoff.ca. 86400 IN DS 62673 15 2 487B95FEFF04265826F037C9DB2E1F14FF9ADBF2C7BE246A2B9F9BFD 481BE928
maxchernoff.ca. 86400 IN RRSIG DS 13 2 86400 20260512131336 20260505104433 46762 ca. ppc9LrWniPWdAI2Xq1g3FrYJGQVYayA5TtgFRkJfqOqNfe6zu/n0gwti IO3c9pOoUpIum5gPB6GLOGbGU+sfhg==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. DNSKEY
;maxchernoff.ca. IN DNSKEY
maxchernoff.ca. 86400 IN DNSKEY 257 3 15 DYs9mPDMRx/hQ9R9iGLi1Ysx1eFdhlXeCujY6PqJWeU=
maxchernoff.ca. 86400 IN RRSIG DNSKEY 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. RgPyEvB/kjXIvoidRNF/hfm7utzDs0kxXn4qJL17TUAVYOdbLl0Vd8zt E52bGBBFv2TNEnf9O9LkiT2GBH0jAA==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. A
;maxchernoff.ca. IN A
maxchernoff.ca. 86400 IN A 152.53.36.213
maxchernoff.ca. 86400 IN RRSIG A 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. bRfTVHnMjCFRaIh5uc0aT1vD4yh1UZrqOZDRunLbxFI1eth6nNlTiOOC xti7axVoXwB6VAoHOAnW0nL0eeJNDQ==
Thanks for explaining. I thought that once any key in the chain-of-trust of any domains DNSSEC expired the whole record went stale but turns out that was a wrong assumption. If the DNSKEY and the other records have the same TTL and the DNSSEC verification is also "cached" then that makes a lot more sense.
> I thought that once any key in the chain-of-trust of any domains DNSSEC expired the whole record went stale but turns out that was a wrong assumption.
No, that actually is true, but I think (?) that the part that you were missing is that DNSSEC records are mostly the same as any other record, so they can be cached the same way. And since most resolvers are DNSSEC-enabled these days, they'll tend to request (and therefore cache) the DNSSEC records at the same time as the regular records.
There are tons of edge cases here, but it should hopefully be pretty rare for a cache to have a current A/AAAA record and stale/missing DNSSEC records.
> the DNSSEC verification is also "cached"
Technically the verification itself isn't cached, but since verification only depends on the chain of DNSSEC records, and those records are cached, it has the same effect.
DNSSEC doesn't change the degree to which DNS is decentralized. It's always been hierarchical. In the absence of caching, every DNS query starts with a request to the root DNS servers. For foo.com or foo.de, you first need to query the root servers to determine the nameservers responsible for .com and .de. Then you contact the .com or .de servers to ask for the foo.com and foo.de nameservers. All DNSSEC does is add signatures to these responses, and adds public keys so you can authenticate responses the next level down.
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, this list also includes public keys of those root servers, which also rotate, slowly.
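For illustration, the root trust anchor is small enough to quote; this is the published DS form of the current root KSK (key tag 20326), as distributed in files like unbound's root.key:

. IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D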
A nation-state actor picking the right time to sabotage a tiny part of the key rotation process. On Monday someone cut major fiber lines; on Tuesday DENIC is failing.
Unironically yeah, we are at the level of weaponizable sophistication that this metaphorical dick waving you are suggesting is probably something that happens
Interesting "bus problem" to have in a scenario where everyone who is qualified, experienced and trusted enough to commit lives changes (or perform a revert, undo results of a botched maintenance, etc) in an emergency situation is not completely sober.
Sobriety is just one factor to be weighed in an emergency situation. 30 years ago I was at a ski resort with about 50 friends having a drinking competition in the resort's main bar. Late that night two ski lodges collapsed, trapping people inside. Around midnight, soon after the winner was announced, the police entered and asked "who's able to drive a crane truck?" The winner of the competition put his hand up and informed them of how much he had had to drink. "Don't care," they said, so he drove a crane big enough to lift a building up a single-lane 35 km mountain road in nighttime ice conditions. (The crane made it, but sadly most of the people in the ski lodges didn't. https://en.wikipedia.org/wiki/1997_Thredbo_landslide )
Sounds like Australian police. I remember 15 or so years ago being in a big team assisting the Australian police with something on a remote farm. There were 20 people that needed to be taken back to base and one 10 seater car. Someone asked the police if everyone could get in the car and policeman shrugged and said you can try. So the policeman drove a four wheel drive across farmland with 16 people stuffed into the back.
On Monday there was a huge outage affecting several cities quite close to Frankfurt because someone cut a major fiber line; today DENIC is having a party, and right when everyone is drunk this happens because some post-rotation task cannot be completed.
From my analysis, DENIC re-signed the .de zone today (May 5, 2026, ~17:49 UTC). The DNSSEC signature (RRSIG) for the NSEC3 record covering the hash range of nearly the entire .de TLD is cryptographically broken (malformed).
Things seem to be on their way up now, and https://status.denic.de/ is working again, at least from here.
DENIC's status page currently says "Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability.
The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.
ok i picked a bad day to move from one registrar to another... i just spent the last hour frantically trying to figure out whether the new registrar screwed us or the old registrar was screwing us...
It is indeed a bit sad that Cloudflare had to turn off DNSSEC completely. But I completely understand that they don't have a production-ready, tested path to override DNSSEC validation for only some domains.
It's a pretty hard argument to work around: WebPKI certificates should go in the DNS, and also the largest DNS providers might at any moment decide not to validate DNSSEC anymore to get through an outage.
If there's going to be a single point of failure in front of your website, that single point of failure may as well be the only single point of failure instead of having two single points of failure, and it's probably important that people can't spoof responses.
Nobody had to hack it. A system at DENIC broke, and so Cloudflare turned off DNSSEC validation for all of their users accessing .de. If DNSSEC was actually important for the security model of those users, that would be a huge deal.
If DNSSEC is part of your security model, you want local validation, not reliance on a third-party resolver that you don't have a contract with.
Beyond that, DNS has the AD bit. If you need DNSSEC-secured data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
Yes, it's a crappy outcome, but endpoints can still choose to enforce this. Further, it's not a persuasive argument against more DNSSEC usage, since if there was more DNSSEC usage then resolvers would be more reluctant to disable it.
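For example, the difference is visible in the header flags (a sketch; the TLSA owner name is a placeholder):

$ dig _443._tcp.example.de TLSA +dnssec @1.1.1.1
# "flags: qr rd ra ad" means the resolver validated the answer; when the resolver
# stops validating, the "ad" bit disappears and DANE-style clients must fail closed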
Probably the most common reason to use DNSSEC is to check a box on a list of compliance rules. And I don't think this will change anything for people who need DNSSEC for compliance.
There's no commercial compliance regime that requires DNSSEC (FedRAMP might be the only exception --- I'm uncertain about the current state of FedRAMP DNSSEC rules --- but that makes sense given that DNSSEC is a giant key escrow scheme.)
FedRAMP requires it, although like many requirements, you may be able to get out of it if you have a good reason and/or your sponsoring agency doesn't care about it.
There are also some large businesses that require, or strongly pressure SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
Which businesses are those? (I ask because if they're North American, I have a pretty good sense of which large North American businesses even have DNSSEC signatures set up, and it's not many; small enough that you can easily memorize the list.)
I just checked it. You mean the very small open padlock icon? The era of browsers warning loudly about HTTP was a decade ago, it got reversed due to pushback.
Well I checked both Chrome and Firefox on mobile and my desktop and they were all much more obvious than just an "open padlock". They both said "Not Secure" and in Firefox it was bright red text. Also in incognito mode Chrome refused to even open the site without a full screen warning. They all make it super clear non-HTTPS sites are not secure so I'm not really sure what your point is?
neverssl.com works fine for me, only a small warning in the place where the padlock usually is, that no-one checks anyway.
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
I doubt it. The root cause of this was a root server misconfiguration or bug. It happened to DNSSEC records this time, which is a pain, but next time it might as well flip bits or point to wrong IP addresses instead.
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
Given how amateurish German IT operations are, there is no guarantee whatsoever that there will be a post-mortem, nor whether it will then make it out in under 3-6 months with all the necessary approvals.
"Die Störung ist inzwischen behoben und alle Systeme laufen wieder stabil. Die genaue Ursache wird derzeit noch analysiert. Sobald belastbare Erkenntnisse vorliegen, wird DENIC diese transparent zur Verfügung stellen."
translation:
‘The disruption has now been resolved and all systems are running smoothly again. The exact cause is currently being investigated. As soon as reliable findings are available, DENIC will make them publicly available.’
My poor fellow. You wrote about how something is a bad tool for a long list of serious reasons. Then it failed spectacularly because everybody decided to depend on it anyway - exactly what you were cautioning against. But somehow you have to respond to people who think you are the one who got it wrong! As a third party the whole affair gave me a good chuckle at least ;)
Even if example.com is unsigned, the delegation from .com to example.com will still be signed (including an attestation that example.com is unsigned). So lack of DNSSEC adoption by users of the TLD wouldn't save them here.
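That attestation is easy to observe directly (a sketch; the domain is a placeholder, a.nic.de is one of the .de authoritatives):

$ dig example.de DS @a.nic.de +dnssec
# an unsigned delegation returns no DS record, but the authority section carries
# a signed NSEC3 proving its absence; an RRSIG over exactly such an NSEC3 is what broke here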
This is the kind of system failure that we need really good and well-tested disaster recovery plans for. While not necessary this time, DENIC and any critical infrastructure provider should be able to rebuild their entire infrastructure from scratch in a tolerable amount of time (rather days than hours in the case of a full rebuild). Importantly, the disaster recovery plan has to work without reliance on the failing system itself, and also without adjacent systems that might have hidden dependencies on it.
I'm really not too close to Denic and know nothing about their internals, but just close enough to have experienced the stress of someone working for DENIC second hand during the outage. From the very limited information I happened to gather DENIC had some trouble in addressing the issue because, surprise, infrastructure that they need to do so runs on de domains. [1]
I'm convinced there are all kinds of extended cyclic dependencies between different centralization points in the net.
If some important backbone of the internet is down for an extended time, this will absolutely cause cascading failures. And these central points of failure are only getting worse. I love Let's Encrypt, but if something causes them to hard-fail, things will go really bad once certificates start to expire.
We need concrete plans to cold start extended parts of the internet. If things go really bad once and communication lines start to fail, we're in for a bad time.
Maybe governments have redundant, ultra resistant, low tech communication lines, war rooms and a list of important people in the industry who they can find and put in these war rooms so they can coordinate the rebuild of infrastructure. But I doubt it.
[1] I don't know if there is some kind of disaster plan in the drawer at DENIC that would address this. I don't mean to allege anything against DENIC specifically, but broadly speaking about companies and infrastructure providers, I would not be surprised if there was absolutely no plan on what to do if things really go down and how to cold start cyclic dependencies or where they even are.
We had a short discussion about migrating to .com, but decided risk != reward, as no one would know the new TLD.
I assume there are a couple of people working for denic who had a stressful night..
DNS is a lookup service that runs on the internet.
Internet routing of IP packets is what the internet does, and that is working fine (for a given value of fine).
You remind me of someone using the term "the internet is down" that really means: "I've forgotten my wifi password".
We observed issues on a non-DNSSEC .de domain at 19:45Z and confirmed around 20:12Z it wasn't just us, but also more high profile domain names.
$ nslookup bmw.de
Non-authoritative answer:
Name:    bmw.de
Address: 160.46.226.165

$ nslookup www.bmw.de
;; Got SERVFAIL reply from 8.8.8.8, trying next server
Server:   8.8.4.4
Address:  8.8.4.4#53

** server can't find www.bmw.de: SERVFAIL
dark-star | 14 hours ago
MikeNotThePope | 10 minutes ago
dark-star | 14 hours ago
jamietanna | 14 hours ago
kaltsturm | 14 hours ago
iknowstuff | 14 hours ago
irundebian | 14 hours ago
mschuster91 | 13 hours ago
Medicineguy | 4 hours ago
tommit | 3 hours ago
nkydr0i0 | 3 hours ago
[0] https://en.wikipedia.org/wiki/Thanks,_Obama
merb | 14 hours ago
Looks like it failed after a maintenance: https://www.namecheap.com/status-updates/planned-denic-de-re...
https://status.denic.de/
gpvos | 13 hours ago
1vuio0pswjnm7 | 14 hours ago
DNSSEC not working
If using an open resolver, i.e., a shared DNS cache such as a third-party DNS service (Google, Cloudflare, etc.), then it might fail or it might not. It depends on the third-party DNS provider.
https://datatracker.ietf.org/meeting/118/materials/slides-11...
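A quick way to compare providers during an incident like this (a sketch; results depend entirely on each resolver's cache state, and bmw.de is just an example name):
$ dig +short www.bmw.de @8.8.8.8      # may still answer from cache
$ dig +short www.bmw.de @1.1.1.1      # may SERVFAIL
$ dig +cd +short www.bmw.de @1.1.1.1  # the CD bit skips DNSSEC validation entirely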
jeroenhd | 4 hours ago
It's the cryptographic version of that one time the same TLD told the world domains starting with certain letters didn't exist: https://www.theregister.com/2010/05/12/germany_top_level_dom...
lxgr | 14 hours ago
kuerbel | 14 hours ago
Good news though: if you add domain-insecure: "de" to your unbound config, everything works fine.
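For reference, that's one line in the server clause of unbound.conf; note it tells unbound to accept unsigned answers for the whole TLD, so it should be removed once DENIC publishes good signatures again:
server:
    domain-insecure: "de"
$ unbound-control reload   # if remote control is enabled; otherwise restart the service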
victorbjorklund | 14 hours ago
chromehearts | 14 hours ago
Bender | 14 hours ago
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
Zones I may care about:
- Amazon.com: unsigned
- My banks: unsigned
- Hacker News: unsigned
- Email that I do not host: unsigned
- My power company's billing: unsigned
- I found some! id.me and irs.gov are signed.
[1] - https://technologychecker.io/blog/dnssec-adoption
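For anyone who wants to reproduce this list: asking for a zone's DS record is a quick signed/unsigned check, since no DS at the parent means no chain of trust (a sketch; output depends on when you run it):
$ dig +short DS amazon.com   # empty answer: unsigned
$ dig +short DS irs.gov      # DS record returned: signed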
tptacek | 10 hours ago
https://dnssecmenot.fly.dev/
Bender | 7 hours ago
[1] - https://dnssec-analyzer.verisignlabs.com/amazon.com
V__ | 13 hours ago
__michaelg | 14 hours ago
9753268996433 | 14 hours ago
sgbeal | 4 hours ago
But not before 8am.
throw1234567891 | 14 hours ago
sunaookami | 14 hours ago
EDIT: it says "Service Disruption" now
MASNeo | 14 hours ago
Edit: Now even the humor is gone.
sunaookami | 13 hours ago
EDIT: called it...
lschueller | 13 hours ago
niklasrde | 13 hours ago
port3000 | 13 hours ago
yorwba | 5 hours ago
tom1337 | an hour ago
gruselhaus | 13 hours ago
cubefox | 11 hours ago
Zopieux | 11 hours ago
chromehearts | 14 hours ago
Locke80 | 14 hours ago
AndroTux | 14 hours ago
chromehearts | 5 hours ago
lschueller | 13 hours ago
[OP] warpspin | 13 hours ago
Cockbrand | 13 hours ago
victorbjorklund | 14 hours ago
jiggawatts | 14 hours ago
Fundamentally, security is a solution to an availability problem: The desire of the users is for a system to remain available despite external attack.
Systems that become unavailable to everyone fail this requirement.
A door with its keyhole welded shut is not "secure", it's broken.
QuantumNomad_ | 14 hours ago
If I’m unable to use Amazon for 24 hours, it doesn’t really matter. If a photocopy of my passport is leaked, that means worries and potential trouble for years.
senkora | 14 hours ago
or alternatively,
Security = (exclude unauth'd reads) + (exclude unauth'd writes) + (include auth'd reads and auth'd writes)
Gotta satisfy all parts in order to have security.
jiggawatts | 13 hours ago
E.g.: What's the benefit to you if your data is so confidential that you can't read it either? This is a real problem with some health information systems, where I can't access my own health records! Ditto with many government bureaucracies that keep my records safe and secure from me.
dnnddidiej | 13 hours ago
Bad UX and bugs are in general not always an availability problem.
If it's hard to get what you want due to bad design but the site is up, the site is still up.
yosamino | 14 hours ago
I am very happy that it doesn't happen more often.
kaltsturm | 14 hours ago
yes indeed
dark-star | 14 hours ago
pw6hv | 14 hours ago
AndroTux | 14 hours ago
dark-star | 14 hours ago
[OP] warpspin | 13 hours ago
kaltsturm | 14 hours ago
As fallback they should use their X account: https://x.com/denic_de
dgellow | 13 hours ago
May 5, 2026 23:28 CEST
May 5, 2026 21:28 UTC
INVESTIGATING
Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible. Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available. DENIC asks all affected parties for their understanding. For further enquiries, DENIC can be contacted via the usual channels.
elch | 13 hours ago
kaltsturm | 13 hours ago
sunaookami | 13 hours ago
niklasrde | 13 hours ago
tarruda | 14 hours ago
pocksuppet | 14 hours ago
aberoham | 14 hours ago
mike-cardwell | 13 hours ago
0123456789ABCDE | 13 hours ago
Avamander | 13 hours ago
Fun fact: CloudFlare has used the same KSK for the zones it serves for more than a decade now.
daneel_w | 12 hours ago
Avamander | 11 hours ago
Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
For all we know, some ex-employee might be walking around with that KSK, theoretically able to use it for god knows what for another decade.
cyberax | 9 hours ago
Nope. Key material rotation is just a circus when it's done for the sake of rotation.
> For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
tptacek | 8 hours ago
cyberax | 7 hours ago
I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.
And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.
tptacek | 7 hours ago
cyberax | 6 hours ago
And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).
Avamander | 2 hours ago
Also possible, but that'd be an active threat that has some probability of being caught.
Never replacing keys allows permanent compromise that can only be caught if someone directly observes misuse.
Though nobody monitors DNSSEC like that, nor uses it, so it's fine from that aspect I guess.
jcgl | 2 hours ago
I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.
On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.
On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."
The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.
[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.
pocksuppet | 9 hours ago
tptacek | 8 hours ago
greensh | 4 hours ago
sam_lowry_ | 2 hours ago
0123456789ABCDE | 2 hours ago
account42 | 24 minutes ago
apaprocki | 12 hours ago
tptacek | 12 hours ago
petee | 35 minutes ago
That said, in the last few DNSSEC posts that got traction, tptacek tended to be at least 20% of the comments by himself (e.g., 55/259), ignoring word count. Today seems calm.
elevation | 14 hours ago
whalesalad | 13 hours ago
cedilla | 13 hours ago
The only problem with DNSSEC here is that it's complex.
akerl_ | 11 hours ago
jeroenhd | 4 hours ago
If my domains' DNS servers start pointing at localhost, that doesn't mean DNS is a broken protocol.
siginator | 14 hours ago
aweiher | 13 hours ago
dnnddidiej | 13 hours ago
dwedge | 14 hours ago
whalesalad | 14 hours ago
siva7 | 13 hours ago
lschueller | 13 hours ago
[OP] warpspin | 13 hours ago
snapetom | 13 hours ago
lschueller | 13 hours ago
thih9 | 13 hours ago
https://en.wikipedia.org/wiki/Kehrwoche
layer8 | 12 hours ago
pimeys | 12 hours ago
greyhound | 13 hours ago
9dev | 13 hours ago
rasz | 13 hours ago
dgellow | 12 hours ago
croes | 10 hours ago
skrebbel | 5 hours ago
croes | 2 hours ago
carstenhag | 12 hours ago
daneel_w | 12 hours ago
sgc | 10 hours ago
1718627440 | 21 minutes ago
Cockbrand | 13 hours ago
lschueller | 13 hours ago
rwmj | 13 hours ago
https://archive.nytimes.com/www.nytimes.com/library/cyber/we...
ctippett | 13 hours ago
AndroTux | 13 hours ago
HDBaseT | 13 hours ago
trollbridge | 13 hours ago
Muromec | 11 hours ago
TacticalCoder | 10 hours ago
You can both be the 3rd biggest economy in the world and still only be 1/10th of US+China GDPs combined.
And only three companies in the Top 100 for Germany:
https://companiesmarketcap.com/
Germany is the kingdom of the "mittelstand": many, many, many SMEs.
Both GP and you are right: it's the 3rd largest economy in the world and yet it's simply not that big.
https://en.wikipedia.org/wiki/Mittelstand
In other words: I expect this German DNS SNAFU to have 0.000000001% impact on the world's GDP this year.
NooneAtAll3 | 10 hours ago
phillipseamore | 9 hours ago
ulfw | 9 hours ago
tommit | 3 hours ago
itsyonas | 5 hours ago
126 trillion USD * 0.00000000001 = 1260 USD
I'm pretty sure the impact was higher than that ;)
carstenhag | 12 hours ago
8organicbits | 11 hours ago
dizhn | 3 hours ago
whalesalad | 13 hours ago
g4cg54g54 | 13 hours ago
-> no idea if that also "heals" anyone who had dnssec on before.
-> no idea if maybe they need to roll back something and then rebreak the new dnssec i made a minute later lol...
tom1337 | 13 hours ago
Medowar | 13 hours ago
So whatever issue the operator is having doesn't change the impact.
AndroTux | 13 hours ago
pocksuppet | 12 hours ago
The world might be a little bit better with more decentralization of the root zone.
gucci-on-fleek | 13 hours ago
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
[0]: https://news.ycombinator.com/item?id=47409728
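One way to see this in practice: BIND's delv validates answers with its own copy of the root trust anchor, no matter which resolver it queries (a sketch; output abridged):
$ delv @1.1.1.1 denic.de A
; fully validated
denic.de. [...] IN A [...]
denic.de. [...] IN RRSIG A [...]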
tom1337 | 12 hours ago
But how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understand, the TTL for those keys is different, and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
[0] dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
gucci-on-fleek | 12 hours ago
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
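Illustratively, with a made-up zone and an abridged signature, matching TTLs look like this:
example.org.  3600  IN  A      192.0.2.10
example.org.  3600  IN  RRSIG  A 13 2 3600 [...]
Same 3600s TTL on both records, so any resolver that caches the A record caches its RRSIG for exactly as long.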
tom1337 | 12 hours ago
gucci-on-fleek | 12 hours ago
No, that actually is true, but I think (?) that the part that you were missing is that DNSSEC records are mostly the same as any other record, so they can be cached the same way. And since most resolvers are DNSSEC-enabled these days, they'll tend to request (and therefore cache) the DNSSEC records at the same time as the regular records.
There are tons of edge cases here, but it should hopefully be pretty rare for a cache to have a current A/AAAA record and stale/missing DNSSEC records.
> the DNSSEC verification is also "cached"
Technically the verification itself isn't cached, but since verification only depends on the chain of DNSSEC records, and those records are cached, it has the same effect.
wahern | 12 hours ago
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, resolvers similarly ship the root zone's public key, the trust anchor, which also rotates slowly.
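On unbound, for instance, the anchor bootstrap looks roughly like this (the path is distro-dependent; output abridged):
$ unbound-anchor -a /var/lib/unbound/root.key
$ cat /var/lib/unbound/root.key
.  172800  IN  DNSKEY  257 3 8 AwEAA[...]  ; the root KSK trust anchor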
0x80h | 13 hours ago
nfreising | 13 hours ago
edo888 | 13 hours ago
kaltsturm | 13 hours ago
sanbaideng | 13 hours ago
Aldipower | 13 hours ago
FinnKuhn | 13 hours ago
SOLAR_FIELDS | 13 hours ago
SpaceNoodled | 13 hours ago
bflesch | 13 hours ago
maybe someone is showing off?
SOLAR_FIELDS | 7 hours ago
walrus01 | 13 hours ago
Muromec | 12 hours ago
femto | 12 hours ago
jamesfinlayson | 11 hours ago
kaltsturm | 13 hours ago
yowmamasita | 13 hours ago
cwassert | 2 hours ago
Surely a wealth tax is not worth mentioning.
bwb | 28 minutes ago
They made the point that more immigration / growth wouldn't help fix the core problem if they don't fix that asap.
yassiniz | 13 hours ago
bflesch | 13 hours ago
There are too many coincidences happening.
kaltsturm | 13 hours ago
Animux | 12 hours ago
Oarch | 12 hours ago
kaltsturm | 12 hours ago
edb_123 | 12 hours ago
DENIC's status page currently says "Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible."
jiveturkey | 12 hours ago
There’s no way it’s DNS
It was DNSSEC
taf2 | 12 hours ago
amelius | 12 hours ago
https://edition.cnn.com/2026/05/01/politics/us-troop-withdra...
tom1337 | 12 hours ago
cluckindan | 12 hours ago
amluto | 12 hours ago
vendemiat | 11 hours ago
tptacek | 11 hours ago
tptacek | 12 hours ago
amluto | 12 hours ago
tptacek | 11 hours ago
pocksuppet | 8 hours ago
akerl_ | 8 hours ago
phicoh | 3 hours ago
Beyond that, DNS has the AD bit. If you need DNSSEC secure data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
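You can see that bit directly in dig's header; against a validating resolver the flags include ad, and it disappears if the resolver stops validating or you set +cd (abridged):
$ dig denic.de SOA @1.1.1.1
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1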
tptacek | 8 hours ago
farfatched | 8 hours ago
thayne | 9 hours ago
tptacek | 8 hours ago
thayne | 6 hours ago
There are also some large businesses that require, or strongly pressure SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
tptacek | 6 hours ago
pocksuppet | 8 hours ago
liveoneggs | 8 hours ago
weird-eye-issue | 8 hours ago
pocksuppet | 6 hours ago
weird-eye-issue | 5 hours ago
pocksuppet | 4 hours ago
weird-eye-issue | 3 hours ago
red_admiral | 2 hours ago
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
weird-eye-issue | 44 minutes ago
Ah yes I think the HSTS issue is what I was thinking of
whh | an hour ago
fulafel | 6 hours ago
elp | 2 hours ago
I haven't been able to find any cases of genuine dns hijack attacks in the last few years. Would love to know if anyone else can?
Only about 40% of the crypto companies seem to use DNSSEC. Seems like a target-rich environment.
jeroenhd | 4 hours ago
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
liveoneggs | 8 hours ago
weird-eye-issue | 8 hours ago
account42 | 27 minutes ago
petee | 22 minutes ago
Zopieux | 12 hours ago
retired | 11 hours ago
Culonavirus | 11 hours ago
"It all began with the decommissioning of the last nuclear power plant, ..."
alper | an hour ago
Tepix | an hour ago
https://blog.denic.de/denic-informiert-uber-die-behebung-der...
"Die Störung ist inzwischen behoben und alle Systeme laufen wieder stabil. Die genaue Ursache wird derzeit noch analysiert. Sobald belastbare Erkenntnisse vorliegen, wird DENIC diese transparent zur Verfügung stellen."
translation:
‘The disruption has now been resolved and all systems are running smoothly again. The exact cause is currently being investigated. As soon as reliable findings are available, DENIC will make them publicly available.’
SEJeff | 12 hours ago
https://sockpuppet.org/blog/2015/01/15/against-dnssec/
betaby | 12 hours ago
tptacek | 11 hours ago
SAI_Peregrinus | 10 hours ago
sgc | 10 hours ago
tptacek | 10 hours ago
cyberax | 9 hours ago
tptacek | 8 hours ago
cyberax | 7 hours ago
DENIC screwed up the TLD itself, and .com/.net are just as susceptible.
profmonocle | 3 hours ago
theMMaI | 5 hours ago
basilikum | 11 hours ago
I'm really not too close to DENIC and know nothing about their internals, but just close enough to have experienced, second hand, the stress of someone working at DENIC during the outage. From the very limited information I happened to gather, DENIC had some trouble addressing the issue because, surprise, infrastructure they need to do so runs on .de domains. [1]
I'm convinced there are all kinds of extended cyclic dependencies between different centralization points in the net.
If some important backbone of the internet is down for an extended time, this will absolutely cause cascading failures. And these central points of failure are only getting worse. I love Let's Encrypt, but if something causes them to hard-fail, things will go really bad once certificates start to expire.
We need concrete plans to cold start extended parts of the internet. If things go really bad once and communication lines start to fail, we're in for a bad time.
Maybe governments have redundant, ultra resistant, low tech communication lines, war rooms and a list of important people in the industry who they can find and put in these war rooms so they can coordinate the rebuild of infrastructure. But I doubt it.
[1] I don't know if there is some kind of disaster plan in the drawer at DENIC that would address this. I don't mean to allege anything against DENIC specifically; broadly speaking about companies and infrastructure providers, I would not be surprised if there was absolutely no plan for what to do if things really go down, how to cold-start cyclic dependencies, or where those dependencies even are.
NooneAtAll3 | 11 hours ago
aboardRat4 | 11 hours ago
0xbadcafebee | 6 hours ago
baby | 6 hours ago
jdthedisciple | 5 hours ago
adamas | 3 hours ago
alper | an hour ago