There's one point I don't really get and I would be glad if someone could clarify it for me. The author says that even over wifi, the CSMA/CD protocol is not used anymore. Then how does it actually work?
Discussing this, the author explains:
> If you have two wifi stations connected to the same access point, they don't talk to each other directly, even when they can hear each other just fine.
So, each station still has to decide at some point if what it's hearing is for it or not, since it could be another station talking to the AP, or the AP talking to another station. How is that done if not with CSMA/CD (or at least something very similar)?
Thanks. So the author's point in the linked article is wrong; it's the opposite of what they wrote. Contrary to what they say, it is indeed a bus, and it isn't that CSMA/CD is useless, it's that it isn't enough to deal with the situation, so additions have been made to it.
Thanks for your link, that helped clarify this for me!
When you have switches that link two nodes together for just the duration of a one-way transmission, you don't need CSMA/CD. We literally have no use for it. Two computers will never transmit onto the same Ethernet wire anymore.
WiFi is different of course. However, as the author wrote, your WiFi devices always go through the access point, where they use 802.11 RTS/CTS messages to request and receive permission to send packets. All nodes can see the CTS being broadcast, so they know that somebody is sending something. So even CSMA/CA is getting less useful.
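As a toy illustration of the two mechanisms in play (nothing here is real 802.11; the addresses and durations are made up): stations filter frames by destination MAC, and a frame's duration field sets the NAV (virtual carrier sense) on every station it was *not* addressed to, which is how a CTS silences bystanders:

```python
# Toy model of 802.11-style frame filtering plus virtual carrier sense.
# Not a real protocol implementation; addresses and durations are made up.

class Station:
    def __init__(self, mac):
        self.mac = mac
        self.nav = 0        # network allocation vector, microseconds left
        self.received = []

    def on_frame(self, frame):
        if frame["dst"] == self.mac:
            # Addressed to us: keep it.
            self.received.append(frame["payload"])
        else:
            # Addressed to somebody else: we still honor its duration
            # field and defer (this is how CTS silences bystanders).
            self.nav = max(self.nav, frame.get("duration", 0))

    def can_transmit(self):
        return self.nav == 0

ap_grants = {"dst": "aa:aa", "duration": 300, "payload": "CTS"}
a, b = Station("aa:aa"), Station("bb:bb")
for s in (a, b):
    s.on_frame(ap_grants)   # everyone on the shared medium hears the CTS

# a was granted the medium; b defers for 300 microseconds.
```

So every station does inspect every frame it hears; "not for me" frames aren't discarded outright, their duration field is what keeps the medium coordinated.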
Yes I'm only talking about wifi networks. I get that CSMA/CD itself is getting less useful, but it's because something else is doing its job, not because what it did is useless (that's why I wrote "or something similar" when I asked). Wifi is still, necessarily, a common bus where everyone talks.
CSMA/CD is Collision Detection; CSMA/CA is Collision Avoidance. FYI, the article is from 2017!
For non-WiFi, we don't use CD because everything is bidirectional and every conversation has its own lane; CD isn't needed because there will never be a collision. This goes down to the port level on the switches: the algorithm might still be there, but it isn't used.
For WiFi, CD can never work: by the time you "detect" a collision it's already too late, so detecting is pointless. We need to "avoid" instead, because it's a shared lane or medium, so CA is a necessity. Now, in 2026, we actually don't need or use it as much, since WiFi/802.11 functions more like a switch: with OFDMA and RF beam steering at the PHY (physical) level, the actual radio-frequency side cancels out signals from other devices near you, and we effectively "create" bidirectional lanes that function similarly to a switch.
The article is good and represents how the IETF operates, with an (opinionated) view of what happens inside. We actually need an IETF equivalent for AI. It's actually good and a meritocracy, even though of late the big companies try to corrupt it or get their way; academia still drives and steers it, and all votes count when working groups self-organize. (My last IETF was in 2018, so I'm not sure how it is now in the 2020s.)
How can wifi be a star topology when all clients connect to the base station using the same airwaves? If it really were a star topology, it would also not be possible to use aircrack-ng or other tools to gather data for WPA cracking by passive listening -- that can only happen on a shared medium network.
I think the most accurate classification is that wifi emulates a star topology at OSI layer 2 on top of a layer 1 bus topology.
Moving running computers around and maintaining connection would have required large trucks and very long cables at the time the internet was invented.
The mobility in the context of the article means "changing IP within the same TCP connection".
IP + some dynamic routing handles the situation of "the connection site got nuked and we need to route around it"; it's just not in the protocol itself, it's an additional layer on top of it.
But there's now multipath TCP handover? Weird behaviour to want different network interfaces on different networks to share the same IP, and pass it along like a volleyball?
Wi-Fi and ethernet also have different IPs. And what if you also add Wi-Fi peer-to-peer (AirDrop-ish), or Wi-Fi Tunneled Direct Link Setup (literally Chromecast)?
If a vendor implemented simultaneous Dual Band (DBDC) Wi-Fi, the device can connect on both 2.4 GHz and 5 GHz at the same time, each with its own MAC and IP, because you're connecting to the same network on a different band. Or it can route packets from a 'wan' Wi-Fi to a 'lan' Wi-Fi: share internet from an infrastructure (BSS) Wi-Fi A to a new ad-hoc (IBSS) Wi-Fi network B, with your smartphone as the gateway, on Android.
There's also the IEEE 802.11 standard for wireless access in vehicular environments (WAVE), and EV chargers running IP over the CCS protocol, etc. If all cars need to be 'connected' and 'have a unique address', NAT/CGNAT isn't cutting it either.
There's also IoT. Thread is ipv6 because it's the alternative to routing whatever between wan / lan / zigbee / Z-Wave / etc with a specific gateway at a remote point in the mesh network.
And how about the new DHCP/DNS specs for IPv6: you can now share encrypted DNS servers, DHCP client IDs, unique OUIDs, etc.
It's an infuriating post really. As if IP was only designed for a small scale VPN / overlay network service such as Tailscale.
> But there's now multipath TCP handover? Weird behaviour to want different network interfaces on different networks to share the same IP, and pass it along like a volleyball?
Mobile IP actually wanted to do this, it just never took off (not the least because both endpoints need to understand it to get route optimization). I think some Windows versions actually had partial Mobile IPv6 support.
The source and destination addresses don't change. If a bomb takes out a router in-between (the military scenario DARPA had in mind), it is NOT IP (L3) or TCP (L4) that handles it. Rather it is a dynamic routing protocol that informs all affected routers of the changed route. Since the early days of the Internet, that's been the job of routing protocols.
For smaller internets, protocols such as RIP (limited to 16 hops) broadcast routing information from each still-working router to the other routers. Each router built a picture of the internet (simplifying a bit here: RIP and similar protocols used "distance vector" routing, but other, more advanced routing protocols did give each router a full picture of the internet). So when a packet arrived at a router, that router could forward the packet towards its destination. Such protocols are "interior" routing protocols, used within an ISP's network.
The Internet is too big for such automatic routing and uses an "exterior" routing protocol called BGP. This protocol routes packets from one ISP to the next, using route and connectivity information input by humans. (Again I'm simplifying a bit.)
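The distance-vector idea described above fits in a few lines. A toy sketch (invented topology, RIP-style 16-hop "infinity", none of the real RIP message handling):

```python
# Toy distance-vector routing: each router repeatedly learns routes
# from its neighbors and relaxes them Bellman-Ford style. The topology
# is invented for illustration.

INF = 16  # RIP's "infinity": 16 hops means unreachable

links = {                 # router -> {neighbor: hop cost}
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "C": 1},
    "C": {"A": 1, "B": 1, "D": 1},
    "D": {"C": 1},
}

tables = {r: {r: 0} for r in links}  # each router starts knowing only itself

def exchange_once():
    """One round of neighbors advertising their tables. Returns True if
    anything changed (i.e., the network hasn't converged yet)."""
    changed = False
    for r, neighbors in links.items():
        for n, cost in neighbors.items():
            for dest, dist in list(tables[n].items()):
                candidate = min(dist + cost, INF)
                if candidate < tables[r].get(dest, INF):
                    tables[r][dest] = candidate
                    changed = True
    return changed

while exchange_once():
    pass
```

If a router drops out, the surviving routers re-run the same exchange and converge on routes around it; that re-convergence, not anything in IP or TCP, is what "routes around damage".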
Wifi uses entirely different protocols to route packets between cells.
Fun fact: wifi is not an acronym for anything, the inventors simply liked how it sounded.
It was made to sound like Hi-Fi, which stands for high fidelity, and Wireless; but "wireless fidelity" is a meaningless phrase and not what it was intended to directly mean.
Mobile IPv6 is a thing. I could compile it into the kernel on my mobile machines, main reason to not do it is that I'm currently using a phone as a WiFi hotspot and it doesn't have Mobile IPv6 support.
Mobile IPv6 support is theoretically possible. Practically, like so many cool things you could do with your network, ISPs won't have it. The best you can do is hide it from your ISP by using some tunnel, but then you might as well just use a VPN.
> Now imagine that X changes addresses to Q. It still sends out packets tagged with (uuid,80), to IP address Y, but now those packets come from address Q. On machine Y, it receives the packet and matches it to the socket associated with (uuid), notes that the packets for that socket are now coming from address Q, and updates its cache. Its return packets can now be sent, tagged as (uuid), back to Q instead of X. Everything works! (Modulo some care to prevent connection hijacking by impostors.2)
And how the fuck does anything in-between know where to route it? The article is a blazing beacon of ignorance about everything in-between.
The whole problem with mobile IP is "how do we get intermediate devices to know where to go?" We're back to:
> The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical.
Which the author hinted at, then forgot. We can't have a globally routable, unique, random-esque ID precisely because it has to be hierarchical. Keeping the connection flow ID at L4 instead of L3+L4 changes very little: yeah, you can technically roam the client, except how the fuck would the server know where to send the packet back when the L3 address changes? It would have to get a client packet with the updated L3 address, and until then all packets would go into the void.
But hey, at least it's some progress? NOPE. Nothing at the protocol layer can be trusted before authentication, it would make DoS attacks far easier (just flood the host with a bunch of random UUIDs), and you would still end up doing it the QUIC way: re-implementing all of that stuff after encrypting the insides.
> And how the fuck does anything in-between know where to route it? The article is a blazing beacon of ignorance about everything in-between.
Because the IP address actually changed, classic routing still works. Their point is about not identifying a session by something non-constant (the client's IP), but by a session token instead.
Instead of identifying the "TCP" socket with (src ip, src port, dst ip, dst port), they use (src uuid, dst uuid) which allows flows to keep working when you change IP addresses. Just like you can change networks and still have your browser still logged in to most websites.
The packets carrying those UUIDs still are regular old IP packets, UDP in the case of QUIC. Only the server needs to track anything, and only has to change the dst ip of outgoing packets.
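A minimal sketch of that server-side bookkeeping, assuming an invented `handle_packet` hook (this mirrors what QUIC does with connection IDs, but it is not any real library's API):

```python
# Session table keyed by a connection UUID instead of the classic
# (src ip, src port, dst ip, dst port) 4-tuple. Names are illustrative.

import uuid

sessions = {}  # conn_id -> last-seen (ip, port) of the peer

def handle_packet(conn_id, src_addr, payload):
    # Match on the UUID, then refresh where replies should go.
    sessions[conn_id] = src_addr
    return ("reply", src_addr)

cid = uuid.uuid4()
handle_packet(cid, ("203.0.113.7", 50000), b"hello")    # client at address X
handle_packet(cid, ("198.51.100.9", 50001), b"again")   # same client, now at Q
```

Same session, new return address: only the server's table changes, and every router in between just routes the ordinary IP packets it sees.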
As for flooding and DDoS, that’s what handshakes are for, and QUIC already does it (disclaimer: never dug deep in how QUIC works so I can’t explain the mechanism here).
> We can't have globally routable, unique, random-esque ID precisely because it has to be hierarchical
This is not, technically, true. We could have globally-routable, unique, random-esque IDs if every routing device in the network had the capacity to store and switch on a full table of those IDs.
I'm not saying this is feasible, mind you, just that it's not impossible.
That was said in my comment. The routing devices need to both be able to store the full table, and also to switch packets by looking up entries within it.
I guess it depends on what you mean by "impossible." If you only mean that it's theoretically possible, then sure, one can imagine a world where that is done. But even with IPv4's meager 32 bits of address space, it would explode TCAM requirements on routers if routers would start accepting announcements of /32s instead of the /24s that are accepted now. And /64 (or, jesus, /128) for IPv6? That's impossible.
You would also need something like O(N²) routing update messages to keep those tables updated, instead of the current... I'm guessing it grows more like O(log N) in the number of hosts. So everyone would need vast amounts of CPU and bandwidth to keep up with the announcements.
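Rough numbers make the scale concrete (the ~1M figure for today's default-free BGP table is an approximation; the point is the ratio, not the exact value):

```python
# Back-of-the-envelope: flat per-host routing vs. today's aggregated
# prefixes. The ~1M current-BGP-table figure is an approximation.

current_bgp_routes = 1_000_000
ipv4_hosts = 2**32          # one table entry per /32
ipv6_subnets = 2**64        # one entry per /64, never mind /128

print(ipv4_hosts // current_bgp_routes)    # thousands of times today's table
print(ipv6_subnets // current_bgp_routes)  # astronomically worse
```

Even the IPv4 case is several thousand times today's table, before counting the churn of keeping all those entries updated.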
Isn't the whole point that when the client roams, it opens a brand new L3 connection to the server, then sends L4 packets over it to reconnect the L4 session over the new L3 link? Thus keeping L4 session state separate from L3 routing mechanics.
As for L3 packets going into the void: yeah, they're gonna get lost, can't be helped. But the server also isn't going to get any L4 acks for those packets. So when a new L3 connection is created and the L4 session recovered, the lost packets just get replayed over the new L3 connection.
This is one of my favourite blog posts ever. For those unaware (or who didn't read right to the bottom), the author is the CEO of Tailscale.
One of the problems we have is when we're born we don't question anything. It just is the way it is. This, of course, lets us do things in the world much more quickly than if we had to learn everything from basic principles, but it's a disadvantage too. It means we get stuck in these local optima and can't get out. Each successive generation only finally learns enough to change anything fundamental once they're already too old and set in their ways doing the standard thing.
How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
> One of the problems we have is when we're born we don't question anything
I don't know about you personally, but every grade-school, high-school, and college-level instructor I ever had would probably vehemently disagree with this statement about me. I remember at least one 70-year-old college instructor becoming visibly irritated that I would ask what research supported the assertions he made.
> How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
And doing so would improve nothing, and be no different from the IPv6 rollout. You still have to ship new code to every 'network element' to support an "IPv4+" protocol. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (A lot of legacy code did not have room in its data structures for multiple reply types: sure, you'd get the "A", but unless you updated the code to fetch the "A+" record (for "IPv4+" addresses) you could never get the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6. In any 'address extension' plan the legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destinations (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the "IPv4+" and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Deploying the new "IPv4+" code will take time, and partial deployment of IPv4+ is no different from partial deployment of IPv6: you have islands of it and have to fall back to the legacy plain-IPv4 protocol when the new protocol fails to connect.
The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem around. What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC. But Tailscale has every incentive to prevent such things from being widely and compatibly implemented, because that would destroy their business. Their whole business depends on the problem persisting.
I thought that too and I've written a very similar comment before. But in fact Tailscale's main product seems to be the zero trust stuff, not dealing with IPv4. At least that's what they say...
> What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC.
I understand the appeal of this vision, but I think history has shown that it's not consistent with the realities of incremental deployment. One of the most important factors in successful deployment is the number of different independent actors who need to change in order to get some value; the lower this number the easier it is to get deployment. By very rough analogy to the effectiveness of medical treatments, we might call it the Number To Treat (NTT).
By comparison to the technologies which occupy the same ecological niches on the current Internet, all of the technologies you list have comparatively higher NTT values. First, they require changing the operating system[0], which has proven to be a major barrier. The vast majority of new protocols deployed in the past 20 years have been implementable at the application layer (compare TLS and QUIC to IPsec). The reason for this is obviously that the application can unilaterally implement and get value right away without waiting for the OS.
IPv6 requires you not only to update your OS but basically everyone else on the Internet to upgrade to IPv6. By contrast, you can just throw a NAT on your network and presto, you have new IP addresses. It's not perfect, but it's fast and easy. Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Even if we stipulate that the specific technologies you mention would be better than the alternatives if we had them -- which I don't -- being incrementally deployable is a huge part of good design.
[0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
Most tech businesses exist because problems exist. Tailscale delivers a solution that's available today. The only alternative is to sit and wait for IPv6. I don't imagine Tailscale is against IPv6 any more than security professionals are against memory-safe programming languages.
It was somewhat unexpected to find section headings such as "Is IPv6 a failure?" in the product support documentation, but I thought it was interesting and informative nonetheless.
> How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
There are plenty of anarchists and disaster aid groups interested in building a more decentralized alternative to the internet. Meshtastic, AnoNet, Reticulum, MeshCore, etc are all evidence of that
Then there's also stuff like Dave Ackley's robust-first computing that's looking towards a completely different paradigm for computing in general that focuses on robustness.
What is this article even on about? The stuff on my network assigns itself ipv6 addresses based on their mac address? That's how you can do stateless ipv6?
Regardless, ipv6 was to have more IP addresses because of ipv4 exhaustion and NAT?
My Xbox tells me my network sucks because it doesn't have ipv6, but this is a very North-American perspective regardless.
All Apple devices and iPhone apps are mandated to work on these networks, or you don't get into the app store. This is because many mobile networks are IPv6-only. IPv4-only servers are accessed through a separate gateway server provided by the network operator, with a 10-20% latency penalty, and limited throughput.
> My Xbox tells me my network sucks because it doesn't have ipv6
Pretty sure that it's complaining about lack of UPnP. Which, yes, would not be an issue if we had IPv6... but ironically, consoles have typically been slow to adopt IPv6 support themselves, so I'm curious if the Xbox even supports it.
Steam and Meta Quest are both terrible at IPv6, at least as of a year or so back. My home network supported good IPv6 networking on two providers; Steam games would mess up constantly and the Quest would take minutes to load.
Steam having issues makes sense given it's been around for ages. Meta Quest is an all-new OS and codebase, yet they managed to bork IPv6. Super annoying.
> The stuff on my network assigns itself ipv6 addresses based on their mac address? That's how you can do stateless ipv6?
Nit: per RFC8064[0], most modern, non-server devices do/should configure their addresses with "semantically opaque interface identifiers"[1] rather than using their MAC address/EUI64. That stable address gets used for inbound traffic, and then outbound traffic can use temporary/privacy addresses that are randomized and change over time.
Statelessness is accomplished simply by virtue of devices self-assigning addresses using SLAAC, rather than some centralized configuration thing like DHCPv6.
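The RFC 7217 flavor of "semantically opaque" IIDs boils down to hashing a per-host secret together with the prefix and interface, so the address is stable per network but never embeds the MAC. A rough sketch (real implementations differ in their exact inputs and encoding):

```python
# Sketch of a stable, opaque interface identifier in the spirit of
# RFC 7217. Inputs and encoding are simplified for illustration.

import hashlib

def opaque_iid(prefix, ifname, secret, dad_counter=0):
    digest = hashlib.sha256(
        f"{prefix}|{ifname}|{dad_counter}|".encode() + secret
    ).digest()
    return digest[:8].hex()   # 64-bit interface identifier, shown as hex

secret = b"generated once per host, kept private"
home = opaque_iid("2001:db8:1::/64", "eth0", secret)
work = opaque_iid("2001:db8:2::/64", "eth0", secret)
```

The result is stable on the same network (so inbound traffic keeps working), different on every network (so you can't be tracked across them), and the MAC address appears in neither.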
I don't like this post's negativity towards ARP. ARP is the reason we can have IP networking on a LAN without a router. The default gateway just becomes a special case of general IP networking on a LAN.
Otherwise, the networking history part of this post is amazing. I haven't gotten to the IPv6 part yet.
But it's not the only way to tackle the problem of resolving layer 2 addresses, and you can do so without introducing the layering violations and expansive broadcast traffic that ARP implies (along with the consequent problems with WiFi and such).
For instance, IPv6's NDP is built on actual IPv6 packets (ICMPv6), rather than some spoofed IP-lookalike thing. No layering violation, and, thanks to multicasting, no need to dump a bunch of broadcast traffic on the layer 2 network.
> For instance, IPv6's NDP is built on actual IPv6 packets (ICMPv6), rather than some spoofed IP-lookalike thing. No layering violation, and, thanks to multicasting, no need to dump a bunch of broadcast traffic on the layer 2 network.
Only if the L2 network actually supports L2 multicast. Ethernet doesn't, unless your switches are intelligent enough; with cheap Ethernet switches, multicast will be simulated by broadcast.
And actually, you can never avoid a layering violation. The only thing that NDP avoids is filling in the source/destination IP portions with placeholders. In NDP, you fill the destination with some multicast IPv6 address. But that is window dressing. You still need to know that this L3-multicast IPv6 address corresponds to a L2-multicast MAC address (or just do L2 broadcast). The NDP source you fill with an L3 IPv6 address that is directly derived from your L2 MAC address. And you still get back a MAC address for each IPv6 address and have to keep both in a table. So there are still tons of layering violations where the L2 addresses either have direct 1:1 correspondences to L3 addresses, or you have to keep L2/L3 translation tables and L3 protocols where the L3 part needs to know which L2 protocol it is running on, otherwise the table couldn't be filled.
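That L3-to-L2 correspondence is concrete and easy to compute: a neighbor solicitation for a target address goes to the solicited-node multicast group ff02::1:ffXX:XXXX (the last 24 bits of the target), which in turn maps onto an Ethernet multicast MAC of 33:33 followed by the group's last 32 bits:

```python
# Deriving NDP's solicited-node multicast address and the Ethernet
# multicast MAC it maps to, per the standard IPv6-over-Ethernet mapping.

import ipaddress

def solicited_node(target):
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

def multicast_mac(group):
    # IPv6 multicast -> MAC: 33:33 plus the group's last 32 bits.
    low32 = int(ipaddress.IPv6Address(group)) & 0xFFFFFFFF
    tail = ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))
    return "33:33:" + tail

target = "2001:db8::6329:c59:ad67:4b52"
group = solicited_node(target)        # ff02::1:ff67:4b52
mac = multicast_mac(str(group))       # 33:33:ff:67:4b:52
```

So the "window dressing" claim is checkable: both the multicast group and its MAC are pure functions of the target address, which is exactly the kind of cross-layer correspondence the parent is describing.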
> Only if the L2 network actually supports L2-multicast. Ethernet doesn't, except if your switches are intelligent enough. With cheap ethernet switches, multicast will be simulated by broadcast.
True, but outside bottom-barrel switches, any switch that's not super old should support multicast, no?
Regarding the rest of your comment, I really don't see how all those things count as layering violations. Yes, there is tight coupling (well, more like direct correspondence) between l2 and l3 addresses. However, these multicast addresses are actual addresses furnished by IPv6; nodes answer on these addresses. Basically, the fact that there is semantic correspondence between l2 and l3 is basically an implementation detail. Whereas ARP even needs its own EtherType!
And, yes, nodes need to keep state. But why is that relevant to whether or not this is a layering violation? When two layers are separate, they need to be combined somewhere ("gluing the layers together"). The fact that the glue is stateless seems irrelevant. But again, I'm just a sysadmin.
I think that's actually about avoiding NIC to CPU traffic. NICs have multicast address filters which determine which packets to receive, but they always receive broadcast packets. It would have been more useful in the 90s, when NICs weren't so programmable.
It's pretty silly anyway. NDP is more of a layering violation than ARP, because now IPv6 has a stupid circular dependency on itself. Mapping L3 addresses to L2 is a layer below 3, it is not part of layer 3, it is part of the sub-layer that adapts between 2 and 3. DHCP should be part of that sub-layer, too.
Did you know that for every kind of network that IP can run on top of, there's a whole separate standard specifying how to adapt one to the other? RFC 894 specifies how to run IP over Ethernet networks. RFC 2225 specifies how to run IP over ATM networks.
IMHO all this whining about "layering violations" is stupid. One will always need some kind of layer glue, neighbors bordering on each other need to know something about each other, correlate addresses, etc. It is impossible to do anything practical without such violations. And it doesn't really matter if that glue protocol belongs to the below layer, the above layer or is a weird hybrid of both. Because in the end, the glue will necessarily be a hybrid and it will be specific to the combination of both those layers.
The only thing one should really really really avoid is the TCP mistake of not just having some minimally necessary glue, but that tight coupling of TCP connections to IP addresses in the layer below.
It's a layering violation because it makes the inter-layer-glue more messy than it should be. IPv6 having a circular layer dependency on itself is not a good thing. It makes that glue messy. ARP is cleaner glue.
The ARP part of the article makes the case that there is no need for a protocol to resolve IP addresses to MAC addresses, with the argument that if only the default gateway was a MAC address rather than an IP address, there would be no need for such a protocol.
NDP may very well be a nicer protocol than ARP, but following the logic of the article, the neighbor solicitation part of NDP would be just as unnecessary as ARP.
And that 50% adoption is only as high as it is because China went from almost zero in 2017 to over 80% in 2025 after they included IPv6 adoption in one of their 5-year plans
I don't think v6 is the absolute pinnacle of protocol design, but whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6. If people consistently can't do better than v6, then I'd say v6 is probably pretty decent.
In retrospect I think just adding another 16 or 32 bits to V4 would have been fine, but I don’t disagree with you. V6 is fine and it works great.
All the complaints I hear are pretty much all ignorance except one: long addresses. That is a genuine inconvenience and the encoding is kind of crap. Fixing the human readable address encoding would help.
IPv4 is absolutely fine. Consumers can be behind NAT. That's fine. Servers can be behind reverse proxies, routing by DNS hostname. That's also fine. IPv4 address might be a valuable resource, shared between multiple users. Nothing wrong with it.
Yes, it denies simple P2P connectivity. World doesn't need it. Consumers are behind firewalls either way. We need a way for consumers to connect to a server. That's all.
No, they're not. That's other weird policies specific to your ISP.
With IPv4 + NAT, you have a public IP address. That public address goes to your router. Your router can forward any port to any machine on your LAN. I used to run Minecraft servers from a residential connection on IPv4, it was fine. Never had to call the ISP.
Nope, CGNAT means I need to call my ISP. We now have two levels of NAT because the IPv4 address situation has gotten so bad they can't even give every residence its own public IP. If your ISP hasn't adopted it yet, it's likely they got lucky and bought a ton of IPv4 addresses long ago when they were cheap, and have decided using them is cheaper than upgrading their network to CGNAT.
That's a fair point. In my mind, residential ISPs give out public IP addresses and CGNAT is just for cell phones. But I recognize that the philosophy of, "we don't need to solve IP address exhaustion, we just need to keep people able to access Facebook" leads to CGNAT or multi level NAT.
Still, I do think that the solution of, "one IPv4 address per household + NAT" is a perfectly good system. I view the IPv6 mentality of giving each computer in the world a globally unique IPv6 address as a non-goal.
Even if you go with one IPv4 address per household plus one per company, you're going to be hard-pressed to find room for that in 32 bits, at least after you add the routing infrastructure.
Hm? The ISP gives one IP address to a router in a house, that router uses NAT to let all the computers inside that house use the Internet through the one single shared public IP address. That's NAT, isn't it?
Nope. If you get assigned a routable IPv4 address, you just have a shit ISP. I led the rollout of one of the larger O365 implementations. Outlook and the Office stack needed something like 10-16 ports per user. We served about 150k people with 30 outbound IPs. If you have an IP, you have 64k+ ports to use.
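A quick sanity check of those numbers (the usable port range and the concurrency framing are my assumptions, not from the deployment itself):

```python
# Rough arithmetic on NAT port capacity for a deployment like the one
# described above. Port-range and per-user figures are illustrative.

users = 150_000
outbound_ips = 30
ports_per_ip = 64_511            # roughly ports 1024-65535 usable
ports_per_user = 16              # worst case cited for the Office stack

total_ports = outbound_ips * ports_per_ip
concurrent_capacity = total_ports // ports_per_user

print(total_ports)            # total port mappings available across 30 IPs
print(concurrent_capacity)    # users servable at worst-case demand at once
```

About 1.9M mappings, or roughly 121k users at worst-case port demand simultaneously: 30 IPs cover 150k users comfortably as long as nowhere near all of them peak at the same instant.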
I also deployed it as a pilot on an internal network. Other than getting direct IPv6 connectivity to some services, which sometimes gave us better performance, it conferred no advantage to us.
IPv6 is great for phones where you don't expect any inbound traffic. Even then, every US carrier is using Carrier NAT to route and proxy traffic for their own purposes.
IPv4 usage in its current state would've been much more limited and annoying in a world without IPv6. Therefore, IPv4 exists as-is thanks to others adopting IPv6.
> Yes, it denies simple P2P connectivity. World doesn't need it.
Worth pointing out that this article was written by the now-CEO of Tailscale. I don't know if "The world doesn't need P2P connectivity" is a compelling take.
With the obligatory caveat that I am but a single datapoint, I use various P2P apps through multiple levels of NAT without issue and I very intentionally prevent devices on my local LAN from being publicly reachable. So it rings true to me.
I do wish ISPs would refrain from intentionally breaking things though. It ought to be illegal for them to block specific ports or filter specific sorts of traffic absent a pressing and active security concern.
This comment exemplifies my worst fear and reinforces my somewhat incomplete idea that IPv4 is perhaps overall safer for the world, and that "worse is better" depending on what you're optimizing for.
Roughly, it's my belief that an IPv6 world makes it easier for centralizing forces and harder for local p2p or p2p-esque ones; e.g. an IPv6 world would have likely made it easier to do bad things like "charge for individual internet user in a home."
The decentralization of "routing power" is more a good thing than bad, what you pay for in complexity you get back in "power to the people."
> easier to do bad things like "charge for individual internet user in a home."
This idea comes up in every HN conversation about IPv6, and so I suppose this time it's my turn to point out RFC 8981[0]. tl;dr: typically, machines which receive IPv6 address assignment via SLAAC (functional equivalent of DHCP) periodically cycle their addresses. Supposed to offer pretty effective protection against host-counting.
Blame the WHATWG for that. They're the reason that v6 addresses in URLs are such a mess. http://[fe90::6329:c59:ad67:4b52%8]:8081/ should work, but doesn't because they refuse to allow a % there. (This is really damned frustrating, because link-locals are excellent for setting up routers or embedded machines, or for recovering from network misconfigurations.)
If it's on the same machine then just use http://[::1]:8081/. Dropping the interface specifier (http://[fe90::6329:c59:ad67:4b52]:8081/) works if the OS picks a default, which some will. curl seems to be happy to work. Or just use one of the non-link-local addresses on the machine, if you have any.
The other frustrating part of this is that it makes it impossible to come up with your own address syntax. An NSS plugin on Linux could implement a custom address format, and it's kind of obvious that the intention behind the URL syntax is that "[" and "]" enter and exit a raw address mode where other URI metacharacters have no special meaning. In general you can't syntax validate the address anyway because you don't know what formats it could be in (including future formats or ones local to a specific machine), so the only sane thing to do is pass the contents verbatim to getaddrinfo() and see if you get an error.
But no, they wrote the spec to only allow a subset of v6 addresses and nothing else.
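The "pass it verbatim to getaddrinfo()" approach is simple to sketch in Python (`split_bracket_host` is a made-up helper for illustration, not any real API): pull out whatever sits between the brackets without validating it, and let the resolver decide whether it understands the format.

```python
import socket

def split_bracket_host(url: str):
    """Extract the raw bracketed host from a URL without validating it,
    so zone-scoped link-locals (and future formats) survive intact."""
    scheme, _, rest = url.partition("://")
    if not rest.startswith("["):
        raise ValueError("no bracketed host")
    host, _, tail = rest[1:].partition("]")
    port = int(tail[1:].split("/", 1)[0]) if tail.startswith(":") else None
    return host, port

host, port = split_bracket_host("http://[fe80::6329:c59:ad67:4b52%8]:8081/")
# host is "fe80::6329:c59:ad67:4b52%8" -- handed verbatim to the resolver,
# which on Linux understands the %zone suffix; no URL-layer validation needed.
infos = socket.getaddrinfo("::1", 8081, socket.AF_INET6, socket.SOCK_STREAM)
```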
I very much didn't test it, but this patch might do the job on Firefox (provided there's no code in the UI doing extra validation on top):
If you add new bits to v4 you invent an incompatible protocol, and you should add a lot of bits so you'll never have to invent another incompatible protocol again. You can also fix the minor annoyances in v4.
IPv4 was designed with extensible option headers: it boggles my mind that simply using them to extend the address was never seriously considered. It was proposed: https://www.rfc-editor.org/rfc/rfc1365.html
It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html
The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.
Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.
It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.
That’s probably what made them feel they could push a more radical upgrade.
Unfortunately they started this right as the massive tsunami of Internet commercialization hit. Since V6 was too new, everyone went with V4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, all of them steeped in IPv4 and rushing to ship on top of it. You also lost the small-town atmosphere of the early net, where admins were a club and could coordinate things.
Had V6 launched five years earlier V4 would probably be dead.
V6 usage will probably keep creeping up, but as it stands we will likely be dual stack forever. Once the installed user base and sunk cost is this high the design is fixed and can never be changed without a hard core heavy handed measure like a government mandate.
iOS benefited from a heavy-handed mandate: it and all of its apps have to work on IPv6-only networks. They just need to expose the IPv4 internet as IPv6 addresses.
They weren't all that wrong. NAT was an incompatible protocol upgrade - that's why it broke protocols that made pre-NAT assumptions, like FTP - but it kept most of them working. DNS64 is also an incompatible protocol upgrade that breaks protocols that make pre-DNS64 assumptions, like hardcoding addresses - but it keeps most of them working.
In DNS64, whenever your DNS resolver encounters an IPv4-only site, it translates it to an IPv6 address under a translator prefix, and returns that address to the client. The client connects to the translator server via that address, and the translator server opens an IPv4 connection to the website. Your side of the network is IPv6-only, not even running tunneled v4.
This only breaks things to about the same small extent that the introduction of NAT did.
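The synthesis step is mechanical. A sketch using the RFC 6052 well-known prefix 64:ff9b::/96 (real DNS64 deployments may use a network-specific prefix instead):

```python
import ipaddress

# RFC 6052 well-known NAT64/DNS64 translator prefix.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(v4: str) -> ipaddress.IPv6Address:
    """DNS64-style synthesis: pack the 32-bit IPv4 address into the
    low 32 bits of the /96 translator prefix."""
    v4int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4int)

print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

The client never learns it's talking to an IPv4 host; the NAT64 gateway owning that prefix strips the prefix back off and opens the v4 connection.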
You would have ended up with a protocol identical to IPv6, but with fewer address bits.
If you add *any* address bits you've already broken protocol compatibility and you need to upgrade the entire world. While you're already upgrading the entire world, you should add so many address bits that we'll never need more, because it costs the same, and you may as well fix those other niggling problems as well, right?
> they end up coming up with something equivalent to IPv6
Not just that. Almost every single thing people think up that's "better" is something that was considered and rejected by the IPv6 design process, almost always for well-considered reasons.
The converse also happens: people look at something IPv6 supports and say "that's crazy, why would that be allowed/designed for", without knowing that IPv4 does it too.
You know that's not what he meant. The world is always changing. IPv6 was designed in 1998 by networking gear companies, with their own company needs in mind. It wasn't engineered with end users, or even network administrators and app developers, in mind.
The only reason it's around is the sunk cost fallacy and people stuck in decades-old tech debt. A new protocol designed today would be different, much the same as Rust is different from Ada. SD-WAN wasn't a thing in 1998; the cost of chips and the demands of mobile customers weren't either. Supply/demand economics have changed the very requirements behind the protocol.
Even concepts like source and destination addressing should be re-thought. The very concept of a network layer protocol that doesn't incorporate 0-RTT encryption by default is ridiculous in 2026. Even protocols like ND, ARP, RA, DHCP and many more are insecure by default. Why is my device just trusting random claims that a neighbor has a specific address, without authentication? Why is it connecting to a network (any network! wired, wireless, why does it matter - this is a network layer concern) without authenticating the network's security and identity authority? I despise the corporatized term "zero trust" but this is what it means, more or less.
People don't talk about security, trust, identity and more, because ipv6 was designed to save networking gear vendors money, and any new costly features better come with revenue streams like SD-WAN hosting by those same companies. There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing.
It all comes down to how much money it costs the participants of the RFC committees. Given how dependent the world is on this tech, I'm hoping governments intervene. It's sad that this is the tech we're passing to future generations. We'll be setting up colonies on Mars and troubleshooting addressing and security issues like it's 2005.
>There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing
I don't know much about MPLS and only know IP routing, but that quote above sounds very hand-wavy. How do you route "identity-based addressing"?
Not to mention authenticated identity-based routing would mean embedding trusted centralized authorities into even deeper network layers. That is such a mess for TLS, after CAs started going rogue we've basically ended up with Google, a shitty ad company, deciding who should be trusted because they control Chrome.
Not at all; it doesn't even need to be PKI. But if it were, your routers would be the CA. Or more practically, whatever device is responsible for addressing would also be the authority over those addresses. Your DHCP server would also be the CA for your LAN. Even a simple ND/ARP would require a claim (something like a short byte value end devices can look up/cache) that allows it to make the "the address x.x.x.x is at <mac>" statement. Smarter schemes might allow the network forwarder (router) to translate claims, to avoid end devices looking up and caching lots of claims locally (and it would need to be authorized to do so).
You wouldn't need TLS. This scheme I just sketched would actually decentralize/federate PKI a lot more. If you have a public address assigned, your ISP is the IP-CA.
I don't want to get into the details of my DNS replacement idea, but similar to network operators being authorities over the addresses they're responsible for, whoever issued you a network name is also the identity authority over that name (so DNS registrars would be CAs). Ideally, every device would be named, and the people with logical control over the address would also be responsible for the name and all identity authentication and claims over those addresses and names.
You won't have freaking Google and browsers dictating which CA root to trust; instead, the network you're joining does that (whether that's your DHCP server or your ISP is up for debate, but I prefer the former). Ideally, your public key hash is your address: others reach you by resolving your public key from your identity, and the traffic is sent to your public key (or see my sibling comment for the concept of cryptographic identity).
All names would of course be free, but what we call "DNS" today would survive as an alias to those proper names. So your device might be guelo.lan123.yourisp.country, but a registrar might sell you a guelo.com alias that points to the former name.
The implications of this scheme are wild, think about it!
Rogue trust providers will be a problem, but only within their own domain. Right now random CA roots can issue certificates for any domain. With the scheme I proposed, your country can mess with its own traffic, as can your ISP, as can you over your LAN. You won't be able to spoof traffic for a different LAN or ISP using their name.
It wouldn't be a good idea to spell out an entire protocol in a comment section, but the key part is that it would cost a lot.
It is far from hand-wavy. Right now we have numeric addressing, where routers look at bits and perform ASIC-friendly bitwise (and other) operations on that number to forward a lot of traffic really fast for cheap.
Identity and trust establishment won't be part of the regular data flow, but at network connection time, each end-device will discover the network authority it has connected to, and build trust that allows it to validate identities in that network, including address assignments, neighbor discovery, name resolution and verification, authorized traffic forwarders (routers) and more.
After the connection is established and the network is trusted, the network authority designates how addressing should be done. If Alice's iPhone wants to connect to Bob's server, it will encrypt the data and, as part of a very slim header, designate Bob's server's cryptographic identifier, the destination service identifier, and its own cryptographic identifier for the first packet. To reduce overhead, subsequent traffic can use a simple hash of the connection identifiers mentioned earlier.
When devices come online in the network, their cryptographic identifiers become known to the entire network, including intermediate routers. Routing protocols work with the identity authority of the network to build forwarding tables based on cryptographic identifiers and, for established sessions, session IDs.
"Cryptographic identifier" is also not a hand-wavy term. What it means must be dynamic, so as to avoid protocol updates like the v4-to-v6 one over addressing. V6 presumed that just having lots of bits is enough. An ideal protocol would allow the network itself to communicate the identifier type and byte size. For example, an FQDN or an IPv4-like address could be used directly, or a public key hash using a hash algorithm of the network's choice. So long as the devices in the network support it, and the end device supports it, it should work fine.
Internet addressing can use a scheme like this, but it doesn't need to. IPv6 took the wrong approach with NAT: it got rid of it instead of formalizing it, and we'll always need to translate addresses. But the internet is actually well-positioned for this, due to the prevalence of certificate authorities, though it would require rethinking foundational protocols like DNS, BGP, and the PKI infrastructure.
But my original point wasn't this, it was that tech has come far, our requirements today are different than 30 years ago. Even the OSI layered model is outdated, among other things.
This is just a proposal I thought up as I typed; smarter people who can sit down and think through the problem can come up with better protocols. I only proposed it to demonstrate the concept isn't hand-wavy or ridiculous.
IPv6 was relatively rushed to meet IPv4's address shortage while at the same time solving lots of other problems. The next network layer protocol (and we do need one) should have the goal of making networking as a whole adaptable to new and unforeseen requirements (that's why I suggested the network authority be the one to dictate the addressing scheme and, with it, be responsible for translating it if needed). We're being held back, not just in tech but as a species, because of this short-sighted protocol design! Exaggerated as that statement might sound, it is true.
I'll reserve further discussion on the topic for when it is required, but I hope this prevents more dismissive responses.
> it was designed in 1998 by networking gear companies
That's false. Firstly, rfc1883 was published in 1995 which means work started some time before that, and the RFC process included operating system vendors and RIR administrators. The primary author of rfc1883 worked at Xerox Parc, and the primary author of rfc1885 worked at DEC. Neither were networking gear companies.
No, I think proposed, draft, and internet standard all have specific meanings we don't need to debate. Your claim that IPv6 was first proposed in 1995 is correct, as is my claim that it was first accepted in 1998. No one actually uses a proposed standard, but when it reaches draft people start implementing it and giving feedback on issues until it is fully ratified - at least that's my understanding (please correct me if that's wrong).
Everyone forgets that the Internet Architecture Board took a religious view on "Internet transparency and the end-to-end principle" which was counter to the realities of limited tooling and actual site maintainers needs. [0]
There were many of us who, even when it was still IPng (IP Next Generation) in the mid 1990's, tried to get it working and spent significant amount of effort to do so, only to be hit with unrealistic ideological ideals that blocked our ability to deploy it, especially with the limitations of the security tools back in the day.
Remember, when IPng started, even large regional ISPs like xmission had finger servers, many people used telnet, and Slackware actually enabled telnet with no root password by default!!! I used both to wall a coworker who was late to work because he was playing tw2000.
Back then we had really bad application firewalls like Altavista and PIX was just being invented, and the large surveillance capitalism market simply didn't exist then.
The IAB hampered deployment by choosing hills to die on without providing real alternatives, and didn't relent until IPv4 exhaustion became a problem, and they had lost their battle because everyone was forced into CGNAT etc...because of the IETF, not in spite of it.
The IAB and IETF were living in an MIT ITS mindset when the real world was making that model hazardous and impossible. End-to-end transparency may be 'pretty' to some people, but it wasn't what customers needed. When they wrote the RFCs to make other services simply fail and time out if you enabled IPv6 locally but didn't have ISP support, they burned a lot of goodwill, and everyone just started ripping out the IPv6 stack and running IPv4 only.
IMHO, like almost all tech failures, it didn't fail on technical merits; it failed on ignorance of the users' needs and a refusal to consider them, insisting that adopters just had to drink their particular flavor of Kool-Aid or stick to IPv4 - and until forced, most people chose the latter.
In fact, 30 years later, I just had to add an IPv6 block to Ubuntu's apt mirrors this week, because the AAAA record query has higher priority and was timing out on my CI, killing build times.
That behavior is due to the same politics mentioned above.
A few more pragmatic decisions, or at least empathetic guidance would have dramatically changed the acceptance of ipv6.
AAAA records have lower priority than A records if you don't have a v6 address assigned on your system. (Link-locals don't count for this).
You would only see a timeout to an AAAA record if the connection attempt to the A record already failed. Some software (looking at you, apt-get) will only print the last connection failure instead of all of them, so you don't see the failure to connect to the A record. I've seen people blame v6 for this even though they don't have v6 and it's 100% caused by their v4 breaking.
Run `getent ahosts example.com` to see the order your system sorts addresses into. `wget example.com` (wget 1.x only though) is also nice, because it prints the addresses and tries to connect to them in turn, printing every error.
I mean... adding v6 is the right thing to do either way, but "AAAA is higher priority than A" isn't the reason.
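You can watch the same RFC 6724 machinery from Python: getaddrinfo() returns results in the order the OS's policy table sorts them, which is exactly what `getent ahosts` shows, so on a host with no global IPv6 address the A results land first even when AAAA records exist.

```python
import socket

# getaddrinfo() applies the OS's RFC 6724 source/destination selection.
# Clients that try the returned addresses in order (as apt, wget, and
# friends do) naturally prefer whatever the policy table ranks first.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "localhost", 80, type=socket.SOCK_STREAM):
    print("v6" if family == socket.AF_INET6 else "v4", sockaddr[0])
```

The ordering is environment-dependent (it changes with the addresses your interfaces hold), which is why the same software behaves differently on v6-capable and v4-only hosts.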
IPv6 aaaa timeout was shown to be the problem, adding `Acquire::ForceIPv4 "true";` fixed the problem on several hosts.
$ getent ahosts us.archive.ubuntu.com
91.189.91.81 STREAM us.archive.ubuntu.com
91.189.91.81 DGRAM
91.189.91.81 RAW
91.189.91.82 STREAM
91.189.91.82 DGRAM
91.189.91.82 RAW
91.189.91.83 STREAM
91.189.91.83 DGRAM
91.189.91.83 RAW
2620:2d:4002:1::101 STREAM
2620:2d:4002:1::101 DGRAM
2620:2d:4002:1::101 RAW
2620:2d:4002:1::102 STREAM
2620:2d:4002:1::102 DGRAM
2620:2d:4002:1::102 RAW
2620:2d:4002:1::103 STREAM
2620:2d:4002:1::103 DGRAM
2620:2d:4002:1::103 RAW
There are no IPv6 addresses other than link-local (`fe80::`) on the host.
$ ip a | grep inet6
inet6 ::1/128 scope host noprefixroute
inet6 fe80::786a:e338:3957:b331/64 scope link noprefixroute
inet6 fe80::a10c:eae9:9a49:c94d/64 scope link noprefixroute
So to be clear, I removed my temporary IPv4-only apt config, but there are a million places for this to be brittle, and you see people working around it with sysctl net.ipv6.conf.*, netplan, systemd-networkd, NetworkManager, etc., plus the individual clients, etc.
> If an implementation is not configurable or has not been configured, then it SHOULD operate according to the algorithms specified here in conjunction with the following default policy table:
One could argue that GUAs should just be ignored on hosts without a non-link-local IPv6 address... and in a perfect world they would be.
But as covered in the first link in this post, this is not as easy or clear as expected, and people tend to err toward following rfc6724, which states just below the above reference:
> Another effect of the default policy table is to prefer communication using IPv6 addresses to communication using IPv4 addresses, if matching source addresses are available.
I am not an IPv6 hater...just giving observations that when you introduce a breaking change, and add additional friction, it dramatically reduces adoption.
Many companies I have been at basically just implement enough to meet Federal Government requirements and often intentionally strip it out of the backend to avoid the brittleness it caused.
I am old enough to remember when I could just ask for an ASN and a portable class-c and how nice that was, in theory IPv6 should have allowed for that in some form...I am just frustrated with how it has devolved into an intractable 'wicked problem' when there was a path.
The fact that people don't acknowledge the pain for users, often due to situations beyond their control, is a symptom of that problem. Ubuntu should never have even requested an AAAA record on the above system, and yes, it only does so because of politics and RFC requirements.
user@ubuntu-server:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 25.10
Release: 25.10
Codename: questing
user@ubuntu-server:~$ uname -a
Linux ubuntu-server 6.17.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Sat Oct 18 10:10:29 UTC 2025 x86_64 GNU/Linux
user@ubuntu-server:~$ getent ahosts us.archive.ubuntu.com
91.189.91.82 STREAM us.archive.ubuntu.com
91.189.91.82 DGRAM
91.189.91.82 RAW
91.189.91.81 STREAM
91.189.91.81 DGRAM
91.189.91.81 RAW
91.189.91.83 STREAM
91.189.91.83 DGRAM
91.189.91.83 RAW
2620:2d:4002:1::102 STREAM
2620:2d:4002:1::102 DGRAM
2620:2d:4002:1::102 RAW
2620:2d:4002:1::101 STREAM
2620:2d:4002:1::101 DGRAM
2620:2d:4002:1::101 RAW
2620:2d:4002:1::103 STREAM
2620:2d:4002:1::103 DGRAM
2620:2d:4002:1::103 RAW
user@ubuntu-server:~$ ip --oneline link | grep -v lo: | awk '{ print $2 }'
enp0s3:
user@ubuntu-server:~$ ip addr | grep inet6
inet6 ::1/128 scope host noprefixroute
inet6 fe80::5054:98ff:fe00:64a9/64 scope link proto kernel_ll
user@ubuntu-server:~$ fgrep -r -e us.archive /etc/apt/
/etc/apt/sources.list.d/ubuntu.sources:URIs: http://us.archive.ubuntu.com/ubuntu/
user@ubuntu-server:~$ sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu questing InRelease
Get:2 http://security.ubuntu.com/ubuntu questing-security InRelease [136 kB]
<snip>
Get:43 http://security.ubuntu.com/ubuntu questing-security/multiverse amd64 c-n-f Metadata [252 B]
Fetched 2,602 kB in 3s (968 kB/s)
Reading package lists... Done
I didn't think to wrap that in 'time', but it only took a few seconds to run... more than two and less than thirty.
The IPv6 packet capture running during all that reveals that it never tried to reach out over v6 (but that my multicast group querier is happily running):
I even manually ran unattended-upgrade, which looks to have succeeded. Other than unanswered router solicitations and multicast group query membership chatter, there continued to be no IPv6 communication at all, and none of the messages you reported appeared either in /var/log/syslog or on the terminal.
You aren't running it during an external transitive failure that happened on April 15th.
The problem isn't the happy path; the problem is when things fail, and that Linux in particular made it really hard to reliably disable [0]
Once that hits someone's vagrant or ansible code, it tends to stick forever, because they don't see the value until they try to migrate, then it causes a mess.
The last update on the original post link [1] explains this. The IPv4 host being down, not having a response, it being the third Tuesday while Aquarius is rising into whatever, etc. can invoke it. It causes pain, is complex and convoluted to disable when you aren't using it, and thus people are afraid to re-enable it.
> ...linux, in particular made it really hard to reliably disable
Section 10.1 of that Arch Wiki page says that adding 'ipv6.disable=1' to the kernel command line disables IPv6 entirely, and 'ipv6.disable_ipv6=1' keeps IPv6 running but doesn't assign any addresses to any interfaces. If you don't like editing your bootloader config files, you can also use sysctl to do what it looks like 'ipv6.disable_ipv6=1' does by setting the 'net.ipv6.conf.all.disable_ipv6' sysctl knob to '1'.
> You aren't running it during an external transitive failure...
I'll assume you meant "transient". Given that I've already demonstrated that the only relevant traffic that is generated is IPv4 traffic, let's see what happens when we cut off that traffic on the machine we were using earlier, restored to its state prior to the updates.
We start off with empty firewall rules:
root@ubuntu-server:~# iptables-save
root@ubuntu-server:~# ip6tables-save
root@ubuntu-server:~# nft list ruleset
root@ubuntu-server:~#
We prep to permit DNS queries and ICMP and reject all other IPv4 traffic:
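(The exact rules weren't pasted; judging from the teardown commands shown later, which delete `-A OUTPUT -o enp0s3 -j REJECT` and its INPUT twin, they plausibly looked something like this hypothetical reconstruction:)

```shell
# Hypothetical reconstruction of the test firewall, not the author's
# literal commands: allow DNS and ICMP, then REJECT all other IPv4
# traffic on enp0s3 so connection attempts fail fast.
iptables -A OUTPUT -o enp0s3 -p udp --dport 53 -j ACCEPT
iptables -A INPUT  -i enp0s3 -p udp --sport 53 -j ACCEPT
iptables -A OUTPUT -o enp0s3 -p icmp -j ACCEPT
iptables -A INPUT  -i enp0s3 -p icmp -j ACCEPT
iptables -A OUTPUT -o enp0s3 -j REJECT
iptables -A INPUT  -i enp0s3 -j REJECT
```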
And we do an apt-get update, which fails in less than ten seconds:
root@ubuntu-server:~# apt-get update
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:2 http://us.archive.ubuntu.com/ubuntu questing InRelease
<snip>
Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
<snip>
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@ubuntu-server:~#
In this case, the IPv6 traffic I see is... an unanswered router solicitation, and the multicast querier chatter that I saw before. [0] What happens when we change those REJECTs into DROPs...
root@ubuntu-server:~# iptables -D OUTPUT -o enp0s3 -j REJECT
root@ubuntu-server:~# iptables -D INPUT -i enp0s3 -j REJECT
root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -j DROP
root@ubuntu-server:~# iptables -A INPUT -i enp0s3 -j DROP
root@ubuntu-server:~#
...and then re-run 'apt-get update'?
root@ubuntu-server:~# apt-get update
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Err:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4002:1::103). - connect (101: Network is unreachable) <v6 addrs snipped> Could not connect to security.ubuntu.com:80 (91.189.92.24), connection timed out <long line snipped>
<redundant output snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@ubuntu-server:~#
Exactly the same thing, except it takes like two minutes to fail, rather than ~ten seconds, and the error for IPv4 hosts is "connection timed out", rather than "Connection refused". Other than the usual RS and multicast querier traffic, absolutely no IPv6 traffic is generated.
However. The output of 'apt-get' sure makes it seem like an IPv6 connection is what's hanging, because the last thing that its "Connecting to..." line prints is the IPv6 address of the host that it's trying to contact... despite the fact that it immediately got a "Network is unreachable" back from the IPv6 stack.
To be certain that my tcpdump filter wasn't excluding IPv6 traffic of a type that I should have accounted for but did not, I re-ran tcpdump with no filter and kicked off another 'apt-get update'. I -again- got exactly zero IPv6 traffic other than unanswered router solicitations and multicast group membership querier chatter.
I'm pretty damn sure that what you were seeing was misleading output from apt-get, rather than IPv6 troubles. Why? When you combine these facts:
* REJECTing all non-DNS IPv4 traffic caused apt-get to fail within ten seconds
* DROPping all non-DNS IPv4 traffic caused apt-get to fail after like two minutes.
* In both cases, no relevant IPv6 traffic was generated.
the conclusion seems pretty clear.
But, did I miss something? If so, please do let me know.
[0] I can't tell you why the last line in the 'apt-get update' output is only IPv6 hosts. But everywhere there were IPv6 hosts, the reported error was "Network is unreachable" and for IPv4 the error was "Connection refused".
This part is exactly the problem I was talking about:
root@ubuntu-server:~# apt-get update
...
Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
<snip>
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
Well... in this case the output does show the failure to connect to 91.189.92.23, but that looks like a different kind of message to the "W:" lines, so maybe it doesn't show up on all setups or didn't make it into the logs on disk, or got buried under other output.
If you look at just the W: lines, it mentions a v6 address but the machine doesn't have v6 and the actual problem is the Connection Refused to the v4 address. The output is understandably misleading but ultimately the problem here has nothing to do with v6.
you repeat several times that IAB was too ivory tower and refused to address the critical issues of the day, but don't really go into much detail. I wrote an early implementation of v6, before ratification (and even won the UNH interop prize!). and I struggle to understand exactly what blame you are placing at their feet. just that maybe they took the e2e principle too seriously and should have backed the awful bodge that was NAT?
CLNP had existing implementations and was fundamentally sound. On its technical merits, RFC1347 TCP and UDP with Bigger Addresses (TUBA) wins hands-down. But it took too long for the ISO to agree to a hand-off (the IETF wanted to be able to _fork_ it, which seems nuts to me) and the IAB required ownership.
But aside from that, I actually do think we could have baked address extensions into the existing packet format's option fields and had a gradual upgrade that relied on that awful bodge that was (and is) NAT. And had a successful transition wherein it died a well-deserved death by now. :-)
I didn't have too much visibility into the CLNP world, although we did have a test network where I worked. My personal issue was that I just couldn't read the massively overwrought ISO specs. My admittedly biased viewpoint was that there wasn't anything really wrong with IPv6, but the providers were quite happy with the way things were and actually kind of liked the internet-as-television model that we ended up with.
I do think that the IETF didn't realize they were losing their agency, so it's very likely that TUBA would have made the difference - not for any technical reason, but because it would have come a few years earlier, when people were still listening.
I only read up on CLNP out of a fascination with counterfactuals. I will say there is a fair bit in IS-IS and ES-IS that's directly relevant to the original article's points on the circuits-to-bus-to-circuits physical evolution. There was no blanket assumption that the underlying layer look like Ethernet. The subnet equivalent was at a higher level, and the assumption was that there would be an actual network of links to manage.
The fact that IS-IS survived as a relevant IP routing protocol says a lot on its own.
It is hard to cover decades of politics in one post on here, but rather than the IAB being in an ivory tower, at least for the first 15 years I think it was ruled by inertia in a changing world, and suffering a bit from The Mythical Man-Month's second-system syndrome.
In the beginning it was an experiment and should have been ambitious, the IETF had just moved to CIDR which bought almost a decade of time, and they should have aimed high.
It is just when you significantly change a system, you need to show users how to accomplish the work they are doing with the old system, even if how they do that changes. If you can't communicate a way to replace their old needs, or how that system is fitting new needs that you could never have predicted, you need to be flexible and demonstrate that ability.
If you look at the National Telecommunications and Information Administration comments [Docket No. 160810714-6714-01],
you will see that the address space argument is the only real one they make. It isn't a coincidence that rfc7599 came about ~20 years later, when 160810714-6714-01 and federal requirements for IPv6 were being discussed.
If you look at the #nanog discussions between RFC 1883 (IPv6) being proposed in late 1995 and IPv4 exhaustion in early 2011, it wasn't just the IAB that was having philosophical discussions around this.
Both RFC 3484 and RFC 6724 suffered from the lack of executive sponsorship called out in the above public comments. And the following from RFC 6724's intro is often ignored in favor of pure compliance:
> They do not override choices made by applications or upper-layer protocols, nor do they preclude the development of more advanced mechanisms for address selection.
There are many ways that could have played out different, but I noticed Avery Pennarun's last update to that post pretty much says the same in different words.
> IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn’t make all the same mistakes. It would have fewer hacks. And we’d upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days
I have come to think that having both SLAAC and DHCPv6 was a big flaw in IPv6. SLAAC is awesome, but having two config mechanisms is confusing. It doesn't help that Android refuses to support DHCPv6.
I think SLAAC came from a world where computers were expensive, DHCP servers were separate boxes, and they wanted to eliminate them. But we are in a world where computers are cheap and every router can run DHCP.
We could have had easy config with DHCPv6 giving out MAC-based addresses by default. Autoconfig would still work on link-local.
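For reference, the MAC-based addressing referred to here is the modified EUI-64 scheme that classic SLAAC used to derive an interface ID from the hardware address. A small sketch (the function name is illustrative):

```python
def mac_to_eui64_iid(mac: str) -> str:
    """Derive a modified EUI-64 interface ID from a MAC address:
    flip the universal/local bit, insert ff:fe in the middle."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    assert len(b) == 6, "expected a 48-bit MAC"
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    # Render as four 16-bit groups, as in an IPv6 address.
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# e.g. MAC 00:1a:2b:3c:4d:5e -> interface ID 21a:2bff:fe3c:4d5e,
# which SLAAC appends to the advertised /64 prefix.
print(mac_to_eui64_iid("00:1a:2b:3c:4d:5e"))
```

This derivation is also why MAC-derived addresses leak hardware identity, which is what the privacy-address RFCs mentioned later in the thread address.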
I remember when IPv6 seemed like the inevitable next step. The fact that it fizzled suggests the problem it was trying to solve just doesn't matter? We somehow found enough IPv4 addresses to keep the whole thing working just fine (from a practical end-user perspective), which suggests we never truly needed IPv6? Is that the wrong conclusion?
Sniffnoy | 18 hours ago
rnhmjoj | 17 hours ago
https://news.ycombinator.com/item?id=14986324 (2017)
https://news.ycombinator.com/item?id=20167686 (2019)
https://news.ycombinator.com/item?id=25568766 (2020)
https://news.ycombinator.com/item?id=37116487 (2023)
tomhow | 11 hours ago
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=37116487 - Aug 2023 (306 comments)
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=25568766 - Dec 2020 (131 comments)
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=20167686 - June 2019 (238 comments)
The world in which IPv6 was a good design - https://news.ycombinator.com/item?id=14986324 - Aug 2017 (191 comments)
p4bl0 | 17 hours ago
There's one point I don't really get and I would be glad if someone could clarify it for me: the author says that even over wifi, the CSMA/CD protocol is not used anymore. Then how does it actually work?
Discussing this, the author explains:
> If you have two wifi stations connected to the same access point, they don't talk to each other directly, even when they can hear each other just fine.
So, each station still has to decide at some point whether what it's hearing is for them or not, as it could be another station talking to the AP, or the AP talking to another station. How is that done if not using CSMA/CD (or something very similar at least)?
rnhmjoj | 16 hours ago
AFAIK, WiFi has always been doing CSMA/CA and starting with the 802.11ax standard also OFDMA. See https://en.wikipedia.org/wiki/Hidden_node_problem#Background
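The collision-avoidance idea from the linked article is simple enough to sketch. This toy model (function and parameter names are made up for illustration, not any real driver API) shows the random, exponentially widening backoff a CSMA/CA station applies before transmitting:

```python
import random

def csma_ca_backoff(channel_busy, max_retries=5, cw_min=16, cw_max=1024):
    """Toy CSMA/CA sender: sense the channel, and on each failed
    attempt double the contention window and back off a random
    number of slots. `channel_busy` stands in for carrier sensing."""
    cw = cw_min
    for attempt in range(max_retries):
        slots = random.randrange(cw)  # random backoff within the window
        # (a real station counts down `slots` idle time slots here)
        if not channel_busy():
            return attempt            # channel idle: transmit now
        cw = min(cw * 2, cw_max)      # widen the window to avoid repeat collisions
    raise TimeoutError("gave up after max retries")
```

On real hardware this runs in the NIC; the RTS/CTS exchange discussed elsewhere in the thread sits on top of it.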
p4bl0 | 15 hours ago
Thanks for your link, it helped clarify this for me!
okanat | 14 hours ago
When you have switches that link two nodes together for only the duration of a one-way transmission, you don't need CSMA/CD. We literally have no use for it: we will never have two computers transmitting onto the same Ethernet wire anymore.
WiFi is different, of course. However, as the author wrote, your WiFi devices always go through the access point, where they use 802.11 RTS/CTS messages to request and receive permission to send packets. All nodes can see the CTS being broadcast, so they know that somebody is sending something. So even CSMA/CA is getting less useful.
p4bl0 | 14 hours ago
Yes, I'm only talking about wifi networks. I get that CSMA/CD itself is getting less useful, but that's because something else is doing its job, not because what it did is useless (that's why I wrote "or something similar" when I asked). Wifi is still, necessarily, a common bus where everyone talks.
benjx88 | 9 hours ago
CSMA/CD is Collision Detection; CSMA/CA is Collision Avoidance. (FYI, the article is from 2017!)
For non-WiFi, we don't use CD because everything is full duplex and every conversation has its own lane, down to the port level on the switches. There will never be a collision, so there's no need for it; the algorithm might still be there, but it goes unused.
For WiFi, CD can never work: "detecting" a collision mid-transmission is pointless on radio. We need to "avoid" instead, because it's a shared lane or medium, so CA is a necessity. Now, in 2026, we actually don't need or use it as much, since WiFi/802.11 functions more like a switch: with OFDMA and RF signal steering at the PHY (the actual radio-frequency side), signals from other nearby devices are cancelled out and we "create" bidirectional lanes, functioning much like switches do.
The article is good and represents an (opinionated) view of how the IETF operates and what happens inside. We actually need an IETF equivalent for AI. It's actually good, and a meritocracy, even though of late the big companies try to corrupt it or get their way; academia is still the driver and steers it, and all votes count when working groups self-organize. (My last IETF was 2018, so I'm not sure how it is now in the 2020s.)
noselasd | 14 hours ago
Wifi is in any case not considered a bus network, rather a star topology network.
tremon | 51 minutes ago
I think the most accurate classification is that wifi emulates a star topology at OSI layer 2 on top of a layer 1 bus topology.
NooneAtAll3 | 17 hours ago
So all the fairy tales about IP being invented for nuclear war were a lie? The moment the military started moving around, IP became useless?
znkr | 17 hours ago
PunchyHamster | 17 hours ago
IP plus some dynamic routing handles the situation of "the connection site got nuked and we need to route around it"; it's just not in the protocol itself, it's an additional layer on top of it.
themanualstates | 15 hours ago
Wi-Fi and ethernet also have different IPs. And what if you also add Wi-Fi peer-to-peer (AirDrop-ish), or Wi-Fi Tunneled Direct Link Setup (literally Chromecast)?
If a vendor implements simultaneous dual band (DBDC) Wi-Fi, a device can connect to both 2.4 GHz and 5 GHz at the same time, each with its own MAC and IP, because you're connecting to the same network on a different band. Or route packets from a 'wan' Wi-Fi to a 'lan' Wi-Fi (e.g. share internet from an infrastructure (BSS) Wi-Fi A to a new ad-hoc (IBSS) Wi-Fi network B, with your smartphone as the gateway, on Android).
There's also the IEEE 802.11 standard for wireless access in vehicular environments (WAVE), and EV chargers doing IP over the CCS protocol, etc. If all cars need to be 'connected' and 'have a unique address', NAT/CGNAT isn't cutting it either.
There's also IoT. Thread is IPv6 because it's the alternative to routing whatever between WAN / LAN / Zigbee / Z-Wave / etc. with a specific gateway at a remote point in the mesh network.
And how about the new DHCP / DNS specs for ipv6, you can now share encrypted DNS servers, DHCP client-ID, unique OUID, etc etc.
It's an infuriating post really. As if IP was only designed for a small scale VPN / overlay network service such as Tailscale.
Sesse__ | 14 hours ago
Mobile IP actually wanted to do this, it just never took off (not the least because both endpoints need to understand it to get route optimization). I think some Windows versions actually had partial Mobile IPv6 support.
wpollock | 16 hours ago
For smaller internets, protocols such as RIP (limited to 16 hops) broadcast routing information from each still-working router to the other routers. Each router built a picture of the internet (simplifying a bit here: RIP and similar protocols used "distance vector" routing, but other, more advanced routing protocols did have each router build a full picture of the internet). So when a packet arrived at a router, that router could forward the packet towards the destination. Such protocols are "interior" routing protocols, used within an ISP's network.
The Internet is too big for such automatic routing and uses an "exterior" routing protocol called BGP. This protocol routes packets from one ISP to the next, using route and connectivity information input by humans. (Again I'm simplifying a bit.)
Wifi uses entirely different protocols to route packets between cells.
Fun fact: wifi is not an acronym for anything, the inventors simply liked how it sounded.
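The distance-vector idea described above is easy to sketch. This toy (names are illustrative, not any real RIP implementation) has every router repeatedly merge its neighbors' advertised hop counts until the estimates converge:

```python
INF = float("inf")

def distance_vector(links, nodes, rounds=16):
    """Minimal RIP-style distance-vector routing: each router merges
    neighbors' advertised distances, adding 1 hop per link.
    `links` is a set of undirected (a, b) pairs."""
    neigh = {n: set() for n in nodes}
    for a, b in links:
        neigh[a].add(b)
        neigh[b].add(a)
    # dist[r][d] = router r's current estimate of hops to destination d
    dist = {r: {d: (0 if r == d else INF) for d in nodes} for r in nodes}
    for _ in range(rounds):  # RIP famously caps paths at ~16 hops
        for r in nodes:
            for n in neigh[r]:
                for d in nodes:
                    dist[r][d] = min(dist[r][d], dist[n][d] + 1)
    return dist

# A linear topology A-B-C-D: A reaches D in 3 hops.
routes = distance_vector({("A", "B"), ("B", "C"), ("C", "D")}, "ABCD")
print(routes["A"]["D"])  # 3
```

Real RIP exchanges these vectors over UDP on a timer and handles link failures; this just shows the convergence idea.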
jstimpfle | 13 hours ago
Most certainly it's a reference to "Sci-Fi" or "Hi-Fi".
CaninoDev | 11 hours ago
wlonkly | 2 hours ago
https://boingboing.net/2005/11/08/wifi-isnt-short-for.html
pocksuppet | an hour ago
rjsw | 13 hours ago
holowoodman | 9 hours ago
PunchyHamster | 17 hours ago
And how the fuck does anything in-between know where to route it? The article glows a blazing beacon of ignorance about everything in-between.
The whole entire problem with mobile IP is "how do we get intermediate devices to know where to go?" We're back to
> The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical.
Which the author hinted at, then forgot. We can't have globally routable, unique, random-esque IDs precisely because addresses have to be hierarchical. Keeping the connection flow ID at L4 instead of L3+L4 changes very little. Yeah, you can technically roam the client, except how the fuck would the server know where to send the packet back when the L3 address changes? It would have to get a client packet with the updated L3 address, and until then all packets would go into the void.
But hey, at least it's some progress? Nope: nothing at the protocol layer can be trusted before authentication, it would make DoS attacks far easier (just flood the host with a bunch of random UUIDs), and you would still end up doing it the QUIC way, re-implementing all of that stuff behind encryption.
tuetuopay | 14 hours ago
Because only the IP address changed, classic routing still works. Their point is about not identifying a session with something non-constant (the client's IP), but with a session token instead.
Instead of identifying the "TCP" socket with (src ip, src port, dst ip, dst port), they use (src uuid, dst uuid) which allows flows to keep working when you change IP addresses. Just like you can change networks and still have your browser still logged in to most websites.
The packets carrying those UUIDs are still regular old IP packets (UDP in the case of QUIC). Only the server needs to track anything, and it only has to change the dst IP of outgoing packets.
As for flooding and DDoS, that’s what handshakes are for, and QUIC already does it (disclaimer: never dug deep in how QUIC works so I can’t explain the mechanism here).
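A rough sketch of what's described here, with entirely hypothetical names: the server keys sessions on a connection ID carried in each packet rather than on the (ip, port) 4-tuple, so a client roaming to a new address just updates the session's return address:

```python
# Hypothetical server-side bookkeeping: sessions keyed by a connection
# ID carried in the packet, not by the source (ip, port).
sessions = {}  # conn_id -> {"addr": (ip, port), "state": [...]}

def handle_packet(conn_id, src_addr, payload):
    s = sessions.setdefault(conn_id, {"addr": src_addr, "state": []})
    if s["addr"] != src_addr:
        # Client roamed to a new network: same session, new return address.
        s["addr"] = src_addr
    s["state"].append(payload)
    return s["addr"]  # where replies for this session now go

assert handle_packet("c1", ("192.0.2.7", 4433), b"hello") == ("192.0.2.7", 4433)
# Same connection ID arriving from a new address: the session survives.
assert handle_packet("c1", ("203.0.113.9", 5555), b"more") == ("203.0.113.9", 5555)
assert len(sessions) == 1
```

Real QUIC additionally authenticates the connection ID cryptographically (address validation / path challenges), which is what blunts the spoofing concern raised upthread.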
Borealid | 14 hours ago
This is not, technically, true. We could have globally-routable, unique, random-esque IDs if every routing device in the network had the capacity to store and switch on a full table of those IDs.
I'm not saying this is feasible, mind you, just that it's not impossible.
bluGill | 13 hours ago
Borealid | 12 hours ago
jcgl | 11 hours ago
ikiris | 5 hours ago
Outside of ignoring the laws of physics, this isn’t very useful of speculation.
Dagger2 | an hour ago
avianlyric | 9 hours ago
As for L3 packets going into the void: yeah, they're gonna get lost, can't be helped. But the server also isn't going to get any L4 acks for those packets, so when a new L3 connection is created and the L4 session recovered, the lost packets just get replayed over the new L3 connection.
globular-toast | 16 hours ago
One of the problems we have is when we're born we don't question anything. It just is the way it is. This, of course, lets us do things in the world much more quickly than if we had to learn everything from basic principles, but it's a disadvantage too. It means we get stuck in these local optima and can't get out. Each successive generation only finally learns enough to change anything fundamental once they're already too old and set in their ways doing the standard thing.
How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
sidewndr46 | 12 hours ago
I don't know about you personally, but every grade-school, high-school, and college-level instructor I ever had would probably vehemently disagree with this statement about me. I remember at least one 70-year-old college instructor becoming visibly irritated that I would ask what research supported the assertions he made.
throw0101a | 10 hours ago
And doing so would improve nothing, and be no different than the IPv6 rollout. You'd have to ship new code to every 'network element' to support an "IPv4+" protocol. Just like with IPv6.
You'd have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new, longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (A lot of legacy code did not have room in its data structures for multiple reply types: sure, you'd get the "A" record, but unless you updated the code to get the "A+" record (for "IPv4+" addresses) you could never get the longer address. Just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6. In any 'address extension' plan the legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the "IPv4+" and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Deploying the new "IPv4+" code would take time, and partial deployment of IPv4+ is no different than partial deployment of IPv6: you have islands of it and have to fall back to the 'legacy' plain-IPv4 protocol when the new protocol fails to connect:
* https://en.wikipedia.org/wiki/Happy_Eyeballs
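The Happy Eyeballs fallback linked above can be sketched roughly like this: a simplified, sequential version where IPv6 candidates go first but get only a short head start (real RFC 8305 implementations race both address families concurrently):

```python
import socket

def happy_eyeballs_connect(host, port, v6_head_start=0.25):
    """Hedged sketch of RFC 8305-style fallback: prefer IPv6, but
    don't hang on it; if it can't connect quickly, fall back to IPv4."""
    addrs = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Try IPv6 candidates first, then IPv4.
    addrs.sort(key=lambda a: 0 if a[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, stype, proto, _, sockaddr in addrs:
        s = socket.socket(family, stype, proto)
        # Give IPv6 only a short window before moving on to IPv4.
        s.settimeout(v6_head_start if family == socket.AF_INET6 else 5.0)
        try:
            s.connect(sockaddr)
            return s  # first family to answer wins
        except OSError as e:
            last_err = e
            s.close()
    raise last_err or OSError("no addresses for host")
```

The point for the thread: the fallback machinery exists precisely because partial deployment is the permanent state, and an "IPv4+" would have needed the exact same machinery.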
teddyh | 6 hours ago
That explains it. Like I wrote two years ago¹:
The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem. What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC. But Tailscale has every incentive to prevent such things to be widely and compatibly implemented, because it would destroy their business. Their whole business depends on the problem persisting.
1. <https://news.ycombinator.com/item?id=38570370>
globular-toast | 6 hours ago
ekr____ | 4 hours ago
I understand the appeal of this vision, but I think history has shown that it's not consistent with the realities of incremental deployment. One of the most important factors in successful deployment is the number of different independent actors who need to change in order to get some value; the lower this number the easier it is to get deployment. By very rough analogy to the effectiveness of medical treatments, we might call it the Number To Treat (NTT).
By comparison to the technologies which occupy the same ecological niches on the current Internet, all of the technologies you list have comparatively higher NTT values. First, they require changing the operating system[0], which has proven to be a major barrier. The vast majority of new protocols deployed in the past 20 years have been implementable at the application layer (compare TLS and QUIC to IPsec). The reason for this is obviously that the application can unilaterally implement and get value right away without waiting for the OS.
IPv6 requires you not only to update your OS but basically everyone else on the Internet to upgrade to IPv6. By contrast, you can just throw a NAT on your network and presto, you have new IP addresses. It's not perfect, but it's fast and easy. Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Even if we stipulate that the specific technologies you mention would by better than the alternatives if we had them -- which I don't -- being incrementally deployable is a huge part of good design.
[0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
NotPractical | 3 hours ago
dmdeller | 3 hours ago
It was somewhat unexpected to find section headings such as "Is IPv6 a failure?" in the product support documentation, but I thought it was interesting and informative nonetheless.
culi | 3 hours ago
There are plenty of anarchists and disaster aid groups interested in building a more decentralized alternative to the internet. Meshtastic, AnoNet, Reticulum, MeshCore, etc are all evidence of that
Then there's also stuff like Dave Ackley's robust-first computing that's looking towards a completely different paradigm for computing in general that focuses on robustness.
https://www.cs.unm.edu/~ackley/be-201301131528.pdf
themanualstates | 16 hours ago
Regardless, IPv6 was about having more IP addresses, because of IPv4 exhaustion and NAT, right?
My Xbox tells me my network sucks because it doesn't have ipv6, but this is a very North-American perspective regardless.
hdgvhicv | 15 hours ago
bombcar | 11 hours ago
pocksuppet | 5 hours ago
dijit | 14 hours ago
Pretty sure it's complaining about a lack of UPnP. Which, yes, would not be an issue if we had IPv6... but ironically, consoles have typically been slow to adopt IPv6 support themselves, so I'm curious whether the Xbox even supports it.
elcritch | 12 hours ago
Steam having issues makes sense given it's been around for ages. The Meta Quest is all new OS and code, yet they managed to bork IPv6. Super annoying.
jcgl | 11 hours ago
badgersnake | 3 hours ago
Xbox Live has had it for years, because IPv6 means no NAT and lower latency. It's been there since at least the 360.
jcgl | 11 hours ago
Nit: per RFC8064[0], most modern, non-server devices do/should configure their addresses with "semantically opaque interface identifiers"[1] rather than using their MAC address/EUI64. That stable address gets used for inbound traffic, and then outbound traffic can use temporary/privacy addresses that are randomized and change over time.
Statelessness is accomplished simply by virtue of devices self-assigning addresses using SLAAC, rather than some centralized configuration thing like DHCPv6.
[0] https://datatracker.ietf.org/doc/rfc8064/ [1] https://datatracker.ietf.org/doc/rfc7217/
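The RFC 7217 idea boils down to a keyed hash: the interface ID is a PRF of the prefix, the interface, and a secret, so it's stable on a given network but different on every network and reveals nothing about the MAC. A rough sketch (the function and parameter names mirror the RFC's inputs but this is not its exact PRF):

```python
import hashlib

def stable_opaque_iid(prefix: str, net_iface: str, secret_key: bytes,
                      dad_counter: int = 0) -> bytes:
    """Sketch of an RFC 7217-style "semantically opaque" interface ID:
    a hash of (prefix, interface, counter, secret), truncated to 64 bits."""
    h = hashlib.sha256()
    h.update(prefix.encode())
    h.update(net_iface.encode())
    h.update(dad_counter.to_bytes(4, "big"))  # bumped on DAD collisions
    h.update(secret_key)
    return h.digest()[:8]  # 64-bit interface identifier

key = b"\x01" * 16  # per-host secret, generated once
iid_home = stable_opaque_iid("2001:db8:1::/64", "eth0", key)
iid_cafe = stable_opaque_iid("2001:db8:2::/64", "eth0", key)
assert iid_home != iid_cafe  # different network, different address...
assert iid_home == stable_opaque_iid("2001:db8:1::/64", "eth0", key)  # ...but stable per network
```

The randomized temporary addresses of RFC 8981 then layer on top of this stable address for outbound traffic.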
tschaeferm | 14 hours ago
xacky | 12 hours ago
martinky24 | 12 hours ago
mort96 | 12 hours ago
Otherwise, the networking history part of this post is amazing. I haven't gotten to the IPv6 part yet.
jcgl | 11 hours ago
For instance, IPv6's NDP is built on actual IPv6 packets (ICMPv6), rather than some spoofed IP-lookalike thing. No layering violation, and, thanks to multicasting, no need to dump a bunch of broadcast traffic on the layer 2 network.
holowoodman | 9 hours ago
Only if the L2 network actually supports L2 multicast. Ethernet doesn't, unless your switches are intelligent enough; with cheap Ethernet switches, multicast will be simulated by broadcast.
And actually, you can never avoid a layering violation. The only thing NDP avoids is filling in the source/destination IP portions with placeholders. In NDP, you fill the destination with some multicast IPv6 address, but that is window dressing: you still need to know that this L3 multicast IPv6 address corresponds to an L2 multicast MAC address (or just do L2 broadcast). The NDP source you fill with an L3 IPv6 address that is directly derived from your L2 MAC address. And you still get back a MAC address for each IPv6 address and have to keep both in a table. So there are still tons of layering violations, where the L2 addresses either have direct 1:1 correspondences to L3 addresses, or you have to keep L2/L3 translation tables, plus L3 protocols where the L3 part needs to know which L2 protocol it is running on, otherwise the table couldn't be filled.
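The 1:1 correspondence being described is mechanical and easy to show: a unicast IPv6 address determines both the solicited-node multicast group a neighbor solicitation goes to (ff02::1:ff + the low 24 bits) and the Ethernet multicast MAC that group lands on (33:33 + the low 32 bits of the group address). A small sketch:

```python
import ipaddress

def solicited_node_and_mac(addr: str):
    """Given a unicast IPv6 address, compute its solicited-node
    multicast address and the Ethernet multicast MAC it maps to."""
    a = ipaddress.IPv6Address(addr).packed
    # ff02::1:ff00:0/104 plus the last 24 bits of the unicast address.
    group = ipaddress.IPv6Address(
        b"\xff\x02" + b"\x00" * 9 + b"\x01\xff" + a[-3:])
    # Ethernet: 33:33 followed by the last 32 bits of the group address.
    mac = "33:33:" + ":".join(f"{b:02x}" for b in group.packed[-4:])
    return str(group), mac

print(solicited_node_and_mac("2001:db8::21a:2bff:fe3c:4d5e"))
# ('ff02::1:ff3c:4d5e', '33:33:ff:3c:4d:5e')
```

Whether this coupling counts as a "layering violation" or just necessary glue is exactly the disagreement in this subthread.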
jcgl | 8 hours ago
True, but outside bottom-of-the-barrel switches, any switch that's not super old should support multicast, no?
Regarding the rest of your comment, I really don't see how all those things count as layering violations. Yes, there is tight coupling (well, more like direct correspondence) between L2 and L3 addresses. However, these multicast addresses are actual addresses furnished by IPv6; nodes answer on them. The fact that there is semantic correspondence between L2 and L3 is basically an implementation detail. Whereas ARP even needs its own EtherType!
And, yes, nodes need to keep state. But why is that relevant to whether or not this is a layering violation? When two layers are separate, they need to be combined somewhere ("gluing the layers together"). The fact that the glue is stateful seems irrelevant. But again, I'm just a sysadmin.
pocksuppet | 5 hours ago
It's pretty silly anyway. NDP is more of a layering violation than ARP, because now IPv6 has a stupid circular dependency on itself. Mapping L3 addresses to L2 sits below layer 3; it is not part of layer 3, it is part of the sub-layer that adapts layer 2 to layer 3. DHCP should be part of that sub-layer too.
Did you know that for every kind of network that IP can run on top of, there's a whole separate standard specifying how to adapt one to the other? RFC 894 specifies how to run IP over Ethernet networks. RFC 2225 specifies how to run IP over ATM networks.
holowoodman | 4 hours ago
The only thing one should really really really avoid is the TCP mistake of not just having some minimally necessary glue, but that tight coupling of TCP connections to IP addresses in the layer below.
pocksuppet | an hour ago
mort96 | 7 hours ago
NDP may very well be a nicer protocol than ARP, but following the logic of the article, the neighbor solicitation part of NDP would be just as unnecessary as ARP.
AlienRobot | 11 hours ago
Also funny that it was made in the 1990s and only recently reached 50% adoption.
orangeboats | 8 hours ago
culi | 3 hours ago
Dagger2 | 9 hours ago
I don't think v6 is the absolute pinnacle of protocol design, but whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6. If people consistently can't do better than v6, then I'd say v6 is probably pretty decent.
api | 9 hours ago
All the complaints I hear are pretty much all ignorance, except one: long addresses. That is a genuine inconvenience, and the encoding is kind of crap. Fixing the human-readable address encoding would help.
vbezhenar | 9 hours ago
Yes, it denies simple P2P connectivity. World doesn't need it. Consumers are behind firewalls either way. We need a way for consumers to connect to a server. That's all.
estimator7292 | 9 hours ago
Unfortunately, the internet is used for a lot more than using one of the six gigantic centralized websites.
moffkalast | 8 hours ago
Speaking of that, why don't we just keep ipv4 for ourselves and let them eat ipv6?
betaby | 8 hours ago
Lt_Riza_Hawkeye | 8 hours ago
mort96 | 7 hours ago
With IPv4 + NAT, you have a public IP address. That public address goes to your router. Your router can forward any port to any machine on your LAN. I used to run Minecraft servers from a residential connection on IPv4, it was fine. Never had to call the ISP.
voxic11 | 7 hours ago
Symbiote | 7 hours ago
In many countries they don't have enough, so you have CGNAT.
mort96 | 6 hours ago
Still, I do think that the solution of, "one IPv4 address per household + NAT" is a perfectly good system. I view the IPv6 mentality of giving each computer in the world a globally unique IPv6 address as a non-goal.
wyufro | 6 hours ago
pocksuppet | 5 hours ago
mburns | 4 hours ago
tremon | 2 hours ago
happymellon | 4 hours ago
If you are giving out public IPs then you aren't really NAT'ing.
mort96 | 4 hours ago
Spooky23 | 4 hours ago
I also deployed it as a pilot on an internal network. Other than getting direct IPv6 connectivity to some services, which sometimes gave us better performance, it conferred no advantage to us.
IPv6 is great for phones where you don't expect any inbound traffic. Even then, every US carrier is using Carrier NAT to route and proxy traffic for their own purposes.
unethical_ban | 4 hours ago
Fnoord | 7 hours ago
RIMR | 7 hours ago
Worth pointing out that this article was written by the now-CEO of Tailscale. I don't know if "The world doesn't need P2P connectivity" is a compelling take.
fc417fc802 | 2 hours ago
I do wish ISPs would refrain from intentionally breaking things though. It ought to be illegal for them to block specific ports or filter specific sorts of traffic absent a pressing and active security concern.
jrm4 | 7 hours ago
Roughly, it's my belief that an IPv6 world makes it easier for centralizing forces and harder for local P2P or P2P-esque ones; e.g., an IPv6 world would likely have made it easier to do bad things like charging per individual internet user in a home.
The decentralization of "routing power" is more a good thing than a bad one; what you pay for in complexity you get back in "power to the people."
MrDOS | 6 hours ago
This idea comes up in every HN conversation about IPv6, and so I suppose this time it's my turn to point out RFC 8981[0]. tl;dr: typically, machines which receive IPv6 address assignment via SLAAC (functional equivalent of DHCP) periodically cycle their addresses. Supposed to offer pretty effective protection against host-counting.
0: https://datatracker.ietf.org/doc/html/rfc8981
throw0101a | 6 hours ago
I don't want our communications infrastructures to be just for consumers.
fortran77 | 5 hours ago
Yes! They need an alternate encoding form that distills to the same addresses.
My machine's link-local IPv6 address is "fe90::6329:c59:ad67:4b52%8".
If I try to paste that into the address bar in Edge or Chrome (with the https://), it does an internet search on that string! No way around it.
I have to do workarounds like "http://fe90::6329:c59:ad67:4b52%8.ipv6-literal.net:8081/".
All to test the IPv6 interface on a web server I'm running on my local machine.
pocksuppet | 5 hours ago
Dagger2 | 29 minutes ago
If it's on the same machine then just use http://[::1]:8081/. Dropping the interface specifier (http://[fe90::6329:c59:ad67:4b52]:8081/) works if the OS picks a default, which some will. curl seems to be happy to work. Or just use one of the non-link-local addresses on the machine, if you have any.
The other frustrating part of this is that it makes it impossible to come up with your own address syntax. An NSS plugin on Linux could implement a custom address format, and it's kind of obvious that the intention behind the URL syntax is that "[" and "]" enter and exit a raw address mode where other URI metacharacters have no special meaning. In general you can't syntax validate the address anyway because you don't know what formats it could be in (including future formats or ones local to a specific machine), so the only sane thing to do is pass the contents verbatim to getaddrinfo() and see if you get an error.
But no, they wrote the spec to only allow a subset of v6 addresses and nothing else.
I very much didn't test it, but this patch might do the job on Firefox (provided there's no code in the UI doing extra validation on top):
pocksuppet | 5 hours ago
bombcar | 5 hours ago
perennialmind | 4 hours ago
It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html
api | 3 hours ago
The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.
Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.
It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.
That’s probably what made them feel they could push a more radical upgrade.
Unfortunately, they started this right as the massive tsunami of Internet commercialization hit. Since v6 was too new, everyone went with v4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, all of them steeped in IPv4 and rushing to ship on top of it. You also lost the small-town atmosphere of the early net, where admins were a club and could coordinate things.
Had V6 launched five years earlier V4 would probably be dead.
V6 usage will probably keep creeping up, but as it stands we will likely be dual stack forever. Once the installed user base and sunk cost is this high the design is fixed and can never be changed without a hard core heavy handed measure like a government mandate.
iknowstuff | 2 hours ago
pocksuppet | an hour ago
In DNS64, whenever your DNS resolver encounters an IPv4-only site, it translates it to an IPv6 address under a translator prefix, and returns that address to the client. The client connects to the translator server via that address, and the translator server opens an IPv4 connection to the website. Your side of the network is IPv6-only, not even running tunneled v4.
This only breaks things to about the same small extent that the introduction of NAT did.
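The synthesis step being described can be sketched in a few lines using the RFC 6052 well-known NAT64 prefix, 64:ff9b::/96: the IPv4 address is simply embedded in the low 32 bits (function name is illustrative):

```python
import ipaddress

def dns64_synthesize(v4: str, prefix: str = "64:ff9b::/96") -> str:
    """Sketch of DNS64 address synthesis: embed an IPv4 address in the
    low 32 bits of the translator prefix, so an IPv6-only client can
    reach a v4-only server via the NAT64 box that owns the prefix."""
    net = ipaddress.IPv6Network(prefix)
    v4int = int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(int(net.network_address) | v4int))

print(dns64_synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

The NAT64 translator recovers the original IPv4 destination by reading those low 32 bits back out.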
pocksuppet | an hour ago
If you add *any* address bits you've already broken protocol compatibility and you need to upgrade the entire world. While you're already upgrading the entire world, you should add so many address bits that we'll never need more, because it costs the same, and you may as well fix those other niggling problems as well, right?
zrail | 7 hours ago
Not just that. Almost every single thing people think up that's "better" is something that was considered and rejected by the IPv6 design process, almost always for well-considered reasons.
Dagger2 | 5 hours ago
dwattttt | 2 hours ago
notepad0x90 | 4 hours ago
The only reason it's around is sunk-cost fallacy and people stuck in decades-old tech debt. A new protocol designed today would be different, much as Rust is different from Ada. SD-WAN wasn't a thing in 1998, and neither were today's chip costs or the demands of mobile customers. Supply/demand economics have changed the very requirements behind the protocol.
Even concepts like source and destination addressing should be rethought. The very idea of a network-layer protocol that doesn't incorporate 0-RTT encryption by default is ridiculous in 2026. Even protocols like ND, ARP, RA, DHCP and many more are insecure by default. Why is my device just trusting random claims that a neighbor has a specific address, without authentication? Why is it connecting to a network (any network! wired or wireless, why does it matter, this is a network-layer concern) without authenticating the network's security and identity authority? I despise the corporatized term "zero trust", but this is more or less what it means.
People don't talk about security, trust, identity and more, because IPv6 was designed to save networking-gear vendors money, and any new costly features had better come with revenue streams like SD-WAN hosting by those same companies. There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing.
It all comes down to how much money it costs the participants of the RFC committees. Given how dependent the world is on this tech, I'm hoping governments intervene. It's sad that this is the tech we're passing to future generations. We'll be setting up colonies on Mars and troubleshooting addressing and security issues like it's 2005.
unethical_ban | 4 hours ago
I don't know much about MPLS and only know IP routing, but that quote above sounds very hand-wavy. How do you route "identity-based addressing"?
guelo | 3 hours ago
tekne | 2 hours ago
Which public key you want to route to is above the network layer.
notepad0x90 | an hour ago
You wouldn't need TLS. This scheme I just thought up would actually decentralize/federate PKI a lot more. If you have a public address assigned, your ISP is the IP CA. I don't want to get into the details of my DNS-replacement idea, but similar to network operators being authorities over the addresses they're responsible for, whoever issued you a network name is also the identity authority over that name (so DNS registrars would be CAs). Ideally, every device would be named, and the people with logical control over an address would also be responsible for the name and all identity authentication and claims over those addresses and names. You wouldn't have freaking Google and browsers dictating which CA root to trust; instead, the network you're joining would do that (whether that's your DHCP server or your ISP is up for debate, but I prefer the former). Ideally, your public key hash is your address. Others would reach you by resolving your public key from your identity, and the traffic would be sent to your public key (or see my sibling comment for the concept of cryptographic identity). All names would of course be free, but what we call "DNS" today would survive as aliases to those proper names. So your device might be guelo.lan123.yourisp.country, but a registrar might sell you a guelo.com alias that points to the former name.
The implications of this scheme are wild, think about it!
Rogue trust providers will be a problem, but only within their domain. Right now, random CA roots can issue certificates for anything. With the scheme I proposed, your country can mess with its own traffic, as can your ISP, as can you over your LAN. You won't be able to spoof traffic for a different LAN or ISP using their name.
Solve all the problems at their foundations!
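The "your public key hash is your address" idea can be made concrete in a few lines of Python. This is a toy: the 128-bit truncation and the IPv6-style rendering are my own choices for readability, not part of any real scheme.

```python
import hashlib
import ipaddress

def pubkey_to_address(pubkey: bytes) -> str:
    """Derive an address by hashing a public key and rendering the first
    128 bits in IPv6 notation. Anyone holding the private key can prove
    ownership of the address it hashes to (e.g. by signing a nonce)."""
    digest = hashlib.sha256(pubkey).digest()[:16]
    return str(ipaddress.IPv6Address(digest))
```

The point of such schemes is that an address is no longer just a claim: a receiver can challenge the sender to sign a nonce with the matching private key before trusting traffic from that address.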
notepad0x90 | an hour ago
It is far from hand-waving. Right now we have numeric addressing, where routers look at bits and perform ASIC-friendly bitwise (and other) operations on that number to forward a lot of traffic really fast for cheap.
Identity and trust establishment won't be part of the regular data flow, but at network connection time, each end-device will discover the network authority it has connected to, and build trust that allows it to validate identities in that network, including address assignments, neighbor discovery, name resolution and verification, authorized traffic forwarders (routers) and more.
After the connection is established and the network is trusted, as part of the connection establishment, the network authority designates how addressing should be done. If Alice's iPhone wants to connect to Bob's server, it will encrypt the data and, as part of a very slim header, designate Bob's server's cryptographic identifier, the destination service identifier, and its own cryptographic identifier for the first packet. To reduce overhead, subsequent traffic can use a simple hash of the connection identifiers mentioned earlier.
When devices come online in the network, their cryptographic identifiers become known to the entire network, including intermediate routers. Routing protocols work with the identity authority of the network to build forwarding tables based on cryptographic identifiers and, for established sessions, session IDs.
"Cryptographic identifier" is also not a hand-wavy term. What it means must be dynamic, so as to avoid protocol updates like v4-to-v6 over addressing. V6 presumed that just having lots of bits was enough. An ideal protocol would allow the network itself to communicate the identifier type and byte size. For example, an FQDN, or something IPv4-address-like, could be used directly, or a public key hash using a hash algorithm of the network's choice. So long as the devices in the network support it, and the end device supports it, it should work fine.
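Reading the scheme above literally, here is a minimal sketch of the two mechanisms described: a network-negotiated identifier (algorithm and size chosen by the network) and a short session ID for subsequent packets. All names, sizes, and field layouts are invented for illustration.

```python
import hashlib

def crypto_identifier(pubkey: bytes, algo: str = "sha256", size: int = 16) -> bytes:
    """The network announces the hash algorithm and identifier length;
    endpoints derive their identifiers from their public keys accordingly."""
    return hashlib.new(algo, pubkey).digest()[:size]

def session_id(src_id: bytes, dst_id: bytes, service_id: bytes) -> bytes:
    """After the first packet carries the full identifiers, later packets
    carry only a short hash of them, keeping the header slim."""
    return hashlib.sha256(src_id + dst_id + service_id).digest()[:8]
```

A network that later prefers a different hash or a longer identifier only changes the parameters it announces; the packet format itself stays the same, which is the adaptability being argued for.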
Internet addressing can use a scheme like this, but it doesn't need to. IPv6 took the wrong approach with NAT: it got rid of it instead of formalizing it. We'll always need to translate addresses. But the internet is actually well-positioned for this, due to the prevalence of certificate authorities, though it would require rethinking foundational protocols like DNS, BGP, and PKI infrastructure.
But my original point wasn't this; it was that tech has come far, and our requirements today are different than they were 30 years ago. Even the OSI layered model is outdated, among other things.
This is just a proposal I thought of as I was typing; smarter people who can sit down and think through the problem can come up with better protocols. I only proposed it to demonstrate that the concept isn't hand-wavy or ridiculous.
IPv6 was relatively rushed to meet the address-shortage issue of IPv4 while at the same time solving lots of other problems. The next network-layer protocol (and we do need one) should have the goal of making networking as a whole adaptable to new and unforeseen requirements (that's why I suggested the network authority be the one to dictate the addressing scheme and, with it, be responsible for translating it if needed). We're being held back, not just in tech but as a species, because of this short-sighted protocol design! Exaggerated as that statement might sound, it is true.
I'll reserve further discussion on the topic for when it is required, but I hope this prevents more dismissive responses.
tremon | 2 hours ago
That's false. Firstly, RFC 1883 was published in 1995, which means work started some time before that, and the RFC process included operating-system vendors and RIR administrators. The primary author of RFC 1883 worked at Xerox PARC, and the primary author of RFC 1885 worked at DEC. Neither were networking-gear companies.
notepad0x90 | 2 hours ago
tremon | 2 hours ago
notepad0x90 | an hour ago
https://www.ietf.org/process/rfcs/
> Proposed Standard (PS). The first official stage. Many standards never progress beyond this level.
> Draft Standard. An intermediate stage that is no longer used for new standards.
> Internet Standard. The final stage, when the standard is shown to be interoperable and widely deployed.
m463 | 3 hours ago
I think they "shipped it" and washed their hands of it.
But I think there should have been more iterations, until we got something a little more IPv4+ and a little less IPv6.
tadfisher | an hour ago
Everything since has been round after round of RFCs trying to adapt IPv4 workarounds to the IPv6 world.
nyrikki | 7 hours ago
There were many of us who, even when it was still IPng (IP Next Generation) in the mid-1990s, tried to get it working and spent a significant amount of effort to do so, only to be hit with unrealistic ideological ideals that blocked our ability to deploy it, especially given the limitations of the security tools back in the day.
Remember, when IPng started, even large regional ISPs like XMission had finger servers, many people used telnet, and Slackware actually shipped with telnet enabled and no root password by default!!! I used both to `wall` a coworker who was late to work because he was playing tw2000.
Back then we had really bad application firewalls like AltaVista, PIX was just being invented, and the large surveillance-capitalism market simply didn't exist yet.
The IAB hampered deployment by choosing hills to die on without providing real alternatives, and didn't relent until IPv4 exhaustion became a problem, and they had lost their battle because everyone was forced into CGNAT etc...because of the IETF, not in spite of it.
The IAB and IETF were living in an MIT ITS mindset when the real world was making that model hazardous and impossible. End-to-end transparency may be "pretty" to some people, but it wasn't what customers needed. When they wrote the RFCs that made other services simply fail and time out if you enabled IPv6 locally but didn't have ISP support, they burned a lot of goodwill, and everyone just started ripping out the IPv6 stack and running IPv4 only.
IMHO, like almost all tech failures, it didn't fail on technical merits; it failed on ignorance of users' needs and a refusal to consider them, insisting that adopters just had to drink their particular flavor of Kool-Aid or stick with IPv4, and until forced, most people chose the latter.
[0] https://www.rfc-editor.org/rfc/rfc5902.txt
nyrikki | 6 hours ago
That behavior is due to the same politics mentioned above.
A few more pragmatic decisions, or at least empathetic guidance would have dramatically changed the acceptance of ipv6.
Dagger2 | 5 hours ago
You would only see a timeout to an AAAA record if the connection attempt to the A record already failed. Some software (looking at you, apt-get) will only print the last connection failure instead of all of them, so you don't see the failure to connect to the A record. I've seen people blame v6 for this even though they don't have v6 and it's 100% caused by their v4 breaking.
Run `getent ahosts example.com` to see the order your system sorts addresses into. `wget example.com` (wget 1.x only though) is also nice, because it prints the addresses and tries to connect to them in turn, printing every error.
I mean... adding v6 is the right thing to do either way, but "AAAA is higher priority than A" isn't the reason.
nyrikki | 4 hours ago
There is an expired 6man draft that explains some of the issues here.
https://www.ietf.org/archive/id/draft-buraglio-6man-rfc6724-...
To be clear, I go and clean out the temporary fixes for dual stack problems, but you want some more info so here it is.
The IPv6 AAAA timeout was shown to be the problem; adding `Acquire::ForceIPv4 "true";` fixed it on several hosts. There are no non-`fe80::` (link-local) addresses on the host. To be clear, I removed my temporary IPv4-only apt config, but there are a million places for this to be brittle, and you see people doing so with sysctl net.ipv6.conf.*, netplan, systemd-networkd, NetworkManager, etc., plus the individual clients. Note:
https://datatracker.ietf.org/doc/html/rfc6724#section-2.1
And note how "::/0" has higher precedence than "::ffff:0:0/96".
And the preceding text:
> If an implementation is not configurable or has not been configured, then it SHOULD operate according to the algorithms specified here in conjunction with the following default policy table:
One could argue that GUAs should just be ignored on hosts without a non-link-local IPv6 address... and in a perfect world they would be.
But as covered in the first link in this post, this is not as easy or clear as expected, and people tend to err toward following RFC 6724, which states just below the above reference:
> Another effect of the default policy table is to prefer communication using IPv6 addresses to communication using IPv4 addresses, if matching source addresses are available.
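The precedence behavior being quoted is easy to demonstrate. Here is a toy sketch of just the destination-precedence step of RFC 6724 (the real algorithm also compares scopes, labels, and available source addresses), showing why a global IPv6 destination outranks any IPv4 destination:

```python
import ipaddress

# Default policy table from RFC 6724, section 2.1: (prefix, precedence).
POLICY = [
    (ipaddress.ip_network("::1/128"), 50),
    (ipaddress.ip_network("::/0"), 40),
    (ipaddress.ip_network("::ffff:0:0/96"), 35),  # IPv4-mapped
    (ipaddress.ip_network("2002::/16"), 30),      # 6to4
    (ipaddress.ip_network("2001::/32"), 5),       # Teredo
    (ipaddress.ip_network("fc00::/7"), 3),        # ULA
    (ipaddress.ip_network("::/96"), 1),
    (ipaddress.ip_network("fec0::/10"), 1),       # deprecated site-local
]

def precedence(addr: str) -> int:
    """Longest-prefix match against the policy table. IPv4 addresses are
    treated as IPv4-mapped IPv6, so they fall under ::ffff:0:0/96."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        ip = ipaddress.ip_address("::ffff:" + addr)
    matches = [(net, prec) for net, prec in POLICY if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# The global IPv6 destination (::/0, precedence 40) sorts ahead of the
# IPv4 destination (::ffff:0:0/96, precedence 35).
dests = ["192.0.2.10", "2001:db8::10"]
dests.sort(key=precedence, reverse=True)
```

This is the "::/0 has higher precedence than ::ffff:0:0/96" rule in action: whenever an AAAA record exists, the v6 address is tried first, regardless of whether v6 actually works on that host.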
I am not an IPv6 hater...just giving observations that when you introduce a breaking change, and add additional friction, it dramatically reduces adoption.
Many companies I have been at basically implement just enough to meet Federal Government requirements, and often intentionally strip it out of the backend to avoid the brittleness it caused.
I am old enough to remember when I could just ask for an ASN and a portable class C, and how nice that was. In theory, IPv6 should have allowed for that in some form... I am just frustrated with how it has devolved into an intractable "wicked problem" when there was a path.
The fact that people don't acknowledge the pain for users, often due to situations beyond their control, is a symptom of that problem. Ubuntu should never have even requested an AAAA record on the above system, and yes, it only does so because of politics and RFC requirements.
simoncion | 3 hours ago
nyrikki | an hour ago
The problem isn't the happy path; the problem is when things fail, and that Linux in particular made it really hard to reliably disable. [0]
Once that hits someone's Vagrant or Ansible code, it tends to stick forever, because they don't see the value until they try to migrate, and then it causes a mess.
The last update on the original post link [1] explains this. The IPv4 host being down, not having a response, it being the third Tuesday while Aquarius is rising into whatever, etc. can invoke it. It causes pain, and it is complex and convoluted to disable when you aren't using it, so people are afraid to re-enable it.
[0] https://wiki.archlinux.org/title/IPv6#Disable_IPv6 [1] https://tailscale.com/blog/two-internets-both-flakey
simoncion | 44 minutes ago
Section 10.1 of that Arch Wiki page says that adding 'ipv6.disable=1' to the kernel command line disables IPv6 entirely, and 'ipv6.disable_ipv6=1' keeps IPv6 running but doesn't assign any addresses to any interfaces. If you don't like editing your bootloader config files, you can also use sysctl to do what it looks like 'ipv6.disable_ipv6=1' does, by setting the 'net.ipv6.conf.all.disable_ipv6' sysctl knob to '1'.
> You aren't running it during an external transitive failure...
I'll assume you meant "transient". Given that I've already demonstrated that the only relevant traffic that is generated is IPv4 traffic, let's see what happens when we cut off that traffic on the machine we were using earlier, restored to its state prior to the updates.
We start off with empty firewall rules:
We prep to permit DNS queries and ICMP, and to reject all other IPv4 traffic. Then we do an apt-get update, which fails in less than ten seconds. In this case, the IPv6 traffic I see is... an unanswered router solicitation, and the multicast querier chatter that I saw before. [0]

What happens when we change those REJECTs into DROPs and then re-run 'apt-get update'? Exactly the same thing, except it takes like two minutes to fail rather than ~ten seconds, and the error for IPv4 hosts is "Connection timed out" rather than "Connection refused". Other than the usual RS and multicast querier traffic, absolutely no IPv6 traffic is generated.

However, the output of 'apt-get' sure makes it seem like an IPv6 connection is what's hanging, because the last thing its "Connecting to..." line prints is the IPv6 address of the host it's trying to contact... despite the fact that it immediately got a "Network is unreachable" back from the IPv6 stack.
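The rule set described would look something like this (a sketch only; the original comment's actual rules were not shown, and details like chain choice are assumptions):

```shell
# Start from empty rule sets, as in the experiment.
iptables -F
ip6tables -F

# Permit outbound DNS queries and ICMP...
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT

# ...and reject everything else over IPv4; apt-get then fails within seconds.
iptables -A OUTPUT -j REJECT

# Variant: replace the REJECT (rule 4) with DROP, and apt-get instead
# waits for connection timeouts (~two minutes) before failing.
# iptables -R OUTPUT 4 -j DROP
```

Note that nothing here touches ip6tables beyond flushing it, which is the point: any IPv6 traffic that did appear would have to be generated spontaneously, not caused by these rules.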
To be certain that my tcpdump filter wasn't excluding IPv6 traffic of a type that I should have accounted for but did not, I re-ran tcpdump with no filter and kicked off another 'apt-get update'. I -again- got exactly zero IPv6 traffic other than unanswered router solicitations and multicast group membership querier chatter.
I'm pretty damn sure that what you were seeing was misleading output from apt-get, rather than IPv6 troubles. Why? When you combine these facts:
* REJECTing all non-DNS IPv4 traffic caused apt-get to fail within ten seconds
* DROPping all non-DNS IPv4 traffic caused apt-get to fail after like two minutes.
* In both cases, no relevant IPv6 traffic was generated.
the conclusion seems pretty clear.
But, did I miss something? If so, please do let me know.
[0] I can't tell you why the last line in the 'apt-get update' output is only IPv6 hosts. But everywhere there were IPv6 hosts, the reported error was "Network is unreachable" and for IPv4 the error was "Connection refused".
Dagger2 | 8 minutes ago
If you look at just the W: lines, it mentions a v6 address but the machine doesn't have v6 and the actual problem is the Connection Refused to the v4 address. The output is understandably misleading but ultimately the problem here has nothing to do with v6.
convolvatron | 5 hours ago
perennialmind | 3 hours ago
But aside from that, I actually do think we could have baked address extensions into the existing packet format's option fields and had a gradual upgrade that relied on that awful bodge that was (and is) NAT. And had a successful transition wherein it died a well-deserved death by now. :-)
convolvatron | an hour ago
I do think that the IETF didn't realize they were losing their agency, so it's very likely that TUBA would have made the difference: not for any technical reason, but because it would have come a few years earlier, when people were still listening.
perennialmind | an hour ago
The fact that IS-IS survived as a relevant IP routing protocol says a lot on its own.
nyrikki | 2 hours ago
In the beginning it was an experiment and should have been ambitious; the IETF had just moved to CIDR, which bought almost a decade of time, and they should have aimed high.
It is just when you significantly change a system, you need to show users how to accomplish the work they are doing with the old system, even if how they do that changes. If you can't communicate a way to replace their old needs, or how that system is fitting new needs that you could never have predicted, you need to be flexible and demonstrate that ability.
If you look at the National Telecommunications and Information Administration [Docket No. 160810714-6714-01] comments:
Microsoft: https://www.ntia.gov/sites/default/files/publications/micros...
ARIN: https://www.ntia.gov/sites/default/files/publications/arin_c...
You will see that the address-space argument is the only real one they make. It isn't a coincidence that RFC 7599 came about ~20 years later, when 160810714-6714-01 and federal requirements for IPv6 were being discussed.
If you look at the #nanog discussions between RFC 1883 (IPv6) being proposed (late 1995) and IPv4 exhaustion in early 2011, it wasn't just the IAB having philosophical discussions around this.
Both RFC 3484 and RFC 6724 suffered from the lack of executive sponsorship called out in the above public comments. And the following from RFC 6724's intro is often ignored in favor of pure compliance:
> They do not override choices made by applications or upper-layer protocols, nor do they preclude the development of more advanced mechanisms for address selection.
There are many ways that could have played out differently, but I notice Avery Pennarun's last update to that post pretty much says the same thing in different words.
https://tailscale.com/blog/two-internets-both-flakey
> IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn’t make all the same mistakes. It would have fewer hacks. And we’d upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days
ianburrell | 4 hours ago
I think SLAAC came from a world where computers were expensive, DHCP servers were separate machines, and they wanted to eliminate them. But we are in a world where computers are cheap and every router can run DHCP.
We could have had easy config with DHCPv6 giving out MAC-based addresses by default. Auto-config would still work on link-local.
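For reference, the MAC-based addressing SLAAC originally used (modified EUI-64, per RFC 4291) is a simple transformation, which is what made serverless autoconfig attractive in the first place. A sketch:

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> str:
    """Modified EUI-64: flip the universal/local bit of the MAC's first
    byte, insert ff:fe in the middle, and append the resulting 64-bit
    interface ID to the /64 prefix."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02  # flip the universal/local bit
    iid = b[:3] + bytearray([0xFF, 0xFE]) + b[3:]
    groups = [f"{iid[i]:02x}{iid[i + 1]:02x}" for i in range(0, 8, 2)]
    return str(ipaddress.ip_address(prefix + ":".join(groups)))
```

For example, `slaac_eui64("fe80::", "00:11:22:33:44:55")` yields `fe80::211:22ff:fe33:4455`. The MAC is recoverable from the address, which is why privacy extensions (RFC 4941) were later added on top.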
amelius | 3 hours ago
xyzelement | an hour ago