WolfIP: Lightweight TCP/IP stack with no dynamic memory allocations

144 points by 789c789c789c a day ago on hackernews | 35 comments

rpcope1 | a day ago

It would be interesting to know why you would choose this over something like Contiki's uIP or lwIP, which everything seems to use.

RealityVoid | a day ago

Not sure if they do for _this_ package, but the Wolf* people's model is usually selling certification packages, so you can put their code in products that need certification and offload liability. You also get access to the people who wrote it, whom you can pay for support. I kind of like them; I had a short project where I had to call on them to get WolfSSL working with an ATECC508 device, and their support was pretty good.

jpfr | a day ago

As the project is GPL’ed I guess they sell a commercial version. GPL is toxic for embedded commercial software. But it can be good marketing to sell the commercial version.

Edit: I meant commercial license

LoganDark | a day ago

You don't need a commercial version, many projects get away with selling just a commercial license to the same version. As long as they have the rights to relicense this works fine.

RealityVoid | a day ago

I think they might sell a commercial version as well. It makes sense with the GPL. But I can't really recall that well.

anthonj | 11 hours ago

In my company we used their stuff often. They have an optional commercial license for basically all their products. The price was very reasonable as well.

cpach | 10 hours ago

“GPL is toxic for embedded commercial software”

Why is that?

dietr1ch | 7 hours ago

He probably meant viral or tried to make a deadly twist on virality

bobmcnamara | 6 hours ago

Many bare metal or RTOS systems consist of a handful of statically linked programs (one or two bootloaders and the main application), and many companies would rather find a non-GPL library than open up the rest of the system's code. Sometimes a system also contains third-party proprietary code that can't be open-sourced.
passt (the network stack that you might be using if you're running qemu, or podman containers) also has no dynamic memory allocations. I always thought it was quite an interesting achievement. https://blog.vmsplice.net/2021/10/a-new-approach-to-usermode... https://passt.top/passt/about/#security

sedatk | 21 hours ago

It only implements IPv4, which explains to a degree why IPv6 isn't ubiquitous: it's costly to implement.

notepad0x90 | 20 hours ago

It's just not worth it. The only thing keeping it alive is people being overly zealous about it. If the cost to implement is measured as '1', the cost to administer it is more like '50'.

gnerd00 | 20 hours ago

my 15 year old Macbook does IPv6 and IPv4 effortlessly

notepad0x90 | an hour ago

that's great, but when you have a networking issue, you have to deal with two stacks when troubleshooting. It would be much less effort to use just IPv4.

You're not paying for IPv4 addresses, I'm sure, so did IPv6 solve anything for you? This is what I meant by zealots keeping it alive: you use IPv6 for the principle of it, but tech is supposed to solve problems, not facilitate ideologies.

toast0 | 20 hours ago

Eh. IPv6 is probably cheaper to run compared to running large scale CGNAT. It's well deployed in mobile and in areas without a lot of legacy IPv4 assignments. Most of the high traffic content networks support it, so if you're an eyeball network, you can shift costs away from CGNAT to IPv6. You still have to do both though.

Is it my favorite? No. Is it well supported? Not everywhere. Is it going to win, eventually? Probably, but maybe IPv8 will happen, in which case maybe they learn from this and it takes 10 years to reach 50% of traffic instead of 30.

notepad0x90 | an hour ago

it depends on who you're talking about, but no disagreement on cost for ISPs. For end users (including CSPs) it's another story.

Even on its own it's hard to support, but most people have to maintain a dual stack on top of that; v4 isn't going away entirely any time soon.

sedatk | 20 hours ago

> the only thing keeping it alive is people being overly zealous over it

Hard disagree. It turned out to be great for mobile connectivity and IoT (Matter + Thread).

> the cost to administer it is like '50'.

I'm not sure that's true. It feels like less work to me because you don't need to worry about NAT or DHCP as much as you do with IPv4.

nicman23 | 13 hours ago

what. have you seen ipv4 block pricing?

notepad0x90 | an hour ago

more workarounds keep appearing, and public IP usage hasn't been increasing the way it did in past decades either. Most new device growth is on mobile, where CGNAT works OK.

hrmtst93837 | 11 hours ago

If you want IPv6 without dynamic allocation, you end up rewriting half the stack anyway, so it's probably not what most embedded engineers are itching to spend budget on. The weird part is that a lot of edge gear will be stuck in legacy-v4 limbo just because nobody wants to own that porting slog, which means "ubiquitous IPv6" will keep being a conference slide more than a reality.

preisschild | 6 minutes ago

Matter (a smart home connectivity standard in use by many embedded devices) uses IPv6. Doesn't seem to be a problem there.

CyberDildonics | 20 hours ago

Are there TCP/IP stacks out there in common use that are allocating memory all the time?
Packets and sockets have to be stored in memory somehow. If you have a fixed pool that you reuse it's basically a slab allocator.
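A minimal sketch of that fixed-pool idea, assuming nothing about wolfip's actual internals (all names here are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed packet pool: buffers live in a static array and are
 * reused, so the stack never calls malloc/free at runtime. */
#define PKT_COUNT 8
#define PKT_SIZE  1536          /* room for a full Ethernet frame */

struct pkt {
    uint16_t len;
    uint8_t  used;              /* 0 = free, 1 = in use */
    uint8_t  data[PKT_SIZE];
};

static struct pkt pkt_pool[PKT_COUNT];  /* lives in .bss, not the heap */

struct pkt *pkt_alloc(void)
{
    for (size_t i = 0; i < PKT_COUNT; i++) {
        if (!pkt_pool[i].used) {
            pkt_pool[i].used = 1;
            pkt_pool[i].len = 0;
            return &pkt_pool[i];
        }
    }
    return NULL;                /* pool exhausted: drop the packet */
}

void pkt_free(struct pkt *p)
{
    p->used = 0;                /* return the slot; memory is never released */
}
```

The key property is that exhaustion is handled by dropping packets (TCP retransmits anyway), never by growing the pool, so memory usage is fixed at link time.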

CyberDildonics | 16 hours ago

You need some memory but that doesn't mean you would constantly allocate memory. There is a big difference between a few allocations and allocating in a hot loop.

fulafel | 13 hours ago

Yes, TCP is pretty hungry for buffers. The bandwidth*delay product can eat gigs of memory on a server: you have to be ready to retransmit anything that's in flight, i.e. anything you haven't received the ack for yet.
The bandwidth delay product for a 10Gbps stream for a 300ms RTT theoretically only requires ~384MB
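The arithmetic as a tiny helper (illustrative only; `bdp_bytes` is a made-up name). Note the ~384 MB figure comes out if you use binary units (10 x 2^30 bits and MiB); with decimal units it's ~375 MB:

```c
#include <assert.h>
#include <stdint.h>

/* Bandwidth-delay product: the number of bytes that can be "in flight"
 * at once, which the sender must buffer for possible retransmission. */
uint64_t bdp_bytes(uint64_t bits_per_sec, uint64_t rtt_ms)
{
    return bits_per_sec / 8 * rtt_ms / 1000;
}
```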

One option is to simply keep buffers small and fixed, and disconnect blocked clients on write() after some timeout.

CyberDildonics | 5 hours ago

Needing memory doesn't have to mean allocating memory over and over. Memory allocation is expensive; if someone is doing that, reusing memory is going to be by far the best optimization.

bobmcnamara | 6 hours ago

Yes, it is pretty common.

However, sometimes the buffers are pooled, so buffer allocator contention only occurs within the network stack or within a particular NIC.

fulafel | 13 hours ago

How does it deal with all the dynamic TCP buffering things where things may get quite large?

Ao7bei3s | 13 hours ago

It has a fixed maximum number of concurrent sockets, and each socket has queues backed by per-socket fixed-size transmit and receive buffers (see `rxmem` and `txmem` in `struct tsocket` [1]).

This is fine because, in TCP, each side advertises remaining buffer space via the window size header field [2] (possibly with its meaning modified by the window scale option during the initial handshake; see [3] and `struct PACKED tcp_opt_ws`), and possibly also how much it can maximally receive in one packet (via the MSS option on the initial handshake [4], possibly modified by intermediary systems via MSS clamping).

wolfip has unusually small buffer sizes and hardcodes them via #define, and everything else (e.g. congestion control) is pretty rudimentary too, but otherwise it's pretty much the same as a "normal" implementation.

[1] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [2] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [3] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [4] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca...
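The window/window-scale interaction described above can be sketched like this (an illustrative helper, not wolfip's code; the shift semantics come from RFC 7323):

```c
#include <assert.h>
#include <stdint.h>

/* Effective TCP receive window: the 16-bit header field shifted left by
 * the window-scale value agreed during the handshake. Without the scale
 * option, a receiver could never advertise more than 65535 bytes. */
uint32_t effective_window(uint16_t hdr_window, uint8_t wscale)
{
    if (wscale > 14)            /* RFC 7323 caps the shift at 14 */
        wscale = 14;
    return (uint32_t)hdr_window << wscale;
}
```

With small hardcoded buffers like wolfip's, the advertised window simply stays small and the peer throttles itself, which is why fixed buffers interoperate cleanly.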

fulafel | 13 hours ago

Ok. So I guess it doesn't try to go very fast.
Buffer size is the product of bandwidth and delay, so if communicating with something close, it can still go fast.

Had an illustration of this once when my then-employer's IT dept set up the desktop IP phones to update from a TFTP server on the other continental land mass. Since TFTP only allows one outstanding packet, everyone outside head office had to wait a long time for their phones to update, while head office didn't see any issue.
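The TFTP effect is just stop-and-wait arithmetic: with one block outstanding per round trip, throughput tops out at blocksize / RTT no matter how fast the link is. A hypothetical helper (`tftp_rate_bytes_per_sec` is a made-up name):

```c
#include <assert.h>
#include <stdint.h>

/* Upper bound on stop-and-wait throughput: one block per round trip.
 * Classic TFTP uses 512-byte blocks, so a long RTT is devastating. */
uint64_t tftp_rate_bytes_per_sec(uint64_t block_bytes, uint64_t rtt_ms)
{
    return block_bytes * 1000 / rtt_ms;
}
```

At a 200 ms intercontinental RTT that's about 2.5 KB/s, versus hundreds of KB/s on a 1 ms LAN, which matches the head-office-vs-everyone-else experience.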

sbrivio | 10 hours ago

passt has been doing something similar for years (see https://passt.top/kvm_forum_2022.pdf pages 17, 18) and it's actually pretty fast (https://passt.top/#performance_1), even though it's not as resource-constrained as WolfIP.

We're finally adding multithreading (https://bugs.passt.top/show_bug.cgi?id=13) these days.

pseudohadamard | 5 hours ago

If it's a memory-constrained embedded device, you're sending a handful of bytes of telemetry every now and then, not aiming for gigabit speeds. In fact, it couldn't get to megabit speeds even if it wanted to. For something like that, this is perfect.

hermanradtke | 8 hours ago