This is extremely tangential, but I was recently setting up some manual network namespaces, basically reproducing by hand what Docker does, in order to fix some of its faulty assumptions around containers having multiple IPs and a single name, which cause all sorts of jank. Along the way I had to freshen up on a lot of Linux virtual networking concepts (namespaces, veths, bridge networks, macvlans and various other interfaces), and made a ton of fairly informal notes to get myself sufficiently familiar with it all to set it up.
Would anyone be interested if I polished it up and maybe added a refresher on the relevant layer 2 networking needed to reason about it? It's a fair bit of work and it's a niche topic, so I'm trying to poll a bit to see if the juice is worth the squeeze.
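To give a flavour of the kind of thing the notes cover, here's a rough macvlan sketch driving iproute2 from Python. It's purely illustrative rather than my actual setup: the parent interface eth0, the addresses and the namespace name are all made up, and it needs root.

```python
import subprocess

def ip(*args):
    # Thin wrapper around iproute2; raises if a command fails.
    subprocess.run(["ip", *args], check=True)

# Give a namespace its own presence on the physical LAN via a macvlan
# sub-interface of the host NIC, instead of Docker's default bridge + NAT.
ip("netns", "add", "web")
ip("link", "add", "mv0", "link", "eth0", "type", "macvlan", "mode", "bridge")
ip("link", "set", "mv0", "netns", "web")
ip("-n", "web", "addr", "add", "192.168.1.50/24", "dev", "mv0")
ip("-n", "web", "link", "set", "mv0", "up")
ip("-n", "web", "route", "add", "default", "via", "192.168.1.1")
```

(One classic macvlan gotcha: in bridge mode the host itself can't reach the namespace over the parent interface without a macvlan of its own.)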
I was actually going down rabbit holes today trying to figure out how to do a sane Docker setup where the containers can't connect to each other. Your notes would be valuable at almost any level of polish.
I put each Docker container in an LXC container, which effectively uses namespaces, cgroups, etc. to isolate them.
If you create each container in its own network namespace, they won't be able to.
It's a little more complex than that for any non-trivial layout where some containers do need to talk to other containers, but most don't.
You could also create a network for each pair of containers that need to communicate with one another.
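Something like this, as a rough sketch (container and network names are hypothetical; it assumes the containers already exist and don't mind losing the default bridge):

```python
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# Hypothetical containers that already exist; only these pairs may talk.
allowed_pairs = [("web", "api"), ("api", "db")]

for a, b in allowed_pairs:
    net = f"pair-{a}-{b}"
    docker("network", "create", "--internal", net)  # --internal also blocks outbound NAT
    docker("network", "connect", net, a)
    docker("network", "connect", net, b)

# Drop everyone off the shared default bridge so the pair networks
# become the only paths between containers.
names = {n for pair in allowed_pairs for n in pair}
for name in names:
    docker("network", "disconnect", "bridge", name)
```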
If you want point-to-point communication between two network namespaces, you should use veths[1]. I think a virtual patch cable is a good mental model for a veth pair.
If you want multiple participants, you use bridges, which are roughly analogous to switches.
[1] https://man7.org/linux/man-pages/man4/veth.4.html
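To make the patch-cable picture concrete, here is a rough sketch driving iproute2 from Python (the names and addresses are made up; it needs root):

```python
import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

# Two namespaces joined by a veth pair: whatever goes in one end comes
# out the other, exactly like a patch cable between two ports.
ip("netns", "add", "left")
ip("netns", "add", "right")
ip("link", "add", "veth-l", "type", "veth", "peer", "name", "veth-r")
ip("link", "set", "veth-l", "netns", "left")
ip("link", "set", "veth-r", "netns", "right")
ip("-n", "left", "addr", "add", "10.10.0.1/24", "dev", "veth-l")
ip("-n", "right", "addr", "add", "10.10.0.2/24", "dev", "veth-r")
ip("-n", "left", "link", "set", "veth-l", "up")
ip("-n", "right", "link", "set", "veth-r", "up")

# Sanity check: ping across the "cable".
ip("netns", "exec", "left", "ping", "-c", "1", "10.10.0.2")
```

With more than two namespaces you'd instead create a bridge (`ip link add br0 type bridge`) and attach the host-side veth ends to it with `ip link set <veth> master br0`.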
That would create an excessive amount of bridges in my case. Also this is another trivial suggestion that anyone can find with a quick search or asking an LLM. Not helpful.
I'm not sure why people are replying to my comment with solutioning and trivial suggestions. All I did was encourage the thread OP to publish their notes. FWIW I've already been through a lot of options for solving my issue, and I've settled on one for now.
> I'm not sure why people are replying to my comment with solutioning and trivial suggestions
Because your comment didn’t say you solved it and you asked for notes without any polish as if that would help.
I didn't say I settled on a solution for all time. I said "for now". I'm still interested in alternatives.
Looking forward to that.
It's about time someone wrote a new Linux networking book covering layers 2 and 3. Between switchdev, nftables and flowtables, there is a lot of new information.
The existing books, namely Linux Routing and Linux Routers (2nd edition), are already more than two decades old.
Please do it. I'm very biased, but I think there would be lots of interest in seeing all of that explained in one place in a coherent fashion (you will likely sharpen your own understanding in the process and end up with the perfect resource for when you next need to revisit these topics).
Yes of course. It would be great.
Don't forget to post the link here!
I would absolutely be interested.
I would def. be interested!
Yes please!
YES
I await your write-up!
I had to read their article on "soft-unicast" before I could really grok this one: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...
I'm slightly surprised Cloudflare isn't using a userspace TCP/IP stack already (faster - less context switches and copies). It's the type of company I'd expect to actually need one.
From 2016: https://blog.cloudflare.com/why-we-use-the-linux-kernels-tcp...
Nice, they know better. But it also makes me wonder, because they're saying "but what if you need to run another app". For things like load balancers, I'd expect you'd only run one app per server on the data plane, with the userspace stack handling it, while the OS and services use a separate control-plane NIC with the kernel stack, so the boxes stay reachable even under link saturation, DDoS, etc.
It also makes me wonder, why is tcp/ip special? The kernel should expose a raw network device. I get physical or layer 2 configuration happening in the kernel, but if it is supposed to do IP, then why stop there, why not TLS as well? Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process? It sounds like a "that's just the way it's always been done" type of scenario.
AFAIK Cloudflare runs their whole stack on every machine. I guess that gives them flexibility and maybe better load balancing. They also seem to use only one NIC.
> why is tcp/ip special? The kernel should expose a raw network device. ... Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process?
Check out the MIT Exokernel project and Solarflare OpenOnload that used this approach. It never really caught on because the old school way is good enough for almost everyone.
> why stop there, why not TLS as well?
kTLS is a thing now (mostly used by Netflix). Back in the day we also had kernel-mode Web servers to save every cycle.
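For what it's worth, opting into kTLS from userspace is a one-liner these days. A minimal client sketch (assumes Python 3.12+ built against OpenSSL 3 on a kernel with the tls module; otherwise it silently falls back to ordinary userspace TLS):

```python
import socket
import ssl

# Opt in to kernel TLS: the handshake still runs in userspace (OpenSSL),
# but the symmetric record encryption is handed to the kernel afterwards,
# which is what enables sendfile()-style zero-copy serving over TLS.
ctx = ssl.create_default_context()
ctx.options |= getattr(ssl, "OP_ENABLE_KTLS", 0)  # no-op if unsupported

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(256))
```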
Was it Tux? I've only used it, a looong time ago, on load balancers.
https://en.wikipedia.org/wiki/TUX_web_server
You can do that if you want, but I think part of why TCP/IP is a useful layer of abstraction is that it allows more robust boundaries between applications that may be running on the same machine. If you're just at layer 2, you are basically acting on behalf of the whole box.
TCP/IP is, in theory (AFAIK all experiments related to this fizzled out a decade or two ago), a global resource once you factor in congestion control. TLS is less obviously something you would want kernel involvement in, give or take the idea of outsourcing crypto to the kernel, or some small efficiency gains for certain workloads from skipping userspace handoffs, with more gains possible with NIC support.
Why can't it be global and in userspace? DNS resolution, for example, is done in userspace, and it is global.
DNS isn't a shared resource that needs to be managed and distributed fairly among programs that don't trust or cooperate with each other.
You do want to offload crypto to dedicated hardware otherwise your transport will get stuck at a paltry 40-50 Gb/s per core. However, you do not need more than block decryption; you can leave all of the crypto protocol management in userspace with no material performance impact.
> faster - less context switches and copies
Aren't both of those largely avoidable these days with the async-style and zero-copy interfaces now available (like io_uring, where it's still handled by the kernel), along with the near non-existence of single-core processors in modern times?
> > faster - less context switches and copies
This is very much a newbie way of thinking. How do you know? Did you profile it?
It turns out there is surprisingly little dumb zero-copy potential at CF. Most of the stuff is TLS, so stuff needs to go through userspace anyway (kTLS exists, but I failed to actually use it, and what about QUIC).
Most of the CPU is burned on dumb things, like application logic. Turns out data copying, encryption and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization, but the majority of the cost was historically in much more obvious areas.
I would expect they would do the same as other big scalers, and handle most of the networking in dedicated card firmware:
https://learn.microsoft.com/en-us/azure/azure-boost/overview...
https://learn.microsoft.com/en-us/azure/virtual-network/acce...
Given that they're a networking company, I always wondered why they picked Linux over FreeBSD.
This happened before my watch, but I was always rooting for Linux. Linux is winning on many aspects. Consider the feature set of iptables (CF uses loads of stuff, from "comment" to "tproxy"), bpf for metrics is a killer (ebpf_exporter), bpf for DDoS (XDP), TCP Fast Open, the UDP segmentation stuff, kTLS (arguably half-working). Then there are non-networking things like Docker, the virtio ecosystem (vhost), seccomp, namespaces (a net namespace for testing network apps is awesome). And the list goes on. Not to mention hiring is easier for Linux admins.
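As one small example from that list, server-side TCP Fast Open is just a couple of setsockopt lines. A sketch (the port and queue length here are arbitrary, and the host's net.ipv4.tcp_fastopen sysctl may also need the server bit set):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Allow up to 16 pending TFO requests: data can arrive in the SYN itself,
# saving a round trip for repeat clients that hold a valid TFO cookie.
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
srv.bind(("0.0.0.0", 8080))
srv.listen(128)

conn, addr = srv.accept()
print(addr, conn.recv(1024))
```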
Why does being a networking company suggest FreeBSD is the "right" pick?
Because FreeBSD is known for having the best network stack. The code is elegant and clean. And, at least until a few years ago, it was the preferred choice to build routers or firewalls.
AFAIK, they were the first to implement BPF in production-ready code almost 3 decades ago.
https://en.wikipedia.org/wiki/Berkeley_Packet_Filter
But all this is opinion and anecdotal. Just pick a random network feature and compare the Linux and FreeBSD code for yourself.
> But all this is opinion and anecdotal.
Exactly.
Serving Netflix Video at 400Gb/s on FreeBSD [pdf] (2021)
https://news.ycombinator.com/item?id=28584738
(I don't share this as "the answer" as much as one example from years past.)
To be honest, when I heard them speak, I think they were basically saying: yes, FreeBSD is awesome, but the main reason is that the early people there liked FreeBSD, so they just stuck with it. It's a good choice, but they don't claim these are things that would be impossible to do with optimizations in Linux.
I think they used FreeBSD because they were already using FreeBSD. The article doesn't mention Linux.
BSD driver support lags behind pretty badly.
SLATFATF - "So long and thanks for all the fish" is a Douglas Adams quote
https://en.wikipedia.org/wiki/So_Long,_and_Thanks_for_All_th...
A few things in the article are Douglas Adams quotes, and more specifically from the Hitchhiker’s Guide series.
The universe's creation being regarded as a mistake that made many people unhappy is from those books. So is the idea that whenever someone figures out the universe, it gets replaced with something stranger, along with the suggestion that there's evidence this has happened repeatedly. The Restaurant at the End of the Universe is also referenced in the article.
I’m a bit surprised nothing in the article was mentioned as being “mostly harmless”.
One of these days I’ll figure out how to throw myself at the ground and miss.
Tangentially related, seL4's LionsOS can now act as a router/firewall[0].
0. https://news.ycombinator.com/item?id=45959952