doug-moen | a day ago
After reading this, I thought that the author's arguments actually support the idea that Message Passing is not Shared Mutable State. They gave two examples:
Go channels, where they argued that Go's channels are not actually message passing, but are instead shared concurrent queues:

> Go channels: they aren’t really channels at all. A channel has two distinct endpoints, a producer end and a consumer end, with different types and capabilities. If the last consumer disappears, the runtime can detect it, unblock producers, and clean up.

> A Go channel has none of this. It’s a single object, a concurrent queue, shared between however many goroutines happen to hold a reference. Any goroutine can send, and any goroutine can receive. There are no distinct endpoints, no directional typing, no way for the runtime to detect when one side is gone. It is a mutable data structure shared between multiple threads, where any thread can mutate the shared state by pushing or popping.

Erlang, which provides two separate concurrency mechanisms: shared mutable state (via ETS tables), and message passing. The author cites race conditions caused by using ETS, but does not cite any bugs caused by Erlang message passing.
I've been involved in writing concurrent code in C/C++ using two paradigms: shared mutable state (plus locks), and synchronous message passing (of the same kind that is supported by QNX). Shared mutable state was horrible: we were never able to fix all the concurrency bugs (in the legacy code we inherited and were extending). Synchronous message passing was wonderful: orders of magnitude easier to reason about and get right. In one project, we implemented the same module both ways, as a comparison, and the difference was stark.
ph14nix | 14 hours ago
The concurrent programming course at Chalmers teaches Erlang for exactly this reason. You start with Java, see semaphores, condition variables and all that. Then Erlang, where whole classes of concurrency bugs are gone. But students also suffer with FP and Erlang syntax.
I think it was a great decision to structure it this way.
quad | 16 hours ago
Perhaps their point, with Go, is that Message Passing alone isn't enough? Messages need to be passed over Real Channels.
The Erlang point was lost on me too, though. They admit that ETS is not Message Passing. The issue was one of performance. Perhaps their point is that Message Passing isn't a viable option because Shared Mutable State is faster?
spc476 | 7 hours ago
Definitely. The AmigaOS was built around message passing, but to maintain speed (on a 7MHz CPU), what was passed was not the actual message, but a pointer to the message. So yeah, shared mutable state.
sjamaan | 14 hours ago
Yeah, author probably meant to say something like "undisciplined message passing is shared mutable state".
But it's very hard (maybe impossible) to get right if the language doesn't enforce the discipline (like, possibly, Erlang).
olliej | a day ago
I don’t understand this?
Message passing has nothing to do with preventing deadlocks, yet this presents it as “striking” that it doesn’t prevent them?
Similarly the core problem of shared mutable state isn’t blocking (I guess unless they mean the locking around it?)
rcoder | a day ago
Message passing between independent processes (à la Erlang/BEAM) is supposed to free you from worrying about shared state, and shared state is the usual reason why developers reach for a mutex or other locking primitive to avoid data corruption and other race conditions.
So no, shared state doesn’t technically require locking, and strict message passing can conceivably remove the need for explicit locks on messages and queues, but in most cases when writing multithreaded code you need some sort of locking.
Personally, I find the argument compelling if framed as, “Go is not safe from deadlocks and resource leaks when using channels.” Go might give you channels as a means of avoiding shared references, but you’re still free to have goroutines close over variables that are also manipulated elsewhere, as well as the potential locking behaviors described.
OTOH the parallel to Erlang and ETS is a stretch, and the whole thing kind of ignores the much stricter guarantees you get when there truly is no shared state.
olliej | a day ago
Right, but the problem I have with the article is that it's written with the apparent belief that message passing resolves locking issues, which is simply not the case, and I'm not aware of anyone ever claiming it was? (outside of this blogpost :D)
apropos | 19 hours ago
This article was pretty evidently written with generative AI, which might explain your confusion.
It is really rather unfortunate to read technical prose on the Internet now! A year ago, when coming across a stilted phrasing or seemingly superficially incorrect statement or a background assumption left unspoken/unjustified, it was always worth thinking about harder -- the author either had a specific intention in putting it that way, or it arose out of a mentality yet to be understood.
Now the usual assumption is that it's all just slop and there's no intention or mentality behind word choice or any possibility of beliefs at all. Which is just such a bummer... man, I wish this site had a slop filter.
cpurdy | 59 minutes ago
I wish there were some way to actually know this for certain -- the article didn't read like AI slop to me, and it made some great points. I can tell (from the comments here) that the article didn't do a great job making its points accessible, which is somewhat understandable considering the complexity of the topic.
masklinn | 19 hours ago
“deadlock” is a bit of a misnomer, locks are a common way to get one but more fundamentally it’s a cyclic dependency (possibly indirect) between two concurrent entities. You can easily get deadlocks in message-passing shared-nothing architectures: process A waits for a message from process B while process B waits for a message from A, and there you go.
krig | 9 hours ago
This article HAS to be AI slop.
> Around 58% of blocking bugs (i.e. goroutines stuck, unable to make progress) were caused by message passing, not shared memory. The thing that was supposed to be the cure was producing the same problems as the disease.

> To be clear, message passing does eliminate one important class of concurrency bugs: unsynchronized memory access.
Like... this pair of sentence-length paragraphs literally describes exactly why message passing is NOT shared mutable state. This is just the CAP theorem: with message passing you are giving up availability in order to achieve consistency.
alexmu | 15 hours ago
Message passing is the same as shared mutable state in the same way everything boils down to a von Neumann machine, if you squint hard enough.
darth-cheney | 13 hours ago
It is not surprising that we have so much difficulty with concurrency on machines whose architecture is based around low level "processes" (in the traditional unix sense). Truly addressing these problems requires rethinking whole computing systems at a lower level than the language.
The uFork project is perhaps the best distillation of an alternative that I have come across in this day and age.
rbr | a day ago
> I think there is. Some languages have tried different foundations and attacked aspects of the problem with real insight, but none of them have fully broken through to the mainstream. It’s worth exploring why, so that’s where we’re headed.
If only the author/LLM expanded upon this paragraph, or just dropped a few links right after. Or did I skim over it? Please say yes...!
raphting | 17 hours ago
I feel the LLM too...
andyferris | a day ago
It seems this article is a teaser for a series of articles: https://causality.blog/series/
polywolf | a day ago
> A Go channel has none of this. It’s a single object, a concurrent queue, shared between however many goroutines happen to hold a reference. Any goroutine can send, and any goroutine can receive. There are no distinct endpoints, no directional typing, no way for the runtime to detect when one side is gone.
Yes and no. Channels do have directional subtypes, but they are not used for cleanup. Overall title is still correct tho.
technomancy | a day ago
> Any software developer who has tackled concurrency in a serious project has the battle scars of dealing with the pitfalls of multi-threaded and concurrent programs.
Citation needed? This has not been my experience even a little bit.
ahelwer | 23 hours ago
That is pretty impressive then. I think the combinatoric state space explosion makes concurrency inherently difficult to reason about intuitively.
quad | 16 hours ago
How do you design your concurrent programs?
dfawcus | 5 hours ago
I generally use the send-only and receive-only channel sub-types whenever I have channels, simply because they can catch some errors at compile time, and they help with reasoning about what the specific instance / use of the channel will be.
Yes one can have deadlocks in channel cycles between goroutines, and I've even created a couple in a complex program I coded quite quickly.
The one advantage I found though is that with sufficient logging of high level events associated with the generation of the messages, it is usually quite easy to reason around why the deadlock happened. Certainly I find it a lot easier than the equivalent when trying to reason around explicit locks, and potential multiple threads accessing a piece of data.
As to the complained-of goroutine leak issue, I recall reading that the latest version has added a form of detection involving the GC catching these cases.
https://go.dev/doc/go1.26#goroutineleak-profiles
https://dl.acm.org/doi/pdf/10.1145/3676641.3715990
From what I recall, the non-deterministic (pseudo-random) behaviour when multiple goroutines access the same channel (multi-read or multi-write) is intentional. Possibly one could argue that Go (or a Go-like language) should support multiple forms of channel, such that 1:1, 1:M, N:1, and N:M are explicitly different forms?
I also have a vague recollection of some form of CSP-based static analysis of messages for detecting such issues. It was apparently used at AT&T with some of the predecessor languages and libraries. It would, however, require one to express the program in CSP form.