I realize that we like to use the page title here on HN, but this really should be something like "Data loss cases with MariaDB Galera Cluster 12.1.2".
I really like Galera for low-volume clustering because of its true multi-master nature. I've been using it for over a decade on a clustered mail server for storing account information, and more recently I've pumped the log information in there so each user can see their related log messages. For a user base of around 6,000 users, it's been a real workhorse.
Uh, that scale doesn't even need clustering beyond high availability.
And as Jepsen showed, if you actually do increase volume, it loses consistency... invalidating the use case for multi-master entirely. So, YMMV I guess.
I don't understand why, if you are creating a distributed db, you wouldn't at least try running it through e.g. aphyr's Jepsen library (1).
The story seems to repeat itself for distributed databases:
Documentation reads more like advertisement.
The product promises a lot but ships with multiple errors, and failures that can corrupt the data.
It's great that Jepsen does the work they do!
It also surprises me. Every company that creates a distributed database should pay for Jepsen testing. First, it is a great chance to improve their software, and second, if there are problems, they will eventually come to light anyway.
I was kind of surprised by this one--I know the MariaDB folks and have worked with some of them before. They made significant changes to fix the Repeatable Read issues we found in the last report, so I know the team cares about safety.
There wasn't much reaction on the mailing list to the lost-write problem back in January, or to the Jira tickets. I actually tried calling MariaDB on the phone to see if they'd like to talk about it, but no dice. I assume they're probably busy with other projects at the moment (hi, it's me too) and haven't had a chance to switch gears.
It's the "healthy cluster" aspect that makes this scary. Partition errors are expected—that's what Jepsen is testing. However, stale reads during normal operation mean that most Galera deployments behind a round-robin load balancer silently encounter this problem. The classic scenario: we create a user on node A, the next request goes to node B, and the user doesn't exist yet. The solution is wsrep_sync_wait or pinning reads to the writer node, but most setups don't use either of these methods because they assume a healthy cluster equals consistent reads.
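For anyone hitting the scenario above, the causal-read mitigation is a one-line setting (a sketch: `wsrep_sync_wait` is a bitmask, and bit 1 covers reads, at the cost of extra read latency):

```sql
-- Make this session's reads wait until the local node has applied
-- all writes the cluster had committed when the read was issued.
-- wsrep_sync_wait is a bitmask; value 1 enables the check for
-- READ statements (SELECT).
SET SESSION wsrep_sync_wait = 1;
```

Set it globally (or in my.cnf) if the whole deployment needs read-your-writes behind the load balancer.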
linsomniac | 10 hours ago
hu3 | 9 hours ago
linsomniac | 10 hours ago
ffsm8 | 5 hours ago
nchmy | 34 minutes ago
linsomniac | 34 minutes ago
taneliv | 8 hours ago
> It also exhibits Stale Read, Lost Update, and other forms of G-single in healthy clusters
This looks like quite a fundamental issue.
mono442 | 6 hours ago
fluxcorethread | 5 hours ago
1. https://github.com/jepsen-io/jepsen
jwr | 5 hours ago
[OP] aphyr | 2 hours ago
gebalamariusz | 2 hours ago
Diggsey | an hour ago