olliej | a month ago
This is the reason that all these “self-driving” mode cars with cameras to watch the driver’s eyes, sensors to ensure they’re holding the steering wheel, etc. are fundamentally broken.
They demand a greater degree of constant passive attention, and a faster response time, than we expect of pilots. The whole purpose of this nonsense is to allow manufacturers of faulty cars to offload liability onto the driver. It’s essentially as if a manufacturer said that drivers must constantly tap their brakes so they can detect failure earlier while driving, and then claimed that any accident caused by faulty manufacturing was still the driver’s fault because they weren’t doing that.
jeeger | a month ago
I fundamentally agree with this, and I've been referring to drivers of "self-driving" cars as "sacrificial drivers" (because they're basically there to take the blame), but I just noticed that trains work the same way — you have to confirm your attention constantly so the train doesn't stop. Now, admittedly, there's no vile deception about technological capabilities behind that, but I wonder if there's a better way to make train conductors drive trains without constant nagging.
Sietsebb | a month ago
Well, one aspect is that train drivers have actual things to do beyond merely waiting around in case things go wrong: look ahead, read signals, respond by changing or maintaining speed, repeat at the next signal.
A train's dead man's switch has a different history and purpose (in the POSIWID sense) than a software-driven car's driver-presence confirmation prompt. The dead man's switch's purpose is to fail safe in case of sudden incapacity of an active operator. Self-driving vehicles sideline the driver – place them into a passive observing role – and then use presence prompts as goads to make them perform attention and, as you say, make them responsible for a process they are not operating.
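The contrast could be sketched roughly like this (all class names, timings, and thresholds below are invented for illustration): a dead man's switch resets as a side effect of actually operating the train, while a presence prompt has to manufacture challenges for an otherwise passive supervisor.

```python
class DeadMansSwitch:
    """Fail-safe for an *active* operator: any real control input resets it,
    so it only fires on sudden incapacity."""
    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_input_t = 0.0

    def on_control_input(self, t):
        # Throttle, brake, or signal acknowledgement at time t resets the
        # timer as a side effect; no separate "prove attention" action exists.
        self.last_input_t = t

    def should_stop(self, t):
        return (t - self.last_input_t) > self.timeout_s


class PresencePrompt:
    """Goad for a *passive* supervisor: nothing in normal operation requires
    the driver, so the system invents challenges and punishes slow answers."""
    def __init__(self, interval_s=15.0, grace_s=5.0):
        self.interval_s = interval_s
        self.grace_s = grace_s
        self.last_challenge_t = 0.0
        self.answered = True

    def tick(self, t):
        # Issue a fresh challenge periodically, even though nothing happened.
        if self.answered and t - self.last_challenge_t >= self.interval_s:
            self.last_challenge_t = t
            self.answered = False

    def on_answer(self, t):
        self.answered = True

    def should_disengage(self, t):
        return (not self.answered) and (t - self.last_challenge_t) > self.grace_s
```

The asymmetry is in what resets the timer: operating the vehicle versus answering a prompt about whether you could, in principle, operate the vehicle.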
One way to increase alertness and reduce mistakes: Several countries' and cities' railways make the 'pay attention moments' (e.g. signals, and checks/checklists) a full-body activity.
https://en.wikipedia.org/wiki/Pointing_and_calling
x64k | a month ago
The technical models of trains and cars are very different, though. Trains run on fixed rails, and they're hundreds (or thousands, depending on type) of tons on wheels that need hundreds of meters or multiple kilometres to stop. So there's considerable scheduling effort that goes into making sure two trains are never in a situation where they can collide in the first place, and into making sure that people or vehicles don't get on the tracks when a train is heading that way (which, sure, doesn't work all the time, but it still makes for a very different environment vs. a highway). That's one of the reasons why the permanent-attention approach works for train conductors.
But the "permanent attention" thing is also very different in trains vs. "self-driving" cars, even though on the outside it looks the same. It's different both in terms of how it's integrated with learning how to operate a train and its actual operation, and in terms of what it actually entails, i.e. what you actually have to pay attention to.
First off, operating a train isn't an entry-level job -- it requires way more (and far more intensive) training than getting a driver's license, and operating the various autonomous modes is integrated into that training. It's not something you're expected to figure out by doing your best after reading the car's manual and maybe watching a "ten tips on how to use autopilot" video on YouTube.
Second -- and one way in which it's different -- operating a train isn't a casual thing, the way taking the car for a drive around the block to go shopping is. I'm not saying it's always successful, but in general railway companies at least go through the motions of ensuring that operators are at full physical capacity -- they can't work longer than specific hours, there are various types of mandated breaks, and so on.
And third, which is where the fundamental difference comes from, I think: the train route is always pre-planned, and a lot of circumstances are controlled. For example, you basically get to know in advance how fast the train is moving every step of the way, you know which way it's going, there's a whole chain of redundancy around every decision everyone on the rails makes which means you can assume a bunch of things about the behaviour of other trains that you can't assume on the road, you know exactly what the expected outcome of each decision you make is and so on.
So this gets to be a very different experience. E.g. if you're in the driver's seat of a self-driving car, "am I going the right speed?" requires permanent and complete situational awareness of the other vehicles, traffic conditions, what the vehicle is actually doing, and so on. With a train, it basically requires you to consult the route agenda and any out-of-the-ordinary signage. You're not constantly looking around at a permanently evolving environment with hundreds of variables, including cars and pedestrians, analysing what to do in the event of any of the hundreds of things that The Algorithm might do -- you're looking at a well-defined set of things to confirm that The Algorithm is working as expected, and you're specifically trained for what to do if it isn't, based on a protocol that's specifically designed to depend as little as possible on split-second reflexes.
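To make that contrast concrete, here's a toy sketch of why "am I going the right speed?" on rails reduces to a lookup in a pre-planned profile rather than situational analysis (all markers, limits, and tolerances below are invented):

```python
# Hypothetical route agenda: (km marker where a limit takes effect, km/h limit).
ROUTE_PROFILE = [
    (0.0, 60),    # leaving the station
    (4.5, 140),   # open track
    (62.0, 80),   # curve section
    (75.5, 40),   # approach to the next station
]

def target_speed(km_marker):
    """The limit in force is the one set at the last marker already passed,
    so the question is a table lookup, not an analysis of the environment."""
    limit = ROUTE_PROFILE[0][1]
    for start_km, kmh in ROUTE_PROFILE:
        if km_marker >= start_km:
            limit = kmh
        else:
            break
    return limit

def speed_ok(current_kmh, km_marker, tolerance_kmh=5):
    return current_kmh <= target_speed(km_marker) + tolerance_kmh
```

On the road, no such table can exist ahead of time; the "right speed" depends on whatever the surrounding traffic is doing right now.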
Sauce: I have a relative who used to be a train conductor, and then an instructor. He's now retired, so a bunch of this might be out of date, and it's obviously subject to my own lack of understanding (I don't even have a driver's license, I've got a bad eye, so operating a train is definitely out of the question). But the way it's been relayed to me is basically that the whole process is engineered -- over decades -- to ensure that people who are specifically trained for it actively pay attention to as few things as possible, in a well-defined order, with a whole bunch of protocols in place to mitigate one-off operator failures, in an infrastructure that was devised to minimise the consequences of failure and to accommodate realistic limits to human attention, perception and reaction time. It's obviously not perfect, which is why train accidents are still a thing, unfortunately, but it's certainly working a lot better than public roads, at least in terms of safety. This is extremely different not just from road infrastructure, it's also very different from how operators of "self-driving" cars are supposed to supervise them.
pgronkievitz | a month ago
There's full-autonomy mode for ETCS https://transport.ec.europa.eu/transport-modes/rail/ertms/what-ertms-and-how-does-it-work/etcs-levels-and-modes_en
elephantium | a month ago
I really wish for an inverted model: Instead of what the current companies are doing, add sensors to the car and have the driving algorithm record the decisions it would make. Compare the algorithm's list to what the human actually did and work from there. Transcripts from the collective driving of a million cars on the road would make a killer training dataset.
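A minimal sketch of that shadow-mode idea (field names and tolerances are invented): the algorithm's would-be decision rides along with the human's actual one, and the frames where they diverge are the interesting rows of the dataset.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    steering_deg: float   # positive = right, negative = left
    accel_mps2: float     # negative = braking

def diverges(algo, human, steer_tol_deg=2.0, accel_tol_mps2=0.5):
    """True when the shadow algorithm and the human meaningfully disagree."""
    return (abs(algo.steering_deg - human.steering_deg) > steer_tol_deg
            or abs(algo.accel_mps2 - human.accel_mps2) > accel_tol_mps2)

def shadow_log(frames):
    """frames: iterable of (sensor_snapshot, algo_decision, human_decision).
    Keep every frame, but flag disagreements for review and training."""
    return [{"sensors": snap, "algo": algo, "human": human,
             "divergent": diverges(algo, human)}
            for snap, algo, human in frames]
```

The human stays the operator the whole time; the algorithm is the one being graded.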
rplacy | a month ago
oh man, I'd love to sell my driving style to bmw drivers :D the world will be much safer.. with less of them..
Sietsebb | a month ago
Bainbridge's paper "Ironies of Automation" is so so so good on this subject — not just on what humans cannot do, but also on what skilled and engaging process operation does look like. I'll post quotes later today or tomorrow; link for now.
https://ckrybus.com/static/papers/Bainbridge_1983_Automatica.pdf
rondahlgren | a month ago
Great paper, thank you for linking. I've saved this to my Zotero library. Bainbridge locks onto a number of fundamental concerns that apply to many contemporary domains. She was looking through the lens of a plant or an aircraft, but the same set of concerns applies to driving (as stated above) and also to software development. Her use of the phrase "on-line" to mean the human is actively in the loop resonated with my experiences with LLM-generated software. Without being actively in that process, there is a time penalty when I need to get in and adjust something. I can't reason about it. I can't quickly diagnose and resolve an error condition.
Sietsebb | a month ago
Okay, here come some quotes. I'm going to try to post them in separate comments.
Sietsebb | a month ago
(comment speed limit reached, posting the quotes as a batch instead)

"One result of skill is that the operator knows he can take-over adequately if required. Otherwise the job is one of the worst types, it is very boring but very responsible, yet there is no opportunity to acquire or maintain the qualities required to handle the responsibility."

"Unfortunately, physical skills deteriorate when they are not used, [...] This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one."

"We know from many 'vigilance' studies (Mackworth, 1950) that it is impossible for even a highly motivated human being to maintain effective visual attention towards a source of information on which very little happens, for more than about half an hour. This means that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities, which therefore has to be done by an automatic alarm system connected to sound signals."

"Ekkers and colleagues (1979) [...] To greatly simplify: high coherence of process information, high process complexity and high process controllability (whether manual or by adequate automatics) were all associated with low levels of stress and workload and good health, and the inverse, while fast process dynamics and a high frequency of actions which cannot be made directly on the interface were associated with high stress and workload and poor health."

"One might state these problems as a paradox, that by automating the process the human operator is given a task which is only possible for someone who is in on-line control."

"'Graceful degradation' of performance is quoted in 'Fitts Lists' of man-computer qualities as an advantage of man over machine. This is not an aspect of human performance to be aimed for in computers, as it can raise problems with monitoring for failure (e.g. Wiener and Curry, 1980); automatic systems should fail obviously."
conartist6 | a month ago
I'm using a current version of Helium at the moment and I'm hard-blocked from reading the page because my browser is "suspiciously old" :/
mkremins | a month ago
I’m using Safari on a version of iOS released in late 2024 and getting the same thing. Whatever definition of “suspiciously old” this filter is using is kinda dubious.
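For what it's worth, this is roughly the failure mode you'd expect from a naive version check on these headers (the site's actual rule is unknown; the cutoff and logic below are invented): Safari, Firefox, and many niche browsers never send Sec-CH-UA at all, and the GREASE entries carry junk versions by design.

```python
import re

MIN_CHROMIUM_MAJOR = 120  # invented cutoff for "suspiciously old"

def parse_sec_ch_ua(header):
    """Parse a Sec-CH-UA value like '"Chromium";v="130", "Not?A_Brand";v="99"'
    into {brand: major_version}."""
    return {brand: int(ver)
            for brand, ver in re.findall(r'"([^"]+)";v="(\d+)"', header or "")}

def suspiciously_old(sec_ch_ua):
    brands = parse_sec_ch_ua(sec_ch_ua)
    if not brands:
        # Safari and Firefox never send Sec-CH-UA; treating absence as
        # suspicious blocks every one of their users.
        return True
    major = brands.get("Chromium")
    # Reading the wrong entry (e.g. the GREASE "Not?A_Brand" brand, whose
    # version is deliberately meaningless) would give nonsense either way.
    return major is not None and major < MIN_CHROMIUM_MAJOR
```

A filter like this silently conflates "old Chromium" with "not Chromium at all", which would explain recent Safari and QuteBrowser users getting caught.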
gnodar01 | a month ago
Would be pretty great if the author purposely misspelled "expect" as "expert" to drive the point.
cks | a month ago
Sadly it was definitely not deliberate, although maybe that also illustrates the point. I've fixed the mistake now.
(I'm the author of the linked-to article.)
ruuda | a month ago
I was very confused at first: this seemed to be a post about LLM crawling, not matching the title of the submission, but I think the site flags me as a crawler:
Your browser has suspicious Sec-CH-UA-* headers
I'm using Google Chrome Beta on a Google Pixel ...
omidmash | a month ago
Same. Can't open the article, and even though I want to report this to the author, I don't like the "you can figure out the domain name for my email address by yourself" approach. For the author's information: I use QuteBrowser.
ruuda | a month ago
Trying on my desktop now, Chromium on Arch Linux is also banned.
brudish | a month ago
This is my existence so this doesn't surprise me. As someone with ADHD, I feel like I have a minor superpower when designing systems because I can ask myself, "Would I seriously pay attention to this?" and determine if something has too many steps or requires too much context. Systems can ask too much of their users because their designers thought, "Just do one more thing!" You tried that last time and it didn't work! Stop!
kmm | a month ago
That's a nice way to think of it, thanks! I'm in the same boat.
I sort of wonder if that intuition helps with stuff like concurrency. I know there are ways to reason it out logically, but it helps to feel really annoyed about dealing with cross-thread communication and keeping track of shared state. Knowing what I'll hate to get stuck reasoning about makes it hard to accept a confusing design I'd hate to trace through.
kmm | a month ago
What's the solution, exactly, for software engineers/SREs/etc? Make the processes less boring? Make AI do the boring stuff? Both sound like a tall order.
In practice, I hate it when my work tools try to get cuter and more fun. There's always more crap to wade through in whatever Okta tile we've added. AI at least is okay for semantic search and light anomaly detection, but then my human mind is accountable for it, and then I feel like crap for not inventing myself a locus of control that lets me obsess over random things.
Legit, what's the right path here? Scripting solid automatons (probably the best option)? Parallel humans on tasks? Have models prove themselves reliable in a model gym to meet human standards reliably?
As a neurodivergent person, it's a real struggle when I have to beat myself up to "think" about the right things, and I'm oddly comforted that this isn't just something people with my quirks experience.
schneems | a month ago
One technique is this: https://en.wikipedia.org/wiki/Pointing_and_calling. When I have to run a manual playbook that I’m unfamiliar with but that is otherwise boring, I open a Slack thread and post each step followed by the result.
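That playbook habit could be sketched like this (the `post` callable is a stand-in for whatever appends to the Slack thread; all names are invented): announce each step before doing it, then record the observed result next to it.

```python
def run_playbook(steps, do_step, post):
    """Pointing-and-calling for a manual playbook: 'point' by announcing the
    step before executing it, 'call' by posting the observed result."""
    transcript = []
    for i, step in enumerate(steps, start=1):
        post(f"step {i}: {step}")      # point: say what you are about to do
        result = do_step(step)         # actually do it
        post(f"result {i}: {result}")  # call: record what actually happened
        transcript.append((step, result))
    return transcript
```

Forcing the announcement before the action is what keeps each step a deliberate act instead of autopilot.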
Another option is to automate away the boredom. Even if it’s not “worth it” for others, it might be an investment you are willing to make, and your coworkers will likely thank you (as long as you don’t take this opportunity to disappear from other prioritized work entirely). I frame these automation efforts as “toil reduction”; it helps management understand that beyond “5 seconds saved” there is an energy and morale component as well. (I have ADHD FWIW)