The Rust community’s desire for precise semantics, in the long run, leads to more robust software systems.
jschuster | a month ago
This article does a great job of describing what the academic PL community has in common. I sometimes hear people assume that academic PL just means Haskell/functional programming. But to me, it means semantic precision.
I think that's part of why I can really see the academic influence in Rust, whereas that's relatively rare in most other mainstream programming languages.
typesanitizer | a month ago
I'm about 80-90% convinced that this kind of precision is useful when reading code, after having seen just how much clearer code can get when introducing more specific types instead of reusing general ones everywhere (related concepts: boolean blindness + variants of it extended to Option/Maybe/integers/strings).
However, I'm having trouble coming up with actual evidence/concrete examples in favor of the "robust software systems" point (taking "robust" to mean "not failing at execution time", which is different from "robust" as "able to deal gracefully with maintainers changing over time"). Most notably, SQLite is probably one of the most robust databases in existence, and it is written in C and takes inspiration from Tcl for its flexible typing (yes, I know this can be turned off nowadays).
My current operating hypothesis is that while fine-grained type design provides local reasoning benefits, robustness, being a whole-system property, is most likely more closely linked to test coverage than to type design.
As another example, internet outages among large-scale cloud providers tend to show recurring patterns, such as configuration changes that were (sometimes only in hindsight) deemed to lack sufficient test coverage, as well as saturation. It seems more plausible that future developments in testing can help reduce the risk of these kinds of issues, more so than type system advancements.
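The boolean-blindness point above can be sketched in Rust. This is a hypothetical example — `process` and its three flags are invented for illustration — showing how replacing bare booleans with dedicated enums makes call sites self-describing and turns swapped arguments into compile errors:

```rust
// Invented example: with bare booleans, a call like
//     process(true, false, true)
// tells the reader nothing. Dedicated enums fix that.

#[derive(Debug)]
enum Validation { Strict, Lenient }

#[derive(Debug)]
enum Overwrite { Allow, Deny }

#[derive(Debug)]
enum Logging { Verbose, Quiet }

fn process(v: Validation, o: Overwrite, l: Logging) -> String {
    // Stand-in body: just report which options were chosen.
    format!("{:?}/{:?}/{:?}", v, o, l)
}

fn main() {
    // The call site now reads like documentation:
    let summary = process(Validation::Strict, Overwrite::Deny, Logging::Verbose);
    assert_eq!(summary, "Strict/Deny/Verbose");
    // Swapping two arguments would be a type error rather than a
    // silent logic bug, since each flag has its own type.
}
```

Languages with named arguments get a similar readability benefit at the call site, though without the compiler rejecting transposed flags.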
refi64 | a month ago
I think you could alternately view this as just a difficulty level. SQLite is proof that it's possible to make robust software without any of this, but that doesn't mean it's not harder. Personally, the reason I try to use specific types when possible is that it means I can more easily get into a flow state without having to constantly slam the brakes and figure out what process(true, false, true) is actually doing.

mordae | a month ago
OT: This reminded me of the time I was criticized by a colleague for replacing wording about contracted developers having to deliver work at "highest possible quality" with "agreed-upon quality", deviating from the corporate template document.
They thought I was making it easy for the contractors, playing favorites, and so on. Then I explained that we are not making car-brake software and I do not want them to bill us for a formal proof of every statement in a Django website.
koala | a month ago
I think a lot of the software development conflicts stem from disagreement about the level of quality a product requires.
But also because we don't know what quality is :D
I'm a bit jealous of error budgets in SRE. I think it's realistic to set up service level objectives for systems administration work that help guide your work: if you're not meeting the objectives, you should slow down and focus on making things better; if you're exceeding your objectives, you can be a bit looser.
But I don't think you can establish similar metrics for software development. To me it's clear that when you accumulate enough technical debt, feature delivery slows significantly and things suck, but how do you measure that in a reasonably objective way?
yawaramin | a month ago
process(true, false, true) is much clearer in a language with named arguments.

sunshowers | a month ago
SQLite is one of our greatest collective feats as a profession, but the level of testing that goes into it (500 lines of tests for each line of source) is far beyond what most organizations can budget for, unfortunately. My general belief is that semantic precision via types can help you approach that level of quality on a more reasonable budget.
yawaramin | a month ago
I have to express some doubt here: the SQLite TH3 suite is, as far as I know, a mutation test suite. It tests what happens if the conditions in the code paths are flipped. Static typing can be powerful, but it can't realistically cover all runtime code paths.
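The condition-flipping idea can be sketched in a few lines. This is an invented illustration, not SQLite's actual harness: both functions below type-check identically, so only a test that exercises the boundary can tell the original from the mutant.

```rust
// Original: the intended predicate.
fn is_adult(age: u32) -> bool {
    age >= 18
}

// "Mutant" with the condition flipped. The type system is equally
// satisfied by this version; only a runtime test can kill it.
fn is_adult_mutant(age: u32) -> bool {
    age < 18
}

fn main() {
    // A boundary-value test distinguishes the two:
    assert!(is_adult(18));
    assert!(!is_adult(17));
    assert!(!is_adult_mutant(18));
}
```

A mutation suite automates this: it flips each condition in turn and checks that at least one test fails, which is a property no static type checker can provide on its own.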
sunshowers | a month ago
Agreed, and that's why I said "approach".
yawaramin | a month ago
We must be using different meanings of 'approach' then :-)
masklinn | a month ago
"Primitive obsession"
typesanitizer | a month ago
Ah, that was the phrase. Thanks!
kornel | a month ago
I think this came from the safe/unsafe split in Rust. Since safe APIs are not allowed to just tell the user they're "holding it wrong" or declare that some edge case is unsupported, tricky/ambiguous APIs simply can't be exposed as "safe". Safe APIs are required to fully contain the unsafety of their implementation, so they need precisely defined unsafe APIs underneath them too.
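A minimal sketch of that split (the functions here are invented for illustration): the unsafe operation states its precondition precisely, and the safe wrapper checks it, so the edge case is handled rather than declared unsupported.

```rust
/// Returns a copy of the element at `idx` without bounds checking.
///
/// # Safety
/// The caller must guarantee `idx < slice.len()`.
unsafe fn get_unchecked_copy(slice: &[i32], idx: usize) -> i32 {
    // SAFETY: precondition is the caller's responsibility, as documented.
    unsafe { *slice.get_unchecked(idx) }
}

/// Safe wrapper: fully contains the unsafety of its implementation.
/// Out-of-bounds access becomes an ordinary `None`, not undefined behavior.
fn get_copy(slice: &[i32], idx: usize) -> Option<i32> {
    if idx < slice.len() {
        // SAFETY: bounds were checked just above.
        Some(unsafe { get_unchecked_copy(slice, idx) })
    } else {
        None
    }
}

fn main() {
    let xs = [10, 20, 30];
    assert_eq!(get_copy(&xs, 1), Some(20));
    assert_eq!(get_copy(&xs, 99), None); // edge case handled, not "unsupported"
}
```

The safe API can only exist because the unsafe one spells out its contract precisely; that pressure toward explicit contracts is the academic influence the thread is describing.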