Integrate LLMs at the point of code push. Every pull request, every merge, every deploy. Run AI-assisted security review as part of your CI pipeline, the same way you run linters and unit tests. Not as an afterthought, not as a quarterly audit. At push time. If the code has a vulnerability, catch it before it reaches production.
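A minimal sketch of what such a push-time gate could look like. This is illustrative only: `review_diff` is a hypothetical placeholder for whatever LLM review service you wire in, not a real tool's API.

```python
"""Sketch of a push-time LLM security gate for a CI pipeline.

Assumptions: `review_diff` stands in for a call to an LLM review
backend and is left unimplemented here; the finding format
({"severity": ..., "note": ...}) is invented for illustration.
"""
import subprocess


def get_push_diff(base: str = "origin/main") -> str:
    """Collect the diff this push introduces relative to the base branch."""
    result = subprocess.run(
        ["git", "diff", base, "--", "."],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def review_diff(diff: str) -> list[dict]:
    """Placeholder for the LLM call: return findings as
    {"severity": "high" | "low", "note": str} dicts.
    Wire this to your provider of choice."""
    raise NotImplementedError


def gate(findings: list[dict]) -> bool:
    """Return False (fail the pipeline) on any high-severity finding,
    True otherwise -- the same pass/fail contract as a linter step."""
    return not any(f["severity"] == "high" for f in findings)
```

In CI this would run alongside the linters and unit tests on every pull request, with a non-zero exit code when `gate` returns False.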
freddyb | 5 hours ago
Begrudgingly, I need to agree. We have done some research on this at my workplace and patch-to-exploit is almost instant.
yawaramin | 12 hours ago
The 'responsible disclosure' policy was always a polite fiction people told each other. It was always a 'go along to get along' kind of situation. LLM-based vuln discovery tools have just exposed it for what it is.
evert | 12 hours ago
Finally someone exposed social norms for what they really are. Good riddance
Vaelatern | 8 hours ago
Social norms keep society together. Social norms kept people from being at each other's throats when doing vulnerability research, turning natural opponents into people on the same team. Without those norms, any security research that isn't planned by the team doing the work is an unwelcome surprise, and the world is worse off for it.
evert | 7 hours ago
Fully agree. Just in case it wasn't clear, my original comment was meant sarcastically
FeepingCreature | 5 hours ago
Social norms are great for getting along with society, not so great for getting along with attackers.
Riolku | 15 hours ago
It feels moderately ironic that this article too smells LLM-written
freddyb | 5 hours ago
Only makes sense. If you have an original idea, you need to publish it before someone else does. Critical thinking and self-reflection are dead. Once the thought is out you have 39 minutes to blog, otherwise someone else will beat you to it :P
FeepingCreature | 5 hours ago
doesn't to me tbh. this was a type of writing style in tech long before llms, and the big ones tend to write in this style slightly differently than this article does, imo.
Aks | 3 hours ago
Yeah, this reads like a fear-mongering ad
[OP] fro | 12 hours ago
i think this is just the way things are now for better or worse
junot | 3 hours ago
Maybe it is the era of hand-crafted artisanal boutique C programs used in mission-critical circumstances that is dead?
cosarara | 2 hours ago
The issue is the LLM will find 10 things on every review, and 1 in 400 will be a real exploitable vuln. The other 399 still need to be discarded by humans.
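Taken at face value, those figures imply a steep triage cost. A quick back-of-the-envelope, using only the numbers from the comment above:

```python
# Back-of-the-envelope triage cost, using the comment's figures:
# 10 findings per review, 1 real exploitable vuln per 400 findings.
findings_per_review = 10
true_positive_rate = 1 / 400

# Expected real vulns surfaced per review, and how many reviews
# a team must fully triage before hitting one real vuln.
real_vulns_per_review = findings_per_review * true_positive_rate   # 0.025
reviews_per_real_vuln = 1 / real_vulns_per_review                  # 40.0

# The false positives a human discards along the way.
false_positives_per_real_vuln = 400 - 1                            # 399
```

So a team would wade through roughly 40 reviews' worth of findings, discarding 399 false positives by hand, for each genuine vulnerability caught.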
muvlon | an hour ago
Here's what I do wonder: Will LLMs end up finding "all" security vulns, or will they hit a ceiling at some point?
I still remember the times when capable fuzzers (and AFL in particular) appeared and started finding dozens of critical vulns in more or less everything. For a period, the time to find security-critical bugs was also very short (but bug-to-exploit time mostly stayed the same, unlike with LLMs). Eventually, thanks to projects like oss-fuzz, the blue team did with fuzzers what this post is asking for with LLMs: Everyone integrated them into their development flow, as early and with as much compute resources as feasible. That caused the number of new fuzzer-findable vulns to go down considerably.
So there is a future where we do what the author asks, and then we mostly stop releasing software with vulnerabilities discoverable by LLMs. And then that either means we stop releasing software with vulnerabilities at all, or we only release software with vulnerabilities not discoverable by LLMs. Of course there is also the probable intermediate scenario where we only ship vulns discoverable by the smartest LLMs, and we have an arms race. This is maybe the most likely, and arguably where we are today, but who knows what time will bring.