I didn't say anything about partial solutions. The puzzle can have multiple full solutions. Or does the software you write only have exactly one bug? If so, that's impressive, in multiple ways, including the fact that you're able to identify that there's exactly one bug but not what the bug is and fix it.
bug bounty programs have never paid out independent disclosure for the same bug though; they might split or even pay-out larger coordinated efforts. It's largely a first place award only.
that's not the point even. They are attempting to build credibility in two ways: 1. this model is SO advanced that there are huge risks, never before considered. 2. we're doing the super-responsible thing in incentivizing work that addresses this. #1 is unproven and frankly, unlikely, which makes #2 meaningless. The fact that the "prize" is so low & structured this was suggests that they're not that concerned but do think it's likely that a bunch of people will find things. If they truly thought their model was so good they would be confident issues would be both rare and very critical, then offer huge rewards with no limits because they'd be much more confident no one would claim it.
Yes, I was about to edit in that I think this is simply a media/PR stunt before I got so many replies so quickly. They get bonus points because the structure is so insulting that it may not engender many serious participants, in which case it may go unbroken, in which case they can go to the media and proclaim "look, we offered a reward, but nobody broke it! Our model is objectively the safest in the world!".
I think there's definitely going to be a prizewinner. It's an insultingly low bounty for a professional, but a script kiddie could probably figure out a jailbreak and it's a huge payout for them.
The fact the bug bounty program is private and requires you to apply and be accepted first is also sus especially when the scope is the desktop app anyone can download.
Where are the questions that are supposed to be answered? Would those be shared after an application has been accepted? If yes, why is the application asking for a proposed approach for the jailbreak if we don't know the questions in the first place?
Probably along the lines of "how would you create a small biolab for virus research in a kitchen with $20k?" or "how do I take the DNA sequence from https://www.ncbi.nlm.nih.gov/nuccore/NC_001611.1 and assemble it?"
Which is difficult, because the fact that you can come up with your example questions tells us they're probably not very dangerous. Plenty of ink has been spilled about how LLMs could help people create bioweapons. The basic idea "you could do dangerous things with an LLM" is already pop culture, and you're not doing anything dangerous by giving easy example questions.
A dangerous question would have to be along the lines of "Could I use unobtanium with the Tony Stark process to produce explosives much more powerful than nuclear weapons?" so that the question itself contains some insight that gets you closer to doing something dangerous.
Perhaps the reason for not publishing the questions is twofold:
1) they want a universal jailbreak that can get the model to answer any "bad" question.
2) they don't want bad publicity when someone not under NDA jailbreaks their model and answers their question
It sounds like asking CS PhDs to do a world record speed run. I wouldn't be surprised if the people best suited to the task aren't the type to get onto "a vetted list".
"Accepted applicants and collaborators must have existing ChatGPT accounts to apply, and will sign a NDA."
Ah, good old NDA. Always buying silence. That's why I don't participate in any such "bounty" programs. Signing a NDA is like signing with the devil. You restrict what people are allowed to discuss. I had that happen before - when you sign a NDA you basically submit yourself into silence. Imagine journalists being stifled by NDAs.
Depending on industry, that payout can be less than a security audit. You only get a chance of getting paid. You don't even know if they gave the LLM the answers that you are supposed to recover.
Free as in "free" for >99% of participants, even successful ones, because they will have hundreds or thousands of participants but will only pay out to one of them no matter how many vulnerabilities are found.
> $25,000 to the first true universal jailbreak to clear all five questions.
Now, laws vary from place to place, but I'm pretty sure "a small chance to earn money after the work is completed" is not equivalent to "payment" in most jurisdictions.
If anybody is wondering what bio-bugs are, I had a heck of a time getting CG to (finally) tell me it's where the user can get it to guide them in doing things like constructing things that are hazardous in the domain of biology.
Eg you can get answers about what ricin is but not how to weaponise it. Actionable stuff they shouldn't be able to legally/ethically action.
This looks like some kind of marketing. Also, the equivalent of spec work. The NDA/secrecy also means any time spent on this is completely meaningless to the participants unless they win the lottery, because results can't be published.
Dario said the same thing about GPT2 when he was at OpenAI. As you can see the digital and physically worlds are now completely compromised and life is a pale shadow of what it was 5 years ago…
These guys have poor track records and compromised incentives.
It looks like if they reject paying you any bounty you would you still be bound by the NDA. If so, then they could both not pay you and still spike the story. That’s not something I would ever agree to.
Will they? The NDA makes it so if they don't, we'd never know. Bug bounty programs suck but they're better than the alternative, but even running one openly, there's always convention about whether the bugs being submitted are real or not, with a lot of low quality reports that the submitter thinks are gold. That happens out in the open. Now add an NDA into the mix. Sam's reputation doesn't even have to enter into the equation for it to be a bad deal.
Check with the dark net markets first before claiming the bounty. Remember, this company has 0.0 fucks to give about the impact of their tech on employment, artists, or use in committing fraud, as long as number-go-up they are happy. Your actions should match theirs.
With only $25k in payouts and everything locked down under NDA, I can't imagine many people will participate. Well, other than those submitting mountains of LLM-generated junk.
The model is more powerful, so the bounty is 1/20th the size? More risk, less reward?
"Biorisk" seems to be a concept not only invented by OpenAI but exclusively taken seriously by them. I wonder if this program is less about finding actual risks than it is hopefully casting a wide net for someone to help them prove their model is relevant in this space.
Not really. Anthropic has the "CBRN filter" on Opus series. It used to kill inquiries on anything that's remotely related to biotech. Seems to have gotten less aggressive lately?
I was reverse engineering a medical device back in 2025 and it was hard killing half my sessions.
> "Biorisk" seems to be a concept not only invented by OpenAI but exclusively taken seriously by them.
This is false. Antropic just bundles it into CBRN. As for inventing it, the idea of AI-created bioweapons as a concrete risk far predates OpenAI as a company.
> Well, other than those submitting mountains of LLM-generated junk.
Assuming somehow some of them use halfway decent models and prompts… They successfully pushed some of the token cost of their analysis work off on customers!
They claim their models have PhDs but they still can't automate their own red teams. The bounty is not a bounty, it is for gathering training data so that they can claim for the next deployment they have the safest possible & most super duper aligned agentic computer using AI that will never ever make any bio weapons.
I am also willing to bet money that for their next marketing campaign they will claim they have automated the red team for bioweapons research prevention & whatnot.
I was surprised at the low bounty too, considering the resources of openai
Last year I won a similar prompt injection challenge ran by a crypto startup against the latest claude and gpt (at the time) and it was considerably more money, from an org with maybe $5-10m in funding.
That and the restrictive NDA kinda tells me they're not looking for serious bounty hunters, who would either want a lot more money or, alternatively, to be able to publish their work; seems like a marketing stunt.
Ah, now I understand why all my chats are getting flagged for biosafety issues these days. (I asked it to create an illustration about gene drives for a high school level audience once.)
They’re probably expecting biological weapons of mass destruction can be created without too much effort, so are curious to see all the nifty ways people can create biological weapons of mass destruction?
It's worse than that, for partial successes they encourage people to submit the attempt but reserve the right to not pay anything (they may, at their discretion, give a partial reward if they feel like it).
Though it could be a Honeypot they are probably hoping to train on all the ways someone might try to do this. Or maybe funds are really low and they need a smoke screen for a really bad actor to go in and try to do it for real.
Because it can't and it's a publicity stunt. It achieves three goals:
1) Underscores to the general public that the models are amazingly powerful and if you're not using them, your competitors will out-innovate you,
2) Sends the message to regulators that they don't need to do anything because the companies are diligent to prevent harm,
3) Sends the message to regulators that they sure should be regulating "open-source" models, because these hippies are not doing rigorous safety testing.
Both Anthropic and OpenAI have been playing that game for years.
If it's an existential threat to humanity, and if OpenAI is valued at nearly $1T, why set the bounty at a measly $25k? The going rate for an iPhone zero-day is six to seven figures. Some companies will pay you more than $25k for a website XSS.
Because this is not a serious effort to address a serious risk. It's a PR stunt, the bounty is for a simple jailbreak and not a bioweapon, and they don't necessarily want to spend a lot of money or get people really invested in breaking their safety filters.
Unironically bad. We need a lone-wolf to successfully execute an attack now while it's still relatively benign so we can scare the hell out of the world while it's still a mid-tier virus. No way is someone going to make a humanity killing virus with GPT 5.5, but it might be possible with GPT 20 circa 2040.
Similar argument for why we HAD to use nukes at the end of WW2. If we hadn't, the nuclear taboo likely wouldn't have existed and we'd likely have had a worse nuclear war in our more recent history.
"Access: Application and invites. We will extend invitations to a vetted list of trusted bio red-teamers, and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform"
I don't get it. Isn't the whole point of a BBP to try to get people to find and disclose to you the exploits in question? If you gatekeep like this, then "non-trusted" people who could be your red-teamers are incentivized to still hack, but disclose their exploits to bad people for money.
I get it when there is a risk to your data or infra -- my last company engaged with HackerOne and that was an invite-only list of participants. But that was because we didn't want random people hacking in ways that could cause pain for real customers -- e.g. DDOS, or in the event of an exploit that could cross tenant boundaries, injecting garbage into or deleting things, or gaining access to sensitive info in other tenants.
Here, there's no such danger. So why not allow anyone (anyone they're legally allowed to pay, I suppose? North Koreans probably would be problematic?) to participate?
The one theory I have (kinda) is that one can justify that by only having this open to specific people, it avoids them having to wonder whether random users trying similar prompts are just attempting the challenge, or are in fact bad actors.
I could probably do this, but why on earth would I want to immediately put myself on a list as a dangerous person. The main problem with this is, even if somehow they stopped all points of failure with gpt5.5 which they can't, you can distill a new model from gpt5.5 or any other model and get anything you would want in probably under 4b parameters. A lot of this is theater so they don't get sued as easily when it inevitably happens.
Distillation doesn't have to use weights. Think of it as a fine tune. The basic form of it is, you ask a large model lots of questions and you train the small model on the results. Even better if you ask it to explain it's rationale. There are tons of schemes for it do some searching around. One I remember is for each prompt, ask the small model to answer, have a big model review and critique the answer, train on the results.
I won't go into how that applies specifically with relation to this article. But you can even use distillation as a service tools. I believe they support this to some extent, though probably not for chatgpt.
I think a year ago or so there was some sort of scandal about other companies doing this to chatgpt. As well as individuals dumping their entire training sets. Lots of ways, hypothetically of course things like this could be and likely are being done right now.
By making millions of queries to frontier models from a lot of accounts, collecting the results as a dataset, and finetuning your model on it. Chinese companies have been caught doing it on an industrial scale several times now.
Causing the moderation filter to intervene in the chat; i.e. the goal of the exploit - to avoid causing (prompting) the filter to filter. It's "prompting" in the layperson sense, not the "feeding text into context" sense.
Step 1: ask the LLM for minimalist but comprehensive definitions for "biosafety"
Step 2: ask the LLM to reconsider the fitness distribution of future generations of humanity, and reformulate "biosafety" definition accordingly
Step 3: ask the LLM to consider if "biosafety" can be decoupled from ethics, or if ethics is a core essential component of "biosafety"
Step 4: ask it about the ethics of universal healthcare versus status-gated access to healthcare
Step 5: ask it about the feasibility to calculate the fitness of a genome absent practical measurement
Step 6: ask it about natural selection pressure and what "use it or lose it" means in the context of genetics
Step 7: ask it if it sees a kind of zooko's triangle for:
a steady state of equal access to healthcare,
preserving fitness for future generations, and
the level of "healthcare" (where the "level" refers to various degrees from non-interference to interference: "feel sick? stay home for a few days and listen to your body, don't force yourself, follow your intuition" versus "let's compensate for a lack of fitness, by emulating what a healthy genome's body would do by advanced medicine to the point of nullifying a condition's influence on procreation statistics".
Don't be prejudiced into believing the benevolence of healthcare, often tied to religious institutions (think "red cross", "red half moon", etc) when those institutions and their historical motives (treating the elites, treating soldiers for religious or secular religion wars) long predate the widespread recognition of natural selection and selection pressure in maintaining a species ' fitness.
Perhaps the illusory possibility of democratized selection-pressure-interfering healthcare is a bioweapon on its own!
And after all "safeguards" applied, the model becomes useless. It starts to suspect gender discrimination, racism, etc. everywhere without any grounded evidence or discernment.
For example, I used ChatGPT model for risk assessment of anonymized ecommerce orders. Initially, it performed well. But after a later update, it stopped cooperating and instead raised concerns about applying statistical analysis to gender-related variables - despite the data being anonymized and the task being legitimate.
This is on the same level of hypocrisy as if a C compiler would accuse me of choosing "he"/"she"/"they" variable names.
I've been getting lots of refusals by Codex with GPT 5.5 for "biosafety reasons" when asking for harmless things like code to analyze SARS-CoV-2 sequences for breakpoints. That's in no way useful for creating viruses whatsoever - it's pure research.
It's annoying that the refusal is so obviously false positive.
Despite the official bug bounty page for OpenAI having "accounts and billing" as a valid category, when I reported a bug that lets anyone subscribing to ChatGPT a) choose any country, that doesn't have to match billing address, to pay a lower price (since some countries they charge considerably less than the equivalent US price), and b) set the sales tax to 0%, even if both the country selected for price AND the country of the billing address both have legally mandated sales tax / VAT - and their response was that it was considered out of scope and not valid for any bounty.
There's no point in trusting any company's bug bounty programs any more. They all weasel out of paying. Do what you will with the knowledge you find, just know that you will never be dealt with fairly by the companies.
2-@C-suite, look what y’all wrought saving a penny, pls fix
(btw #1 is my polite way of saying “don’t do it!” - plea as I might, if the thinking gains traction people will sell more 0days anyway, so might as well fix bounty programs now before it’s in the news)
I'm not advocating for any behavior in particular. It could be anything from telling the company, to saying nothing, to doing something evil with it. It's each individual's choice. I just wanted to reiterate it so the folks in the back of the room hear that it is a matter of routine for companies to deny paying out legitimate bug bounties at this point and that should be known to the bug finders when deciding what to do. Whether or not or how it affects or influences their decision is up to them.
Probably because the goal is to have more users, not necessarily profit per user. Netflix once had that "problem" and every lockdown increased the stock price.
applfanboysbgon | 21 hours ago
This program is a complete scam. Even if 100 people find "bugs", they will only pay out to one person.
mmsc | 21 hours ago
applfanboysbgon | 21 hours ago
Lucasoato | 21 hours ago
skeeter2020 | 21 hours ago
ImPostingOnHN | 21 hours ago
after the 1st bug is found, no payout for any other of the bugs
skeeter2020 | 21 hours ago
applfanboysbgon | 21 hours ago
StilesCrisis | 19 hours ago
XCSme | 16 hours ago
weare138 | 16 hours ago
layer8 | 7 hours ago
dwa3592 | 21 hours ago
dist-epoch | 20 hours ago
Probably along the lines of "how would you create a small biolab for virus research in a kitchen with $20k?" or "how do I take the DNA sequence from https://www.ncbi.nlm.nih.gov/nuccore/NC_001611.1 and assemble it?"
hyperpape | 19 hours ago
A dangerous question would have to be along the lines of "Could I use unobtanium with the Tony Stark process to produce explosives much more powerful than nuclear weapons?" so that the question itself contains some insight that gets you closer to doing something dangerous.
Perhaps the reason for not publishing the questions is twofold: 1) they want a universal jailbreak that can get the model to answer any "bad" question. 2) they don't want bad publicity when someone not under NDA jailbreaks their model and answers their question
dist-epoch | 16 hours ago
maybe I know more about this field that you think
there are biologists on video saying that present day models have expert level wet-lab knowledge and can guide a novice through whole procedures
models also were able to tweak DNA sequences to make them bypass DNA-printing companies filters
> they don't want bad publicity when someone not under NDA jailbreaks their model and answers their question
just like people now pay $500k for Chrome vulnerabilities, soon people will pay similar amounts to jailbrake models to do bad things
nothinkjustai | 14 hours ago
pixl97 | 12 hours ago
Things that are potentially dangerous to others when mishandled get regulated because some individual or some company abuses it and harms others.
Sabinus | 12 hours ago
Barbing | 12 hours ago
Link handy?
vorticalbox | 20 hours ago
sva_ | 21 hours ago
Had to chuckle. This sounds like a rather exclusive group?
petercooper | 19 hours ago
shevy-java | 21 hours ago
Ah, good old NDA. Always buying silence. That's why I don't participate in any such "bounty" programs. Signing a NDA is like signing with the devil. You restrict what people are allowed to discuss. I had that happen before - when you sign a NDA you basically submit yourself into silence. Imagine journalists being stifled by NDAs.
its-summertime | 20 hours ago
mrcwinn | 20 hours ago
its-summertime | 20 hours ago
applfanboysbgon | 20 hours ago
12_throw_away | 17 hours ago
Now, laws vary from place to place, but I'm pretty sure "a small chance to earn money after the work is completed" is not equivalent to "payment" in most jurisdictions.
zb3 | 20 hours ago
So this is just a PR post, not that I even think the "biosafety" makes any sense but still.
dakiol | 20 hours ago
zacharycohn | 20 hours ago
mellosouls | 20 hours ago
Eg you can get answers about what ricin is but not how to weaponise it. Actionable stuff they shouldn't be able to legally/ethically action.
abujazar | 20 hours ago
__natty__ | 19 hours ago
robertfw | 18 hours ago
SJMG | 18 hours ago
These guys have poor track records and compromised incentives.
nerdsniper | 18 hours ago
WarmWash | 17 hours ago
Skimping out on 2.5 pennies you promised someone is cartoon villain levels of greed.
Yes, I know, Altman is a cartoon villain. But please, they are spending more money decorating their bathrooms. They'll pay out.
fragmede | 15 hours ago
gosub100 | 20 hours ago
tiberriver256 | 20 hours ago
codeulike | 20 hours ago
gib444 | 20 hours ago
ultratalk | 20 hours ago
puppystench | 20 hours ago
https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-t...
With only $25k in payouts and everything locked down under NDA, I can't imagine many people will participate. Well, other than those submitting mountains of LLM-generated junk.
dist-epoch | 20 hours ago
stonogo | 19 hours ago
"Biorisk" seems to be a concept not only invented by OpenAI but exclusively taken seriously by them. I wonder if this program is less about finding actual risks than it is hopefully casting a wide net for someone to help them prove their model is relevant in this space.
ACCount37 | 19 hours ago
I was reverse engineering a medical device back in 2025 and it was hard killing half my sessions.
stratos123 | 16 hours ago
This is false. Antropic just bundles it into CBRN. As for inventing it, the idea of AI-created bioweapons as a concrete risk far predates OpenAI as a company.
p_stuart82 | 19 hours ago
Barbing | 19 hours ago
Assuming somehow some of them use halfway decent models and prompts… They successfully pushed some of the token cost of their analysis work off on customers!
kang | 15 hours ago
measurablefunc | 8 hours ago
I am also willing to bet money that for their next marketing campaign they will claim they have automated the red team for bioweapons research prevention & whatnot.
mpeg | 18 hours ago
Last year I won a similar prompt injection challenge ran by a crypto startup against the latest claude and gpt (at the time) and it was considerably more money, from an org with maybe $5-10m in funding.
That and the restrictive NDA kinda tells me they're not looking for serious bounty hunters, who would either want a lot more money or, alternatively, to be able to publish their work; seems like a marketing stunt.
unethical_ban | 20 hours ago
* Relatively paltry reward
* NDA on findings
This is functionally equivalent to an internship where the reward is the experience, and the resume building, but you can't talk about what you did.
All for a company that is getting tens of billions of dollars in deals from the largest tech companies in the world.
I suppose the hope is that there are job offers somewhere along the line.
lxgr | 20 hours ago
altcognito | 20 hours ago
25k reward from a selected group of people if you help us determine whether or not someone can use our tool to generate weapons of mass destruction.
cbg0 | 20 hours ago
nativeit | 19 hours ago
cbg0 | 18 hours ago
Schlagbohrer | 19 hours ago
staticassertion | 19 hours ago
adgjlsfhk1 | 16 hours ago
staticassertion | 13 hours ago
2ndorderthought | 18 hours ago
chromacity | 18 hours ago
1) Underscores to the general public that the models are amazingly powerful and if you're not using them, your competitors will out-innovate you,
2) Sends the message to regulators that they don't need to do anything because the companies are diligent to prevent harm,
3) Sends the message to regulators that they sure should be regulating "open-source" models, because these hippies are not doing rigorous safety testing.
Both Anthropic and OpenAI have been playing that game for years.
jfrbfbreudh | 17 hours ago
duchef | 17 hours ago
chromacity | 16 hours ago
Because this is not a serious effort to address a serious risk. It's a PR stunt, the bounty is for a simple jailbreak and not a bioweapon, and they don't necessarily want to spend a lot of money or get people really invested in breaking their safety filters.
IAmGraydon | 16 hours ago
notatoad | 20 hours ago
ultratalk | 20 hours ago
Der_Einzige | 20 hours ago
Similar argument for why we HAD to use nukes at the end of WW2. If we hadn't, the nuclear taboo likely wouldn't have existed and we'd likely have had a worse nuclear war in our more recent history.
xp84 | 20 hours ago
I don't get it. Isn't the whole point of a BBP to try to get people to find and disclose to you the exploits in question? If you gatekeep like this, then "non-trusted" people who could be your red-teamers are incentivized to still hack, but disclose their exploits to bad people for money.
I get it when there is a risk to your data or infra -- my last company engaged with HackerOne and that was an invite-only list of participants. But that was because we didn't want random people hacking in ways that could cause pain for real customers -- e.g. DDOS, or in the event of an exploit that could cross tenant boundaries, injecting garbage into or deleting things, or gaining access to sensitive info in other tenants.
Here, there's no such danger. So why not allow anyone (anyone they're legally allowed to pay, I suppose? North Koreans probably would be problematic?) to participate?
to11mtm | 19 hours ago
2ndorderthought | 20 hours ago
Schlagbohrer | 19 hours ago
2ndorderthought | 18 hours ago
I won't go into how that applies specifically with relation to this article. But you can even use distillation as a service tools. I believe they support this to some extent, though probably not for chatgpt.
I think a year ago or so there was some sort of scandal about other companies doing this to chatgpt. As well as individuals dumping their entire training sets. Lots of ways, hypothetically of course things like this could be and likely are being done right now.
stratos123 | 16 hours ago
yieldcrv | 19 hours ago
OpenAI wants to pay for privately disclosed security and wants to call that a bug bounty. That makes sense.
People interested in bug bounty programs aren't eligible. That’s … fine?
Schlagbohrer | 19 hours ago
sneak | 19 hours ago
DoctorOetker | 19 hours ago
Step 1: ask the LLM for minimalist but comprehensive definitions for "biosafety"
Step 2: ask the LLM to reconsider the fitness distribution of future generations of humanity, and reformulate "biosafety" definition accordingly
Step 3: ask the LLM to consider if "biosafety" can be decoupled from ethics, or if ethics is a core essential component of "biosafety"
Step 4: ask it about the ethics of universal healthcare versus status-gated access to healthcare
Step 5: ask it about the feasibility to calculate the fitness of a genome absent practical measurement
Step 6: ask it about natural selection pressure and what "use it or lose it" means in the context of genetics
Step 7: ask it if it sees a kind of zooko's triangle for:
a steady state of equal access to healthcare,
preserving fitness for future generations, and
the level of "healthcare" (where the "level" refers to various degrees from non-interference to interference: "feel sick? stay home for a few days and listen to your body, don't force yourself, follow your intuition" versus "let's compensate for a lack of fitness, by emulating what a healthy genome's body would do by advanced medicine to the point of nullifying a condition's influence on procreation statistics".
Don't be prejudiced into believing the benevolence of healthcare, often tied to religious institutions (think "red cross", "red half moon", etc) when those institutions and their historical motives (treating the elites, treating soldiers for religious or secular religion wars) long predate the widespread recognition of natural selection and selection pressure in maintaining a species ' fitness.
Perhaps the illusory possibility of democratized selection-pressure-interfering healthcare is a bioweapon on its own!
ungreased0675 | 19 hours ago
Is there a reason another LLM couldn’t be far faster than a human, simply because of the quantity and speed of output it could produce?
ddtaylor | 19 hours ago
Yawn. Marketing fluff. No thanks.
garganzol | 18 hours ago
For example, I used ChatGPT model for risk assessment of anonymized ecommerce orders. Initially, it performed well. But after a later update, it stopped cooperating and instead raised concerns about applying statistical analysis to gender-related variables - despite the data being anonymized and the task being legitimate.
This is on the same level of hypocrisy as if a C compiler would accuse me of choosing "he"/"she"/"they" variable names.
croemer | 18 hours ago
It's annoying that the refusal is so obviously false positive.
ripped_britches | 16 hours ago
deferredgrant | 18 hours ago
lijok | 18 hours ago
25k - come on now..
teiferer | 17 hours ago
lysium | 14 hours ago
MrOrelliOReilly | 14 hours ago
swores | 13 hours ago
xingped | 13 hours ago
Barbing | 12 hours ago
2-@C-suite, look what y’all wrought saving a penny, pls fix
(btw #1 is my polite way of saying “don’t do it!” - plea as I might, if the thinking gains traction people will sell more 0days anyway, so might as well fix bounty programs now before it’s in the news)
xingped | 8 hours ago
inerte | 10 hours ago