But one of the reasons they switched was that the upstream compiler for the original language they used, Zig, wouldn't accept the slop contributions they wanted to make for Bun perf. What will they do when they need to try to push a slop contribution upstream to Rust?
At this point they will probably just fork yet again and maintain some vibe compiler.
They should make FullstackLang. It compiles English in .md files to machine code that runs directly on the specialized hardware it designs for it, which you have to 3D print at runtime. Every program gets its own custom hardware. Composability and reuse be damned. Pay the token masters for every thought you have.
Huh. I wonder if the original intent was to merge an AI-generated PR to a high-profile project like Zig. It makes the headlines and generates hype. But that went embarrassingly badly for them, so they had "port Bun to Rust" as a backup.
No, they've explicitly denied it.[0] However, they do regularly take digs about how much faster their fork is[1][2], which they can't merge because of Zig's AI policy.

[0]: https://x.com/jarredsumner/status/2051600118886138262
[1]: https://x.com/bunjavascript/status/2048427636414923250
[2]: https://x.com/jarredsumner/status/2053050239423312035
> This policy does not seem to forbid vibe coding?

It does in the narrower sense of vibe coding (as opposed to more general agentic coding, which is also called vibe coding from time to time...).
> Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.
Vibe coding (in its original meaning) would have a hard time arguing it's of high quality.
> This policy is intended to live in Forge as a living document, not as a dead RFC.
Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a human - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...
https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...

It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyway:
> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...
Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?
The Linux policy on this is much superior and more sensible.
> Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do?
Imagine if they just said "LLMs are banned" - then there's a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").
I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.
If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.
> Like seriously, what's the point of explicitly allowing this?
Explicit permission can be useful to preemptively cut off some questions from well meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly / "obvious" it might otherwise be), or get agitated by misconstruing an all-banned list as being an overly verbose "no LLMs ever" overreach.
> It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]
Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts.
> If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.

What?
> Like seriously, what's the point of explicitly allowing this?
I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.
What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.
This is highly interesting. It seems clear to me that a lot of thought and work went into this. If I ever were to write a similar document, I'm sure I could learn a lot from this one. Props to the authors and all involved.
Note that there are currently several proposed policies (plus hundreds of discussions mostly in private channels), and frankly I'm not sure we'll ever reach a consensus (I'm a Rust project member).
This policy is straightforward and shouldn't be particularly controversial (I'm sure it will be bikeshedded to death though). It basically bans the obvious stuff ("don't just drop LLM generated comments onto PRs") and allows the important stuff like LLMs writing code so long as you disclose.
edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.
So... big caveat: this is still under review, so what we're talking about is a moving target. But based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out to run an experiment to try to get only high-quality LLM PRs:
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.
> We carve out a space for "experimentation" to inform future revisions to this policy.
Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.
> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.
What are you even talking about lol the policy doesn't imply that at all.
That's in the "allowed with caveats" section. It's just saying to not open bug reports without first reading them yourself or your bug may be closed. No one is saying "by policy we will have to add the bug back in" jesus christ
The policy is insanely straightforward, idk how you can be misinterpreting it this badly. It's just "Disclose that you use a model, you are on the hook for reviewing model output as a human" and then some clear cut examples.
The assumption here is that people act in good faith. If you break the rules, this indicates that you are not acting in good faith, and perhaps should no longer be welcome.
They're trying to avoid a Boy Who Cried Wolf situation.
If they get swamped with 100 bug reports that turn out, after they investigate them, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.
An LLM-generated bug report that pretends to be a human-created one would be trying to abuse that presumption of validity, and is therefore considered a dick move.
Kudos to the team for this. I think it’s brave of them to stand up for their own experiences and push back against the hype train.
Before you knee-jerk hate on the team for being Luddites, consider:
1. For a language like Rust there are too few eyes and too many mouths. Reviewing is a job, and is extremely taxing.
2. The code base needs to be highly hermetic because it’s load-bearing across the global economy.
3. Most changes are only relevant if they’ve followed extensive process, including community feedback.
> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
This section is an extremely useful reference
> The Linux policy on this is much superior and more sensible.

This is not very different from the Linux kernel's policy, so it's an odd comparison. It's actually almost identical in practical terms.
edit: lol proof that this doc needs to be stupidly explicit is in the pudding with the HN comments going out of their way to radically misread it
Will it fix a related but different problem? Likely.
> People must be vouched for before interacting with certain parts of a project (the exact parts are configurable to the project to enforce).
https://github.com/mitchellh/vouch
I think many projects will adopt this instead of allowing everyone / blocking everyone
Many projects have an "AI slop" check in place to directly close the PR and ban the user if it is deemed "AI slop". Otherwise it will be hard to handle the velocity of PRs.
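For what it's worth, the gating idea is simple enough to sketch. The snippet below is purely illustrative - the registry shape, the names, and the per-area vouch threshold are all made up here and are not mitchellh/vouch's actual data model or API:

    use std::collections::HashMap;

    struct VouchRegistry {
        // contributor -> project members who vouched for them
        vouches: HashMap<String, Vec<String>>,
    }

    impl VouchRegistry {
        fn new() -> Self {
            Self { vouches: HashMap::new() }
        }

        // A project member vouches for a contributor.
        fn vouch(&mut self, member: &str, contributor: &str) {
            self.vouches
                .entry(contributor.to_string())
                .or_default()
                .push(member.to_string());
        }

        // Gate: a contributor may touch a protected area only if at least
        // `required` members have vouched for them.
        fn may_contribute(&self, contributor: &str, required: usize) -> bool {
            self.vouches
                .get(contributor)
                .map_or(false, |v| v.len() >= required)
        }
    }

    fn main() {
        let mut registry = VouchRegistry::new();
        registry.vouch("maintainer_a", "new_contributor");

        // One vouch is enough for ordinary PRs in this sketch...
        assert!(registry.may_contribute("new_contributor", 1));
        // ...but not for a hypothetical area that requires two vouches.
        assert!(!registry.may_contribute("new_contributor", 2));

        println!("vouch gate checks passed");
    }

The real tool presumably also handles identity, revocation, and where the registry lives; the point is just that "the exact parts are configurable to the project" ultimately boils down to a per-area threshold check like this.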
I don't know if keeping your name/face secret is still acceptable? Maybe tiers of devs (anon vs. other) on that one?