Yeah I'm a bit confused because you can have an entirely unsafe code base with just the public interface marked as safe. No unsafe in the interface isn't a measure of safety at all.
It is a measure of the intended level of care that the users of your interface have to take. If there's no unsafe in the interface, then that implies that the library has only provided safe interfaces, even if it uses unsafe internally, and that the interface exposed enforces all necessary invariants.
It is absolutely a useful distinction on whether your users need to deal with unsafe themselves or not.
Sure, it's a useful distinction for whether users need to care about safety but not whether the underlying code is safe itself, which is what I wrote about.
Little or no unsafe internal code, with careful verification of what remains, is the bar for many Rust reimplementations. It's also what keeps the code memory safe.
I guess I don't write enough Rust to say this with confidence, but isn't that the bare minimum? I find it difficult to believe the Rust community would accept using a library where the API requires unsafe.
>I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum
I have some experience and yes, unless you're putting out a library for specifically low-level behavior like manual memory management or FFI. Trivia about the unsafe fn keyword missed the point of my comment entirely.
Not at all. Some things are fundamentally unsafe. mmap is inherently unsafe, but that doesn’t mean a library for it shouldn’t exist.
If you’re thinking of higher level libraries, involving http, html, more typical file operations, etc, what you’re saying may generally be true. But if you’re dealing with Direct Memory Access, MCU peripherals, device drivers, etc, some or all of those libraries have two options: accept unsafe in the public interface, or simply don’t exist.
(I guess there’s a third option: lie about the unsafety and mark things as safe when they fundamentally, inherently are not and cannot be safe)
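To make that concrete, here's a Rust sketch of the "honest" version of such an interface: the constructor is a `pub unsafe fn` whose contract is spelled out in a `# Safety` section, while operations that only rely on that contract stay safe. The type and names are hypothetical, not any real mmap crate's API.

```rust
/// Hypothetical view over a caller-supplied memory region, mimicking the
/// shape of an mmap-style API where the library cannot verify the region.
pub struct RawRegion {
    ptr: *const u8,
    len: usize,
}

impl RawRegion {
    /// # Safety
    /// `ptr` must be valid for reads of `len` bytes for the lifetime of
    /// the returned `RawRegion`. The library cannot check this, which is
    /// exactly why the constructor is `unsafe` rather than "lied about".
    pub unsafe fn new(ptr: *const u8, len: usize) -> Self {
        RawRegion { ptr, len }
    }

    /// Once the caller has upheld the constructor's contract, reads can
    /// be exposed safely, with bounds checked here.
    pub fn byte(&self, i: usize) -> Option<u8> {
        if i < self.len {
            // SAFETY: i < len, and `new`'s contract guarantees the
            // region is valid for `len` bytes.
            Some(unsafe { *self.ptr.add(i) })
        } else {
            None
        }
    }
}
```

The point is that `unsafe` marks exactly one obligation (the constructor's precondition) instead of poisoning the whole API.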
It's useful, to be sure, but I wouldn't want to use a library with a safe public interface that is mostly unsafe underneath (unless it's a -sys crate, of course). I think "this crate has no unsafe code" or "this crate has a minimal amount of carefully audited unsafe code" are good things to see, in general.
I agree the wording is a bit strange, but a quick grep of the repo shows that it doesn't imply that.
The only usages of unsafe are in src/ffi, which is only compiled when the ffi feature is enabled. ffi is fundamentally unsafe ("unsafe" meaning "the compiler can't automatically verify this code won't result in undefined behavior") so using it there is reasonable, and the rest of the crate is properly free of unsafe.
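That split can also be enforced mechanically. A hypothetical `lib.rs` sketch (names assumed, not xmloxide's actual source) that forbids unsafe crate-wide except in a feature-gated FFI module:

```rust
// Hypothetical crate layout: unsafe is forbidden everywhere unless the
// `ffi` feature is enabled, and even then only the ffi module uses it.
#![cfg_attr(not(feature = "ffi"), forbid(unsafe_code))]

/// Safe API surface: a stand-in for a real parsing entry point.
pub fn tag_count(input: &str) -> usize {
    input.matches('<').count()
}

/// Compiled only with `--features ffi`; the one place `unsafe` may appear.
#[cfg(feature = "ffi")]
pub mod ffi {
    use std::os::raw::c_char;

    /// # Safety
    /// `ptr` must point to a valid NUL-terminated C string.
    pub unsafe fn c_strlen(ptr: *const c_char) -> usize {
        unsafe { std::ffi::CStr::from_ptr(ptr) }.to_bytes().len()
    }
}
```

With this arrangement, any default build that sneaks `unsafe` into the rest of the crate fails to compile.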
If by flaws you mean the security researchers spamming libxml2 with low effort stuff demanding a CVE for each one so they can brag about it – no, I don’t think anybody can fix that.
Because it was written in C, libxml2's CVE history has been dominated by use-after-free, buffer overflows, double frees, and type confusion. xmloxide is written in pure Rust, so these entire vulnerability classes are eliminated at compile time.
Is that true? I thought if you compiled a Rust crate with `#![deny(unsafe_code)]`, there would not be any issues. xmloxide has unsafe usage only in the C FFI layer, so the rest of the system should be fine.
Amazing work! I'd love to hear more details about your workflow with Claude Code.
As a side note, and this isn't a knock on your project specifically: I think the community needs to normalize disclaimers for "vibe-coded" packages. Consumers really need to understand the potential risks of relying on agent-generated code upfront.
Yeah, it's a fair point. I wondered if it might be irresponsible to publish the package because it was made this way, but I suspect I'm not the first person to try to develop a package with Claude Code, so I think the best I can do is be honest about it.
As for the workflow, I think the best advice I can give is to set up as many guardrails and tools as possible, so Claude can do as many iterations as possible before needing any intervention. In this case I set up pre-commit hooks for linting and formatting, gave it access to the full testing suite, and let it rip. The majority of the work was done in a single thinking loop that lasted ~3 hours, where Claude was able to run the tests, see what failed, and iterate until they all passed. From there, there were still lots of iterations to add features, clean up, test, and improve performance - but allowing Claude to iterate quickly on its own without my involvement was crucial.
Yes. If you tripped across this package on crates.io, the readme gives the impression of a serious piece of software, but your comments here imply it is a one-off experiment rather than something you plan to maintain for the next decade.
I don't think it was irresponsible to publish it, but I do think it was irresponsible to publish it without clearly disclosing at the top of the crates.io README that it was built entirely by AI, and that you haven't reviewed the code (assuming you haven't).
If I were looking for an XML parser/generator library, I might stumble across this and think it might be production-quality, and assume it was built by humans, or at least that humans had fully vetted and understood the code.
Even more interesting is how much the effort cost.
Unlike the development work of old (pre-2025), work with high-end models incurs a very direct monetary cost: one burns tokens, which cost money, and you can't run something as powerful locally (even if you happened to have a Mac Pro Ultra with RAM maxed out).
Some of my friends burned through hundreds of dollars a day while doing large amounts of (allegedly efficient) work with Claude Code.
Do they? Tons of extremely popular human-generated libraries are absolute trash. Just as an example, nearly all of the JS zip file libraries are dumpster fires. Same with QR code libraries and command line parsing libraries.
If you want to know whether the code is good or bad, read the code and check the tests. Assuming human = good, LLM = bad does not make much sense given the amount of bad human code I've seen.
Sure, if the code is from a reputable company or creator then I'd take that as a strong signal of quality over an LLM, but I wouldn't take a random human programmer as a strong signal over generated code.
A comment on libxml, not on your work:
Funny how so many companies use this library in production and not one steps in to maintain this project and patch the issues.
What a sad state of affairs we are in.
Yeah I agree, maintaining OSS projects has been a weird thing for a long time.
I know a few companies have programs where engineers can designate specific projects as important and give them funds, but it doesn't happen enough to support all the projects that currently need work. Maybe AI coding tools will lower the cost of maintenance enough to improve this.
I do think there are two possible approaches that policy makers could consider.
1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.
2) Many governments have tried to create cyber reserve corps. I bet they could designate people as maintainers of key projects they rely on, maintaining both the projects and a pool of people skilled with the tools they deem important.
Feels more like you don’t understand the concept of the tragedy of the commons.
EDIT: Sorry, I’ve had a shitty day and that wasn’t a helpful comment at all. I should’ve said that as I understand it TOTC primarily relates to finite resources, so I don’t think it applies here. Sorry again for being a dick.
We need a tax on companies using or selling anything OSS, with the funds going into OSS. The wealth it has generated is insane, and it's nearly all just donations from experts.
There is nothing in open source licenses that prevents charging money; in fact, non-commercial clauses are seen as incompatible with the Debian Free Software Guidelines.
And there are a lot of companies out there that make their money based on open source software; Red Hat is maybe the biggest and most well known.
I meant in the sense that someone else can redistribute the source for free, not that the company has to do it.
> The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
OSS is allowed to make money and there are projects that require paid licenses for commercial use.
The source is available and collaborative.
Qt states this on their site:
Simply put, this is how it works: In return for the value you receive from using Qt to create your application, you are expected to give back by contributing to Qt or buying Qt.
Which is approximately all companies, because all companies use software and, depending on what the researchers look at, 90% to 98% of codebases depend on OSS.
Conclusion: support OSS from general taxation, like the Sovereign Tech Fund in Germany does. It's a public good!
About a day after I resigned as maintainer, SUSE stepped in and is now maintaining the project. As announced here [1], I'm currently trying a different funding model and started a GPL-licensed fork with many security and performance improvements [2].
It should also be noted that the remaining security issues in the core parser have to do with algorithmic complexity, not memory safety. Many other parts of libxml2 aren't security-critical at all.
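To illustrate the distinction: a memory-safe parser can still be taken down by exponential entity expansion (a "billion laughs" document) unless it budgets the work explicitly. Below is a hypothetical sketch of such a budget check, not libxml2's or xmloxide's actual mechanism.

```rust
use std::collections::HashMap;

// Sketch: why algorithmic-complexity issues survive memory safety.
// Each entity may expand into further entities; without a budget,
// nested definitions blow up exponentially even in safe Rust.
// (Hypothetical helper; names and structure are assumptions.)
fn expansion_cost(
    entity: &str,
    defs: &HashMap<&str, Vec<&str>>,
    budget: &mut u64,
) -> Result<(), &'static str> {
    // Charge one unit per expansion; fail once the budget is exhausted.
    *budget = budget
        .checked_sub(1)
        .ok_or("entity expansion budget exceeded")?;
    if let Some(children) = defs.get(entity) {
        for child in children {
            expansion_cost(child, defs, budget)?;
        }
    }
    Ok(())
}
```

A real parser would also cap nesting depth and total output size; the point is simply that these are explicit limits the parser must choose, not something the borrow checker provides.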
> I do think there is something interesting to think about here in how coding agents like Claude Code can quickly iterate given a test suite.
This is a point I've tried to advocate for a while, especially to empower non-coders and make them see that we CAN approach automation with control.
Some aspects will be the classic unit or integration tests for validation. Others will be AI Evals [1], which to me could be the common language for product design across different families/disciplines who don't quite understand how to collaborate with each other.
The amount of progress in a short time is amazing to see.
Please stop spreading this "AI evals" terminology. "evals" is what providers like OpenAI and Anthropic do with their models. If you wrote a test for a feature that uses an LLM, it's just a test, there's no need to say "evals." Having a separate term only further confuses people who already have no idea what that actually means.
I respectfully disagree. I think there needs to be a common term for the aspects around LLM testing and saying "It's just integration/system tests" doesn't really reach audiences well. They don't disambiguate the differences.
Words win when they're used. Just because Agent Skills is just a pattern for standardization and saving context doesn't mean it wasn't incredibly useful.
Think beyond software developers by trade. Think beyond those who realized they needed tests, to those who thought "the models will just get smarter" and "they told me there's guardrails".
It would be interesting to try this approach out with mQuickJS, QuickJS or micropython. They could potentially run hoops around the ones that were first coded in Rust, such as Boa or RustPython.
Yes, you can rip off any sucker who published a test suite when the AI is trained on existing code as well. Congratulations, you will be showered with praise and AI mafia money.
Nothing against AI - just to inform people about the quality, maintainability and future of this library. No human has a mental model of the code, so don't waste your time creating one - the original author didn't either.
"Involves AI" and "made only by AI" are two different things.
I use agentic coding in my daily work. I do build a mental model of the code I write, and I also test the code exactly the same way as when it's written completely manually.
I understand it can be difficult to label, and there's an inconveniently large grey area here, but there is a difference between plain vibe-coded software and software built with AI under the control, direction and validation of a developer.
The distinction is probably not very important for small applications, as nobody cares if a minor script or a one-shot data processor has been vibe-coded, but for large applications it surely matters in the long term.
I ban the use of AI-generated code in my projects. At least one of my projects uses libxml, and someone could propose switching to an alternative. A label would make it easier to avoid this library.
The code might be a little verbose, which is tiresome for humans to read and follow. Structure and functions look idiomatic. It seems to be using XML parser idioms, which makes it readable.
It could be doing double checks in both tokeniser and parser and things like that.
Actually looks like a good starting point and reference for someone working on xml parsers in rust.
Can this work with XLSX (the Office Open XML format) and the .odt format? Those also use ZIP. It would be interesting to think about whether this could help create a Rust GUI app with very basic XLSX editing as an alternative to OpenOffice/LibreOffice.
blegge | 21 hours ago
Why "in the public API"? Does this imply it's using unsafe under the hood? If so, what for?
blegge | 20 hours ago
Doesn't seem to have shut down or even be unmaintained. Perhaps it was briefly, and has now been resurrected?
da_chicken | 17 hours ago
The alternative is another XZ backdoor.
skybrian | 15 hours ago
https://opensource.org/osd
socalgal2 | 14 hours ago
Red Hat, Apple, Samsung, Huawei, Google, etc...
nwellnhof | 11 hours ago
[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/976
[2] https://codeberg.org/nwellnhof/libxml2-ee
- [1] https://ai-evals.io/
mdavid626 | 15 hours ago
It’s time to make this mandatory.
agentifysh | 14 hours ago
none of your arguments make sense here
agentifysh | 14 hours ago
libxml2 was always one of those libraries I had trouble with on different platforms.
I think it's great that more and more OSS projects are getting attention now with AI coding agents.