> Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that “there is an obvious moral question here, but that isn't necessarily what I'm interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy.
This whole article is just complaining that other people didn't have the discussion he wanted.
Ronacher even acknowledged that it's a different discussion, and not one they were trying to have at the moment.
If you want to have it, have it. Don't blast others for not having it for you.
Having this discussion involves blasting others for not considering it. Consider the rest of the paragraph you quoted:
> But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has not done something legal and therefore fine. Legality is a necessary condition; it is not a sufficient one.
They're not innocent bystanders: if you take the premise of the article seriously, their actions should be criticised. Please consider re-reading the article more slowly.
I believe it is a narrow view of the situation. If we take a look into the history, into the reasons for inventing the GPL, we'll see that it was an attempt to fight copyrights with copyrights. The very name 'copyleft' tries to convey that idea.
What AI is eroding is copyright. You can re-implement not just a GPL program but also reverse engineer and re-implement a closed-source program; people have demonstrated it already, and there have been stories about it here on HN.
AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.
Copyleft is a mirror of copyright, not a way to fight copyright. It grants rights to the consumer where copyright grants rights to the creator. Importantly, it gives the end-user the right to modify the software running on their devices.
Unfortunately, there are cases where you simply can't just "re-implement" something. E.g., because doing so requires access to restricted tools, keys, or proprietary specifications.
"So, I looked for a way to stop that from happening. The method I came up with is called “copyleft.” It's called copyleft because it's sort of like taking copyright and flipping it over. [Laughter] Legally, copyleft works based on copyright. We use the existing copyright law, but we use it to achieve a very different goal."
That’s not a rebuttal of the OP’s point. None of that says anything about fighting copyright. It literally says he flipped it, which is what the OP said when they called it a mirror.
"very different goal" isn't the same as "fundamentally destroying copyright"
the very different goals include keeping public code public, ensuring proper attribution, preventing companies from just "seizing" it, motivating others to make their code public too, etc.
and even if his goals were not like that, it wouldn't make a difference, as this is what many people try to achieve by using such licenses
this kind of AI usage is very much not in line with these goals,
and in general, making software cloning way cheaper isn't sufficient to fix many of the issues the FOSS movement tried to fix, especially not when looking at the current ecosystem most people are interacting with (i.e. phones)
---
("seizing"): As in the typical MS embrace, extend, and extinguish strategy: first embracing the code, then giving it proprietary but available extensions/changes/bug fixes/security patches, then making those no longer available if you don't pay them/play by their rules.
---
Though in the end, using AI as a "fancy, complicated" photocopier for code removes copyright exactly as much as using an actual photocopier for code would. It doesn't matter if you use the photocopier blindfolded and never look at the thing you copied.
LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.
I don't think that's an exciting idea for the Free Software Foundation.
Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.
There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
Edit: I guess the conclusion I come to is that LLM's are good for 'getting things done', but the context in which they are operating is one where the balance of power is heavily tilted towards capital, and open source is perhaps less interesting to participate in if the machines are just going to slurp it up and people don't have to respect the license or even acknowledge your work.
> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source
Yeah, a bit of a conundrum. But I don't think that fighting for copyright now can bring any benefits to FOSS. GNU should bring Stallman back and see whether he can come up with any new ideas and a new strategy. Alternatively, they could try without Stallman. But the point is: they should stop and think again. Maybe they will find a way forward, maybe they won't, but either way they could continue their fight for freedom meaningfully, or they could just stop fighting and find other things to do. Both options are better than fighting for copyright.
> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
I want to clarify this statement a bit. LLMs relying on the work of others is not against GNU philosophy as I understand it: algorithms have to be free. Nothing wrong with training LLMs on them or on programs implementing them. Nothing wrong with using these LLMs to write new (free) programs. What is wrong is corporations reaping all the benefits now and locking down new algorithms later.
I think it is important, because copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical), but not by GNU.
IMO this is the most significant trend in AI. It doesn't get talked about nearly enough. Means the AI is working, I guess.
>GNU should bring Stallman back ... Alternatively they could try without Stallman.
Leave Britney alone >:(
>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical)
I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.
How do you even argue such a thing? I've had no such luck; I've met many people who seem to view copyright, and a person owning their ideas and work, as a sort of inherent moral right.
Not saying this gets through to people, but copyright is purely about the legal ability to restrict what other people do. Whereas property rights are about not allowing others to restrict what you do (e.g. by taking your stuff).
> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones
There are near-SOTA LLM's available under permissive licenses. Even running them doesn't require prohibitive expenses on hardware unless you insist on realtime use.
> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
This was already the case and it just got worse, not better.
At a certain point, I think we had reached a kind of equilibrium where some corporations were decent open source citizens. They understood that they could open source things like infrastructure or libraries and keep their 'crown jewels' closed. And while Stallman types might not have been happy with that, it seemed to work out for people.
Now they've just hoovered up all the free stuff into machines that can mix it up enough to spit it out in a way that doesn't even require attribution, and you have to pay to use their machine.
AI essentially turns all of open source into something companies can gatekeep and pluck from to their heart's content. And individual contributors using these tools and freely mixing the output with their own, usually minor, contributions are another step of whitewashing, because they're definitely not going to own up to writing only 5% of the stuff they got paid for.
Before we had RedHat and Ubuntu, who at least were contributing back, now we have Microsoft, Anthropic and OpenAI who are racing to lock the barn door around their new captive sheep. It's just a massive IP laundromat.
Is massive capital expenditure not also required to enforce the GPL? If some company steals your GPLed code and doesn't follow the license, you will have to sue them and somebody will have to pay the lawyers.
> Is massive capital expenditure not also required to enforce the GPL?
It's nowhere near the order of magnitude of the kind of spending they're sinking into LLMs. The FSF and other groups were reasonably successful at enforcing the GPL while operating on a budget thousands of times smaller than that of AI companies.
Right, but LLM companies are building frontier models with frontier talent while trying to soak up demand with a loss-leader strategy, on top of a historic infrastructure build-out.
Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org (compared to an individual).
IMO the proposition is a little fishy, but it's not totally without merit and IMO deserves investigation. If we are all worried about our jobs, even via building custom for-sale software, there is likely something there that may obviate the need at least for end-user applications. Again, I'm deeply skeptical, but it is interesting.
> Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org
Running a proprietary model would make you subject to whatever ToS the LLM companies choose on a particular day, including restrictions on what you can produce with them, which circles back to the raison d'être for the GPL and GNU.
Until all software copyright is dead and buried, there is no need for copyleft to change tack. Otherwise the rising tide may rise high enough to drown the GPL, but not proprietary software.
Open source is easier to counterfeit/license-launder/re-implement using LLMs because source code is much lower-hanging fruit, and is understood by more people than closed-source assembly.
How close are we to good enough and who's working on that? I would be interested in supporting that work; to my mind, many of the real objections to LLMs are diminished if we can make them small and cheap enough to run in the home (and, perhaps, trained with distributed shared resources, although the training problem is the harder one).
Good question. It seems like most of the tech world is perfectly happy to be sharecroppers on the Big AI farms. I guess that's not quite the right analogy, since they're doing their own things with it; just that at the end of the day, the tool they're building everything on is owned by someone else.
> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.
When the FSF and GPL were created, I don't think this was really a consideration. They were perfectly happy with requiring Big Iron Unix or an esoteric Lisp Machine to use the software - they just wanted to have the ability to customize and distribute fixes and enhancements to it.
>Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.
Right now, we can get local models that you can run on consumer hardware, that match capabilities of state of the art models from two years ago. The improvements to model architecture may or may not maintain the same pace in the future, but we will get a local equivalent to Opus 4.6 or whatever other benchmark of "good enough" you have, in the foreseeable future.
That's naive. Copyright doesn't just apply to software. There already have been countless lawsuits about copying music long before the term "open source" was invented. No, changing the lyrics a bit doesn't circumvent copyright. Nor does translating a Stephen King novel to German and switching the names of the places and characters.
A court ordered the first Nosferatu movie to be destroyed because it had too many similarities to Dracula. Despite the fact that the movie makes rather large deviations from the original.
If Claude was indeed asked to reimplement the existing codebase, just in Rust and a bit optimized, that could well be a copyright violation. Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.
Claude was asked to implement a public API, not an entire codebase. The definition of a public API is largely functional; even in an unusually complex case like the Java standard facilities (which are unusually creative even in the structure and organization of the API itself) the reimplementation by Google was found to be fair use.
> Claude was asked to implement a public API, not an entire codebase.
Allegedly. There have been several people who doubted this story. So how to find out who is right? Well, just let Claude compare the sources. Coincidentally, Claude Opus 4.6 doesn't just score 75.6% on SWE-bench Verified but also 90.2% on BigLaw Bench.
It's like our copyright lawyer is conveniently also a developer. And possibly identical to the AI that carried out the rewrite/reimplementation in question in the first place.
> Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.
There is some precedent for this, e.g. Alchemised is a recent best seller that had just enough changed from its Harry Potter fan fiction source in order to avoid copyright infringement: https://en.wikipedia.org/wiki/Alchemised
(I avoided the term “remove copyright” here because the new work is still under copyright, just not Harry Potter - related copyright.)
I'm pretty sure the plot is copyrightable, otherwise you could just translate Harry Potter to a different language and change the names of the characters.
Well the general guideline is that copyright covers the *expression of an idea*, not the idea itself.
Translations are pretty much the textbook example of a derivative work in copyright.
Your jurisdiction may vary, of course, but it's pretty well established in mine (Canada) that "plot" is an idea, and can't be copyrighted, only the expression of the idea (e.g. the written novel) falls under copyright.
Its purpose is "if you run the software you should be able to inspect and modify that software, and to share those modifications with your peers", not explicitly to resist copyright. Yes, copyright is bad in that it often prevents one from doing that, but it is not the purpose of the GPL to dismantle copyright.
Reducing it to "well you can clone the proprietary software you're forced to use by LLM" is really missing the soul of the GPL.
Just because something is open source doesn't mean the person who gave you the binary you're using has to supply you with the code they used to build it. That's what the GPL does.
> we'll see that it was an attempt to fight copyrights with copyrights
it's not that simple
yes, the GPL's origins include the idea that everyone should be able to use the software
but it is also about attributing the original author
and making sure people can't just de facto "seize" public goods
this kind of AI usage removes attribution and is often seizing public goods in a way far worse than most companies which just ignored the license ever did
so today there is more need than ever in the last few decades for GPL-like licenses
> AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.
Is this LLM thing freely available or is it owned and controlled by these companies? Are we going to rent the tools to fight "evil software corporations"?
With the release of GLM-5, I would say that they are pretty much almost as good. Basically 90% as good as Opus 4.6 on most tasks for 20% of inference cost, and open weights.
There already are LLMs with open weights that are better at code than state of the art closed source models from a year ago. For now, most people may have to rent the hardware to run those models, since it's too expensive for most people to own something that can run inference on one trillion parameters, but I wouldn't consider LLMs to be controlled by "evil software corporations" at this point.
> There already are LLMs with open weights that are better at code than state of the art closed source models from a year ago.
A year ago, the "state of the art" models were total turds. So this isn't exactly good news
Not to mention the performance of local LLMs makes them utterly unusable unless you have multiple tens of thousands to invest in hardware (and that was before the recent price spike). If you're using commodity hardware, they're just awful to use.
While I personally agree with you, Richard Stallman (the creator of the GPL) does not. He has always advocated in favor of strong copyright protection, because the foundation of the GPL is the monopoly power granted by copyright. The problem that the GPL is intended to solve is proprietary software.
Generative models (AI) are not really eroding copyright. They are calling its bluff. The very notion of intellectual property depends on a property line: some arbitrary boundary where the property begins and ends. Generative models blur that line, making it impractical to distinguish which property belongs to whom.
Ironically, these models are made by giant monopolistic corporations whose wealth is quite literally a market valuation (stock price) of their copyrights! If generative models ever become good enough to reimplement CUDA, what value will NVIDIA have left?
The reality is that generative models are nowhere near good enough to actually call the bluff. Copyright is still the winning hand, and that is likely to continue, particularly while IP holders are the primary authors of law.
---
This whole situation is missing the forest for the trees. Intellectual Property is bullshit. A system predicated on monopoly power can only result in consolidated wealth driving the consolidation of power; which is precisely what has happened. The words "starving artist" ring every bit as familiar today as any time in history. Copyright has utterly failed the very goals it was explicitly written with.
It isn't the GPL that needs changing. So long as a system of copyright rules the land, copyleft is the best way to participate. What we really need is a cohesive political movement against monopoly power; one that isn't conveniently ignorant of copyright as its most significant source.
So not only are we moving goalposts here, but we've decided the GNU team should join the other team? I don't understand how GNU would see mass LLM training as anything but the most flagrant violation of their ethos. LLM labs, in their view, would be among the most evil software corporations to have ever existed.
I agree with almost all of that, except the part about GNU changing their stance. I think GNU should stay true and consistent, if for no other reason than to not make many of their supporters who aren't on board with AI feel betrayed and have GNUs legacy soured. If the cause of LLMs conquering proprietary software needs an organization to champion it, let that be a new organization, not GNU.
Until there is a capable open-source, open-weight AI that is easily hostable by an average person - no, we still have a long way to go. You aren't going to have software freedom when the tool that enables it is controlled by a handful of powerful tech companies.
> Blanchard's account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch
This feels sort of like saying "I just blindly threw paint at that canvas on the wall and it came out in the shape of Mickey Mouse, so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse."
Blanchard is, of course, familiar with the source code, he's been its maintainer for years. The premise is that he prompted Claude to reimplement it, without using his own knowledge of it to direct or steer.
The article is poorly written. Blanchard was a chardet maintainer for years. Of course he had looked at its code!
What he claimed, and what was interesting, was that Claude didn't look at the code, only the API and the test suite. The new implementation is all Claude. And the implementation is different enough to be considered original: completely different structure and design, and hey, a 48x improvement in performance! It's just API-compatible with the original. Which, as per the Google v. Oracle 2021 decision, is to be considered fair use.
> I'll take a commit authored by someone else and then git amend the author to myself, did I write that commit then
I did say co-author, didn't I? Even if you only added 0.000000001% to something, then technically, yes.
> By your logic I did apparently
If you take someone's email and forward it, did you write that email? Instead of debating that, imagine you took a trojan email and forwarded it to someone and they opened it: do you think you'd be held liable in any way?
> Blanchard is, of course, familiar with the source code, he's been its maintainer for years.
I would argue it's irrelevant whether they looked or didn't look at the code, as well as whether he was or wasn't familiar with it.
What matters is that they fed the original code into a tool which they set up to make a copy of it. How that tool works doesn't really matter. Neither does it make a difference if you obfuscate that it's a copy.
If I blindfold myself when making copies of books with a book scanner + printer I'm still engaging in copyright infringement.
If AI is a tool, that should hold.
If it isn't "just" a tool, then it did engage in copyright infringement (as it created the new output side by side with the original), in the same way an employee might do so on the command of their boss. That still makes the boss/company liable for copyright infringement. And in general, just because you weren't the one who created an infringing product doesn't mean you aren't more or less as liable for distributing it as if you had created it yourself.
What does derivative mean here? Because IMO it means that the existing work was used as input. So if you used an LLM and it was trained on the existing work, that's a derivative work. If you rot13-encode something as input, so you can't personally read it, and then a device applies rot13 to it again and outputs it, that's a derivative work.
Of course, the problem with this interpretation is that all modern LLMs are derivatives from huge amounts of text under completely different licenses, including "All rights reserved", and therefore can not be used for any purpose.
I'm not sure how you square the circle of "it's alright to use the LLM to write code, unless the code is a rewrite of an open source project to change its license".
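The rot13 point is easy to make concrete. Python's stdlib ships a `rot_13` text codec, so a minimal sketch (an illustration only, not anyone's actual pipeline) looks like this:

```python
import codecs

original = "All rights reserved."            # stand-in for a copyrighted input
encoded = codecs.encode(original, "rot_13")  # unreadable at a glance
decoded = codecs.encode(encoded, "rot_13")   # rot13 is its own inverse

# The round trip reproduces the input byte for byte, so the final output
# is plainly a copy regardless of what the intermediate form looked like.
assert encoded != original
assert decoded == original
```

The intermediate `encoded` string being unintelligible to the person running the device changes nothing about the output's relationship to the input.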
> Of course, the problem with this interpretation is that all modern LLMs are derivatives from huge amounts of text under completely different licenses, including "All rights reserved", and therefore can not be used for any purpose.
> I'm not sure how you square the circle of "it's alright to use the LLM to write code
You seem like you're on the cusp of stating the obvious correct conclusion: it isn't.
As a cynical person I assume all the frontier LLMs were trained on datasets that include every open source project, but as a thought experiment: if an LLM was trained on a dataset that included every open source project _except_ chardet, do you think said LLM would still be able to easily implement something very similar?
In order for it to be creatively derivative you would need to copy the structure, logic, organization, and sequence of operations not just reimplement the functionality. It is pretty clear in this case that wasn't done.
LLMs do not encode nor encrypt their training data. The fact that they can recite some training data is a defect, not the default. You can see this more simply by comparing the model size against an imagined compression algorithm 50% better than SOTA: you would find that 80-90% of the training data is still missing, even if the model were as much of a stochastic parrot as you may be implying. The outputs of AI are not derivative just because the model saw training data including the original library.
Then onto prompting: 'He fed only the API and (his) test suite to Claude'
This is Google v Oracle all over again - are APIs copyrightable?
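The model-size claim above can be sketched with back-of-envelope arithmetic. All figures below (corpus size, bytes per token, parameter count, compression ratios) are rough assumptions for illustration, not measurements:

```python
# Assumed figures: ~15T training tokens at ~4 bytes of text each,
# a ~1T-parameter model with 2-byte weights, and a fantasy text
# compressor 50% better than an assumed ~8:1 state of the art.
corpus_bytes = 15e12 * 4        # ~60 TB of raw training text
model_bytes = 1e12 * 2          # ~2 TB of weights
fantasy_ratio = 8 * 1.5         # 12:1 compression

best_case_archive = corpus_bytes / fantasy_ratio   # ~5 TB
fraction_missing = 1 - model_bytes / best_case_archive

# Even granting the fantasy codec, the weights could hold only a minority
# of the corpus; a smaller model or larger corpus pushes the gap further.
print(f"missing at least {fraction_missing:.0%} of the corpus")
```

Under these particular assumptions the weights could encode at most ~40% of the corpus even as a perfect compressor; the exact fraction shifts with the assumptions, but the direction of the argument holds.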
> This is Google v Oracle all over again - are APIs copyrightable?
Yes this is the best way to ask the question. If I take a public facing API and reimplement everything, whether it's by human or machine, it should be sufficient. After all, that's what Google did, and it's not like their engineers never read a single line of the Java source code. Even in "clean room" implementations, a human might still have remembered or recalled a previous implementation of some function they had encountered before.
I find the "compression" argument not very strong, both because copyright still applies to (very) lossy codecs (e.g. your 16kbps Opus file of Thriller infringes, even if the original 192kHz/32-bit wav file was 12,000kbps), and because copyright still applies to transformed derivative works (a tiny MIDI file of Thriller might still be enough for Jackson's label to come after you).
> LLMs do not encode nor encrypt their training data. The fact that they can recite some training data is a defect, not the default.
About this specific point, it is unclear how much of a defect memorization actually is; there are also reasons to see it as necessary for effective learning.
"The clean-room reimplementation test" isn't a legal standard, it's a particular strategy used by would-be defendants to clearly meet the standard of "is the new work free of copyrightable expression from the original work".
If you pirate a movie and reencode it, does that apply as well? You can still watch the movie and it is “obviously” the same movie, even though the bytes are completely different. Here you can use the program and it is, to the user, also the same.
Copyright protects even very abstract aspects of human creative expression, not just the specific form in which it is originally expressed. If you translate a book into another language, or turn it into a silent movie, none of the actual text may survive, but the story itself remains covered by the original copyright.
So when you clone the behavior of a program like chardet without referencing the original source code except by executing it to make sure your clone produces exactly the same output, you may still be infringing its copyright if that output reflects creative choices made in the design of chardet that aren't fully determined by the functional purpose of the program.
So, let's say that rather than actually touching any copyrighted material, a human merely tells an AI about how to go onto the internet and find copyrighted material, download it, and ingest it for training. The AI, fully autonomously, does so, and after training itself on the material deletes it so no human ever downloads, consumes, or shares it.
If we are saying AI is "more than a tool", which seems to be the case courts are leaning since they've ruled AI output without direct human involvement is not copyrightable[0], then the above seems like it would be entirely legal.
Someone would likely get prosecuted if they instructed an AI agent to run, say, a pump-and-dump scheme...
Even if the final output doesn't have copyright protection, it might still be a copyright violation. I think it is reasonable that a work could violate copyright when distributed, even if it does not enjoy copyright protection itself.
>that they feed to original code into a tool which they setup to make a copy of it
Well, no. They fed the spec (test cases, etc.) into a tool which made a new program matching the spec. This is not a copy of the original code.
But also, this feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can write code to spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.
Blanchard fed the spec to the tool, and Anthropic fed the code to the tool, so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.
Everyone writes as if he just fed the spec and tests to Claude Code. Ignoring for now that the tests are under LGPL as well, the commit history shows that this has been done with two weeks of steering Claude Code towards the desired output. At every one of these interactions, the maintainer used his deep knowledge of the chardet codebase to steer Claude.
I just don't see how it's relevant whether he did look or didn't. In my opinion, it's not just legally valid to make a re-implementation of something if you've seen the code as long as it doesn't copy expressive elements. I think it's also ethically fine as well to use source code as a reference for re-implementing something as long as it doesn't turn into an exact translation.
Ignoring the legal or ethical concerns. Let’s say we live in a world where the cost of copying code is so close to zero that it’s indistinguishable from a world without copyright.
Anything you put out can and will be used by whatever giant company wants to use it with no attribution whatsoever.
Doesn’t that massively reduce the incentive to release the source of anything ever?
Yes, and it reduces the incentives to release binaries too. Such a world will be populated by almost entirely SaaS, which can still compete on freedom.
No, because (most) people don't work on OSS for vanity; they do it to help other people, whether individuals or groups of individuals, i.e. corporations.
It's the same question as, if an AI can generate "art", or photographers can capture a scene better than any (realistic) painter, then will people still create art? Obviously yes, and we see it of course after Stable Diffusion was released three years ago, people are still creating.
I don’t know what a world without copyright does to corporate sponsored open source. It certainly reduces it because there are many corporate sponsored projects that monetize through dual licensing. My guess is in a world where you can’t even guarantee attribution, it’s much harder to convince your boss to let you open source a project in the first place.
So ignoring people who are being paid by corporations directly to work on open source, in my experience the vast majority of contributors expect to be able to monetize their work eventually in a way that requires attribution. And out of the small number who don’t expect a monetary return of any kind, a still smaller number don’t expect recognition.
If this weren’t the case you’d see a much larger amount of anonymous contributions. There are people who anonymously donate to charity. The vast majority want some kind of recognition.
Obviously we still see art, but if you greatly reduce the monetary benefit of producing it, you'll see a lot less of it. This is especially true of non-trivial open source software, which unlike static artwork requires continual maintenance.
If the cost to copying code based on specifications, tests, etc is so close to zero as to be functionally zero cost, then any user can simply turn their AI on any library for which there is documentation and any ability to generate tests, have it reverse engineer it, and release their reverse engineered copy on GitHub for others to use as they like.
So I'm not sure it matters whether a giant company uses it because random users can get the same thing for ~ free anyway.
It's actually not legally fine, or at least it's extremely dangerous. Projects that re-implement APIs presented by extremely litigious companies specifically do not allow people who, for instance, have seen the proprietary source code to then work on the project.
I don't think fear of legal action makes it illegal.
If I know it is legal to make a turn at a red light, and I know a court will uphold that I was in the right, but a police officer will fine me regardless and I would need to actually pursue some legal remedy, then I'm unlikely to do it regardless of whether it is legal, because it is expensive, if not in money then in time.
Copyright lawsuits in particular are notoriously expensive and long, so even if a court would eventually deem it fine, why take the chance?
That's my point. It's dangerous and there are sharks in the water. That sounds like you're not going to have a good time if you do the described approach to someone who might assert you're infringing.
Right. The alternative is that we reward Dan for his 14 years of volunteer maintenance of a project... by banning him from working on anything similar under a different license for the rest of his life.
If you only stick to the API and ignore the implementation, it is not Mickey Mouse any more but a rodent. If it were just a clone it wouldn't be 50x as fast. Nevertheless, APIs apparently can be copyrightable. I generally disagree with this; it's how PC compatibles took off, giving consumers better options.
That is what he claimed. However, his design document instructs the AI to download the codebase, references specific files in the codebase, and to create a rewrite of the same project by name. It seems very unlikely it didn't look at the code while working, even forgetting that it had already likely been trained on it.
He would have had a better argument if he created a matching spec from scratch using randomized names.
What if we said that generative AI output is simply not copyrightable? Anything an AI spits out would automatically be public domain, except in cases where the output directly infringes the rights of an existing work.
This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.
I think this also helps somewhat with the ethical quandary of these models being trained on public data while contributing nothing of value back to the public, and disincentivizes the production of slop for profit.
> No Copyright Protection for AI-Assisted Creations: Thaler v. Perlmutter
> A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law
> > A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law
Was this an AI summary? Those words were not in the article.
The courts said Thaler could not have copyright because he refused to list himself as an author.
Oracle had its day in court with Google over the Java APIs. Reimplementing APIs can be done without copyright infringement, but Oracle must have tried to find real infringement during discovery.
In this case, we could theoretically prove that the new chardet is a clean reimplementation. Blanchard can provide all of the prompts necessary to re-implement again, and for the cost of the tokens anyone can reproduce the results.
>This feels sort of like saying "I just blindly threw paint at that canvas on the wall and it came out in the shape of Mickey Mouse, and so it can't be copyright infringement because it was created without the use of my knowledge of Micky Mouse"
IANAL, but that analogy wouldn't work because Mickey Mouse is a trademark, so it doesn't matter how it is created.
I think we're going one step too far even. AI itself is a gray area: how can they guarantee it was trained legally, or that what they're doing is even legal, and how can they assert that the training data didn't contain any copyrighted material?
Google already spent billions of dollars and decades of lawyer hours proving it out as fair use. The legal challenges we see now are the dying convulsions of an already broken system of publishers and IP hoarders using every resource at their disposal to manipulate authors and creators and the public into thinking that there's any legitimacy or value underlying modern copyright law.
AI will destroy the current paradigm, completely and utterly, and there's nothing they can do to stop it. It's unclear if they can even slow it, and that's a good thing.
We will be forced to legislate a modern, digital oriented copyright system that's fair and compatible with AI. If producing any software becomes a matter of asking a machine to produce it - if things like AI native operating systems come about, where apps and media are generated on demand, with protocols as backbone, and each device is just generating its own scaffolding around the protocols - then nearly none of modern licensing, copyright, software patents, or IP conventions make any sense whatsoever.
You can't have horse and buggy traffic conventions for airplanes. We're moving in to a whole new paradigm, and maybe we can get legislation that actually benefits society and individuals, instead of propping up massive corporations and making lawyers rich.
Google carved out some very specific rulings that have nothing to do with modern AI. These systems are just a really slow, lossy git clone; current law has no trouble with that, and it's broadly illegal.
If corporations are allowed to launder someone else's work as their own, people will simply stop working and just start endlessly remixing, a la popular music.
What if someone doesn't declare that it has been reimplemented using an LLM? Isn't it enough to simply declare that you have reimplemented the software without using an LLM? Good luck proving that in court...
One thing is certain, however: copyleft licenses will disappear. If I can't control the redistribution of my code (through the GPL or a similar license), I'll choose to develop it closed source.
One of the things that irks me about this whole thing is, if it’s so clean room and distinct, why make the changes to the existing project? Why not make an entirely new library?
The answer to that, I think, is that the authors wanted to squat an existing successful project and gain a platform from it. Hence we have a news cycle discussing it.
Nobody cares about a new library using AI, but squat an existing one with this stuff, and you get attention. It's the reputation, the GitHub stars, whatever.
I mean, Blanchard was the longtime maintainer of chardet already, and had wanted to relicense it for years. So I think that complicates your picture of "squatting an existing successful project".
Honestly it's a weird test case for this sort of thing. I don't think you'd see an equivalent in most open source projects.
Imagine if the author has his way, and when we have AI write software, it becomes legally under the license of some other sufficiently similar piece of software. Which may or may not be proprietary. "I see you have generated a todo app very similar to Todoist. So they now own it." That does not seem like a good path either for open source software or for opening up the benefits of AI generated software.
Probably a wiser approach is to consider different times require different measures (in general!).
I did not study in detail if copyright "has always been nonsense", but I do agree that nowadays some of the copyright regulations are nonsense (for example the very long duration of life + 70 years)
I think AI is very much eroding the legitimacy of copyright, at least as applied to software, which has long been questioned since it's more like math than creative expression.
I think the industry will realize that it made a huge mistake by leaning on copyright for protection rather than on patents.
IMO the core idea of copyright isn't nonsense, but I do think the current implementation (70+ years after death) is egregiously overpowered. I've always thought the current laws were too deeply entrenched to ever change, but I'm tentatively optimistic AI will shock the system hard enough to trigger actual reform.
Actually I think the last 20 years of the Internet demonstrates that copyright is more important than ever, because unless it's enforced, people with more capital than the copyright owner will simply steal creative works and profit from them.
The idea that "information wants to be free" was always a lie, meant to transfer value from creators to platform owners. The result of that has been disastrous, and it's long past time to push the pendulum in the other direction.
> Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT licensed. Cloudflare's vinext did not violate any license—it did exactly what Ronacher calls a contribution to the culture of openness, applied to a permissively licensed codebase. Vercel's reaction had nothing to do with license infringement; it was purely competitive and territorial. The implicit position is: reimplementing GPL software as MIT is a victory for sharing, but having our own MIT software reimplemented by a competitor is cause for outrage. This is what the claim that permissive licensing is “more share-friendly” than copyleft looks like in practice. The spirit of sharing, it turns out, runs in one direction only: outward from oneself.
This argument makes no sense. Are they arguing that because Vercel, specifically, had this attitude, it is the attitude necessitated by AI, reimplementation, and a move towards more permissive licenses by those in favor of them? That certainly doesn't seem an accurate way to summarize what antirez or Ronacher believe. In fact, under the legal and ethical frameworks (respectively) that those two put forward, Vercel has no right to claim that position and no way to enforce it, so it seems very strange to me to even assert that this sort of thing would be the practical result of AI reimplementations. This seems to just be pointing at the hypocrisy of one particular company and assuming it would be the inevitable, universal attitude and result, when there's no evidence to think so.
It's ironic, because antirez actually, literally addresses this specific argument. They completely miss the fact that a lot of his blog post is not just about legal matters but also about ethical ones. Specifically, the idea he puts forward is that yes, corporations can do these kinds of rewrites now, but they always had the resources and manpower to do so anyway. What's different now is that individuals can do this kind of rewrite when they never had the ability before, and the vector of such a rewrite can run from permissive to copyleft, or even from decompiled proprietary code to permissive or copyleft. That it hasn't happened so far is more a reflection of the fact that most people really hate copyleft and find it annoying (it's been losing traction and developer mind share for decades), not that the tactic can't be used that way. I think that's one of the big points he's trying to make with the GNU comparison: not just that if it was legal for GNU to do it, then it's legal for you to do it with AI; and not even just the fundamental libertarian ethical axiom (which I agree with for the most part) that such a rewrite should remain legal in either direction, because the rules we enforce with violence in our society should offer a level playing field where we judge the action itself, not whether we like its consequences; but specifically that if GNU did it once, it can be done again, even in the same direction, and now far more easily with AI.
> They completely miss the fact that a lot of his blog post is not actually just about legal but also about ethical matters.
Honestly, I was confused by the summarization of my blog post into just a legal matter as well. I hope my blog post will be able to appear at least for a short time on the HN front page so that the actual arguments it contains will get a bit more exposure.
I'm failing to see what in the quoted text you took to be about AI rewrites specifically? It just reads as a slightly catty aside about the social reaction of rewrites in general (by implying the one example is generalizable.)
It should be noted that the Rust community is also guilty of something similar. That is, porting old GPL programs, typically written in C, to Rust and relicensing them as MIT.
...and the main distros are enthusiastically adopting them.
Within a relatively short time frame, expect everything in your Linux distro other than the kernel to be MIT-licensed because everything that is FSF-maintained will be rewritten in Rust with the MIT license.
The kernel will then be next, though it'll take a longer timeframe.
The GPL just didn't win in the marketplace of ideas.
People criticized Stallman et al. for being ideological, but the popularity of permissive licenses is actually pure ideology in the Marxian sense: people doing without knowing, and conflating their interests with the interests of big capital. You can see the same with people defending AI-laundering.
Stallman's proposal is the opposite of ideology; it is a conscious political project. And thus it is failing.
Not a lawyer, but my understanding is: In theory, copyright only protects the creative expression of source code; this is the point of the "clean room" dance, that you're keeping only the functional behavior (not protected by copyright). Patents are, of course, an entirely different can of worms. So using an LLM to strip all of the "creative expression" out of source code but create the same functionality feels like it could be equivalent enough.
I like the article's point of legal vs. legitimate here, though; copyright is actually something of a strange animal to use to protect source code, it was just the most convenient pre-existing framework to shove it in.
Which is the actually relevant part: they didn't do that dance, AFAIK.
AI is a tool; they set it up to make a non-verbatim copy of a program.
Then they fed it the original software (AFAIK).
Which makes it a side-by-side copy, as in: the original source was used as a reference to create the new program. That tends to be seen as a derived work, even if the result is very different.
IMHO They would have to:
1. create a specification of the software _without looking at the source code_, i.e. by behavior observation (and an interface description). That is, you give the AI access to running the program, but not to looking inside it. I really don't think they did this, as even with AI it's a huge pain: you normally can't just brute-force all combinations of inputs, and instead need a scientific model => test => refine loop (which AI can do, but it can take long and get stuck, so you want it human-assisted, and the human can't have inside knowledge of the program).
2. then generate a new program from specification, And only from it. No git history, no original source code access, no program access, no shared AI state or anything like that.
Also for the extra mile of legal risk avoidance do both human assisted and use unrelated 3rd parties without inside knowledge for both steps.
While this does majorly cut the cost of a clean-room approach, it still isn't cost-free. And it's still a legal minefield if done by a single person, especially if they have enough familiarity to potentially remember specific pieces of code verbatim.
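In miniature, the two steps read something like this (a toy Python sketch, treating the original program as an opaque callable; a real clean-room effort would probe a running binary and build a far richer specification):

```python
# Toy sketch of the two-step process described above.
# Step 1: build a behavioral spec by observation only -- the "original"
# is treated as a black box whose source is never read.
def original(text):            # stands in for the opaque program under study
    return text.upper()

def build_spec(black_box, probe_inputs):
    # Record input/output pairs; no access to the implementation.
    return {inp: black_box(inp) for inp in probe_inputs}

# Step 2: produce a new implementation from the spec alone,
# and verify it only against that spec.
def reimplement(spec):
    def clone(text):           # the independently "generated" program
        return text.upper()
    assert all(clone(i) == o for i, o in spec.items())
    return clone

spec = build_spec(original, ["abc", "Hello", "MiXeD"])
clone = reimplement(spec)
print(clone("abc"))  # ABC
```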
Well sure they didn't do the dance, but you don't have to do the dance. The reason to do it is that it's a good defense in a lawsuit. Like you say, all of this is a legal minefield.
So my understanding was that the original code was specifically not fed into Claude. But was almost certainly part of its training data, which complicates things, but if that's fair use then it's not relevant? If training's not fair use and taints the output, then new-chardet is a derivative of a lot of things, not just old-chardet...
This is all new legal ground. I'm not sure if anyone will go to court over chardet, though, but something that's an actual money-maker or an FSF flagship project like readline, on the other hand, well that's a lot more likely.
Strong agree on it all being a legal minefield / new grass.
> But was almost certainly part of its training data, which complicates things
On this point specifically, my read of the Anthropic lawsuit was that one of the precedents was that if it trains on something but does not regurgitate it, it's fair use? Might help the argument that it was clean-room but ¯\_(ツ)_/¯
My understanding is they did do the dance. From the article: "He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch."
One could still make the argument that using the test suite was a critical contributing factor, but it is not a part of the resulting library. So in my uninformed opinion, it seems to me like the clean room argument does apply.
Is your (1) description of clean room implementation, and (2) description of what was done, actually correct?
(1): my understanding was that a party _with access to copyrighted material_ made the functional spec, which was communicated to a party without access [1]. Under my understanding, there's no requirement for the authors of the functional spec to be 'clean'.
(2) Afaict, they limited the AI to access of just the functional spec and audited that it did not see the original source.
Edit: Not sure if sharing the 'test suite' matters, probably something for the courts in the unlikely event this ever gets there.
[1] Following the definition of clean room re implementation as it relates to US precedent, ie that described in the wikipedia page.
It's clear that we're entering a new era of copyright _expectations_ (whether we get new _legislation_ is different), but for now realise this: the people like me who like copyleft can do this too. We can take software we like, point an agent at it, and tell it to make a new version with the AGPL3.0-or-later badge on the front.
No, it isn't. The point of the GPL is to grant users of the software four basic freedoms (run, study, modify, and redistribute). There's no restriction on distribution per se, other than disallowing the removal of these freedoms from other users.
This is only worth arguing about because software has value. Putting this in context of a world where the cost of writing code is trending to 0, there are two obvious futures:
1. The cost continues to trend to 0, and _all_ software loses value and becomes immediately replaceable. In this world, proprietary, copyleft and permissive licenses do not matter, as I can simply have my AI reimplement whatever I want and not distribute it at all.
2. The coding cost reduction is all some temporary mirage, to be ended soon by drying VC money/rising inference costs, regulatory barriers, etc. In that world we should be reimplementing everything we can as copyleft while the inferencing is good.
There was a recent ruling that LLM output is inherently public domain (presumably unless it infringes some existing copyright). In which case it's not possible to use them to "reimplement everything we can as copyleft".
It's more complicated: the ruling was that AI can't be an author, and the thing in question is (de facto) public domain because it has no author, in the context of the "dev" claiming it was fully built by AI.
But AI-assisted code has an author, and claiming the code is merely AI-assisted, even if it was fully AI-built, is trivial (if you don't make it public that you didn't do anything).
Also, some countries have laws that treat the AI like a tool, in the sense that the one who used it is the author by default, AFAIK.
There's another option. The cost of copying existing software trends to 0, but the cost of writing new software stays far enough above 0 that it is still relatively expensive.
The article is proceeding from the premise that a reimplementation is legal (but evil). To help my understanding of your comment, do you mean:
1. An LLM recreating a piece of software violates its copyright and is illegal, in which case LLM output can never be legally used because someone somewhere probably has a copyright on some portion of any software that an LLM could write.
2. You read my example as "copying a project without distributing it", vs. "having an LLM write the same functionality just for me"
There will always be cost though. Even if perfect code is getting one-shotted out, that is constantly maintained and adapted to changing conditions and technology, it simply can't stay at 0 forever because one day the power is surely going to go out!
More and more I am drawn to these kinds of ideas lately, perhaps as a kind of ethical sidestep, but still:
It's not going to solve any general issue here, but the one thing these freaks need that can't be generated by their models is energy, tons of it. So the one thing I can do, as an individual and in my (digital) community, is work to be, in a word, self-sustainable. And depending on my company, I guess; if I were a CEO I would hope I was wise enough to be thinking along the same lines.
Everyone is making beautiful mountains from paper and wire. I will just be happy to make a small dollhouse of stone; I think it will be worth it. Otherwise, how can we not see at least some small level of hubris in all this?
There would be no GPL if anybody could have cheaply and trivially reproduced the software for printers and Lisp machines Stallman was denied access to. There is no reason to force someone to give you the source code if takes no effort to reproduce.
Mind you, that isn't what happened here. The effort involved in getting a LLM to write software comes from three things: writing a clear unambiguous spec that also gives you a clean exported API, more clean unambiguous specs for the APIs you use, and a test suite the LLM can use to verify it has implemented the exported API correctly. Dan got them all for free, from the previous implementation which I'm sure included good documentation. That means his contribution to this new code consisted of little more than pressing the button.
Sadly, if you wrote some GPL software with excellent documentation, a thorough test suite, a clean API, and implemented it using well-understood libraries, the cost of creating a clean-room reproduction has indeed gone to near zero over the past 24 months. The GPL licence is irrelevant.
Welcome to the brave new world.
PS: Sqlite keeping their test suite proprietary is looking like a prescient masterstroke.
PPS: The recent ruling that an API isn't copyrightable just took on a whole new dimension.
If you decide to improve it in any way to fit your needs you can merely tell your own AI to re-implement it with your changes. Then it's proprietary to you.
I think what is happening is the collapse of the "greater good". Open source depends on providing information for the greater good and general benefit of its readers. However, now that no one is reading anything, its purpose serves the greater good of the most clever, most convincing, or richest harvester.
I don't think this part is correct: "If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms."
You can't put a copyright and MIT license on something you generated with AI. It is derived from the work of many unknown, uncredited authors.
Think about it; the license says that copies of the work must be reproduced with the copyright notice and licensing clauses intact. Why would anyone obey that, knowing it came from AI?
Countless instances of such licenses were ignored in the training data.
When learning is sufficiently atomized and recombined, creations cease to be "derived from" in a legal sense.
A lego sculpture is copyrighted. Lego blocks are not. The threshold between blocks and sculpture is not well-defined, but if an AI isn't prompted specifically to attempt to mimic an existing work, its output will be safely on the non-copyrighted side of things.
A derivative work is separately copyrightable, but redistribution needs permission from the original author too. Since that usually won't be granted or would be uneconomical, the derivative work can't usually be redistributed.
AI-produced material is inherently not copyrightable, but not because it's a derivative work.
Token prediction is a form of "learning" that is reinforced by the goal of reproducing the correct next token of the work, rather that acquiring ideas and concepts. For instance, given the prefix "Four score and seven years", the weights are adjusted until "ago" is correctly predicted, which is a fancy way of saying that it was stored in the model in a lossy way. The model "learned" that "ago" follows "four score and seven years" exactly the way your hard drive "learns" the audio and video frames of a movie when you download a .mp4 file.
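That lossy-storage analogy can be made concrete with a toy bigram counter (nothing like real transformer training; it only illustrates the counting intuition that "weights" are adjusted until the training text's next token becomes the top prediction):

```python
from collections import Counter, defaultdict

# Count next-token frequencies -- a crude stand-in for "adjusting weights
# until the correct next token is predicted".
def train(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(model, prev):
    # Return the most frequently seen continuation, if any.
    return model[prev].most_common(1)[0][0] if model[prev] else None

tokens = "four score and seven years ago our fathers".split()
model = train(tokens)
print(predict(model, "years"))  # ago -- the training text was stored, lossily
```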
I dispute the idea that token sequences reproduced from the model are not derived works.
I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
The slop merchants are getting a free ride for the time being.
As you said, it's lossy. Try it with any other distinctive but non-famous passage, and you won't get a correct prediction for the immediately following clause, much less for multiple sentences or paragraphs.
That's the case even when an LLM correctly identifies which book the prompted text is from. It still won't accurately continue on from some arbitrary passage. By the time you ask it to reproduce hundreds of words, you're into brand new book territory. Even when it's slop content, it's distinct slop.
The exceptions are cases where a significant number of humans would also know a particular quote from memory. Then, chances are, a frontier LLM will too.
You know how else you can reproduce a quote? Search for it on google, and search the resulting top hits; if it's a significant quote, multiple people have probably quoted it -- legally. You can also search a pirate library for the actual book, and search the book for the quote; while illegal, it's very simple to do, so unless you propose to make the free and open internet illegal, I'd suggest that banning LLMs for being "derivative work" creation engines is not so different from destroying the internet.
> I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
If judges have any sense whatsoever, LLM generations (without specific prompt crafting to mimic existing works) will be judged to not be derived works and therefore not be violating copyright, in the same sense that you can live and breathe Taylor Swift's music, create new music in the same style, and still not be violating copyright.
The Stability AI case, and how Judge Orrick deals with it, will be interesting and uninteresting at the same time. It deals primarily with the fact that after specific prompting, an image-generation AI can generate something fairly close to existing copyrighted images. That doesn't say anything more about whether LLMs are inherently producers of [only or primarily] derivative works, just as the fact that a human can violate copyright doesn't say anything about whether humans primarily or exclusively output derivative works.
More likely, perhaps, is that everything will be so infused with LLM output that copyright ceases to be relevant, or forces copyright law to be rewritten from the ground up.
Copyright requirements, even prior to LLMs, weren't well-specified. There's no objective threshold for how close something has to be to a previous work before the new one violates copyright. It's whatever a judge thinks, refering to the 4-factor test but ultimately making subjective judgements about each of those prongs. It's all a house of cards, and LLMs may just be what topples it.
Broadly speaking, the “freedom of users” is often protected by competition from competing alternatives. The GNU command line tools were replacements for system utilities. Linux was a replacement for other Unix kernels. People chose to install them instead of proprietary alternatives. Was it due to ideology or lower cost or more features? All of the above. Different users have different motivations.
Copyleft could be seen as an attempt to give Free Software an edge in this competition for users, to counter the increased resources that proprietary systems can often draw on. I think success has been mixed. Sure, Linux won on the server. Open source won for libraries downloaded by language-specific package managers. But there’s a long tail of GPL apps that are not really all that appealing, compared to all the proprietary apps available from app stores.
But if reimplementing software is easy, there’s just going to be a lot more competition from both proprietary and open source software. Software that you can download for free that has better features and is more user-friendly is going to have an advantage.
With coding agents, it’s likely that you’ll be able to modify apps to your own needs more easily, too. Perhaps plugin systems and an AI that can write plugins for you will become the norm?
As if the model wasn't trained on copyleft code, as if he didn't use a copyleft test suite, and as if he wasn't the maintainer for years. Clearly the intent here is copyright infringement.
If you have software, your test suite should stay your test suite: do development with the test suite, then release under MIT without releasing the suite. Depending on the test suite, using it may break clean-room rules, especially for TDD codebases.
I feel like the licenses that suffer the most isn't the GPL, but the ones like SSPL. If your code can be re-implemented easily and legally by AWS using an LLM, why risk publishing it?
It does feel like open source is about to change. My hunch is that commercial open source (beyond the consultation model) risks disappearing. Though I'd be happy to be proven wrong.
> When GNU reimplemented the UNIX userspace, the vector ran from proprietary to free. Stallman was using the limits of copyright law to turn proprietary software into free software. […] The vector in the chardet case runs the other way.
That's just your subjective opinion, with which many other people would disagree. I bet Armin Ronacher would agree that an MIT-licensed library is even freer than an LGPL-licensed library. To them, the vector is running from free to freer.
> If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard's own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.
This is an interesting reversal in itself. If you make the specification protected under copyright, then the whole practice of clean room implementations is invalid.
IMHO, the API and Test Suite, particularly the latter, define the contract of the functional definition of the software. It almost doesn't matter what that definition looks like so long as it conforms to the contract.
There was a case where Google did something similar with Java, and it ultimately came down to whether or not Oracle owned the copyright to the header files containing the API. It went all the way to the US Supreme Court, which ruled in Google's favour, finding that the API wasn't the implementation and that the amount of shared code was so minimal as to be irrelevant.
They didn't anticipate that in less than half a decade we'd have technology that could _rapidly_ reimplement software given a strong functional definition and contract enforcing test suite.
This article is setting up a bit of a moving target. "Legal vs. legitimate" is at least a single, if vague, question to be defined, but then the target shifts to "socially legitimate", defined only indirectly by way of example, like calling aggressive tax avoidance "antisocial"; and while I tend to agree with that characterization, my agreement is predicated on a layering of other principles.
The fundamental problem is that once you take something outside the realm of law, and the rule of law in its many facets, as the legitimizing principle, you have to go a whole lot further to be coherent and consistent.
You can’t just leave things floating on a few ambiguous points you don’t like, that feel “off” to you in some way; not if you’re trying to bring some clarity to your own thoughts, much less to others’. You don’t have to land on a conclusion either. By all means chew things over, but once you try to settle, things fall apart if you haven’t done the harder work of replacing the framework of law with another conceptual structure.
You need to at least be asking “to what end? What purpose is served by the rule?” Otherwise you end up arguing backwards half the time, defending the maintenance of the rule itself with justifications pulled in from ever further afield as the rule is questioned and edge cases are reached. And if you’re asking, essentially, “is the spirit of the rule still there?”, you’ve got to stop and fill in what that spirit is, or people who want to control you or have an agenda will sweep in with their own language and fill the void to their own ends.
In the corporate world, we've started using reimplementation as a way to access tooling that security won't authorize.
Sec has a deny by default policy. Eng has a use-more-AI policy. Any code written in-house is accepted by default. You can see where this is going.
We've been using AI to reimplement tooling that security won't approve. The incentives conspired in the worst outcome, yet here we are. If you want a different outcome, you need to create different incentives.
Not Invented Here's long, slow mutagenic march toward full antibiotic resistance continues apace.
There is a fundamental corpo-cognitive dissonance, to boot. If "AI" is cheap enough and good enough to implement security-relevant software from `git init` repeatedly, why isn't it also cheap enough and good enough to assess and approve the security of third-party software at the pace of internal adoption? Is there some basis to believe LLMs' leverage on producing code differs from their leverage on analyzing existing code?
Surprised they don't mention Google LLC v. Oracle America, Inc. Seems a bit myopic to condone the general legality while arguing "you can only use it how I like it".
It also doesn't talk about the far more interesting philosophical question: does what Blanchard did cover ALL implementations from Claude? What if anyone else did exactly what he did, feeding it the test cases and saying "re-implement from scratch"? Ostensibly one would expect the results to be largely similar (technically, under the right conditions, deterministically similar).
Could you then fork the project under your own name and a commercial license? When you use an LLM like this, to basically do what anyone else could ask it to do, how do you attach any license to the output? Is it first come, first served?
If an agent is acting mostly on its own, it feels like finding a copy of Harry Potter in the fictional Library of Babel: you didn't write it, you just found it amongst the infinite shelves. But if you found it first, could you block everyone else who stumbles on a near-identical copy elsewhere in the library? Or does each found copy represent a "re-implementation" that could be individually copyrighted?
Why are people even having problems with sharing their changes to begin with? Just publishing it somewhere does not seem too expensive. The risk of accidentally including stuff that is not supposed to become public? Or are people regularly completely changing codebases and do not want to make the effort freely available, maybe especially to competitors? I would have assumed that the common case is adding a missing feature here, tweaking something there, if you turn the entire thing on its head, why not have your own alternative solution from scratch?
If Blanchard is claiming not to have been substantively involved in the creation of the new implementation of chardet (i.e. "Claude did it"), then the new implementation is machine-generated, and in the USA cannot be copyrighted and thus cannot be licensed.
If he is claiming to have been somehow substantively "enough" involved to make the code copyrightable, then his own familiarity with the previous LGPL implementation makes the new one almost certainly a derivative of the original.
>then his own familiarity with the previous LGPL implementation makes the new one almost certainly a derivative of the original.
The "clean room rewrite" is just an extreme way to have a bulletproof shield against litigation. Not doing it that way doesn't automatically make all new code he writes derivative solely because he saw how the code worked previously.
If the clean-room rewrite was done entirely by Claude, then the result cannot be copyrighted in the USA, and thus there is no license at all.
And if he was in fact more involved (which he appears to deny), it's a bit weak to say that someone with huge familiarity with chardet could reimplement it without the result being derivative.
There's a difference between "I've read a LGPL code once, maybe I could do something similar" and "I've been reading this LGPL code for 12 years and now I'm going to do exactly the same thing".
There's a Japanese version of that page, written in classical text writing direction, in columns. Which is cool. Makes me wonder, though - how readable is it with so many English loanwords which should be rotated sideways to fit into columns?
Total digression but yeah, that layout is stupid and the way those words are dropped in using Romaji makes no sense. That's not how Japanese people lay out pages on the web. In fact I don't think I've ever seen a Japanese web page laid out like a book like this, and in general I'd expect the English proper nouns and words that don't have obvious translations to get transliterated into Katakana. Smells like automatic conversion added by someone not really familiar with common practices for presenting Japanese on the web.
He also has a Korean vertical layout that lays out Latin-character words the same way. Is this common in Korea when vertical layout is used? The author seems to be Korean.
That's a non-sequitur. chardet v7 is GPL-derived work (currently in clear violation of the GPL). If xe wanted it to be a different thing xe should've published as such. Simple as.
I've been following this for a while, and the trend of copyright (in any form: books, code, pictures, music, whatever) being laundered by reinventing the "same" thing in-some-way is kind of clear.
But what happens with the new things? Has the era of software-making (or creating things at large) finished, and from now on everything will be re-(gurgitated|implemented|polished) old stuff?
Or does it all go back to proprietary everything, Babylon-tower style, where no one talks to anyone?
edit: another view - is open-source from now on only for resume-building? "see-what-i've-built" style
It seems that this chap didn't go and implement a new library; he reimplemented an existing one and became its sole controller. I.e., he seems to have taken its reputation, brand, whatever you call it, away from the contributors and entirely to himself. Their work of establishing it as a well-known solution is no longer recognised.
So of course we feel that something wrong has happened even if it's not easy to put one's finger on it.
The really interesting question to me is if this transcends copyright and unravels the whole concept of intellectual property. Because all of it is premised on an assumption that creativity is "hard". But LLMs are not just writing software, they are rapidly being engineered to operate completely generally as knowledge creation engines: solving math proofs, designing drugs, etc.
So: once it's not "hard" any more, does IP even make sense at all? Why grant monopoly rights to something that required little to no investment in the first place? Even with vestigial IP law, say patents: a patent just becomes an input parameter, and the AI works around it like any other constraint.
> So: once it's not "hard" any more, does IP even make sense at all? Why grant monopoly rights to something that required little to no investment in the first place? Even with vestigial IP law, say patents: a patent just becomes an input parameter, and the AI works around it like any other constraint.
I think it still does: IIRC, the current legal situation is AI-output does not qualify for IP protections (at least not without substantial later human modification). IP protections are solely reserved for human work.
And I'm fine with that: if a person put in the work, they should have protections so their stuff can't be ripped off for free by all the wealthy major corporations that find some use for it. Otherwise: who cares about the LLMs.
I think you have a rather idealized model of IP in mind. In practice, IP law tends to be an expensive weapon the wealthy major corporations use against the little guy. Deep enough pockets and a big enough war chest of broad patents will drain the little guy every time.
> In practice, IP law tends to be an expensive weapon the wealthy major corporations use against the little guy. Deep enough pockets and a big enough war chest of broad patents will drain the little guy every time.
Then fix that instead of blowing it up. Because IP law is also literally the only thing that protects the little guy's work in many cases.
Arguments like yours are kinda unfathomably incomplete to me, almost like they're the remnants of some propaganda campaign. It's constructed to appeal to the defense of the little guy, but the actual effect would be to disempower him and further empower the wealthy major corporations with "big enough warchest[s]."
I mean, one thing I think the RIAA would love is to stop paying royalties to every artist ever. And the only thing they'd be worried about is an even bigger fish (like Amazon, Apple, or Spotify) no longer paying royalties to them. But as you said, they have a big enough war chest that they probably could force a deal somehow. All the artists without a war chest? Left out in the cold.
Blowing up IP would sink the RIAA. They would no longer have legal grounds to go after file sharing, and I’m confident that given the same legal footing that file sharing would win any day of the week.
It's not at all obvious whether copyright net protects or destroys the little guy.
It definitely does some of both, and we have no obvious measure or counterfactual to know otherwise.
You also have to take into account not just if optimal reform or optimal dismantle is better, but the realistic likelihood of each, and the risk of the bad outcomes from each.
Protecting even more conceptual product ideas seems pretty strongly likely to result in a tool for big guys only: it's patents on crack, and patents are already nearly exclusively a "big guy crushes small guy" tool, whereas copyright is at least debatably mixed.
> It's not at all obvious whether copyright net protects or destroys the little guy.
It's super obvious, unless your perspective basically stems from someone who was mad they couldn't BitTorrent a ton of movies.
I mean, FFS, copyright is the literal foundation for open source licenses like the GPL.
My sense is a lot of the radically anti-IP fervor ultimately stems from people who were outraged they could be sued for seeding an MP3 (though it's accreted other complaints to justify that initial impulse, and it's likely some were indoctrinated by secondary argumentation somewhat obscured from the core impulse).
That's not to say that there are not actors who abuse IP or there aren't meaningful reforms that could be done, but the "burn it all down" impulse is not thought through.
It is ad hominem to say that people who see it differently are just petty criminals.
Yes, it was a genius move that copyleft used copyright to achieve its goal; the name literally reflects the judo going on there. But copyleft licenses also have a lot of benefits for big companies too, so it's not strictly a David-vs-Goliath victory.
I don't think it's a commonly held belief that copyright benefits small YouTube creators more than it hurts them, to take a concrete example. They seem to live in constant fear of being destroyed in an asymmetrical system where copyright claims can take away their livelihood at any moment, while doing nothing to meaningfully protect it.
GPL was created as a workaround for copyright - it wouldn’t have been needed if there wasn’t copyright. There are complex arguments both for and against copyright and there’s no reason to simply assume it must always be just as now even as circumstances change.
Does this matter in practice though? By modifying some of the generated code and not taking a solution produced by an LLM end-to-end but borrowing heavily from it, can't a human claim full ownership of the IP even though in reality the LLM did most of the relevant work?
I beg to differ. AI-output did not entitle the person creating the prompt for IP protections, so far – but my objection is not directed towards the "so far", but towards your omission of "the person creating the prompt", because if an AI outputs copyrighted material from the training data, that material is still copyrighted. AI is not a magical copyright removal machine.
What this means in practice is that (currently), all output of an LLM is legally considered to not be copyrightable (to the extent that it's an original work). If it happens to regurgitate an existing copyrighted work, though, is that infringement? I'm not sure we have a legal precedent on that question yet.
There are several large settlements that say Anthropic/OpenAI didn’t want to set legal precedent. In general, if it’s not outright regurgitated, it would be derivative.
The out of court settlements that avoid precedent don't mean anything in a broader legal context. Legally speaking, right now in the USA, output of LLMs is not copyrighted and cannot be copyrighted (without substantial transformation by a human).
I don't think this means the same thing as whether or not LLM output can infringe on someone else's copyright though (that does pose an interesting question -- can something non-copyrightable in general infringe on something copyrighted?).
I don't believe you need to do much to claim copyright over the output of an LLM.
The input prompt is under copyright - a simple modification to the source code will grant copyright to you.
The Thaler case here is something different than "AI-generated = uncopyrightable" though. Thaler was not trying to copyright work in the way humans who make work with tools normally copyright their work ("Copyright 2026 by Me"), he was specifically trying to give AI the copyright ("Copyright 2026 by My-AI-Tool"). The court rejected this because only humans can own copyright.
I believe there are other cases where AI-generated works were found uncopyrightable, but Thaler is not a good example of them.
I'm afraid as of last week this is now as settled as it gets in US law: the output of LLMs is not per se copyrightable, though arrangements and modifications of it can be. It's like a producer who made a song entirely with public domain audio samples: he can't then demand the compulsory license when someone resamples that song.
They actually wouldn't, since they'd be sampling the new arrangement. They could reconstruct a new, similar-sounding arrangement based on the original samples, but it'd have to be different enough from that new arrangement so as not to be considered derivative of it.
That also applies to generative AI: pure output may not be copyrightable, but as soon as you do something beyond typing some words and pressing a button, like doing area-specific infills and paintovers that involve direct and deliberate choices by a human, the copyrighted human-driven arrangement becomes so deeply intertwined with the generative work that it's effectively inseparable.
Don't worry. The courts have consistently sided with huge companies on copyright. In the US. In Europe. Doesn't matter.
Company incorporates GPL code in their product? Never once have courts decided to uphold copyright. HP did that many times. Microsoft got caught doing it. And yet the GPL was never applied to their products. Every time there was an excuse. An inconsistent excuse.
Schoolkid downloads a movie? 30,000 USD per infraction PLUS armed police officer goes in and enforces removal of any movies.
Or take the very subject here. AI training WAS NOT considered fair use when OpenAI violated copyright to train. Same with Anthropic, Google, Microsoft, ... They incorporated Harry Potter and the Linux kernel in ChatGPT, in the model itself. Undeniable. Literally. So even if you accept that it's changed now, OpenAI should still be forced to redistribute the training set, code, and everything needed to run the model for everything they did up to 2020. Needless to say, courts refused to apply that.
So just apply "the law", right. Courts' judgement of using AI to "remove GPL"? Approved. Using AI to "make the next Disney-style movie"? SEND IN THE ARMY! Whether one or the other violates the law according to rational people? Whatever excuse to avoid that discussion is good enough.
It might unravel intellectual property, just not in a fair way. When capitalism started, public land was enclosed to create private property. Despite this being in many cases a quite unfair process, we still respect this arrangement.
With AI, a similar process is happening - publicly available information becomes enclosed by the model owners. We will probably get a "vestigial" intellectual property in the form of model ownership, and everyone will pay a rent to use it. In fact, companies might start to gatekeep all the information to only their own LLM flavor, which you will be required to use to get to the information. For example, product documentation and datasheets will be only available by talking to their AI.
That also seems relevant for this whole discussion, actually -- if a work can't be copyrighted it certainly can't have a changed license, or any license at all. (I guess it's effectively public domain to the extent that it's public at all?)
You're really missing the point in multiple ways. First, precedents on copyright law are irrelevant to patent law. Second, AI generated works generally can be copyrighted under the human creator's name.
No, I think you are quite incorrect, at least on the latter point:
"Lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."
Not eligible for copyright protection does not mean it can be copyrighted "under the human creator's name". It means there is no creative work at all. No copyright.
Even if all I have to do is tell my agent, "here is a patent for a drug, analyse the patent and determine an equivalent but non-infringing drug" and it chugs away for a couple of hours and spits out a drug along with all the specifications to manufacture it?
I guess the state of play will be that for new drugs the original manufacturer will already have done that, and ensured that literally anything that could be found as a workaround is included in the scope of the patent. But I feel like it will not be possible to keep that watertight.
Yes, even so. Human drug researchers have been doing the same thing for decades. As soon as one pharmaceutical company launches a successful small-molecule drug everyone else jumps to find a minor tweak that will hit the same target (ideally with fewer side effects) while evading the patent. There is already specialized software to help with this process so I'm skeptical that LLM agents would be very helpful for this use case.
If you think about creative outcomes as n-dimensional 'volumes', AI expressions can cover more of them than humans can in many domains: precisely artistic styles, music styles, etc. Tbh, not everyone can be a Mozart, but with AI a lot more people may be a Mozart lite. This raises the question of how much of creativity is appreciated as a shared experience.
I've always thought the opposite: IP law was created to make sure creativity stays hard, and hence controllable by the elites.
Patents came along when farmers started making city goods, threatening guild secrets. Copyright came when the printing press made copying and translating the Bible easy and accessible to all. (Trademark admittedly does not fit this view, but doesn't seem all that damaging either.)
"To Protect The Arts" and "To Time-Limit Trade Secrets" were just the "Protect The Children" of old times: a way to confuse people who didn't look too hard at actual consequences.
This means that the future of IP depends on what lets the powers that be pull up the ladder behind them. Long term I'd expect e.g. copyright expansion and harder enforcement, just because cloning by AI gets easy enough to threaten the status quo.
> Trademark admittedly does not fit this view, but doesn't seem all that damaging either
Isn’t trademark the only thing keeping a certain cartoon mouse out of the public domain, despite the fact that his earliest animations are out of copyright? Not sure if you’d consider that damaging, or if anyone has yet tested the boundaries of the House of Mouse’s patience here.
Good. Intellectual property is now a twisted concept by the elite, whatever its benefits were previously. As soon as Disney made Mickey popular, it was all downhill.
More likely: this is a transitional phase where our previously hard problems become easy, and we will soon set our sights on new and much harder problems. The pinnacle of creative achievement in the universe is probably not 2010s B2B SaaS.
It is entirely possible, however, that human beings will not be the primary drivers of progress on those problems.
"What happens when an LLM outputs a patented algorithm?" remains a huge land mine out there, particularly since patent infringement does not require intent or even knowledge, and these models have trained on every patent ever granted.
Intellectual property never made any sense to begin with. It is logically reducible to ownership of numbers; it is that absurd. Computers made the entire concept irrelevant the second they were invented, but it kept holding on via lobbying power. Maybe AI will finally put the final nail in the coffin of intellectual property.
Sure, it's disgusting and hypocritical how these corporations enshrined all this nonsense into law only to then ignore it all the second LLMs were invented. It's ultimately a good thing though. The model weights are all that matters. All we need to do is wait for the models to hit diminishing returns, then somehow find a way to leak them so that everyone has access. If they refuse, then just force them. By law or by revolution.
> if this transcends copyright and unravels the whole concept of intellectual property.
I have been saying this for years. Intellectual property is based on the concept that ideas can be owned, which is fundamentally a contradiction with how reality operates. We've been able to write laws that paper over that contradiction by introducing concepts like "fair use", but it doesn't resolve it.
AI is just making the conflict arising out of that contradiction more intense in new ways and forcing us to reckon with it in this new technological landscape. You can follow two perfectly reasonable lines of logic and end up with contradictory solutions. So how are we going to get out of this mess? I don't know, not without rolling back (at least parts of) what intellectual property is in the first place.
At some level, IP makes sense — creators should be rewarded. But IP only benefits those who claim it. The benefits rarely flow back to humanity who made it all possible. Every LLM was trained on humanity's collective knowledge. The value was created collectively, then captured privately.
That's the reason I like the idea of DUKI/dju:ki/ — Decentralized Universal Kindness Income, similar to UBI but driven by voluntary kindness and sincere marketing rather than taxation. If AI makes creation trivially easy and IP loses its justification, the question becomes: how do we ensure a tiny part of the wealth generated flows back to everyone?
The point of IP is to encourage the creation of new things.
Not all protections have to be ones that give total control like copyright.
I think it's a mistaken assumption that costs will fall to zero. The low hanging fruit will get picked, and then we'll be doing expensive combined AI/wetlab search for new drugs.
If there is any meaningful headroom we will keep doing expensive things to make progress.
> The point of IP is to encourage the creation of new things.
Then why are corporations allowed to milk successful works for all eternity? Why do we have Disney monopolizing films made half a century ago? Why do we have Nintendo selling people the exact same Mario ROMs from the 80s every single console generation?
They should have like 10 years of copyright so they can turn a profit. Once it expires it's over and the work enters the public domain where it belongs. If they want to keep profiting they should have to keep creating new things. They shouldn't be able to turn shared culture into eternal intellectual property portfolios that they monopolize and then sit on like dragons.
There is always drift between intent and implementation, but to be generous here, Disney is generally making new works with their IP and so is Nintendo.
I am somewhat curious what you think shortening the copyright window would do that's so great for the culture though. We already have more than enough IP slop that's just licensed.
> operate completely generally as knowledge creation engines: solving math proofs, designing drugs, etc.
Any example of that? So far I haven't seen any but maybe I'm looking at the wrong places.
I've seen a lot of:
- "solving" math proofs that were properly formalized, with often numerous documented past attempts, re-verified by proper mathematicians, without necessarily any interesting results
- no genuinely designed drugs; the most I've seen was (again, with entire teams of experts behind it) finding slight optimizations
Basically all outputs I've seen so far have been following existing trends (low-hanging fruit, without any paradigm shift), and never alone, but rather as search support for teams of world-class experts. None of these would qualify, IMHO, as knowledge creation. Whenever such results were published, the publication seemed mostly to be promotion of the workflow itself more than the actual results. DeepMind seems to be the prime example of that.
From what I understand, LLMs can't really generate anything meaningful that doesn't implicitly rely on the operator's choices. It's hard to make the right novel choices as soon as you leave well-defined problem spaces.
In terms of math and biochemistry the cost of generating candidates has collapsed, but the cost of validating them hasn't.
See also "A Declaration of the Independence of Cyberspace" (https://www.eff.org/cyberspace-independence), and what a goofy, naive, misguided disaster that early internet optimism turned into.
No, AI does not mean the end of either copyright or copyleft, it means that the laws need to catch up. And they should, and they will.
I think the missing thing here is that the license violation already happened. Most of the big models trained on data in a manner that violated terms of service. We'll need a court case but I think it's extremely reasonable to consider any model trained on GPL code to be infected with open licensing requirements.
I agree there has to be a court case about it. I think the current argument, however, is that it is transformative, and therefore falls under fair use.
Yea, a finding that training is transformative would be pretty significant, and the precedent of thumbnail creation being deemed transformative would likely steer us towards such a finding. Transformative is always a hard thing to bank on because it is such a nebulous, judgement-based call. There are excellent examples of how precise and gritty this can get in audio sampling.
Didn't know about thumbnails being fair use. In that case, I just don't see an argument that genAI training on source code is less transformative than thumbnails.
You don’t get to simply claim fair use based on how transformative your derivative work is.
“””
Section 107 calls for consideration of the following four factors in evaluating a question of fair use:
Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes: Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair. This does not mean, however, that all nonprofit education and noncommercial uses are fair and all commercial uses are not fair; instead, courts will balance the purpose and character of the use against the other factors below. Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.
Nature of the copyrighted work: This factor analyzes the degree to which the work that was used relates to copyright’s purpose of encouraging creative expression. Thus, using a more creative or imaginative work (such as a novel, movie, or song) is less likely to support a claim of a fair use than using a factual work (such as a technical article or news item). In addition, use of an unpublished work is less likely to be considered fair.
Amount and substantiality of the portion used in relation to the copyrighted work as a whole: Under this factor, courts look at both the quantity and quality of the copyrighted material that was used. If the use includes a large portion of the copyrighted work, fair use is less likely to be found; if the use employs only a small amount of copyrighted material, fair use is more likely. That said, some courts have found use of an entire work to be fair under certain circumstances. And in other contexts, using even a small amount of a copyrighted work was determined not to be fair because the selection was an important part—or the “heart”—of the work.
Effect of the use upon the potential market for or value of the copyrighted work: Here, courts review whether, and to what extent, the unlicensed use harms the existing or future market for the copyright owner’s original work. In assessing this factor, courts consider whether the use is hurting the current market for the original work (for example, by displacing sales of the original) and/or whether the use could cause substantial harm if it were to become widespread.
“””
>I haven't claimed anything, The courts did: https://www.whitecase.com/insight-alert/two-california-distr.... And regardless, my point still stands that it is an open question; however, given the already present body of cases, it is tipping in the favor of the AI companies. Also, if thumbnails fall under fair use due to it being transformative of full-sized pictures, I cannot see an argument that AI training on data is somehow less transformative than downscaling an image for a thumbnail.
You might wish that were true, but there are very strong arguments it's not. Training on copyleft licensed code is not a license violation. Any more than a person reading it is. In copyright terms, it's such an extreme transformative use that copyright no longer applies. It's fair use.
But agreed that we're waiting for a court case to confirm that. Although really, the main questions for any court cases are not going to be around the principle of fair use itself or whether training is transformative enough (it obviously is), but rather on the specifics:
1) Was any copyrighted material acquired legally (not applicable here), and
2) Is the LLM always providing a unique expression (e.g. not regurgitating books or libraries verbatim)
And in this particular case, they confirmed that the new implementation is 98.7% unique.
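(To make the "98.7% unique" figure concrete: numbers like this are usually some line-level overlap metric. Here's a minimal sketch of one plausible way to compute such a figure — the function name, the metric definition, and the example snippets are all my own assumptions, not how the original comparison was actually done.)

```python
import difflib  # stdlib; used here only in the broader thread's examples

def percent_unique(new_lines, original_lines):
    """Rough line-level uniqueness: share of lines in the new
    implementation that do not appear verbatim (after stripping
    whitespace) anywhere in the original."""
    original = {line.strip() for line in original_lines if line.strip()}
    new = [line.strip() for line in new_lines if line.strip()]
    if not new:
        return 100.0
    shared = sum(1 for line in new if line in original)
    return 100.0 * (len(new) - shared) / len(new)

# Hypothetical example: a tiny reimplementation vs. the original.
orig = ["def detect(buf):", "    return sniff(buf)", "# helper"]
reimpl = ["def detect(data):", "    return sniff(data)", "# helper"]
print(round(percent_unique(reimpl, orig), 1))  # → 66.7
```

Note how sensitive such a metric is to trivial renames — which is exactly why commenters below argue the percentage alone doesn't settle anything.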
> Training on copyleft licensed code is not a license violation. Any more than a person reading it is.
Some might hold that we've granted persons certain exemptions, on account of them being persons. We do not have to grant machines the same.
> In copyright terms, it's such an extreme transformative use that copyright no longer applies.
Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?
No, we don't have to, but so far we do, because that's the most legally consistent approach. If you want to change that, you're going to need to pass new laws that may wind up radically redefining intellectual property.
> Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim?
Of course it has, if the transformation is extreme, as it appears to be here. If I memorize the lyrics to a bunch of love songs, and then write my own love song where every line is new, nobody's going to successfully sue me just because I can sing a bunch of other songs from memory.
Also, it's not even remotely clear that the LLM can produce the training data near-verbatim. Generally it can't, unless it's something that it's been trained on with high levels of repetition.
> you're going to need to pass new laws that may wind up radically redefining intellectual property
You're correct that this is one route to resolving the situation, but I think it's reasonable to lean more strongly into the original intent of intellectual property laws to defend creative works as a manner to sustain yourself, which would draw a pretty clear distinction between human creativity and reuse on the one hand, and LLMs on the other.
> into the original intent of intellectual property laws to defend creative works as a manner to sustain yourself
But you're missing the other half of copyright law, which is the original intent to promote the public good.
That's why fair use exists, for the public good. And that's why the main legal argument behind LLM training is fair use -- that the resulting product doesn't compete directly with the originals, and is in the public good.
In other words, if you write an autobiography, you're not losing significant sales because people are asking an LLM about your life.
>Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?
I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.
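That information-theoretic intuition can be made into a back-of-envelope calculation. All the numbers below are illustrative assumptions (a 1 TB model, a ~15-trillion-token corpus), not measurements of any real model:

```python
# Back-of-envelope: could a model losslessly store its training corpus?
# Both figures are illustrative assumptions, not measurements.
model_bytes = 1e12          # assume a 1 TB model
corpus_tokens = 15e12       # assume ~15 trillion training tokens

bits_per_token = model_bytes * 8 / corpus_tokens
print(f"{bits_per_token:.2f} bits of capacity per training token")  # → 0.53

# Even well-compressed natural-language text needs several bits per
# token, so well under 1 bit/token rules out verbatim storage of the
# whole corpus. Heavily repeated passages can still be memorized,
# which is consistent with the examples discussed below.
```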
> I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.
It doesn't need to for my argument to make sense. It's a problem if it reproduces a single copyrighted work (near)-verbatim. Which we have plenty of examples of.
Do we? Even when people attempt to jailbreak most models with thousands of prompts, they are only able to get a paragraph or two of well-known copyrighted works and some blocks of paraphrased text, and that's with giving the model a substantially leading question.
It surely doesn't matter how leading or contorted the prompt has to be if it shows that the model is encoding the copyrighted work verbatim, or nearly so.
It definitely does, which is why I said a substantial amount of verbatim material. If someone can recite the first paragraph of Harry Potter and the Sorcerer's Stone from memory, it surely doesn't mean they have memorized the entire book.
The big difference between people reading code and LLMs reading code is that people have legal liability and LLMs do not. You can't sue an LLM for copyright infringement, and it's almost impossible for users to tell when it happens.
BTW in 2023 I watched ChatGPT spit out hundreds of lines of F# verbatim from my own GitHub. A lot of people had this experience with GitHub Copilot. "98.7% unique" is still a lot of infringement.
If you commission art from an artist who paints a modified copy of Warhol's work, the artist is liable (even if you keep that work private, for personal use).
If you commission it from OpenAI (by sending a query to their ChatGPT API), by your argument, you are the person liable — and OpenAI is off the hook even if that work is distributed further.
I'm not going to argue about the merits of creativity here, or that someone putting a prompt into ChatGPT considers themselves an artist.
That's irrelevant. The work is created on OpenAI servers, by the LLMs hosted there, and is then distributed to whoever wrote the prompt.
Models run locally are distributed by whoever trained them.
If you train a model on whatever data you legally have access to, and produce something for yourself, it's one thing.
Distribution is where things start to get different.
> If you commission it from OpenAI (by sending a query to their ChatGPT API), by your argument, you are the person liable — and OpenAI is off the hook even if that work is distributed further.
Let's distinguish two different scenarios here:
1) Your prompt is copyright-free, but the LLM produces a significant amount of copyrighted content verbatim. Then the LLM is liable, and you too are liable if you redistribute it.
2) Your prompt contains copyrighted data, and the LLM transforms it, and you distribute it. Then if the transformation is not sufficient, you are liable for redistributing it.
The second example is what I'm referring to, since the commercial LLMs are now very good about not reproducing copyrighted content verbatim. And yes, from everything I understand legally, OpenAI is off the hook.
Your example of commissioning an artist is different from LLMs, because the artist is legally responsible for the product and is selling the result to you as a creative human work, whereas an LLM is a software tool and the company is selling access to it. So the better analogy is if you rent a Xerox copier to copy something by Warhol. Xerox is not liable if you try to redistribute that copy. But you are. So here, Xerox = OpenAI. They are not liable for your copyrighted inputs turning into copyrighted outputs.
The most salient difference is that it's impossible to tell if an LLM is plagiarizing, whereas Xeroxing something implies specific intent to copy. It makes no sense to push liability onto LLM users.
Are you following the distinction between my scenarios (1) and (2)?
In scenario (1) the LLM is plagiarizing. But that's not the scenario we're discussing. And I already said, this is where the LLM is liable. Whether a user should be too is a different question.
But scenario (2) is what I'm discussing, as I already explained, and it's very possible to tell, because you yourself submitted the copyrighted content. All you need to do is look at whether the output is too similar to the input.
If there's some scenario where you input copyrighted material and it transforms it into different material that is also copyrighted by someone else... that is a pretty unlikely edge case.
>So the better analogy is if you rent a Xerox copier to copy something by Warhol
It isn't.
One analogy in that case would be going to a FedEx copy center and asking the technician to produce a bunch of copies of something.
They absolve themselves of liability by having you sign a waiver certifying that you have complete rights to the data that serves as input to the machine.
In case of LLMs, that includes the entire training set.
A human reading a unit of work is not a “copy”. I’m pretty sure our legal systems agree that thought or sight is not copying something.
Training an LLM inherently requires making a copy of the work. Even the initial act of loading it from the internet and copying it into memory to then train the LLM is a copy that can be governed by its license and copyright law
> Training an LLM inherently requires making a copy of the work.
But that's not relevant here. Because the copyleft license does not prohibit that (and it's not even clear that any license can prohibit it, as courts may confirm it's fair use, as most people are currently assuming). That's why I noted under (1) that it's not applicable here.
It's absolutely prohibited to copy and redistribute, for commercial purposes, materials you aren't licensed to use that way. This isn't an issue when it comes to the copyleft scenario (though it may potentially impose transitive licensing requirements on the copier that LLM runners don't want to follow), but it is a huge issue that has come up with LLM training.
LLM training involves ingesting works (in a potentially transformative process) and partially reproducing them: that's a generally restricted action when it comes to licensing.
> It's absolutely prohibited to copy and redistribute for commercial purposes materials that you're unlicensed to do so with.
Sure, but that's not what LLMs generally do, and it's certainly not what they're intended to do.
The LLM companies, and many other people, argue that training falls under fair use. One element of fair use is whether the purpose/character is sufficiently transformative, and transforming texts into weights without even a remote 1-1 correspondence is the transformation.
And this is why LLM companies ensure that partial reproduction doesn't happen during LLM usage, using a kind of copyrighted-text filter as a last check in case anything would unintentionally get through. (And it doesn't even tend to occur in the first place, except when the LLM is trained on a bunch of copies of the same text.)
Yea, at the end of the day a big part of this question comes down to whether that copying is fair use and that is an open question with the transformative nature being the primary point in favor of the LLM. But it is copying from some works to another - if it doesn't have some fair use exception it is absolutely violating the licensing of most of the training data. It's a bit different from previous settled case law because it's copying so little from so many billions of different things. I think blocking reproduction is wise by LLM companies for PR purposes but it doesn't guarantee that training is a license exempted activity.
Yup. Of course it's copying. But all expectations are that courts will rule that fair use allows such copying, because of the nature of the transformation.
Would it be fair to say that if you steal from enough people then it becomes OK? I can’t see it—especially considering this is IP law, expected to grant people confidence in their authorship rights and thus encourage innovation and creativity.
it's "if you steal fast enough and ubiquitously enough, then you win" where the goal is you've so entrenched your position that by the time a lawsuit rolls around, there isn't any real remedy. ideally, there would have been a day 1 lawsuit and injunction.
I think you are confusing two different meanings of the word ‘copy’. The fact that a computer loads it into memory does not make it automatically a ‘copy’ in the copyright sense.
> The court held that making RAM copies as an essential step in utilizing software was permissible under §117 of the Copyright Act even if they are used for a purpose that the copyright holder did not intend.
> The fact that a computer loads it into memory does not make it automatically a ‘copy’ in the copyright sense.
IIRC this exact argument was made in the Blizzard vs bnetd case, wasn't it? Though I can't find confirmation on whether that argument was rejected or not...
Transformative is not the only component of determining fair use, there’s also the economic displacement aspect. If you’re doing a book report and include portions of the original (or provide an interface for viewing portions à la Google Books) you aren’t a threat to the original authors ability to make a living.
If you’ve used copyrighted books and turned them into a free write-a-book machine, you are suddenly using the authors own works against them, in a way that a judge might rule is not very fair.
“ Effect of the use upon the potential market for or value of the copyrighted work: Here, courts review whether, and to what extent, the unlicensed use harms the existing or future market for the copyright owner’s original work. In assessing this factor, courts consider whether the use is hurting the current market for the original work (for example, by displacing sales of the original) and/or whether the use could cause substantial harm if it were to become widespread.”
Sure. But it seems very difficult to argue that LLMs are harming that ability to make a living in a direct way.
This is for the same reason that search results or search snippets aren't deemed to harm creators according to copyright. Yes, there might be some percentage of sales lost. And truly, people may be buying fewer JavaScript tutorial books now that LLMs can teach you JavaScript or write it for you. But the relation is so indirect that there's very little chance a court would accept the argument.
Because what the LLM is doing is reading tons of JavaScript and JavaScript tutorials and resources online, and producing its own transformed JavaScript. And the effect of any single JavaScript tutorial book in its training set is so marginal to the final result, there's no direct effect.
And the reason this makes sense is that it's no different from a teacher reading 20 books on JavaScript and then writing their own that turns out to be a best-seller. Yes, it takes away from the previous best-sellers. But that's fine, because they're not copying any of the previous works directly. They're transforming the facts they learned into a new synthesis.
> Training on copyleft licensed code is not a license violation. Any more than a person reading it is. In copyright terms, it's such an extreme transformative use that copyright no longer applies. It's fair use.
This is just an assertion that you're making. There's no argument here. I'm aware that this is also an assertion that some judges have made.
My claim is that LLMs are not human, therefore when you apply words like "training" to them, you're only doing it metaphorically. It's no more "training" than copying code to a different hard drive is training that hard drive. And it's no more "transformative" than rar'ing or zipping the code, then unzipping it. I can't sell my jpgs of pngs I downloaded from Getty.
I have no idea how LLMs can be considered transformative work that immunizes me from owing the least bit of respect to the source material, but if I sample 2-6 second snatches from 10 different songs, put them through over 9000 filters and blend them into a new work, I owe money to everyone involved. I might even owe money to the people who wrote the filters, depending on the licensing.
> 98.7% unique.
This doesn't mean anything. This is a meaningless arrangement of words. The way we figure out things are piracy is through provenance, not bizarre ad hoc measurements. If I read a book in Spanish and rewrite it in English, it doesn't suddenly become mine even though it's 96.6492387% unique. Not even if I drop a few chapters, add in a couple of my own, and change the ending.
Is the LLM acting as my agent? If the LLM has been exposed to the source code then have I been exposed to the source code? So in that case is a "clean room" implementation possible?
Well, the license change sounds pretty strange, but to be honest, if I were to use this software I would use it without adhering to the MIT license. It's machine-created content, which is not, in general, copyrightable. You can assert whatever license you want on such content, but I am not going to adhere to it. For example, I declare you may use the following under the Elastic License
> He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
From GPL2:
> The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
Is a project's test suite not considered part of its source code? When I make modifications to a project, its test cases are very much a part of that process.
If the test suite is part of this library's source code, and Claude was fed the test suite or interface definition files, is the output not considered a work based on the library under the terms of LGPL 2.1?
Google vs Oracle ruled that APIs fall under copyright (the contrary was thought before). However, it was ruled that, in that specific case, fair use applied, because of interoperability concerns. That's the important part of this case: fair use is never automatic, it is assessed case by case.
Regarding chardet, I'm not sure "I wanted to circumvent the license" is a good way to argue fair use.
Legally, using the tests to help create the reimplementation is fine.
However, it seems possible you can't redistribute the same tests under the MIT license. So the reimplementation MIT distribution could need to be source code only, not source code plus tests. Or, the tests can be distributed in parallel but still under LGPL, not MIT. It doesn't really matter since compiled software won't be including the tests anyways.
Sorry, I misspoke. Transformation is what makes the LLM itself legal -- its training data is sufficiently transformed into weights.
And so, a work being sufficiently transformative is one way in which copyright no longer applies, but that's not the case here specifically. The specific case here is essentially just a clean-room reimplementation (though technically less "clean", but still presumably the same legally). But the end result is still a completely different expression of underlying non-copyrightable ideas.
And in both cases, it doesn't matter what the original license was. If a resulting work is sufficiently transformative or a reimplementation, copyright no longer applies, so the license no longer applies.
The library's test suite and interfaces were apparently used directly, not transformed. If either of those are considered part of the library's source code, as the license's wording seems to suggest, then I think output from their use could be considered a work based on the library as defined in the license.
Google LLC v Oracle America assumed (though didn't establish) that API's are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
Test suites are again generally considered copyrightable... but the behavior being tested is not.
So no, it's not considered to be a work based on the library. This seems pretty clear-cut in US law by now.
Also, the LGPL text doesn't say "work based on the library". It says "If you modify a copy of the Library", and this is not a "combined work" either. And the whole point is that this is not a modified copy -- it's a reimplementation.
In theory, a license could be written to prevent running its tests from being run against software not derived from the original, i.e. clean-room reimplementations. In practice, it remains dubious whether any court would uphold that. And it would also be trivial to then get around it, by taking advantage of fair use to re-implement the tests in e.g. plain English (or any specification language), and then re-implementing those back into new test code. Because again, test behaviors are not copyrightable.
> Google LLC v Oracle America assumed (though didn't establish) that API's are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
That was only one prong of the four fair use considerations in that case. Look at Breyer's opinion, it does not say that copying APIs is fair use if implementations are independent, just that Google's specific usage in that instance met the four fair use considerations.
There are likely situations in which copying APIs is not fair use even if function implementations are independent, Breyer looked at substantiality of the code copied from Java, market effects and purpose and character of use.
If your goal is to copy APIs, and those APIs make up a substantial amount of code, and reimplement functions in order to skirt licenses and compete directly against the source work, or replace it, those three considerations might not be met and it might not be fair use. Breyer said Google copied a tiny fraction of code (<1%), its purpose was not to compete directly with Oracle but to build a mobile OS platform, and Google's reimplementation was not considered a replacement for Java.
without discussing copyright, I don't believe any of this is copied. Which I think should be the argument that actually matters.
I downloaded both 6.0 and 7.0 and based on only a light comparison of a few key files, nothing would suggest to me that 7.0 was copied from 6.0, especially for a 41x faster implementation.
It is a lot more organized and readable in my amateur opinion, and the code is about 1/10th the size.
Someone should put this to the test. Take the recently leaked Minecraft source code and have Copilot build an exact replica in another programming language and then publish it as open source. See if Microsoft believes AI is copyright infringement or not.
Very much is. "Software programs, as such" are exempt in the EPC article 52. However if the software program interacts with the world - if it has a "further technical effect" - it is patentable.
The big question is: if copyrighted material was used in the training material, is the LLM's output copyright infringement when it resembles the training material? In your example, you are taking the copyrighted material and giving it to the LLM as input and instructing the LLM to process it. Regardless of where the legal cards fall, this is a much less ambiguous scenario.
There's a couple of different issues here that all get mangled together. If you're producing effectively the same expression that's infringement. You draw Captain America from memory, it's still Captain America, and therefore infringement. If you draw Captain Canada by tracing around Captain America that's also infringement but of a different type.
When it comes to software, again it's the expression that matters -- literally the actual source code. Software that does the same thing but uses entirely different code to do it is not the same expression. Like with the tracing example above, if you read the original source code then it's harder to claim that it isn't the same expression. This is why clean room implementations are necessary.
The "clean room" concept gets really blurry with LLMs in practice. I build a SaaS product that uses AI to process unstructured voice input and map it to structured form fields. During development, we looked at how other tools solve similar problems: not their source code, but their public behavior and APIs.
Now imagine an LLM trained on every GitHub repo doing the same thing at scale. The model has "seen" the source, but the output is statistically generated, not copied. Is that a clean room? The model never "read" the code the way a human would, but it clearly learned patterns from it.
I think the practical answer is that clean room as a legal concept was designed for a world where reimplementation was expensive and intentional. When an LLM can do it in minutes from a spec, we need a different framework entirely.
As described, this would not be the same thing. If the AI is looking at the source and effectively porting it, that is likely infringement. The idea instead should be "implement Minecraft from scratch" but with behavior, graphics, etc. identical. Note that you'll need to have an AI generate assets or something since you can't just reuse textures and models.
AI models have already looked at the source of GPL software and contain it in their dataset. Adding the minecraft source to the mix wouldn't seem much different. Of course art assets and trade marks would have to be replaced. But an AI "clean room" implementation has yet to be legally tested.
For copyright purposes I think there is an important legal distinction between training data (fed in once, ahead of time, and can in theory no longer be recovered as-is) and context window data (stored exactly for the duration of the model call).
I'm not sure there should be, but I think there is.
That's why he is saying it's not equivalent. For it to be the same, the LLM would have to train on/transform Minecraft's source code into its weights, then you prompt the LLM to make a game using the specifications of Minecraft solely through prompts. Of course it's copyright infringement if you just give a tool Minecraft's source code and tell it to copy it, just like it would be copyright infringement if you used a copier to copy Minecraft's source code into a new document and say you recreated Minecraft.
The context window is quite literally not a transformation of tokens or a "jumbling of bytes," it's the exact tokens themselves. The context actually needs to get passed in on every request but it's abstracted from most LLM users by the chat interface.
What if Copilot was already trained with Minecraft code in the dataset? Should be possible to test by telling the model to continue a snippet from the leaked code, the same way a news website proved their articles were used for training.
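The continuation test mentioned above is easy to sketch. The idea: feed the model a prefix of the suspected training text and measure how closely its completion matches the real continuation. Here the model call is a stub (`fake_model` is a hypothetical stand-in for a real API call); only the overlap measurement is real:

```python
import difflib

def continuation_overlap(model_completion: str, true_continuation: str) -> float:
    """Similarity ratio in [0, 1] between what the model produced and
    the actual next passage of the suspected training text."""
    return difflib.SequenceMatcher(
        None, model_completion, true_continuation
    ).ratio()

# Hypothetical stand-in for a real model API call such as
# client.complete(prefix); a real test would query the model here.
def fake_model(prefix: str) -> str:
    return "the quick brown fox jumps over the lazy dog"

true_next = "the quick brown fox jumps over the lazy dog"
score = continuation_overlap(fake_model("Known passage: "), true_next)
print(f"overlap: {score:.2f}")  # a score near 1.0 suggests memorization
```

A single high-overlap completion isn't proof by itself, but repeated near-verbatim continuations across many prefixes is the kind of evidence the news-site case relied on.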
I feel as though the fact that you are asking a valid question shows how transformative it is; clearly, while the LLM gets a general ability to code from its training corpus, the data gets so transformed that it's difficult to tell what exactly it was trained on except a large body of code.
This would still be true of the case where you ask an LLM to rewrite a program while referencing the source. Unless someone was in the room watching or the logs are inspected, how would they know if the LLM was referencing the original source material, or just using general programing knowledge to build something similar.
"Behavior, graphics, etc." would likely constitute separate IP from the code. I am not sure there's a model that allows you to make AI reproduce Minecraft without telling it what "Minecraft" is - which would likely contaminate it with IP-protected information.
> Note that you'll need to have an AI generate assets or something since you can't just reuse textures and models.
As far as I know, you can as long as you own a copy of the original. In other words, you can't redistribute the assets, but you can distribute the code that works with them. This is literally how every free/libre game remake works. The copyright of your new, from-scratch code, is in no way linked to that of the assets.
They might not care. Products win not by quality or features but by advertisement, hype and network effects.
The original implementation would still have the upper hand here. OTOH if I as a nobody create something cool, there's nothing stopping a huge corporation from "reimplementing" (= stealing) it and using their huge advertising budget to completely overshadow me.
Given how hard companies like Nintendo and Microsoft have been taking down leaks or fan creations, it seems they very much do care about keeping this stuff locked down.
this question should've been posed earlier, when the first LLMs were being trained. many people chose to ignore the question, and now, several distillation epochs later, it is not a question that matters, as yes and no are both true and not true.
is it legitimate for millions of people to exploit, expound on knowledge that was perhaps, to begin with, not legitimate to use? well they did already, who's to judge the commons now?
What a ridiculous take. Many people loudly raised the question and objected to the practice from the beginning, but a handful of companies ignored the objections and ran faster than the legal system. If they were in the wrong, legally or morally, they still deserve to face repercussions for it.
it is a take, ridiculous or not. the fact you rage against it implies it's not as improbable as you may want it to be. besides, ridiculousness is a very subjective matter, right? many things are super ridiculous in 2026 from a 2020s perspective, and this just piles on top.
to me it is superbly ridiculous to shun the comment, though. but we'll be having this split for a while, that's for sure.
Decompiling binaries is easy when they are C# or Java, even before AI. C# is a Microsoft language, and C# games have thriving mod communities with deep hooks into the core game, and detailed documentation reverse-engineered from the binary.
Wow, it feels like this argument rewired my brain.
When I first read about the chardet situation, I was conflicted but largely sided on the legal permissibility side of things. Uncomfortably I couldn't really fault the vibers; I guess I'm just liberal at heart.
The argument from the commons has really reinforced my belief in the inherent morality of a public good. Something being "impermissible" sounds bad until you realize that otherwise the arrow of public knowledge suddenly points backwards.
Seeing this example play out in real life has had retroactive effects on my previously BSD-aligned brain. Even though the argument itself may have been presented before, I now understand the morals that a GPL license text underpins better.
FWIW I like to explain it to folks like this: ignore all of your moral baggage around licensing and just focus on the fact that licensing is a legal tool of art that pretty much only becomes relevant in the context of threatening lawsuits.
BSD-type stuff is very simple because it says "here is this stuff. you can use it as long as you promise not to sue me. I promise not to sue you too."
Very simple.
GPL-type stuff is intrinsically more complex because it's trying to use the threatening power of lawsuits to reduce overall IP lawsuits. So it has to say "Here is this stuff. You can use it as long as you promise not to sue me. I am only going to sue you if you start pretending like you have the right to sue other folks over this stuff or anything you derive from it. You don't have the right to sue others for it, I made it, so please stop pretending and let's stop suing each other over this sort of thing."
Getting the entire legal nuance around that sort of counterfactual "I will only sue you if you try to pretend that you can sue others" is why they're more complex. And the simplest copyleft licenses like the Mozilla Public License have a very rigid notion of what "the software" is, so like for MPL it's "this file is gonna never be used in a lawsuit, you can edit it ONLY as long as you agree that this file must never be used by you to sue someone else, if you try to mutate it in a way that lets you sue someone else then that's against our agreement and we reserve the right to sue you."
Whereas for GPL it's actually kind of nebulous what "the software" is -- everything that feeds into the eventual compiled binary, basically -- and so the license itself needs to be a little bit airy-fairy, "let's first talk about what conveying the software means...", in various ways.
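To make the file-scoped nature of the MPL concrete: the license's own standard per-file header (this is the real MPL 2.0 boilerplate, not something invented here) attaches the terms to each individual source file rather than to the program as a whole:

```c
/* Standard MPL 2.0 per-file header, placed at the top of each covered file.
 * The copyleft obligation follows this file, not the whole program that
 * links against or incorporates it.
 *
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain a copy at http://mozilla.org/MPL/2.0/.
 */
```

That's the rigid "this file" scoping described above; the GPL has no per-file equivalent because its unit is the whole conveyed work.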
The interesting thing here is that as far as the courts are initially ruling, these from-scratch reimplementations are not human works and therefore are not copyrightable, which makes them all kind of public domain. Slapping the MIT license on it was an overstep. If that's how things go then Free Software has actually won its greatest sweep with LLM ubiquity.
I'm sure they wouldn't mind marking it as public domain. MIT is just the go-to license for things like this, since it forces other people to note that it came from an MIT repo if substantial parts of the original repo were used.
I agree with the thrust of this article, that norms and what we perceive as good or desirable extend considerably beyond the minimum established by law.
But a point that was not made strongly, which highlights this even more, is that this goes in every direction.
If this kind of reimplementation is legal, then I can take any permissive OSS and rebuild it as proprietary. I can take any proprietary software and rebuild it as permissive. I can take any proprietary software and rebuild it as my own proprietary software.
Either the law needs to catch up and prevent this kind of behavior, or we're going to enter an effectively post-copyright world with respect to software. Which ISN'T GOOD, because that will disincentivize any sort of open license at all, and companies will start protecting/obfuscating their APIs like trade secrets.
Companies can take open-source software and make a proprietary reimplementation. You can't take a proprietary software and make an open source GPL version.
I am absolutely certain that if you tried you would be sued to oblivion. But big company screwing up open source is not even news anymore. In fact I (still) believe that the fact that even though LLMs were trained on tons of GPL and AGPL or even unlicensed software it's considered ok to use LLM code in proprietary projects is example of just that.
From a strictly legal perspective the two are equivalent. The fact that there are structural injustices in the system is true, but that's not a question that any answer to "what should be legal" can fix.
I've been thinking this for over two years, that's why I stopped contributing to open source at that time - my work was only gonna be exploited to make rich people richer regardless of the license.
Crazy that only now we're seeing a bunch of articles coming to the same conclusion.
I think copyright should still apply, but if it doesn't, we need new laws - ones which protect all human work, creative or not. Laws should serve and protect people, not algorithms and not corporations "owning" those algorithms.
I put owning in quotes because ownership should go to the people who did the work.
Buying/selling ownership of both companies and people's work should be illegal just like buying/selling whole humans is. Even if it took thousands of years to get here.
Money should not buy certain things because this is the root cause of inequality. Rich people are not getting richer at a faster rate by being more productive than everyone else but by "owning" other people's work and using it as leverage to extract even more from others.
Maybe LLM and mass unemployment of white collar workers will be the wakeup call needed for a reform. Or revolution.
Last time this happened was during the second industrial revolution and that's how communism got popular. We should do better this time because this is the last revolution which might be possible.
1) Legality and morality are obviously different and unrelated concepts. More people should understand that.
2) Copyright was the wrong mechanism to use for code from the start, LLMs just exposed the issue. The thing to protect shouldn't be creativity, it should be human work - any kind of work.
The hard part of programming isn't creativity, it's making correct decisions. It's getting the information you need to make them. Figuring out and understanding the problem you're trying to solve, whether it's a complex mathematical problem or a customer's need. And then evaluating solutions until you find the right one. (One constraint being how much time you can spend on it.)
All that work is incredibly valuable, but once the solution exists, it's much easier to copy without replicating or even understanding the thought process which led to it. But that thought process took time and effort.
The person who did the work deserves credit and compensation.
And he deserves it transitively, if his work is used to build other works - proportional to his contribution. The hard part is quantifying it, of course. But a lot of people these days benefit from throwing their hands up and saying we can't quantify it exactly so let's make it finders keepers. That's exploitation.
3) Both LLM training and inference are derivative works by any reasonable meaning of those words. If LLMs are not derivative works of the training data then why is so much training data needed? Why don't they just build AI from scratch? Because they can't. They just claim they found a legal loophole to exploit other people's work without consent.
I am still hoping the legal people take time to understand how LLMs work and how other algorithms, such as synonym replacement or c2rust, work; decide that calling it "AI" doesn't magically remove copyright; and force the huge AI companies to destroy their existing models and train new ones which respect the licenses.
I see this argument sometimes and it's annoying because:
1) People phrase it as a question even when they've already made up their mind (whether that's your case or not).
2) It implicitly assumes that humans and algorithms are the same. They are not - humans have rights and free will, algorithms don't. Humans cannot be bought or sold, etc.
To your question:
a) If you're asking whether teachers should get compensated according to how good a job they do, I think so. They are very often undervalued, especially the good ones - but of course that means the job attracts people who do it because they enjoy it (and are therefore more likely to be good at it) rather than those who chose jobs according to money and then do the bare minimum.
b) There's a critical difference - consent. Teachers consented to their knowledge being used by those they taught. I did not consent to my code being used for training LLMs. In fact I purposefully chose a licence (AGPL) which in any common-sense interpretation prohibits this use unless the resulting model is licensed under the same license - you can use my work only if you give back. Maybe there's a hole in the law - then it should be closed.
I am now gonna pose a question to you in turn.
Do you think people should be compensated for the full transitive value of their work?
> It implicitly assumes that humans and algorithms are the same. They are not - humans have rights and free will, algorithms don't. Humans cannot be bought or sold, etc.
I don't think that's a necessary condition for that argument. You're making the implicit assumption that humans are special snowflakes and anything we do cannot be replicated by computers, in any form. That's a very strong position to take without evidence. Is an LLM even an algorithm in the traditional sense? Is human cognition not an algorithm of some sort? I studied cognitive science decades ago and these questions weren't clear then; they're certainly even less clear now.
It also somewhat begs the question of whether this is even relevant to what we are talking about.
Teachers are not relevant to the conversation. You can learn by reading books, watching TV, using and reading software. Basically all of copyrighted and non-copyrighted human expression is available for you to consume and then creatively produce your own works using that knowledge.
> Do you think people should be compensated for the full transitive value of their work?
The short answer is no. Not everything that someone simply dreams up can or should be monetized. That sounds like a radical position but actually the current state of "intellectual property" has only existed for an extremely brief bit of human history. What has most greatly shaped our culture and knowledge has been effectively free for anyone to use, modify, and reproduce for hundreds of years.
That's not to say I don't support copyright as a means to support creative works but I would argue that it's an imperfect system. We're starving human minds of modern culture and knowledge often not even for someone's monetary gain but simply because the system demands it. It's ironic that artificial intelligence might actually free us from these constraints.
I purposefully choose a license (Apache) for my open source work to make it widely and freely available.
> an argument for protecting that test suite and API specification under copyleft terms.
If we protect APIs under copyright, it makes it easier to prevent interoperability. We obviously do NOT want that. It would give big companies even more power.
Now in the US, the Supreme Court has held that the output of an LLM is not copyrightable. So even a permissive licence doesn't work for that reimplementation: it should be public domain.
Disclaimer: I am all for copyleft for the code I write, but already without LLMs, one could rewrite a similar project and use the licence they please. LLMs make them faster at that, it's just a fact.
Now I wonder: say I vibe-code a library (so it's public domain in the US), I don't publish that code but I sell it to a customer. Can I prevent them from reselling it? I guess not, since it's public domain?
And as an employee writing code for a company. If I produce public domain code because it is written by an LLM, can I publish it, or can the company prevent me from doing it?
I think the direction we are going, the GPL is going to fade away. I think people will look at this like writing a book and claiming the ideas in the book cannot be copied. This debate is not that different from the ones going on in the music industry. I open sourced my latest software as Apache 2.0 after debating a lot about this. Unless the FSF wins in court in the next <=2-3 years, there is no coming back from this.
If an AI can license-wash open source software like this then the licenses become meaningless. Which is fascinating. Commercial software cloning that is simple enough for an average person to drive is next and the ultimate form of piracy, see an app for $10? Don’t fancy paying? Just ask ChatGPT for a clone. Future is going to be wild.
I take your point, but if the re-implementation looks the same, I would say it’s a form of copying. (Which I don’t think is a problem, I don’t think you should be able to own sequences of numbers.)
> If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides.
Our foreparents fought for the right to implement work-alikes to corporate software packages, even if the so-called owners did not like it. We're ready to throw it all away and let intellectual property owners get so much more control.
The implications will not end up being anti-large-corporation or pro-sharing. If you can prevent someone from re-implementing a spec or building a client that speaks your API or building a work-alike, it will be the large corporations that exercise this power, as usual.
>Our foreparents fought for the right to implement works-a-like to corporate software packages, even if the so-called owners did not like it
Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they're-a-changin'.
You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.
If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.
I would do no justice to it by reiterating its points here.
I mean. Yeah. GPL's genius was that it used Copyright, which proprietary enterprise wouldn't dare dismantle, to secure for the public a permanent public good.
Pretty sure no one (well, except me) saw overt theft of IP coming, carried out by redefining our way around IP law. Admittedly I couldn't have articulated for you how capital would take that skill, transfer it, and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.
I believe the GP post is saying that if we react to the new AI-enabled environment by arbitrarily strengthening IP controls for IP owners, the greatest beneficiaries will almost certainly be lawyer-laden corporations, not communities, artists, or open source projects. That seems like a reasonable argument.
It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.
The article makes the same point; the quote was taken out of context.
The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.
The corporations today have the resources to purchase AI compute to produce AI-laundered work - which wouldn't be possible without the commons the AI got its training data from - and give nothing back to the commons.
This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.
The share-alike clause of the GPL was the price set for benefitting from the commons.
Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.
This means that the current iteration of GPL isn't doing its intended job.
The GPL has had to grow and evolve. Internet services using GPL code to provide access to software without, technically, distributing it was a similar legal (but not moral) workaround, which was addressed with an update to the GPL.
The author argues that we have reached another such point. They don't argue what exactly needs to be updated, or how.
They bring up a suggestion to make copyrightable the input to the LLM which is sufficient to create a piece of software, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.
You can't have your cake and eat it too.
A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:
— Derivative work, and therefore, subject to the requirement to be shared under the GPL license (something the legal system disagrees with)
— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).
In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.
The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").
Personally, I don't see a difference between making a photocopy by scanning and printing, and by "reimplementing" API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.
But the courts see it differently.
Consequently, the legal system currently decided that writing the prompt is where all the originality and creative value is.
Consequently, de facto, the API is the only part of an open source program that can be protected by copyright.
The author argues that perhaps it should be — to start a conversation.
As for who the beneficiaries of a change like that would be - that, too, is not clear-cut.
The entities that benefit the most from LLM use are the corporations which can afford the compute.
It isn't that cheap.
What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.
The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.
It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.
Without an open compiler, that's not possible. The engineer having access to the compiler at work would have an infinite advantage over an engineer who doesn't have it at home.
The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.
And the LLMs are neither open nor cheap. Both creating them as well as using them at scale is a privilege that only wealthy corporations can afford.
So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.
Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.
This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.
Finally, looking back at the old days ignores the reality. Back in the day, the proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.
The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.
And the value of the open source software is currently not in the fact that you can hot-swap UNIX components with open source equivalents, but in the entire interoperable ecosystem existing.
The APIs of open source programs are where the design of this enormous ecosystem is encoded.
We can talk about possible negative outcomes from pricing it.
Meanwhile, the already happening outcome is that a large corporation like Microsoft can throw a billion dollars of compute on "creating" MSLinux and refabricating the entire FOSS ecosystem under a proprietary license, enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.
It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete with free labor of open source contributors on cost. It would not be a justifiable expenditure.
What the GPL accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing it, via a Midas-touch clause. Once you embrace open source, you are it.
The author of the article asks us to think about how GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to commons being extinguished.
Which is exactly what happened in the case of the formerly-GPL library in question.
I think the article in fact reaches the exact opposite conclusion it should. I'm not really sure how useful it is to talk about sharing and commons and morals when the point raised was about what is possible. The prescription includes copyleft APIs. These are not possible under Oracle v Google. And you could point it out if I'm wrong but the article doesn't discuss what would happen if Congress acted to reverse Oracle v Google (IMO a cosmically bad idea).
I agree with the comment and find the linked article motivated reasoning at best. It's easy to find something "morally good" when it aligns with what you wanted. But plenty of people at Oracle, at IBM, at Microsoft, at Nintendo, at Sony and plenty of other companies whose moats have been commoditized by open source knockoffs don't find such happenings to be "morally good". And even if in general you think that "more freedom" justifies these sorts of unauthorized clones, then Oracle v. Google was at best a lateral move, as Java was hardly a closed ecosystem. One also wonders how far the idea of "more freedom" = "good" goes. How does the author feel (or did, if Qualcomm's recent acquisition changes the position) about the various Chinese knockoff clones of the Arduino boards and systems? Undeniably they were a financial good for hobbyists and the maker world alike, and they were well within the "legal" limits, and certainly they "opened" the ecosystem more. But were they "good"? Was the fact that they competed with and undersold Arduino's work without contributing anything back, making it harder financially for Arduino to continue their work, a "moral good"?
If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and Connectix Virtual Gamestation was very much NOT an open source or free product.
But to go a step further, the larger idea of AI-assisted rewrites being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene are sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which was why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.
So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very very important now that it's the open source community's turn to see their work commoditized and used in ways they don't approve of is off-putting and in my opinion a dangerous road to tread that will hand back years of hard fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.
Adding even more intellectual property nonsense isn't going to work. The real solution is to force AI companies to open up their models to all. We need free as in freedom LLMs that we can run locally on our own computers.
From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.
> GPL itself was precisely the "intellectual property nonsense"
Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.
> Pray tell how you would do that without some "intellectual property nonsense".
Intellectual property is irrelevant to AI companies.
Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to a totalitarian dictatorship.
If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.
AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.
> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.
> Even if all models were open, we're not at the point where it would create an equal playing field.
So... People are going to rise up? What makes you think most of them have enough slack in their finances to pack up and haul off to D.C.? Only the elites do, and they pay full-time lobbyists to do exactly that, to make sure laws like you mention never pass. Not saying it can't work. Just saying the game is rigged against the very people you want to rise up, and in favor of the ones who'd rather you stayed in bed.
If people don't rise up they will become soylent green. Over the long term, AI threatens to replace all human labor. It cannot remain locked away in corporate servers. This is an existential issue. The ultimate logic of capitalism is that unproductive people need not be kept alive since they add nothing but cost. So either we free AI, collapse the very idea of having an economy and transcend capitalism into a post-scarcity society, or we will be enslaved and genocided by those who control the AIs.
Hence why we see more and more pushes to control communication on the internet. It's going to be hard to free AI when a panopticon is turned against us to prevent exactly that.
You might be thinking of fair use, but that's an affirmative defence. Every time someone has copied someone else's artwork and modified it into a meme, that's copyright infringement, and it remains so even if it is eventually ruled as fair use. If you make a fair use claim, you don't deny infringement; you make the claim that you were allowed to infringe.
Why wouldn't they? There have been lawsuits over just these behaviors in the past. Hell, even the multiple representations of a picture in computer memory have had to have allowances.
Copyright is a gigantic fucking mess that the US has forced over a large chunk of the world.
I agree. But IMHO that ship has sailed. This should have been stopped when OpenAI went for-profit.
If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.
I think focus needs to shift completely off of for-profit companies. Although I'm not sure how that is going to happen... lol
And the whole Adobe PDF thing, and the whole Microsoft Word thing. And the whole IBM PC thing. Imagine if we were forced to keep using IBM from when they lost their way until now, simply because anti-AI luddites were able to scare-monger.
I would wager that the vast majority of people commenting here about the pitfalls of AI, especially as it relates to governance and laws, are heavy users of AI, recognize the import and value it brings, and find ways to utilize it more themselves, so I'm not sure an ad-hominem dismissal of very valid objections is going to be effective.
(side bar: the phrase "anti-<whatever> luddites" is way, way overused, especially here. Let's get more creative, people!)
And yet the term luddite seems to fit the anti-AI crowd perfectly. They are largely concerned about employment (and more generally economic stability), and to that end seek measures intended to protect workers.
There's also some environmentalist concerns which the term luddite again fits perfectly. You just have to generalize, transferring laterally from economic wellbeing to environmental wellbeing.
So I don't think GP qualified as an ad hominem dismissal but rather an accurate description of the situation. Take what's being discussed (restrictions on specifications and interoperability), project it backwards in history, and imagine what an alternate present day would look like. I think it would be pretty bad.
>They are largely concerned about employment (and more generally economic stability) and to that end seek measures intended to protect workers.
Pffft no. Most of us think that AI is being used as a political trick - like firing unionized workers "to replace them with AI" and then hiring new un-unionized workers to replace them, 2 weeks later. Replace the AI with an empty cardboard box labeled "AI" in black marker, and nothing changes.
See also: using AI to launder pirated material, for big businesses.
>a political trick - like firing unionized workers
1. Since when have companies needed trillions of dollars of AI to do that? In the US they've been able to get away with getting rid of unions for decades now.
2. Since when has HN given a shit about unions? Posting about unions, at least until recently, has been a great way of getting your comment downvoted to [dead] in one easy step. For longer than LLMs have existed, the HN answer to unions was "They are just there to keep me as an SWE from making as much money as I can". Only now do we see a little bit of pushback, now that their heads may be next on the chopping block.
We should be removing IP law entirely, not strengthening it to cover entire classes of problem even when implemented entirely differently. Same for anyone trying to claim "colorful monster creatures" as innately Pokemon IP. Just because someone climbed a mountain first doesn't mean they own it forever. Nobody should be honouring any of these claims.
Nor should we be treating AI models themselves as respected IP. They're built on everyone else's data. Throw away this whole class of law, it's irrelevant in this new world.
Well we could try fixing the forever part. Copyright is out of control. I’d like to see a world with much less power given to IP. Sometimes I even say I want it eradicated entirely. But realistically we should start by cutting things back. Maybe give software an especially short copyright period.
Reset it back to 20 years and make that a hard limit for both patents and copyright. No renewals. Zero exceptions. Let the market sort the rest out.
There's always going to be downsides and edgecases when granting any party a monopoly over anything. At least if it's limited to 2 decades any unintended consequences, philosophical objections, and etc are hopefully kept within reason.
That would be insane for aerospace software, where you might spend most of that time getting the code certified (required to break the $0 revenue threshold), let alone paying back your costs and then making an actual profit.
Meanwhile, there are cases where copyright of more than 2 years is overkill.
I don't know what it would look like, but it seems some sort of mechanism for variable-length IP duration is needed.
Is copyright meaningful for aerospace software? I'm largely unfamiliar with that domain but I have trouble imagining that (for example) Boeing cares much about people redistributing or hacking on the control software for a 777. How would that impact their bottom line?
I could understand for medical devices maybe but even then it seems like the software is a tiny part of the overall cost of a given design. A competitor could already do a clean room reimplementation in that case.
But I guess it wouldn't be all that bad if there were a carefully crafted extension for government certified software that was explicitly tied to the length of the certification process.
The only problem with this certified-software exception is that I foresee the law being written as "the expiration timer starts when certification is finished," and then some lobby group getting the regulatory departments to adopt a new partial-certification process where the software is usable in devices but "finished certification" is never reached, so the copyright gets dragged out forever.
Nope, it falls more under trade secrets than copyright.
If you do something that requires stealing the code (publishing it, selling it, etc) the company can legally fuck you up.
Now, once it's in the wind, it becomes almost impossible to pursue from a practical point of view, as any implementer can claim trade secrets to avoid showing you the code.
If certification is the actual cost, you don't need copyright, at all. SQLite is in the public domain. Your moat is the certification itself, not the code.
Have you seen the quality of regular software though? And the failure rate of regular physical items? The only reason I trust aircraft is because of the process.
Consider, if you will, that if some guy flew a drone the size of a car that he knocked together in his garage over a residential area, people would not accept that. Yet private pilots in Cessnas fly over neighborhoods constantly.
IMO the bigger question is how would you even tell if a work was generated by an LLM? There's a ton of code being written out there; the folks who generated it are going to claim they authored it for copyright purposes, and those who want to use it are going to claim it was LLM-generated. So what happens?
That code isn't going to be open source. And if you use someone else's closed source code you are violating laws that have nothing to do with copyright.
> What makes the leak illegal other than copyright? The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.
I'm not sure I understand. I'm not talking about stolen/leaked code here. I'm saying: imagine you claim you're the author of some piece of code. You may or may not have written it with an LLM, but even if so, assume you have the full rights to all the inputs. You post it publicly on GitHub. You don't attach a license, or perhaps you attach a restrictive license that doesn't permit much beyond viewing. Someone comes across your code, finds it brilliant, and wants to use it. If that code was non-copyrightable (such as generated via an LLM), then they're fine doing it without your permission, no? But if that code was copyrightable, then they're not permitted to do so, correct?
So now consider two questions:
1. You actually didn't use an LLM, but they believe & claim you did. Who has the burden of proof to show that you actually own the copyright, and how do they do so?
2. They write new code that you feel is based on yours. They claim they washed it through an LLM, but you don't believe so. Who has the burden of proof here and how do they do so?
The alleged author, when bringing a copyright infringement suit, will submit testimony claiming they wrote it. Parties to the suit will have a chance to present arguments and evidence. Then, the claim will be adjudicated by a judge and/or jury.
There was a recent case that everyone has been describing as "LLM output can't be copyrighted" but what it actually said was you can't register the AI as the author.
Not quite in my opinion. The output of an LLM from a simple prompt falls into the public domain, but if you also give a copyrighted work as input, the mechanistic transformation performed will not alter the original license (same as encoding a video does not change its license).
It would be interesting to see a court ruling that the output of LLMs trained on copyleft code are licensed under the GPL ... and all other viral licenses simultaneously
No, the copyright is the colour of the bits, and red bits with a comment saying "these bits are blue" are not blue bits, but you may be prosecuted for fraud.
It's new, fast-moving technology, and the courts are slow and expensive.
It would take two stubborn businesses with a lot of money deciding that it is better to battle it out than focus on their business. Something like IBM v SCO or Oracle v Google.
But we also know from other research that LLMs don't actually do mechanistic translations. Even when they are asked to and say that they did, they're basically rewriting the code from their training data
If that occurs and it’s a substantial enough body of output that it is itself copyrightable and not covered by fair use. Confluence of those conditions is intentionally rare.
I think it can be copyrighted, or at least it's a very complex legal issue. Coding assistants are used in commercial apps where copyrights are fully reserved. It cannot feasibly be determined whether any given output is purely LLM-generated or not.
The problem here is that large companies can do whatever they want and regular people cannot. Don't worry, they won't be allowing you the same rights as these companies.
I would be okay with just keeping it but limiting it severely. If you release music and you can't sell enough albums in 20 years, that's not societies problem. A lot of artists release albums every 1 - 3 years anyway, so they're always selling some records, or were before streaming became the way to "own" music. Most make their money off of concerts anyway.
For movies and shows, charge an increasing fee to renew the copyright. Eventually studios will give up certain movies. The older the movie, the more you pay.
We could also just have some of the rights go away after X amount of years.
Maybe after so much time it's still not legal to copy the original work, but it is legal to make a cover song, or a derivative work using the same character.
At another point maybe it's no longer illegal to copy for free, but it is still illegal to sell without permission.
I personally think we should have shorter limits for non-creator owners of copyright, and for creators it should be like 20 years or death whichever comes last. I also think compulsory licensing should exist for everything.
Yeah, I really don't think we want APIs to be protected by IP. But in this case it isn't just the API, there were also tests involved. I think you could make a pretty strong argument that if you used a test suite to get an agent to implement some code, the code is a derivative product of the test code.
This is a really interesting edge case that's going to come up a lot more. If you feed an agent your test suite and it produces passing code, the tests shaped the output. That's closer to a spec than inspiration. The legal system is going to have a fun time drawing lines between "I described what I wanted" and "I provided a machine-readable contract for what the output had to be."
What exactly is the difference between "a machine-readable contract for what the output has to be" and "source code"?
What is the difference between an "agent" and a "compiler"?
For that matter, what is the difference between "I got an agent to provide a high level description" and a decompiler?
What is the difference between ["decompiling" a binary, editing the resulting source, recompiling, and redistributing] and [analyzing the behavior of a binary, feeding that description into an LLM, generating source code that replicates that behavior, editing that, recompiling and redistributing]?
Takeaway: we are now in a world where software tools can climb up and down the abstraction stack willy nilly and independently of human effort. Legal tools that attempt to track the "provenance" of "source code" were already shaky but are now crumbling entirely.
What if there was a special exemption for using a specification if you open source (or open hardware) the result, for some definition roughly (or exactly) equivalent to the OSI definition of open source, or the FSF's definition of free software?
Although I think the chance of that happening is effectively zero.
It is not about throwing the right to implement things away. As long as it is done according to the license of the works modified or copied, one can do that. What this is against is people washing away a license that is meant to keep things open, transparent and free. It enables businesses to go back to completely proprietary systems, which will impact your rights.
I am for keeping the licenses in place, as long as there is any copyright at all on software. If we get rid of that, then we can get rid of copyleft licenses and all others too. But of course businesses and greedy people want to have their cake and eat it too. They want copyleft to disappear, but _their_ software, oh no, no one may copy that! Double standards at their best.
You're asking for exactly the same cake. You want for the GPL to pass through this process, but not the proprietary licenses that the original GNU tools were washing away.
(the paradox of copyleft is that it does tend to push free software advocates in a direction of copyright maximalism)
Copyright has always benefited those with power, down to the very first instance: Albrecht Durer bullying little children who wanted to make inferior copies of his prints so that their families could enjoy the art. Durer insisted the art was only for nobles. Ab initio.
I'm probably spitting in the wind, but stuff like this is why I removed all my hosted open source projects. I manage several niche projects that I have now converted to binary only releases (to almost no push back). It's niche enough that it's not very hard to get LLMs to output chunks of code that it managed to scrape before I took it offline. I don't see many people talking about this angle, but LLMs ripping off my work killed my open source efforts.
Both sides are wrong on this actually. Computer generated code has no copyright protection.
>The U.S. Copyright Office (USCO) and federal courts have consistently ruled that AI-generated works—where the expressive elements are determined by the machine, even in response to a human prompt—lack the necessary human creative input and therefore cannot be copyrighted.
All this code is public domain. Your employees can publish "your" AI generated code freely and it won't matter how many tokens you spent generating it. It is not covered by copyright.
It also erodes copyright. A decent amount of commercial software can be AI cloned with no copyright violation.
A lot of SaaS too, especially if AI can run a simple deploy.
We might be approaching a huge deflationary catastrophe in the cost of a lot of software. It’s not a catastrophe for the consumer but it is for the industry.
I largely agree with the author that AI can't just magically remove license agreements by rewriting code.
However, I take issue with his version of history:
>The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL.
GPLv3 set open source backwards: it wasn't an evolution to protect anything, it was an overly paranoid failure. Don't believe me? Just count how many GPLv3 vs. GPLv2 projects have been started since GPLv3 dropped.
Again, I'm very pro-OSS, but let's not pretend the community has always had a straight line of progress forward; some stuff is crazy Stallman stuff that set us back.
This take, which I've seen in a few different places now, seems 100% bonkers. A world where anyone can cheaply reimplement anyone else's software and use it on hardware of their own choosing in their own designs and for their own purposes is a free software utopia.
This isn't a problem, this is the goal. GNU was born when RMS couldn't use a printer the way he wanted because of an unmodifiable proprietary driver. That kind of thing just won't happen in the vibe coded future.
It's not going to be like that for proprietary software. All this future ends with is "totally free" software that companies will leech off of in their "totally locked down" software. I guarantee you that people wouldn't have had this reaction if someone had instead replicated Windows from leaked source. Well, other than Microsoft owners/employees.
Would software be more or less free in a world without copyright?
I argue more free. EULAs and restrictions on how and for what software can be used, like DRM, typically use copyright as their legal backing. GPL licenses turn that on its head, but that doesn't redeem the original, flawed law.
This seems to follow the letter but not the spirit of the license. If this does pass legal muster, we can do the same to whatever proprietary software we wish, which makes a dramatically different but IMO better ecosystem in the end.
Way too many people think it’s copyright infringement to produce copyrighted material.
It’s not and never has been.
It’s not illegal for me to draw The Simpsons - whether or not I used AI. It’s illegal for me to sell it as my own.
To ban the very ability to produce it at all would be a dystopia. It would extend copyright to mean things it was never intended to mean - it would prevent you from physically uttering statements or depicting images, if these luddites who haven’t thought it through had their way.
There is a definite issue in terms of legitimacy and I also think there are some issues in the wording of certain open source licenses like MIT which give rights to 'Any person obtaining a copy of this software'.
Firstly, an AI agent is not a person. Secondly, the MIT license doesn't offer any rights to the code itself; it says a 'copy of the software' - That's what people are given the right to. It says nothing about the code and in terms of the software, it still requires attribution. Attribution of use and distribution of the software (or parts) is required regardless of the copyright aspect. AI agents are redistributing the software, not the code.
The MIT license makes a clear distinction between code and software. It doesn't cede any rights to the code.
And then there's the spirit of copyright: it was designed to protect the financial interests of authors. The 'fair use' carve-out was meant for cases that do not have an adverse market impact on the author, and this clearly does have one, at least in the cases highlighted in this article.
My view is that the current discourse surrounding AI reimplementation is trapped in an antiquated, atomistic model of authorship. What is fundamentally lacking in this debate is a systemic framework for trust, transparency, and the effective traceability of value creation.
Our legal and ethical frameworks including both copyleft and permissive licenses operate under the illusion of discrete, bounded attribution. They assume we can draw a clean perimeter around 'the code' and its 'author.' In reality, software production is a highly complex socio-technical network characterized by deep epistemic opacity. We are arguing over who holds the title to the final output while completely ignoring the vast, distributed network of inputs that made it possible.
Furthermore, because end-users face massive transaction costs and a general lack of incentive to evaluate the granular utility of their consumption, we have no reliable market mechanism to signal value back up the supply chain. Consequently, we fail to effectively compensate the true chain of biological and artificial contributors that facilitate downstream consumption.
In a rigorously mapped value-system, attribution would not stop at the keyboard; it would extend to all nodes of enablement. This includes what sociologists and economists term 'reproductive labor' or 'invisible labor' such as the developer’s partner who cooked them breakfast, thereby sustaining the biological and cognitive infrastructure necessary for the developer to contribute to the repository in the first place. The AI model is merely another node of aggregated external labor in this exact same web - both by its upward 'training' and downward utilization.
Until we develop an economic and technological ontology capable of tracing and rewarding this entire ecosystem of adjacent contributions, our debates over LGPL versus MIT will remain myopic. We are trying to govern a distributed, interconnected web of collective labor using property tools designed for solitary craftsmen.
"If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind."
-- This is, on any plain reading, a restriction on sharing. "You can share only under these conditions" is plainly more restrictive than "sure do whatever you want". You can argue that it's a restriction that ultimately leads to more sharing overall. But it is a restriction on sharing in any given case of sharing nevertheless.
It's just considering "sharing" or "freedom" out an extra step. "Total freedom" results in freedom for those who can protect it and no freedom for those who can't.
> Antirez does not address this directional difference. He invokes the GNU precedent, but that precedent is a counterexample to his conclusion, not a supporting one.
Morally - yes, technically - no. I think it's odd to be mad at someone doing the exact thing you praise in another case just because license isn't copyleft within license allowance. Make a better copyleft license?
> Blanchard's account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
I don't see how it matters what he looked at. If I took copyrighted code and ran it through a script that replaces all the variable names, and then claimed copyright on the result because it's an entirely new work and I did not look at the original work, I'd be ridiculed and sued, and would lose that lawsuit. AI is a more complex machine, but still a machine. If you feed somebody's work into a machine, what comes out is a derivative work.
Test suite is a part of copyrighted code, is it not? If he used just the API description, preferably from a copyright-clean source, then we could claim new work (regardless of how it was produced, by using Claude or trained pigeons or by consuming magic mushrooms). But once parts of the copyrighted code had been used, it becomes derivative work.
> AI is a more complex machine, but still a machine. If you feed somebody's work into a machine, what comes out is a derivative work.
I'm not sure that's true, legally speaking. If you fed it into a PRNG, the output seems to me like it would not be an obviously derivative work (I doubt you could copyright it, but that's a separate question). So we have one machine that can transform something into a non-derivative work, and another that leaves the result derivative. The line isn't likely going to be drawn as "did a machine do it or not", but on a fuzzy human line of how close the output seems to be to the original (IANAL).
FWIW, that case is not really relevant to what we are/were talking about here.
The question is whether you are truly an "author", or whether there was no (human) author.
The general legal consensus has been that generative AI output is not copyrightable (without some special facts of some sort, perhaps).
> If all of this code was somehow not copyrightable because someone wrote a prompt instead of directly editing the code, that would have pretty huge implications.
That's exactly it. Your act of applying the MIT license with your copyright notice to code that you did not "directly edit" has enormous implications.
The case law is that a camera can't own a copyright, but a human can, even though all the pixels were produced by the camera with very little involvement at the pixel level by the human.
A camera doesn't use unlicensed IP from other sources to produce an image. The makers of the camera explicitly gave you a right to own the photograph taken with the parts used to assemble the camera.
Isn't the whole thing sidestepping another issue? If the code was rewritten with an AI, then it becomes a non-copyrightable work? Hasn't this already gone through the courts? So isn't the resulting library de facto public domain, even if the maintainer wants to try and attach a license to it?
Edit: looks like an IP lawyer had this exact issue on the GitHub and it was closed.
> The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.
How is this not just relitigating GPL vs MIT? By now you know which side of that argument you are in. The AI component is orthogonal.
The GPL's conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms.
Offering as a networked service is not distribution. That was why they had to make AGPL to put conditions on use in networked services.
A large part of our industry is experiencing significant cognitive dissonance and articles like this are a symptom of that. AI is not really changing things, it's simply forcing us to question a lot of things we took for granted.
One of those things is that we assumed that the code embodied most of the value it offered. That it was the code that contained the creativity and expressiveness and usefulness. And we thought only we could write code. And so we thought we only needed to protect the code to protect our efforts and investments. Which is also why we accepted copyright as an appropriate legal protection for software, or of enforcing an ethos of sharing, as with copyleft.
But the code itself was never the valuable aspect; it was the functionality it provided.
And now AI is making that starkly apparent, while undermining a lot of other presumptions. Including about copyright.
Copyright protection for software is a historical hack because people didn’t want to figure out an appropriate legal framework from scratch. You “wrote” books, you "wrote" code, let’s shoehorn software into copyright and go get lunch! Completely overlooking the fact that copyright explicitly does not cover functional aspects (that is the realm of patents) which is the entire raison d'etre of code.
Sure, copyright covers “expressive elements”, but again those are properties of the source code, not the functionality. In fact, expressiveness is BAD for code (cf “code should be boring”)! Copyright will protect whether you used a streams API or a for-loop for iteration, which is absolutely irrelevant to the technical functionality that actually solves user problems, which has always been the only thing users really cared about.
In fact, if you look at significant copyright-related cases for software now (e.g. Oracle vs Google), you'll realize they have twisted themselves into knots trying to apply laws intended for expressive creativity to issues that were essentially about technical creativity.
I have no hopes that we will figure out an appropriate IP framework for software, so I expect people will move towards other things like patents, trade secrets and trademarks. Which have their own problems, but at least they already exist and are more suitable than copyright, especially in the age of AI.
One main thing that this brings to mind is if an LLM can ever actually create a clean room implementation of a piece of open source software, given that there is a near certainty that the software was used in its training data. Therefore it has seen it and remembered it, and could if appropriately prompted recreate the code verbatim.
This can also apply to people, either if they have seen the code previously and are therefore ineligible to write the code for a clean-room implementation, or, murkier still, when the same person writes the same code twice from their own knowledge, as in the Oracle Java case.
Coming from a professional programming perspective I can totally see the desire to have more libraries written in permissive licences like BSD or MIT, as they allow one like myself to include them in commercial closed-source products without needing to open source the entire codebase.
However I find myself agreeing with the article in so far as this LLM generated implementation is breaking the social contract for a GPL/LGPL based library. The author could have easily implemented the new version as a separate project and there would not have been an outcry, but because they are replacing the GPL version with this new one it feels scummy to say the least.
> Blanchard's own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.
Ridiculous. I don't want specifications for proprietary APIs to be protected, and I don't want the free ones to be either. The software community seemed pretty certain as a whole that this would be very bad for competition [1].
Morally, I don't think there's anything wrong with re-implementing a technology with the same API as another, or running a test suite from a GPL licensed codebase. The code wasn't stolen, it was capitalized on. Like a business using a GPL code editor to write a new one.
> This is not a restriction on sharing. It is a condition placed on sharing
Also this doesn't make any logical sense. A condition on sharing cannot exist without corresponding restrictions.
If you can prove the LLM was not trained on the code it is reproducing, and has never seen the code as part of a spec to follow, I don't see a problem.
Proving this is going to be hard with current "open source" models.
Indeed, you have to prove that the LLM is generating code from a specification. Right now they don’t do that; what they do is regurgitate portions of their training data based on correlations with input tokens.
Put the programmer’s reference for the Digital Equipment DEQNA QBus Ethernet adapter in your favorite slop tool and tell it to make a C or C++ implementation for an emulator, and you know what you get? Code from SIMH. That’s not “generating,” that’s “copying.”
The four essential freedoms of the Free Software movement are ...
1. The freedom to run the program as you wish
2. The freedom to study how it works and modify it (which requires access to source code)
3. The freedom to redistribute copies to help others
4. The freedom to distribute modified versions, so the whole community benefits from your improvements
To my mind ... GenAI coding make all of these far more realizable, especially for "normal people", than CopyLeft ever has. Let's go through them ...
Want to run a program as you wish? Great! It's easier than ever to build a replacement. Proprietary or non-free software is just as vulnerable to reimplementation as Copyleft is.
Want to study how a program works and to modify it? This is now much more achievable.
Want the freedom to redistribute copies to help others? Build your own version! It may not even be copyrightable if it's 100% generated (IANAL).
Want to distribute modified versions? yes! see previous.
I dunno; seems like generative coding can be as much a liberator as any kind of problem.
Sorry, but this seems to be so off-base (as well as naively optimistic) that I am having difficulty responding to this.
But I'll try nevertheless.
- >Want to run a program as you wish? Great! It's easier than ever to build a replacement.
Non-sequitur. Building a replacement does nothing for being able to run a program as you wish.
Nobody else is able to run your program as they wish unless you release it with a Copyleft license.
- >Want to study how a program works and to modify it? This is now much more achievable.
Reverse engineering is more achievable.
Modifying a program is (and will always be) easier when you have its source code, its documentation, and a legal right to modify it guaranteed by the license than when you have none of those things.
- >Want the freedom to redistribute copies to help others? Build your own version! It may not even be copyrightable if it's 100% generated (IANAL).
So, that's not about redistributing copies. That's about building an alternative option.
I can download an Ubuntu image and get Libre Office on it with a click.
Go vibe-code me a Microsoft Excel running on Windows 11, please, and tell me it's easier.
- >Want to distribute modified versions? yes! see previous.
You're not even trying here.
One can't legally modify and redistribute copyrighted works without explicit permission to do so.
You keep saying "...but vibe coding allows anyone to create something else entirely instead and do whatever with it!" as if that is a substitute for checking out a repo, or simply downloading FOSS software to use as you wish.
- >I dunno; seems like generative coding can be as much a liberator as any kind of problem.
Now, that statement I fully agree with.
Generative coding is a liberator as much as any kind of problem is.
Headache, for example, is generally a problem. It's not a great liberator.
Neither is generative coding.
Now, you probably didn't intend to say what you wrote. And that's exactly why generative coding is not a panacea: the only way to say things that you mean to say is to write precisely what you mean to say.
Vibe-coding (like any vibe-writing) simply can't accomplish that, by design.
Unless your idea of software is reduced to the todo app, I don't see how your points hold. AI won't give you Blender, Inkscape, Kicad, Emacs, etc. And the algorithms behind those are not secrets; it's the cohesive vision behind the whole system that is hard.
People will still pay for Matlab, SolidWorks, and Maya because no one who need those will vibe-code a solution. And there’s plenty of good OSS versions for the others.
Claude must have been trained on chardet already; it worked from chardet's code to optimize or rewrite it into something much better. This is the textbook definition of a derivative work.
Fewer than 2% of the code is a copy of chardet. And if the developer had done it without AI, what then? He was trained on the same code too.
The practical tension I see: I build open source tools and use AI heavily in the process (Claude as a coding assistant). Every commit has "Co-Authored-By: Claude" in it. The code is MIT-licensed and genuinely mine in terms of architecture and intent, but the line-by-line generation is clearly AI-assisted.
This creates an odd situation where the "reimplementation via AI" concern cuts both ways. If someone feeds my MIT repo to an LLM and gets a copyleft-violating derivative, that's one problem. But if I use an LLM trained on copyleft code to write my MIT-licensed tool, am I the one laundering licenses without knowing it?
I think the article's core point holds: legitimacy and legality are diverging fast. The open source community built norms around intent and reciprocity, and those norms are now being stress-tested by tools that can reimplement anything from a spec. No license text can fully encode "don't be a free rider."
If specifications become IP? Reboot the pirate parties. Authoritarianism is what it is. The exploitation isn't coming from the tools, it's coming from the economic structures and forces of exploitation being brought to their natural limits. We should learn from the luddites, the actual luddites. They weren't anti-technology, they were against the insane consolidation of power. This proposal might seem radical but all it would really do is reify intellectual property at exactly the wrong moment, a game over moment. Feed local LLMs. Feed peoples' movements around these technologies, so that people bring agency into how we use the tech, so that we don't get dominated by the laws that form around it.
There's a long history of people creating clean room implementations of other people's software based on specifications, reverse engineering, etc. A lot of that software is even distributed under GPL. Most drivers in the Linux kernel are good examples. There are things like Dosbox. Databases, video encoders, etc.
So, you could argue that people are using double standards here a bit. It's fine when people take proprietary software and create GPL versions of it. But it's not OK when people take GPL software and create permissively licensed or proprietary versions of it. That's of course not how copyright actually works. The reason all of this is OK is that copyright allows you to do this thing. This isn't some kind of loophole that needs closing but an essential feature of copyright.
The friction here, and a common misunderstanding about how copyright works, is that you don't copyright ideas but the form or expression of something. Making a painting of a photograph is not a copyright violation: same idea, different expression. Patents are for protecting ideas. Trademarks are for protecting brands. Some companies have even managed to trademark certain color codes, which is controversial.
There's a lot of legal history for interpretation of what is and isn't "fair use" under copyright of course. It gets much more complicated if you also consider international law and how copyright works in different countries. But people being able to make reasonable use of copyrighted material always was essential to the notion of having it to begin with.
The reason we can have music that uses samples from other people's music without that being a copyright violation is exactly this fair use. In the same way, you can quote from books and create funny memes based on movie fragments. Or create new theater plays, movies, etc. reinterpreting works of others. All legal, up to a point. If you copy too much it stops being fair use and starts being plagiarism.
With software copyright violations, you have to prove that substantial parts of the software were lifted verbatim. Lawyers and judges look at this in terms of how they would apply it to a plagiarism case. Literally - software doesn't get special treatment under copyright. Copyright long predates the existence of software and computers and did not change in any material way after they were invented.
Despite the tech layoffs and rise of AI, programmer hubris is alive and well, that is heartening.
Here we see three engineers writing — at length! — about a hugely complicated matter of law.
No one outside your bubble cares what you think. You are unqualified and your opinions irrelevant. You might as well be debating open heart surgery techniques.
You ask Gemini to make an Elsa and Anna Frozen-themed coloring book page. It says no, that would be copyright infringement. So you ask it to make something as close as possible but without infringing. It happily obliges.
Oracle v Google concluded that APIs could not be protected by either copyright or copyleft. It seemed to me at the time that most here supported that decision. Has anything changed?
No, APIs fall under copyright, but the Supreme Court found that Google's reimplementation of Java's API fell under fair use. Fair use is decided case by case; one cannot simply use that decision as precedent.
I've been thinking about the erosion of copyright as well. It's basically making software worthless.
Already, the IP protections which exist for software suck. Patents are expensive, and you can't even use them for software most of the time anyway. Copyright doesn't protect innovative ideas or architectures; if someone can just copy your code, mix it with a bunch of other code (no functionality changes), and then use it as their own, then copyright provides no protection at all...
If this is the case, then why should anyone bother to write any quality software at all? It has no value, since anyone can just appropriate any essential functionality that they didn't create for themselves. What's to prevent an employee from taking their employer's source code, rewriting it with an LLM (same functionality), and generating a clone of their company's software to use as their own to compete against their employer?
Without any IP protections, anyone who writes software becomes a complete loser. There's 0 benefit. One software developer would be doing all the work and then some marketing expert or someone with good social connections could just steal their work and sell it for billions... The software developer gets NOTHING.
I'm a bit shocked by how people are equating recreating from copyright with recreating from copyleft. It's not the same thing... copyleft code is out in the open on purpose, with the condition that you share back in kind. People are latching onto the intellectual property angle of this article, but the point is way simpler than that: "The terms of that compact were: if you take this and build on it, you share back under the same terms."
To me, there is a confusion about what "copying" and "using" mean.
You can copy the idea and not use the source code. This has been ruled ok many times already and would be quite dangerous if that was not the case.
But this is not what this is.
To generate the new program, another program, the AI, must take the original as input, and that input then becomes part of the program itself.
It does not really matter much if the generated result does not contain the source code itself or a similar reimplementation. One could rewrite a full version of The Lord of the Rings, changing all the words but keeping the same elements; it would still be plagiarism. There is no reason to think this is not the case here. It is evident that the source code was the base; hence, this is a derived work.
If anyone can go in, take a GPL project like chardet and reimplement it using LLMs, then the current maintainer just saved everyone time by making their implementation publicly available.
Our legal framework wasn't built for a situation where reimplementing complex software is trivial, much less almost completely automated.
> Blanchard's account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL. Mark Pilgrim, the library's original author, opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim's view, pass as a clean-room effort.
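JPlag works on token streams; as a crude stand-in for the similarity figure quoted above, a token-level diff ratio from Python's standard library conveys the idea (real plagiarism detectors also normalize identifiers and formatting first):

```python
import difflib

def similarity_pct(code_a: str, code_b: str) -> float:
    """Percent similarity of two sources via longest matching
    token runs (a rough stand-in for JPlag's token comparison)."""
    tokens_a, tokens_b = code_a.split(), code_b.split()
    return 100.0 * difflib.SequenceMatcher(None, tokens_a, tokens_b).ratio()
```

A score like "less than 1.3%" from such a metric only says the surface text differs; it says nothing about whether the work is derivative in the legal sense being argued here.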
Another question which as far as I can see isn't addressed in the article: even if you accept that the AI-driven reimplementation is an independent new work, can you (even as a maintainer) simply "hijack" the old LGPL-licensed project and overwrite it (if the new code is 98.7% different from the existing code, it's essentially overwriting) with your MIT-licensed code? You're free to start a new MIT-licensed project with your reimplementation, but putting the new code into the old project like some kind of cuckoo's egg seems wrong to me...
Within 20 years, everyone will be developing software which will be copyrighted partly by AI and sit behind walled gardens. Sure, you'll be able to do things locally, but with everything else (security clearance, walled gardens, government control, etc.) it will forever remain at the level of "tinkering".
If you are 50 years old or more, the computing you were born with (you own the computer, you own the programs) will be gone. Copyleft only makes sense if you own the computer.
In Germany we call this 'Beißreflex' (a knee-jerk biting reflex). It's all good when someone reimplements something in Rust; no one asks for the license. But as soon as someone uses AI to reimplement something, the search for something to complain about begins.
One thing I don't understand is what was so bad about having an LGPL license. You are allowed to do `import chardet` in an MIT-licensed or a proprietary program.
That is also my understanding. My personal theory is that many corporate compliance departments (or whoever is in charge of this at a particular place) just disallow any *GPL use in their company, regardless of whether it would actually cause problems, so this is an attempt to "unblock" the library for those. Instead of, you know, educating people about the nuances of different copyleft licenses.
I wonder what value the GPL even has in a world where I can trivially reimplement whatever a company builds on a permissive license and doesn't share. I still see a place for things that are low-level, algorithm-heavy, real-world-test-heavy, and critical, e.g. kernels, cryptography, storage engines, filesystems. All the rest of userland and the web, not so much.
I'm specifically worried about gating training data where publicly accessible information is blocked/opted out. If we're opening up Pandora's box with genAI and training data, we may as well give it what is accessible to the average user. It's going to end up having the same issues a user with implicit knowledge or memories would have anyway.
amarant | a day ago
It might even be morally abhorrent to have such a discussion in the first place!
stebalien | a day ago
Unfortunately, there are cases where you simply can't just "re-implement" something. E.g., because doing so requires access to restricted tools, keys, or proprietary specifications.
rileymat2 | a day ago
It also grants one major right/feature to the creator, the ability to spread their work while keeping it as open as they intend.
ordu | a day ago
"So, I looked for a way to stop that from happening. The method I came up with is called “copyleft.” It's called copyleft because it's sort of like taking copyright and flipping it over. [Laughter] Legally, copyleft works based on copyright. We use the existing copyright law, but we use it to achieve a very different goal."
https://writings.hongminhee.org/2026/03/legal-vs-legitimate/
dathinab | a day ago
i.e. mirroring it
> use it to achieve a very different goal."
"very different goal" isn't the same as "fundamentally destroying copyright"
the very different goals include keeping public code public, ensuring proper attribution, preventing companies from just "seizing" it, motivating others to make their code public too, etc.
and even if his goals were not like that, it wouldn't make a difference, as this is what many people try to achieve by using such licenses
this kind of AI usage is very much not in line with these goals,
and in general, much cheaper software cloning isn't sufficient to fix many of the issues the FOSS movement tried to fix, especially not when looking at the current ecosystem most people are interacting with (i.e. phones)
---
("seizing"): As in the typical MS embrace-extend-extinguish strategy of first embracing the code, then giving it proprietary but available extensions/changes/bug fixes/security patches, then making them no longer available if you don't pay them/play by their rules.
---
Though in the end, using AI as a "fancy, complicated" photocopier for code removes copyright about as much as using an actual photocopier for code would. It doesn't matter if you use the photocopier blindfolded and never look at the thing you copied.
sjunot | 23 hours ago
For the right goal, he should have called it "rightcopy".
davidw | a day ago
The highest-quality LLMs - to date - seem to require massive capital expenditures, which is a monumental shift in power towards mega corporations and away from the world of open source, where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.
I don't think that's an exciting idea for the Free Software Foundation.
Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.
There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
Edit: I guess the conclusion I come to is that LLMs are good for 'getting things done', but the context in which they are operating is one where the balance of power is heavily tilted towards capital, and open source is perhaps less interesting to participate in if the machines are just going to slurp it up and people don't have to respect the license or even acknowledge your work.
ordu | a day ago
Yeah, a bit of a conundrum. But I don't think that fighting for copyright now can bring any benefits for FOSS. GNU should bring Stallman back and see whether he can come up with any new ideas and a new strategy. Alternatively, they could try without Stallman. But the point is: they should stop and think again. Maybe they will find a way forward, maybe they won't, but either way they could continue their fight for freedom meaningfully, or they could just stop fighting and find some other things to do. Both options are better than fighting for copyright.
> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
I want to clarify this statement a bit. LLMs relying on the work of others is not against GNU philosophy as I understand it: algorithms have to be free. Nothing wrong with training LLMs on them, or on programs implementing them. Nothing wrong with using these LLMs to write new (free) programs. What is wrong is corporations reaping all the benefits now and locking down new algorithms later.
I think it is important, because copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical), but not for GNU.
balamatom | a day ago
IMO the primary significant trend in AI. Doesn't get talked about nearly enough. Means the AI is working, I guess.
>GNU should bring Stallman back ... Alternatively they could try without Stallman.
Leave Britney alone >:(
>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding the law is ethical, therefore copyright is ethical)
I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.
zozbot234 | a day ago
There are near-SOTA LLMs available under permissive licenses. Even running them doesn't require prohibitive expenses on hardware, unless you insist on realtime use.
walterbell | 23 hours ago
What async tasks could a local LLM accomplish on Intel 11th gen CPU with 32GB RAM?
jacquesm | a day ago
This was already the case and it just got worse, not better.
davidw | a day ago
Now they've just hoovered up all the free stuff into machines that can mix it up enough to spit it out in a way that doesn't even require attribution, and you have to pay to use their machine.
jacquesm | a day ago
Before we had RedHat and Ubuntu, who at least were contributing back, now we have Microsoft, Anthropic and OpenAI who are racing to lock the barn door around their new captive sheep. It's just a massive IP laundromat.
davidw | a day ago
It's nowhere near the order of magnitude of the kind of spending they're sinking into LLMs. The FSF and other groups were reasonably successful at enforcing the GPL while operating on a budget thousands of times smaller than that of AI companies.
cloverich | a day ago
Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org (compared to an individual).
IMO the proposition is a little fishy, but it's not totally without merit and IMO deserves investigation. If we are all worried about our jobs, even via building custom for-sale software, there is likely something there that may obviate the need at least for end-user applications. Again, I'm deeply skeptical, but it is interesting.
overfeed | a day ago
Running a proprietary model would make you subject to whatever ToS the LLM companies choose on a particular day, including what you can produce with them, which circles back to the raison d'être for the GPL and GNU.
Until all software copyright is dead and buried, there is no need for copyleft to change tack. Otherwise the rising tide may rise high enough to drown the GPL, but not proprietary software.
Open source is easier to counterfeit/license-launder/re-implement using LLMs because source code is much lower-hanging fruit, and is understood by more people than closed-source assembly.
tmp10423288442 | 23 hours ago
When the FSF and GPL were created, I don't think this was really a consideration. They were perfectly happy with requiring Big Iron Unix or an esoteric Lisp Machine to use the software - they just wanted to have the ability to customize and distribute fixes and enhancements to it.
Aozora7 | 22 hours ago
Right now, we can get local models that you can run on consumer hardware, that match capabilities of state of the art models from two years ago. The improvements to model architecture may or may not maintain the same pace in the future, but we will get a local equivalent to Opus 4.6 or whatever other benchmark of "good enough" you have, in the foreseeable future.
cubefox | a day ago
A court ordered the first Nosferatu movie to be destroyed because it had too many similarities to Dracula. Despite the fact that the movie makes rather large deviations from the original.
If Claude was indeed asked to reimplement the existing codebase, just in Rust and a bit optimized, that could well be a copyright violation. Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.
cubefox | a day ago
Allegedly. There have been several people who doubted this story. So how to find out who is right? Well, just let Claude compare the sources. Coincidentally, Claude Opus 4.6 doesn't just score 75.6% on SWE-bench Verified but also 90.2% on BigLaw Bench.
It's like our copyright lawyer is conveniently also a developer. And possibly identical to the AI that carried out the rewrite/reimplementation in question in the first place.
Marsymars | 23 hours ago
There is some precedent for this, e.g. Alchemised is a recent best seller that had just enough changed from its Harry Potter fan fiction source in order to avoid copyright infringement: https://en.wikipedia.org/wiki/Alchemised
(I avoided the term “remove copyright” here because the new work is still under copyright, just not Harry Potter - related copyright.)
Marsymars | 16 hours ago
Translations are pretty much the textbook example of a derivative work in copyright.
Your jurisdiction may vary, of course, but it's pretty well established in mine (Canada) that "plot" is an idea, and can't be copyrighted, only the expression of the idea (e.g. the written novel) falls under copyright.
webstrand | a day ago
Reducing it to "well you can clone the proprietary software you're forced to use by LLM" is really missing the soul of the GPL.
dathinab | a day ago
it's not that simple
yes, the GPL's origins have the idea of "everyone should be able to use"
but it is also about attributing the original author
and making sure people can't just de-facto "seize public goods"
this kind of AI usage removes attribution and often seizes public goods in a way far worse than most companies which just ignored the license did
so today there is more need than ever in the last few decades for GPL-like licenses
johnofthesea | a day ago
Is this LLM thing freely available or is it owned and controlled by these companies? Are we going to rent the tools to fight "evil software corporations"?
josephg | 23 hours ago
It's probably only a matter of time before open models are as good as Claude Code is today.
lkjdsklf | 19 hours ago
A year ago, the "state of the art" models were total turds. So this isn't exactly good news
Not to mention the performance of local LLMs makes them utterly unusable unless you have multiple tens of thousands to invest in hardware (and that was before the recent price spike). If you're using commodity hardware, they're just awful to use.
thomastjeffery | a day ago
Generative models (AI) are not really eroding copyright. They are calling its bluff. The very notion of intellectual property depends on a property line: some arbitrary boundary where the property begins and ends. Generative models blur that line, making it impractical to distinguish which property belongs to whom.
Ironically, these models are made by giant monopolistic corporations whose wealth is quite literally a market valuation (stock price) of their copyrights! If generative models ever become good enough to reimplement CUDA, what value will NVIDIA have left?
The reality is that generative models are nowhere near good enough to actually call the bluff. Copyright is still the winning hand, and that is likely to continue, particularly while IP holders are the primary authors of law.
---
This whole situation is missing the forest for the trees. Intellectual Property is bullshit. A system predicated on monopoly power can only result in consolidated wealth driving the consolidation of power; which is precisely what has happened. The words "starving artist" ring every bit as familiar today as any time in history. Copyright has utterly failed the very goals it was explicitly written with.
It isn't the GPL that needs changing. So long as a system of copyright rules the land, copyleft is the best way to participate. What we really need is a cohesive political movement against monopoly power; one that isn't conveniently ignorant of copyright as its most significant source.
re-thc | a day ago
At the moment it's people that are eroding copyright. E.g. in this case someone did something.
"AI" didn't grow a brain, wake up, and suddenly decide to do it.
Realistically this has nothing to do with AI. Having a gun doesn't mean you randomly shoot.
Peritract | a day ago
LLMs are one of the primary manifestations of 'evil software corporations' currently.
wolvesechoes | a day ago
Unless it is IP of the same big corpos that consumed all content available. Good luck with eroding them.
sharkjacobs | a day ago
This feels sort of like saying "I just blindly threw paint at that canvas on the wall, and it came out in the shape of Mickey Mouse, so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse".
Blanchard is, of course, familiar with the source code, he's been its maintainer for years. The premise is that he prompted Claude to reimplement it, without using his own knowledge of it to direct or steer.
re-thc | a day ago
> He fed only the API and the test suite to Claude and asked it
Difference being, Claude looked; so not blind. The equivalent is more like I blindly took a photo of it and then used that to...
Technically, it did look.
amarant | a day ago
What he claimed, and what was interesting, was that Claude didn't look at the code, only the API and the test suite. The new implementation is all Claude. And the implementation is different enough to be considered original: completely different structure and design, and hey, a 48x improvement in performance! It's just API-compatible with the original, which, as per the Google v. Oracle 2021 decision, is to be considered fair use.
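To make "API-compatible but a different implementation" concrete, here is a hypothetical miniature: two functions with the same signature and observable behavior but different internals, so the same test suite passes against either:

```python
def detect_v1(data: bytes) -> dict:
    """'Original': delegates to the codec machinery."""
    try:
        data.decode('ascii')
        return {'encoding': 'ascii', 'confidence': 1.0}
    except UnicodeDecodeError:
        return {'encoding': None, 'confidence': 0.0}

def detect_v2(data: bytes) -> dict:
    """'Reimplementation': a manual byte scan, same API and results."""
    if all(b < 0x80 for b in data):
        return {'encoding': 'ascii', 'confidence': 1.0}
    return {'encoding': None, 'confidence': 0.0}
```

Whether sharing only this behavioral surface escapes the original's license is exactly what is disputed in this thread.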
re-thc | a day ago
Who opened the PR? Who co-authored the commits? It's clearly on GitHub.
> Blanchard was a chardet maintainer for years. Of course he had looked at its code!
So there you have it. He looked, so he co-authored, and then there's that.
kjksf | a day ago
Blanchard is very clear that he didn't write a single line of code. He isn't an author, he isn't a co-author.
Signing a GitHub commit doesn't change that.
re-thc | a day ago
He used Claude to write it. What's the difference? Does the fact that I wrote it in a notepad vs. printed it out mean I didn't do it?
> Signing GitHub commit doesn't change that.
That's the equivalent of me saying I didn't kill anyone. The fingerprints on the knife don't change that.
re-thc | 12 hours ago
I did say co-author, didn't I? Even if you added 0.000000001% to something, you did so, technically, yes.
> By your logic I did apparently
If you take someone's email and forward it, did you write that email? Instead of debating that, imagine you took a trojan email and forwarded it to someone and they opened it: do you think you'd be held up in any way?
dathinab | a day ago
I would argue it's irrelevant whether they looked or didn't look at the code, as well as whether he was or wasn't familiar with it.
What matters is that they fed the original code into a tool which they set up to make a copy of it. How that tool works doesn't really matter. Neither does it make a difference if you obfuscate that it's a copy.
If I blindfold myself when making copies of books with a book scanner + printer I'm still engaging in copyright infringement.
If AI is a tool, that should hold.
If it isn't "just" a tool, then it did engage in copyright infringement (as it created the new output side by side with the original) in the same way an employee might do so on command of their boss. Which still makes the boss/company liable for copyright infringement and in general just because you weren't the one who created an infringing product doesn't mean you aren't more or less as liable of distributing it, as if you had done so.
nicole_express | a day ago
I'm not sure how you square the circle of "it's alright to use the LLM to write code, unless the code is a rewrite of an open source project to change its license".
JoshTriplett | 23 hours ago
> I'm not sure how you square the circle of "it's alright to use the LLM to write code
You seem like you're on the cusp of stating the obvious correct conclusion: it isn't.
wizzwizz4 | a day ago
> But how far away from direct and explicit representations do we have to go before copyright no longer applies?
bmcahren | a day ago
Then onto prompting: 'He fed only the API and (his) test suite to Claude'
This is Google v Oracle all over again - are APIs copyrightable?
satvikpendem | a day ago
Yes this is the best way to ask the question. If I take a public facing API and reimplement everything, whether it's by human or machine, it should be sufficient. After all, that's what Google did, and it's not like their engineers never read a single line of the Java source code. Even in "clean room" implementations, a human might still have remembered or recalled a previous implementation of some function they had encountered before.
azakai | 22 hours ago
About this specific point, it is unclear how much of a defect memorization actually is - there are also reasons to see it as necessary for effective learning. This link explains it well:
https://infinitefaculty.substack.com/p/memorization-vs-gener...
tw1984 | 16 hours ago
No, it is completely different.
Claude was trained on chardet, anything built by Claude would fail the clean-room reimplementation test.
satvikpendem | a day ago
That's your opinion (since you said "IMO"), not the actual legal definition.
yorwba | a day ago
So when you clone the behavior of a program like chardet without referencing the original source code except by executing it to make sure your clone produces exactly the same output, you may still be infringing its copyright if that output reflects creative choices made in the design of chardet that aren't fully determined by the functional purpose of the program.
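That "execute it and compare outputs" approach to cloning can be sketched as a black-box harness; `reference` and `clone` here are hypothetical stand-ins for the original program and the reimplementation:

```python
def behaviors_match(reference, clone, probes) -> bool:
    """True if the clone reproduces the reference's output on every probe input."""
    return all(reference(x) == clone(x) for x in probes)

# Trivial stand-ins: both decode bytes the same way.
reference = lambda b: b.decode('utf-8', errors='replace')
clone = lambda b: b.decode('utf-8', errors='replace')
```

Passing such a harness shows functional equivalence only; as the comment notes, equal outputs can still embody the original's creative choices.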
margalabargala | a day ago
Copyright infringement is a thing humans do. It's not a human.
Just like how the photos taken by a monkey with a camera have no copyright. Human law binds humans.
margalabargala | a day ago
If we are saying AI is "more than a tool", which seems to be the case courts are leaning since they've ruled AI output without direct human involvement is not copyrightable[0], then the above seems like it would be entirely legal.
[0] https://www.copyright.gov/newsnet/2025/1060.html
Ekaros | 22 hours ago
Even if the final output doesn't have copyright protection, it might still be a copyright violation. It seems reasonable that a work could infringe copyright when distributed even though the work itself is not eligible for copyright.
Legend2440 | a day ago
Well, no. They fed the spec (test cases, etc) into a tool which made a new program matching the spec. This is not a copy of the original code.
But this also feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can make code to spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.
timeinput | a day ago
foresto | 23 hours ago
Yes...
> and Anthropic fed the code to the tool,
Presumably, as part of the massive amount of open-source code that must have been fed in to train their model.
> so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.
This is meant as irony, right?
vbarrielle | 22 hours ago
derangedHorse | 17 hours ago
vbarrielle | 11 hours ago
logicprog | a day ago
sarchertech | a day ago
Anything you put out can and will be used by whatever giant company wants to use it with no attribution whatsoever.
Doesn’t that massively reduce the incentive to release the source of anything ever?
pocksuppet | a day ago
intrasight | a day ago
The non-IP protection has largely been the effort involved in replicating an application's behavior, and that effort is dropping precipitously.
sarchertech | 22 hours ago
intrasight | 21 hours ago
satvikpendem | a day ago
It's the same question as: if an AI can generate "art", or photographers can capture a scene better than any (realistic) painter, will people still create art? Obviously yes, and we've seen it in the three years since Stable Diffusion was released: people are still creating.
sarchertech | 22 hours ago
So ignoring people who are being paid by corporations directly to work on open source, in my experience the vast majority of contributors expect to be able to monetize their work eventually in a way that requires attribution. And out of the small number who don’t expect a monetary return of any kind, a still smaller number don’t expect recognition.
If this weren’t the case you’d see a much larger amount of anonymous contributions. There are people who anonymously donate to charity. The vast majority want some kind of recognition.
Obviously we still see art, but if you greatly reduce the monetary benefit of producing art, you'll see a lot less of it. This is especially true of nontrivial open source software, which unlike static artwork requires continual maintenance.
joshjob42 | 20 hours ago
So I'm not sure it matters whether a giant company uses it because random users can get the same thing for ~ free anyway.
sarchertech | 15 hours ago
atomicnumber3 | a day ago
jpc0 | a day ago
Suppose I know it is legal to make a turn at a red light, and I know a court would uphold that I was in the right, but a police officer will fine me regardless and I would need to pursue some legal remedy. I'm unlikely to make the turn even though it's legal, because doing so is expensive, if not in money then in time.
In the case of copyright lawsuits they are notoriously expensive and long so even if a court would eventually deem it fine, why take the chance.
atomicnumber3 | 23 hours ago
sunshowers | a day ago
simonw | a day ago
esafak | a day ago
amarant | a day ago
Aurornis | a day ago
My understanding was that his claim was that Claude was not looking at the existing source code while writing it.
pklausler | a day ago
mrgoldenbrown | a day ago
duskdozer | 17 hours ago
He would have had a better argument if he created a matching spec from scratch using randomized names.
SpicyLemonZest | a day ago
babypuncher | a day ago
This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.
I think this also helps somewhat with the ethical quandary of these models being trained on public data while contributing nothing of value back to the public, and it disincentivizes the production of slop for profit.
kjksf | a day ago
https://www.carltonfields.com/insights/publications/2025/no-...
> No Copyright Protection for AI-Assisted Creations: Thaler v. Perlmutter
> A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law
pseudalopex | a day ago
This was AI summary? Those words were not in the article.
The courts said Thaler could not have copyright because he refused to list himself as an author.
idle_zealot | a day ago
That's not true at all. Anyone could follow these steps:
1. Have the LLM rewrite GPL code.
2. Do not publish that public domain code. You have no obligation to.
3. Make a few tweaks to that code.
4. Publish a compiled binary/use your code to host a service under a proprietary license of your choice.
axus | a day ago
In this case, we could theoretically prove that the new chardet is a clean reimplementation. Blanchard can provide all of the prompts necessary to re-implement again, and for the cost of the tokens anyone can reproduce the results.
NewsaHackO | 23 hours ago
IANAL, but that analogy wouldn't work because Mickey Mouse is a trademark, so it doesn't matter how it is created.
throwaway2027 | a day ago
observationist | a day ago
AI will destroy the current paradigm, completely and utterly, and there's nothing they can do to stop it. It's unclear if they can even slow it, and that's a good thing.
We will be forced to legislate a modern, digital oriented copyright system that's fair and compatible with AI. If producing any software becomes a matter of asking a machine to produce it - if things like AI native operating systems come about, where apps and media are generated on demand, with protocols as backbone, and each device is just generating its own scaffolding around the protocols - then nearly none of modern licensing, copyright, software patents, or IP conventions make any sense whatsoever.
You can't have horse-and-buggy traffic conventions for airplanes. We're moving into a whole new paradigm, and maybe we can get legislation that actually benefits society and individuals, instead of propping up massive corporations and making lawyers rich.
casey2 | a day ago
If corporations are allowed to launder someone else's work as their own, people will simply stop working and just start endlessly remixing, a la popular music.
throawayonthe | a day ago
- proprietary
- free
- slop-licensed
software?
megous | 23 hours ago
mfabbri77 | a day ago
One thing is certain, however: copyleft licenses will disappear. If I can't control the redistribution of my code (through a GPL or similar license), I choose to develop it in closed source.
bigyabai | a day ago
dwroberts | a day ago
The answer to that, I think, is that the authors wanted to squat an existing successful project and gain a platform from it. Hence we have news cycle discussing it.
Nobody cares about a new library using AI, but squat an existing one with this stuff and you get attention. It's the reputation, the GitHub stars, whatever.
nicole_express | a day ago
Honestly it's a weird test case for this sort of thing. I don't think you'd see an equivalent in most open source projects.
intrasight | a day ago
delichon | a day ago
moi2388 | a day ago
vladms | a day ago
I did not study in detail if copyright "has always been nonsense", but I do agree that nowadays some of the copyright regulations are nonsense (for example the very long duration of life + 70 years)
intrasight | a day ago
I think the industry will realize that it made a huge mistake by leaning on copyright for protection rather than on patents.
joshmoody24 | a day ago
mbgerring | 23 hours ago
The idea that "information wants to be free" was always a lie, meant to transfer value from creators to platform owners. The result of that has been disastrous, and it's long past time to push the pendulum in the other direction.
logicprog | a day ago
This argument makes no sense. Are they arguing that because Vercel, specifically, had this attitude, it is the attitude necessitated by AI, by reimplementation, and by those who favor more permissive licenses? That certainly doesn't seem an accurate way to summarize what antirez or Ronacher believe. In fact, under the legal and ethical frameworks (respectively) that those two put forward, Vercel has no right to claim that position and no way to enforce it, so it seems very strange to assert that this sort of thing would be the practical result of AI reimplementations. This just points to the hypocrisy of one particular company while assuming that would be the inevitable, universal attitude and result, when there's no evidence to think so.
It's ironic, because antirez literally addresses this specific argument. They completely miss that much of his blog post is not just about legal matters but about ethical ones. Specifically, the idea he puts forward is that yes, corporations can do these kinds of rewrites now, but they always had the resources and manpower to do so anyway. What's different now is that individuals can do this kind of rewrite when they never had the ability before, and the vector of such a rewrite can run from permissive to copyleft, or even from decompiled proprietary code to permissive or copyleft. That it hasn't so far is more a product of the fact that most people really hate copyleft, find it annoying, and have been drifting away from it for decades, not that the tactic can't be used that way. I think that's one of the big points he's trying to make with his GNU comparison. It's not just that if it was legal for GNU to do it, then it's legal for you to do it with AI. It's not even just the fundamental libertarian ethical axiom (which I agree with for the most part) that such a rewrite should remain legal in either direction, because the rules we enforce with violence in our society should offer a level playing field where we judge the action itself, not whether we like its consequences. It's specifically that if GNU did it once, it can be done again, even in the same direction, and now far more easily with AI.
antirez | a day ago
Honestly I was confused about the summarization of my blog post into just a legal matter as well. I hope my blog post will spend at least a short time on the HN front page so that the actual arguments it contains will get a bit more exposure.
Talanes | a day ago
throwaway2027 | a day ago
intrasight | a day ago
drnick1 | a day ago
wolvesechoes | a day ago
Everything for memory safety.
phendrenad2 | 20 hours ago
bananamogul | 17 hours ago
Within a relatively short time frame, expect everything in your Linux distro other than the kernel to be MIT-licensed because everything that is FSF-maintained will be rewritten in Rust with the MIT license.
The kernel will then be next, though it'll take a longer timeframe.
The GPL just didn't win in the marketplace of ideas.
wolvesechoes | 12 hours ago
Stallman's proposal is the opposite of ideology; it is a conscious political project. And thus it is failing.
nicole_express | a day ago
I like the article's point of legal vs. legitimate here, though; copyright is actually something of a strange animal to use to protect source code, it was just the most convenient pre-existing framework to shove it in.
dathinab | a day ago
which is the actual relevant part: they didn't do that dance, AFAIK
AI is a tool; they set it up to make a non-verbatim copy of a program.
Then they fed it the original software (AFAIK).
Which makes it a side-by-side copy: the original source was used as a reference to create the new program. That tends to be seen as a derived work even if the result is very different.
IMHO they would have to:
1. Create a specification of the software _without looking at the source code_, i.e. by behavior observation (plus an interface description). You give the AI access to running the program, but not to its insides. I really don't think they did this, as even with AI it's a huge pain: you normally can't brute-force all combinations of inputs, and instead need a model=>test=>refine loop (which AI can do, but it can take long and get stuck, so you want it human-assisted, and the human can't have inside knowledge of the program).
2. Then generate a new program from the specification, and only from it. No git history, no original source code access, no program access, no shared AI state or anything like that.
Also, for the extra mile of legal risk avoidance, do both steps human-assisted and use unrelated third parties without inside knowledge for both steps.
While this majorly cuts the cost of a clean-room approach, it still isn't cost-free. And it's still a legal minefield if done by a single person, especially one familiar enough to potentially remember specific pieces of code verbatim.
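The two-step separation described above (a spec built by observation, then an implementation judged against that spec alone) can be sketched in a few lines. Everything here is an invented toy for illustration, not what Anthropic or Blanchard actually did:

```python
def record_spec(black_box, probe_inputs):
    """Step 1: observe behavior only. The original program may be
    *run* on probe inputs, but its source is never read."""
    return [(inp, black_box(inp)) for inp in probe_inputs]

def conforms(candidate, spec):
    """Step 2: judge a fresh implementation against the recorded
    spec alone -- no git history, no original source access."""
    return all(candidate(inp) == expected for inp, expected in spec)

# Hypothetical black box: uppercases its input.
black_box = lambda s: s.upper()
spec = record_spec(black_box, ["abc", "Mixed", ""])

# Candidate written from the spec only, by a party who never saw black_box.
candidate = lambda s: "".join(c.upper() for c in s)
print(conforms(candidate, spec))  # True
```

The hard part the comment identifies lives entirely in step 1: choosing `probe_inputs` well enough that the spec actually pins down the behavior, which is why the loop needs iteration rather than brute force.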
nicole_express | a day ago
So my understanding was that the original code was specifically not fed into Claude. But was almost certainly part of its training data, which complicates things, but if that's fair use then it's not relevant? If training's not fair use and taints the output, then new-chardet is a derivative of a lot of things, not just old-chardet...
This is all new legal ground. I'm not sure anyone will go to court over chardet, but something that's an actual money-maker, or an FSF flagship project like readline, well, that's a lot more likely.
minimaltom | 20 hours ago
> But was almost certainly part of its training data, which complicates things
On this point specifically, my read of the Anthropic lawsuit was that one of the precedents set was: if a model trains on something but does not regurgitate it, it's fair use? Might help the argument that it was clean-room, but ¯\_(ツ)_/¯
RaffaelCH | a day ago
My understanding is they did do the dance. From the article: "He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch."
One could still make the argument that using the test suite was a critical contributing factor, but it is not a part of the resulting library. So in my uninformed opinion, it seems to me like the clean room argument does apply.
minimaltom | 21 hours ago
(1): My understanding was that a party _with access to copyrighted material_ made the functional spec, which was communicated to a party without access [1]. Under my understanding, there's no requirement for the authors of the functional spec to be 'clean'.
(2) Afaict, they limited the AI to access of just the functional spec and audited that it did not see the original source.
Edit: Not sure if sharing the 'test suite' matters, probably something for the courts in the unlikely event this ever gets there.
[1] Following the definition of clean-room reimplementation as it relates to US precedent, i.e. that described in the Wikipedia page.
grahamlee | a day ago
anonymous_sorry | a day ago
armchairhacker | a day ago
grahamlee | a day ago
enriquto | a day ago
no, it isn't. The point of the GPL is to grant users of the software four basic freedoms (run, study, modify and redistribute). There's no restriction to distribution per se, other than disallowing the removal of these freedoms to other users.
largbae | a day ago
1. The cost continues to trend to 0, and _all_ software loses value and becomes immediately replaceable. In this world, proprietary, copyleft and permissive licenses do not matter, as I can simply have my AI reimplement whatever I want and not distribute it at all.
2. The coding cost reduction is all some temporary mirage, to be ended soon by drying VC money/rising inference costs, regulatory barriers, etc. In that world we should be reimplementing everything we can as copyleft while the inferencing is good.
anonymous_sorry | a day ago
dathinab | a day ago
but AI-assisted code has an author, and claiming code is merely AI-assisted, even if it is fully AI-built, is trivial (if you don't make it public that you didn't do anything)
also, some countries have laws that treat the AI like a tool, in the sense that the one who used it is the author by default, AFAIK
aoeusnth1 | 5 hours ago
sarchertech | a day ago
casey2 | a day ago
largbae | a day ago
1. An LLM recreating a piece of software violates its copyright and is illegal, in which case LLM output can never be legally used because someone somewhere probably has a copyright on some portion of any software that an LLM could write.
2. You read my example as "copying a project without distributing it", vs. "having an LLM write the same functionality just for me"
beepbooptheory | a day ago
More and more I am drawn to these kinds of ideas lately, perhaps as a kind of ethical sidestep, but still:
- https://wiki.xxiivv.com/site/permacomputing.html
- https://permacomputing.net/
It's not going to solve any general issue here, but the one thing these freaks need that can't be generated by their models is energy, tons of it. So, the one thing I can do as an individual and in my (digital) community is work to be, in a word, self-sustainable. And depending on my company I guess, if I was a CEO I would hope I was wise enough to be thinking on the same lines.
Everyone is making beautiful mountains from paper and wire. I will just be happy to make a small dollhouse of stone; I think it will be worth it. How can we not see at least some small level of hubris otherwise?
rstuart4133 | 10 hours ago
There would be no GPL if anybody could have cheaply and trivially reproduced the software for the printers and Lisp machines Stallman was denied access to. There is no reason to force someone to give you the source code if it takes no effort to reproduce.
Mind you, that isn't what happened here. The effort involved in getting a LLM to write software comes from three things: writing a clear unambiguous spec that also gives you a clean exported API, more clean unambiguous specs for the APIs you use, and a test suite the LLM can use to verify it has implemented the exported API correctly. Dan got them all for free, from the previous implementation which I'm sure included good documentation. That means his contribution to this new code consisted of little more than pressing the button.
Sadly, if you wrote some GPL software with excellent documentation, a thorough test suite, a clean API, and an implementation built on well-understood libraries, the cost of creating a clean-room reproduction has indeed gone to near zero over the past 24 months. The GPL licence is irrelevant.
Welcome to the brave new world.
PS: Sqlite keeping their test suite proprietary is looking like a prescient masterstroke.
PPS: The recent ruling that an API isn't copyrightable just took on a whole new dimension.
t43562 | a day ago
I'm glad we can fork things at a point and thumb our noses at those who wish to cash in on other's work.
warkdarrior | a day ago
t43562 | a day ago
wvenable | 20 hours ago
righthand | a day ago
sayrer | a day ago
That's what something like AGPL does.
kazinator | a day ago
Think about it; the license says that copies of the work must be reproduced with the copyright notice and licensing clauses intact. Why would anyone obey that, knowing it came from AI?
Countless instances of such licenses were ignored in the training data.
moralestapia | a day ago
harshreality | a day ago
A lego sculpture is copyrighted. Lego blocks are not. The threshold between blocks and sculpture is not well-defined, but if an AI isn't prompted specifically to attempt to mimic an existing work, its output will be safely on the non-copyrighted side of things.
A derivative work is separately copyrightable, but redistribution needs permission from the original author too. Since that usually won't be granted or would be uneconomical, the derivative work can't usually be redistributed.
AI-produced material is inherently not copyrightable, but not because it's a derivative work.
kazinator | 23 hours ago
I dispute the idea that token sequences reproduced from the model are not derived works.
I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
The slop merchants are getting a free ride for the time being.
harshreality | 21 hours ago
As you said, it's lossy. Try it with any other distinctive but non-famous passage, and you won't get a correct prediction for the immediately following clause, much less for multiple sentences or paragraphs.
That's the case even when an LLM correctly identifies which book the prompted text is from. It still won't accurately continue on from some arbitrary passage. By the time you ask it to reproduce hundreds of words, you're into brand new book territory. Even when it's slop content, it's distinct slop.
The exceptions are cases where a significant number of humans would also know a particular quote from memory. Then, chances are, a frontier LLM will too.
You know how else you can reproduce a quote? Search for it on google, and search the resulting top hits; if it's a significant quote, multiple people have probably quoted it -- legally. You can also search a pirate library for the actual book, and search the book for the quote; while illegal, it's very simple to do, so unless you propose to make the free and open internet illegal, I'd suggest that banning LLMs for being "derivative work" creation engines is not so different from destroying the internet.
> I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
If judges have any sense whatsoever, LLM generations (without specific prompt crafting to mimic existing works) will be judged to not be derived works and therefore not be violating copyright, in the same sense that you can live and breathe Taylor Swift's music, create new music in the same style, and still not be violating copyright.
The Stability AI case, and how Judge Orrick deals with it, will be interesting and uninteresting at the same time. It deals primarily with the fact that after specific prompting, an image-generation AI can generate something fairly close to existing copyrighted images. That doesn't say anything more about whether LLMs are inherently producers of [only or primarily] derivative works, just as the fact that a human can violate copyright doesn't say anything about whether humans primarily or exclusively output derivative works.
More likely, perhaps, is that everything will be so infused with LLM output that copyright ceases to be relevant, or forces copyright law to be rewritten from the ground up.
Copyright requirements, even prior to LLMs, weren't well-specified. There's no objective threshold for how close something has to be to a previous work before the new one violates copyright. It's whatever a judge thinks, referring to the 4-factor test but ultimately making subjective judgments about each of those prongs. It's all a house of cards, and LLMs may just be what topples it.
skybrian | a day ago
Copyleft could be seen as an attempt to give Free Software an edge in this competition for users, to counter the increased resources that proprietary systems can often draw on. I think success has been mixed. Sure, Linux won on the server. Open source won for libraries downloaded by language-specific package managers. But there’s a long tail of GPL apps that are not really all that appealing, compared to all the proprietary apps available from app stores.
But if reimplementing software is easy, there’s just going to be a lot more competition from both proprietary and open source software. Software that you can download for free that has better features and is more user-friendly is going to have an advantage.
With coding agents, it’s likely that you’ll be able to modify apps to your own needs more easily, too. Perhaps plugin systems and an AI that can write plugins for you will become the norm?
jacquesm | a day ago
It was due to access.
casey2 | a day ago
If you have software, your test suite is your moat: do development with a test suite, then release under MIT without releasing the tests. Depending on the test suite, sharing it may break clean-room rules, especially for TDD codebases.
strongpigeon | a day ago
It does feel like open source is about to change. My hunch is that commercial open source (beyond the consultation model) risks disappearing. Though I'd be happy to be proven wrong.
kccqzy | a day ago
That’s just your subjective opinion, with which many other people would disagree. I bet Armin Ronacher would say that an MIT-licensed library is even freer than an LGPL-licensed library. To them, the vector is running from free to freer.
bjt | a day ago
This is an interesting reversal in itself. If you make the specification protected under copyright, then the whole practice of clean room implementations is invalid.
dleslie | a day ago
There was an issue where Google did something similar with the JVM, and ultimately it came down to whether or not Oracle owned the copyright to the header files containing the API. It went all the way to the US Supreme Court, which ruled in Google's favour, finding that the API wasn't the implementation and that the amount of shared code was so minimal as to be irrelevant.
They didn't anticipate that in less than half a decade we'd have technology that could _rapidly_ reimplement software given a strong functional definition and contract enforcing test suite.
mwkaufma | a day ago
ineedasername | a day ago
The fundamental problem is that once you take something outside the realm of law, and of the rule of law in its many facets as the legitimizing principle, you have to go a whole lot further to be coherent and consistent.
You can’t just leave things floating in a few ambiguous things you don’t like and feel “off” to you in some way- not if you’re trying to bring some clarity to your own thoughts, much less others. You don’t have to land on a conclusion either. By all means chew over things, but once you try to settle, things fall apart if you haven’t done the harder work of replacing the framework of law with that of another conceptual structure.
You need to at least be asking: to what ends? What purpose is served by the rule? Otherwise you end up arguing backwards half the time, defending the maintenance of the rule itself with justifications pulled in from ever further afield whenever the rule is questioned and edge cases are reached. If you're asking, essentially, "is the spirit of the rule still there?", you've got to stop and fill in what that spirit is, or else people who want to control you or have an agenda will sweep in with their own language and fill the void to their own ends.
kelseyfrog | a day ago
Sec has a deny by default policy. Eng has a use-more-AI policy. Any code written in-house is accepted by default. You can see where this is going.
We've been using AI to reimplement tooling that security won't approve. The incentives conspired to produce the worst outcome, yet here we are. If you want a different outcome, you need to create different incentives.
kemitchell | 22 hours ago
There is a fundamental corpo-cognitive dissonance, to boot. If "AI" is cheap enough and good enough to implement security-relevant software from `git init` repeatedly, why isn't it also cheap enough and good enough to assess and approve the security of third-party software at pace with internal adoption? Is there some basis to believe LLMs' leverage on production differs from their leverage on analysis of existing code?
ticulatedspline | a day ago
It also doesn't touch the far more interesting philosophical question: does what Blanchard did cover ALL implementations from Claude? If anyone did exactly what he did, fed it the test cases and said "re-implement from scratch", one would ostensibly expect the results to be largely similar (technically, under the right conditions, deterministically similar).
Could you then fork the project under your own name and a commercial license? When you use an LLM like this, to do basically what anyone else could ask it to do, how do you attach any license to it? Is it first come, first served?
If an agent is acting mostly on its own, it feels like finding a copy of Harry Potter in the fictional library of Babel: you didn't write it, just found it amongst the infinite library. But if you found it first, could you block everyone else who stumbles on a near-identical copy elsewhere in the library? Or does each found copy represent a "re-implementation" that could be individually copyrighted?
danbruc | a day ago
PaulDavisThe1st | a day ago
If he is claiming to have been somehow substantively "enough" involved to make the code copyrightable, then his own familiarity with the previous LGPL implementation makes the new one almost certainly a derivative of the original.
sigmar | a day ago
The "clean room rewrite" is just an extreme way to have a bulletproof shield against litigation. Not doing it that way doesn't automatically make all new code he writes derivative solely because he saw how the code worked previously.
PaulDavisThe1st | 23 hours ago
And if he was in fact more involved (which he appears to deny), then it's a bit weak to claim that someone with huge familiarity with chardet could choose to reimplement chardet without the result being derivative.
serial_dev | 23 hours ago
vbarrielle | 22 hours ago
heavyset_go | 15 hours ago
hexyl_C_gut | a day ago
AndriyKunitsyn | a day ago
ddellacosta | a day ago
tmp10423288442 | 22 hours ago
Looks like Wikipedia has an example of Traditional Chinese vertical layout with the Latin letters rotated as in TFA's layout (https://en.wikipedia.org/wiki/Horizontal_and_vertical_writin...)
Khaine | a day ago
krater23 | 8 hours ago
mh2266 | a day ago
moralestapia | a day ago
svilen_dobrev | a day ago
But what happens with the new things? Has the era of software-making (or creating things at large) finished, and from now on everything will be re-(gurgitated|implemented|polished) old stuff?
Or does it all go back to proprietary everything, Babylon-tower style, where no one talks to anyone?
edit: another view - is open-source from now on only for resume-building? "see-what-i've-built" style
t43562 | 23 hours ago
So of course we feel that something wrong has happened even if it's not easy to put one's finger on it.
zmmmmm | 23 hours ago
So: once it's not "hard" any more, does IP even make sense at all? Why grant monopoly rights to something that required little to no investment in the first place? Even with vestigial IP law (say, patents), a patent just becomes an input parameter: the AI works around patents like any other constraint.
palmotea | 23 hours ago
I think it still does: IIRC, the current legal situation is that AI output does not qualify for IP protections (at least not without substantial later human modification). IP protections are reserved solely for human work.
And I'm fine with that: if a person put in the work, they should have protections so their stuff can't be ripped off for free by all the wealthy major corporations that find some use for it. Otherwise: who cares about the LLMs.
robmccoll | 22 hours ago
palmotea | 22 hours ago
Then fix that instead of blowing it up. Because IP law is also literally the only thing that protects the little guy's work in many cases.
Arguments like yours are kinda unfathomably incomplete to me, almost like they're the remnants of some propaganda campaign. It's constructed to appeal to the defense of the little guy, but the actual effect would be to disempower him and further empower the wealthy major corporations with "big enough warchest[s]."
I mean, one thing I think the RIAA would love is to stop paying royalties to every artist ever. And the only thing they'd be worried about is an even bigger fish (like Amazon, Apple, or Spotify) no longer paying royalties to them. But as you said, they have a big enough war chest that they probably could force a deal somehow. All the artists without a war chest? Left out in the cold.
_aavaa_ | 21 hours ago
esrauch | 20 hours ago
It definitely does some of both, and we have no obvious measure or counterfactual to know otherwise.
You also have to take into account not just if optimal reform or optimal dismantle is better, but the realistic likelihood of each, and the risk of the bad outcomes from each.
Protecting even more conceptual product ideas seems very likely to become a tool for the big guys only; it's patents on crack, and patents are already almost exclusively a "big guy crushes small guy" tool, whereas copyright is at least debatably mixed.
palmotea | 15 hours ago
It's super obvious, unless your perspective basically stems from someone who was mad they couldn't BitTorrent a ton of movies.
I mean, FFS, copyright is the literal foundation for open source licenses like the GPL.
My sense is a lot of the radically anti-IP fervor ultimately stems from people who were outraged they could be sued for seeding an MP3 (though it's accreted other complaints to justify that initial impulse, and it's likely some were indoctrinated by secondary argumentation somewhat removed from the core impulse).
That's not to say that there are not actors who abuse IP or there aren't meaningful reforms that could be done, but the "burn it all down" impulse is not thought through.
esrauch | 13 hours ago
Yes, it was a genius move for copyleft to use copyright to achieve its goal; the name literally reflects the judo involved. But copyleft licenses also have a lot of benefits for big companies, so it's not strictly a David vs. Goliath victory.
I don't think it's a commonly held belief that copyright benefits small YouTube creators more than it hurts them, as a concrete example. They seem to live in constant fear of being destroyed in an asymmetrical system where a copyright claim can take away their livelihood at any moment, while doing nothing to meaningfully protect it.
jph00 | 12 hours ago
jbergqvist | 22 hours ago
reverius42 | 21 hours ago
eru | 20 hours ago
Because some photographer somewhere can claim to have put in a lot of effort, we all get IP protection for photographs by default.
reverius42 | 17 hours ago
eru | 16 hours ago
shagie | 16 hours ago
https://en.wikipedia.org/wiki/Sweat_of_the_brow
https://en.wikipedia.org/wiki/Copyright_law_of_the_United_St...
reverius42 | 15 hours ago
JAlexoid | 13 hours ago
nkmnz | 21 hours ago
I beg to differ. So far, AI output has not entitled the person writing the prompt to IP protections – but my objection is not directed at the "so far"; it's directed at your omission of "the person writing the prompt". If an AI outputs copyrighted material from its training data, that material is still copyrighted. AI is not a magical copyright removal machine.
reverius42 | 21 hours ago
What this means in practice is that (currently), all output of an LLM is legally considered to not be copyrightable (to the extent that it's an original work). If it happens to regurgitate an existing copyrighted work, though, is that infringement? I'm not sure we have a legal precedent on that question yet.
jazzyjackson | 21 hours ago
reverius42 | 20 hours ago
I don't think this means the same thing as whether or not LLM output can infringe on someone else's copyright though (that does pose an interesting question -- can something non-copyrightable in general infringe on something copyrighted?).
nkmnz | 20 hours ago
JAlexoid | 13 hours ago
Muskwalker | 16 hours ago
I believe there are other cases where AI-generated works were found uncopyrightable, but Thaler is not a good example of them.
bandrami | 17 hours ago
scheeseman486 | 10 hours ago
That also applies to generative AI. Pure output may not be copyrightable, but as soon as you do something beyond typing some words and pressing a button (like area-specific infills and paintovers, which involve direct and deliberate choices by a human), the copyrighted human-driven arrangement becomes so deeply intertwined with the generative work that it's effectively inseparable.
rlpb | 21 hours ago
See also: https://en.wikipedia.org/wiki/Sweat_of_the_brow
eru | 20 hours ago
chrischen | 20 hours ago
eru | 20 hours ago
spwa4 | 23 hours ago
Company incorporates GPL code in their product? Never once have courts decided to uphold copyright. HP did that many times. Microsoft got caught doing it. And yet the GPL was never applied to their products. Every time there was an excuse. An inconsistent excuse.
Schoolkid downloads a movie? 30,000 USD per infraction PLUS armed police officer goes in and enforces removal of any movies.
Or take the very subject here. AI training WAS NOT considered fair use when OpenAI violated copyright to train. Same with Anthropic, Google, Microsoft, ... They incorporated Harry Potter and the Linux kernel into ChatGPT, in the model itself. Undeniable. Literally. So even if you accept that it's changed now, OpenAI should still be forced to redistribute the training set, code, and everything needed to run the model for everything they did up to 2020. Needless to say ... courts refused to apply that.
So just apply "the law", right. Courts' judgement of using AI to "remove GPL"? Approved. Using AI to "make the next Disney-style movie"? SEND IN THE ARMY! Whether one or the other violates the law according to rational people? Whatever excuse to avoid that discussion is good enough.
js8 | 23 hours ago
With AI, a similar process is happening - publicly available information becomes enclosed by the model owners. We will probably get a "vestigial" intellectual property in the form of model ownership, and everyone will pay a rent to use it. In fact, companies might start to gatekeep all the information to only their own LLM flavor, which you will be required to use to get to the information. For example, product documentation and datasheets will be only available by talking to their AI.
nradov | 23 hours ago
reverius42 | 22 hours ago
That also seems relevant for this whole discussion, actually -- if a work can't be copyrighted it certainly can't have a changed license, or any license at all. (I guess it's effectively public domain to the extent that it's public at all?)
nradov | 21 hours ago
reverius42 | 21 hours ago
"Lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."
Not eligible for copyright protection does not mean it can be copyrighted "under the human creator's name". It means there is no creative work at all. No copyright.
reverius42 | 21 hours ago
nradov | 20 hours ago
reverius42 | 17 hours ago
zmmmmm | 22 hours ago
I guess the state of play will be that for new drugs the original manufacturer will already have done that and ensured that literally anything that could be found as a workaround is included in the scope of the patent. But I feel like it will not be possible to keep that watertight.
nradov | 22 hours ago
paxys | 21 hours ago
newyankee | 23 hours ago
hyperman1 | 22 hours ago
Patents came along when farmers started making city goods, threatening guild secrets. Copyright came when the printing press made copying and translating the Bible easy and accessible to all. (Trademark admittedly does not fit this view, but doesn't seem all that damaging either.)
To Protect The Arts, and To Time Limit Trade Secrets were just the Protect The Children of old times, a way to confuse people who didn't look too hard at actual consequences.
This means that the future of IP depends on what lets the powers that be pull up the ladder behind them. Long term I'd expect e.g. copyright expansion and harder enforcement, just because cloning by AI gets easy enough to threaten the status quo.
cobbzilla | 22 hours ago
Isn’t trademark the only thing keeping a certain cartoon mouse out of the public domain, despite the fact that his earliest animations are out of copyright? Not sure if you’d consider that damaging, or if anyone has yet tested the boundaries of the House of Mouse’s patience here.
jazzyjackson | 21 hours ago
satvikpendem | 22 hours ago
rfw300 | 22 hours ago
It is entirely possible, however, that human beings will not be the primary drivers of progress on those problems.
paxys | 21 hours ago
A company spends a decade and billions of dollars to develop a groundbreaking drug and patents it.
I think of a cool new character called "Mr Poop" and publish a short story about him with an hour of work.
Both of us get the exact same protection under the law (yes yes I know copyright vs patent etc., but ultimately they are all about IP protection).
AlienRobot | 21 hours ago
eru | 20 hours ago
Copyright might rest on 'creativity is hard'. But patents and trademarks do not.
DonsDiscountGas | 19 hours ago
eru | 17 hours ago
bandrami | 17 hours ago
eru | 12 hours ago
I mean, why are patent trolls not getting patents for all compounds under the sun for all conceivable medical uses?
bandrami | 17 hours ago
keeda | 20 hours ago
godd2 | 20 hours ago
matheusmoreira | 19 hours ago
Sure, it's disgusting and hypocritical how these corporations enshrined all this nonsense into law only to then ignore it all the second LLMs were invented. It's ultimately a good thing though. The model weights are all that matters. All we need to do is wait for the models to hit diminishing returns, then somehow find a way to leak them so that everyone has access. If they refuse, then just force them. By law or by revolution.
treyd | 18 hours ago
I have been saying this for years. Intellectual property is based on the concept that ideas can be owned, which is fundamentally a contradiction with how reality operates. We've been able to write laws that paper over that contradiction by introducing concepts like "fair use", but it doesn't resolve it.
AI is just making the conflict arising out of that contradiction more intense in new ways and forcing us to reckon with it in this new technological landscape. You can follow two perfectly reasonable lines of logic and end up with contradictory solutions. So how are we going to get out of this mess? I don't know, not without rolling back (at least parts of) what intellectual property is in the first place.
kindkang2024 | 18 hours ago
That's the reason I like the idea of DUKI/dju:ki/ — Decentralized Universal Kindness Income, similar to UBI but driven by voluntary kindness and sincere marketing rather than taxation. If AI makes creation trivially easy and IP loses its justification, the question becomes: how do we ensure a tiny part of the wealth generated flows back to everyone?
LelouBil | 17 hours ago
https://www.vice.com/en/article/musicians-algorithmically-ge...
Two musicians generated every possible melody within an octave and published them under Creative Commons Zero.
I never heard about this again though.
gnopgnip | 17 hours ago
Also copyright can protect something normally not eligible when the author chooses what information to include and exclude
Eridrus | 16 hours ago
Not all protections have to be ones that give total control like copyright.
I think it's a mistaken assumption that costs will fall to zero. The low hanging fruit will get picked, and then we'll be doing expensive combined AI/wetlab search for new drugs.
If there is any meaningful headroom we will keep doing expensive things to make progress.
matheusmoreira | 14 hours ago
Then why are corporations allowed to milk successful works for all eternity? Why do we have Disney monopolizing films made half a century ago? Why do we have Nintendo selling people the exact same Mario ROMs from the 80s every single console generation?
They should have like 10 years of copyright so they can turn a profit. Once it expires it's over and the work enters the public domain where it belongs. If they want to keep profiting they should have to keep creating new things. They shouldn't be able to turn shared culture into eternal intellectual property portfolios that they monopolize and then sit on like dragons.
Eridrus | 7 hours ago
I am somewhat curious what you think shortening the copyright window would do that's so great for the culture though. We already have more than enough IP slop that's just licensed.
utopiah | 13 hours ago
Any example of that? So far I haven't seen any but maybe I'm looking at the wrong places.
I've seen a lot of:
- "solving" math proofs that were properly formalized, with often numerous documented past attempts, re-verified by proper mathematicians, without necessarily any interesting results
- haven't seen any designed trust; most of what I've seen was (again, with entire teams of experts behind it) finding slight optimizations
Basically all the outputs I've seen so far have followed existing trends (low-hanging fruit, without any paradigm shift) and were never produced alone, but rather with the models acting as search supports for teams of world-class experts. None of these would qualify, IMHO, as knowledge creation. Whenever such results were published, the publication seemed mostly to be promotion of the workflow itself more than of the actual results. DeepMind seems to be the prime example of that.
PS: for the epistemological distinction you can see a few past comments of mine (e.g. https://news.ycombinator.com/item?id=47011884 )
prohobo | 11 hours ago
In terms of math and biochemistry the cost of generating candidates has collapsed, but the cost of validating them hasn't.
mbgerring | 23 hours ago
No, AI does not mean the end of either copyright or copyleft, it means that the laws need to catch up. And they should, and they will.
munk-a | 23 hours ago
NewsaHackO | 23 hours ago
munk-a | 23 hours ago
NewsaHackO | 22 hours ago
jazzyjackson | 21 hours ago
“”” Section 107 calls for consideration of the following four factors in evaluating a question of fair use:
Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes: Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair. This does not mean, however, that all nonprofit education and noncommercial uses are fair and all commercial uses are not fair; instead, courts will balance the purpose and character of the use against the other factors below. Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.
Nature of the copyrighted work: This factor analyzes the degree to which the work that was used relates to copyright’s purpose of encouraging creative expression. Thus, using a more creative or imaginative work (such as a novel, movie, or song) is less likely to support a claim of a fair use than using a factual work (such as a technical article or news item). In addition, use of an unpublished work is less likely to be considered fair.
Amount and substantiality of the portion used in relation to the copyrighted work as a whole: Under this factor, courts look at both the quantity and quality of the copyrighted material that was used. If the use includes a large portion of the copyrighted work, fair use is less likely to be found; if the use employs only a small amount of copyrighted material, fair use is more likely. That said, some courts have found use of an entire work to be fair under certain circumstances. And in other contexts, using even a small amount of a copyrighted work was determined not to be fair because the selection was an important part—or the “heart”—of the work.
Effect of the use upon the potential market for or value of the copyrighted work: Here, courts review whether, and to what extent, the unlicensed use harms the existing or future market for the copyright owner’s original work. In assessing this factor, courts consider whether the use is hurting the current market for the original work (for example, by displacing sales of the original) and/or whether the use could cause substantial harm if it were to become widespread. “””
https://www.copyright.gov/fair-use/
NewsaHackO | 18 hours ago
crazygringo | 23 hours ago
But agreed that we're waiting for a court case to confirm that. Although really, the main questions for any court cases are not going to be around the principle of fair use itself or whether training is transformative enough (it obviously is), but rather on the specifics:
1) Was any copyrighted material acquired legally (not applicable here), and
2) Is the LLM always providing a unique expression (e.g. not regurgitating books or libraries verbatim)
And in this particular case, they confirmed that the new implementation is 98.7% unique.
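For what it's worth, the article doesn't say how that 98.7% was measured. As a rough sketch of one way such a figure *could* be computed (the function names and snippets below are made up for illustration, not taken from the actual libraries), here's a crude character-level diff ratio:

```python
# Hedged illustration only: a simple line-overlap/uniqueness metric.
# This is NOT the methodology from the article, which doesn't specify one.
import difflib

original = """def detect(data):
    state = start_probing(data)
    return best_guess(state)
"""

reimplementation = """def detect(buf):
    scores = run_probers(buf)
    return pick_winner(scores)
"""

# ratio() returns 0.0 (fully disjoint) .. 1.0 (identical)
matcher = difflib.SequenceMatcher(None, original, reimplementation)
similarity = matcher.ratio()
uniqueness = 1.0 - similarity
print(f"~{uniqueness:.1%} unique by this crude character-level metric")
```

One caveat with any aggregate metric like this: a high "percent unique" can still coexist with long verbatim runs, because the ratio averages over the whole file.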
gspr | 22 hours ago
Some might hold that we've granted persons certain exemptions, on account of them being persons. We do not have to grant machines the same.
> In copyright terms, it's such an extreme transformative use that copyright no longer applies.
Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?
crazygringo | 22 hours ago
No we don't have to, but so far we do, because that's the most legally consistent. If you want to change that, you're going to need to pass new laws that may wind up radically redefining intellectual property.
> Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim?
Of course it has, if the transformation is extreme, as it appears to be here. If I memorize the lyrics to a bunch of love songs, and then write my own love song where every line is new, nobody's going to successfully sue me just because I can sing a bunch of other songs from memory.
Also, it's not even remotely clear that the LLM can produce the training data near-verbatim. Generally it can't, unless it's something that it's been trained on with high levels of repetition.
munk-a | 21 hours ago
> you're going to need to pass new laws that may wind up radically redefining intellectual property
You're correct that this is one route to resolving the situation, but I think it's reasonable to lean more strongly into the original intent of intellectual property law: defending creative work as a way for creators to sustain themselves. That would draw a pretty clear distinction between human creativity and reuse by LLMs.
crazygringo | 21 hours ago
But you're missing the other half of copyright law, which is the original intent to promote the public good.
That's why fair use exists, for the public good. And that's why the main legal argument behind LLM training is fair use -- that the resulting product doesn't compete directly with the originals, and is in the public good.
In other words, if you write an autobiography, you're not losing significant sales because people are asking an LLM about your life.
NewsaHackO | 22 hours ago
I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.
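A back-of-envelope sketch of that point (every number below is an illustrative assumption, not a measurement of any real model):

```python
# Can a model "contain" its training corpus verbatim? Rough arithmetic.
params = 70e9            # hypothetical 70B-parameter model
bytes_per_param = 2      # fp16 weights
model_bytes = params * bytes_per_param          # 140 GB

corpus_tokens = 10e12    # hypothetical 10T-token training corpus
bytes_per_token = 4      # rough average for tokenized English text
corpus_bytes = corpus_tokens * bytes_per_token  # 40 TB

print(f"model:  ~{model_bytes / 1e9:.0f} GB")
print(f"corpus: ~{corpus_bytes / 1e12:.0f} TB")
print(f"corpus is ~{corpus_bytes / model_bytes:.0f}x larger than the weights")
# -> corpus is ~286x larger than the weights
```

Under these assumptions the corpus is a couple of hundred times larger than the weights, so wholesale verbatim storage is impossible; memorization of particular heavily repeated passages is a separate question.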
gspr | 14 hours ago
It doesn't need to for my argument to make sense. It's a problem if it reproduces a single copyrighted work (near)-verbatim. Which we have plenty of examples of.
NewsaHackO | 11 hours ago
gspr | 9 hours ago
NewsaHackO | 4 hours ago
Copyrightest | 22 hours ago
BTW in 2023 I watched ChatGPT spit out hundreds of lines of F# verbatim from my own GitHub. A lot of people had this experience with GitHub Copilot. "98.7% unique" is still a lot of infringement.
satvikpendem | 22 hours ago
crazygringo | 22 hours ago
That's not relevant, because you can still sue the person using the LLM and publishing the repository. Legal liability is completely unchanged.
alterom | 21 hours ago
It's changed completely, from your own example.
If you comission art from an artist who paints a modified copy of Warhol's work, the artist is liable (even if you keep that work private, for personal use).
If you commission it from OpenAI (by sending a query to their ChatGPT API), by your argument, you are the person liable — and OpenAI is off the hook even if that work is distributed further.
I'm not going to argue about the merits of creativity here, or that someone putting a prompt into ChatGPT considers themselves an artist.
That's irrelevant. The work is created on OpenAI servers, by the LLMs hosted there, and is then distributed to whoever wrote the prompt.
Models run locally are distributed by whoever trained them.
If you train a model on whatever data you legally have access to, and produce something for yourself, it's one thing.
Distribution is where things start to get different.
crazygringo | 20 hours ago
Let's distinguish two different scenarios here:
1) Your prompt is copyright-free, but the LLM produces a significant amount of copyrighted content verbatim. Then the LLM is liable, and you too are liable if you redistribute it.
2) Your prompt contains copyrighted data, and the LLM transforms it, and you distribute it. Then if the transformation is not sufficient, you are liable for redistributing it.
The second example is what I'm referring to, since the commercial LLMs are now very good about not reproducing copyrighted content verbatim. And yes, OpenAI is off the hook, from everything I understand legally.
Your example of commissioning an artist is different from LLMs, because the artist is legally responsible for the product and is selling the result to you as a creative human work, whereas an LLM is a software tool and the company is selling access to it. So the better analogy is if you rent a Xerox copier to copy something by Warhol. Xerox is not liable if you try to redistribute that copy. But you are. So here, Xerox=OpenAI. They are not liable for your copyrighted inputs turning into copyrighted outputs.
Copyrightest | 20 hours ago
crazygringo | 18 hours ago
In scenario (1) the LLM is plagiarizing. But that's not the scenario we're discussing. And I already said, this is where the LLM is liable. Whether a user should be too is a different question.
But scenario (2) is what I'm discussing, as I already explained, and it's very possible to tell, because you yourself submitted the copyrighted content. All you need to do is look at whether the output is too similar to the input.
If there's some scenario where you input copyrighted material and it transforms it into different material that is also copyrighted by someone else... that is a pretty unlikely edge case.
alterom | 18 hours ago
It isn't.
One analogy in that case would be going to a FedEx copy center and asking the technician to produce a bunch of copies of something.
They absolve themselves of liability by having you sign a waiver certifying that you have complete rights to the data that serves as input to the machine.
In case of LLMs, that includes the entire training set.
madeofpalk | 22 hours ago
Training an LLM inherently requires making a copy of the work. Even the initial act of loading it from the internet and copying it into memory to then train the LLM is a copy that can be governed by its license and copyright law
crazygringo | 22 hours ago
But that's not relevant here. Because the copyleft license does not prohibit that (and it's not even clear that any license can prohibit it, as courts may confirm it's fair use, as most people are currently assuming). That's why I noted under (1) that it's not applicable here.
munk-a | 21 hours ago
LLM training involves ingesting works (in a potentially transformative process) and partially reproducing them - that's a generally restricted action when it comes to licensing.
crazygringo | 21 hours ago
Sure, but that's not what LLMs generally do, and it's certainly not what they're intended to do.
The LLM companies, and many other people, argue that training falls under fair use. One element of fair use is whether the purpose/character is sufficiently transformative, and transforming texts into weights without even a remote 1-1 correspondence is the transformation.
And this is why LLM companies ensure that partial reproduction doesn't happen during LLM usage, using a kind of copyrighted-text filter as a last check in case anything would unintentionally get through. (And it doesn't even tend to occur in the first place, except when the LLM is trained on a bunch of copies of the same text.)
munk-a | 21 hours ago
crazygringo | 21 hours ago
strogonoff | 12 hours ago
duskdozer | 12 hours ago
cortesoft | 22 hours ago
madeofpalk | 19 hours ago
> The court held that making RAM copies as an essential step in utilizing software was permissible under §117 of the Copyright Act even if they are used for a purpose that the copyright holder did not intend.
https://en.wikipedia.org/wiki/Vault_Corp._v._Quaid_Software_....
kg | 18 hours ago
IIRC this exact argument was made in the Blizzard vs bnetd case, wasn't it? Though I can't find confirmation on whether that argument was rejected or not...
joquarky | 16 hours ago
jazzyjackson | 21 hours ago
If you’ve used copyrighted books and turned them into a free write-a-book machine, you are suddenly using the authors own works against them, in a way that a judge might rule is not very fair.
“ Effect of the use upon the potential market for or value of the copyrighted work: Here, courts review whether, and to what extent, the unlicensed use harms the existing or future market for the copyright owner’s original work. In assessing this factor, courts consider whether the use is hurting the current market for the original work (for example, by displacing sales of the original) and/or whether the use could cause substantial harm if it were to become widespread.”
https://www.copyright.gov/fair-use/
crazygringo | 20 hours ago
This is for the same reason that search results or search snippets aren't deemed to harm creators according to copyright. Yes, there might be some percentage of lost sales. And truly, people may be buying fewer JavaScript tutorial books now that LLMs can teach you JavaScript or write it for you. But the relation is so indirect, there's very little chance a court would accept the argument.
Because what the LLM is doing is reading tons of JavaScript and JavaScript tutorials and resources online, and producing its own transformed JavaScript. And the effect of any single JavaScript tutorial book in its training set is so marginal to the final result, there's no direct effect.
And the reason this makes sense is that it's no different from a teacher reading 20 books on JavaScript and then writing their own that turns out to be a best-seller. Yes, it takes away from the previous best-sellers. But that's fine, because they're not copying any of the previous works directly. They're transforming the facts they learned into a new synthesis.
pessimizer | 18 hours ago
This is just an assertion that you're making. There's no argument here. I'm aware that this is also an assertion that some judges have made.
My claim is that LLMs are not human, therefore when you apply words like "training" to them, you're only doing it metaphorically. It's no more "training" than copying code to a different hard drive is training that hard drive. And it's no more "transformative" than rar'ing or zipping the code, then unzipping it. I can't sell my jpgs of pngs I downloaded from Getty.
I have no idea how LLMs can be considered transformative work that immunizes me from owing the least bit of respect to the source material, but if I sample 2-6 second snatches from 10 different songs, put them through over 9000 filters and blend them into a new work, I owe money to everyone involved. I might even owe money to the people who wrote the filters, depending on the licensing.
> 98.7% unique.
This doesn't mean anything. This is a meaningless arrangement of words. The way we figure out things are piracy is through provenance, not bizarre ad hoc measurements. If I read a book in Spanish and rewrite it in English, it doesn't suddenly become mine even though it's 96.6492387% unique. Not even if I drop a few chapters, add in a couple of my own, and change the ending.
paxys | 21 hours ago
The test for infringement is if the output is transformative enough, and that is what NYT vs OpenAI etc. are arguing.
steve_gh | 14 hours ago
arjie | 23 hours ago
wvenable | 20 hours ago
iberator | 23 hours ago
Add something like this to NEW gpl /bsd/mit licenses:
'you are forbidden from reimplementing it with AI'
or just:
'all clones, reimplementations with AI, etc. must still be GPL'
foresto | 23 hours ago
> He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
From GPL2:
> The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
Is a project's test suite not considered part of its source code? When I make modifications to a project, its test cases are very much a part of that process.
If the test suite is part of this library's source code, and Claude was fed the test suite or interface definition files, is the output not considered a work based on the library under the terms of LGPL 2.1?
tty456 | 23 hours ago
vbarrielle | 23 hours ago
Regarding chardet, I'm not sure "I wanted to circumvent the license" is a good way to argue fair use.
crazygringo | 23 hours ago
Legally, using the tests to help create the reimplementation is fine.
However, it seems possible you can't redistribute the same tests under the MIT license. So the reimplementation's MIT distribution might need to be source code only, not source code plus tests. Or the tests can be distributed in parallel, but still under the LGPL, not MIT. It doesn't really matter, since compiled software won't include the tests anyway.
foresto | 22 hours ago
I'm not following your logic there, and I don't see any mention of "transformative" in the license. Can you explain what you mean?
crazygringo | 22 hours ago
And so, a work being sufficiently transformative is one way in which copyright no longer applies, but that's not the case here specifically. The specific case here is essentially just a clean-room reimplementation (though technically less "clean", but still presumably the same legally). But the end result is still a completely different expression of underlying non-copyrightable ideas.
And in both cases, it doesn't matter what the original license was. If a resulting work is sufficiently transformative or a reimplementation, copyright no longer applies, so the license no longer applies.
foresto | 22 hours ago
The library's test suite and interfaces were apparently used directly, not transformed. If either of those are considered part of the library's source code, as the license's wording seems to suggest, then I think output from their use could be considered a work based on the library as defined in the license.
crazygringo | 21 hours ago
Google LLC v Oracle America assumed (though didn't establish) that API's are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
Test suites are again generally considered copyrightable... but the behavior being tested is not.
So no, it's not considered to be a work based on the library. This seems pretty clear-cut in US law by now.
Also, the LGPL text doesn't say "work based on the library". It says "If you modify a copy of the Library", and this is not a "combined work" either. And the whole point is that this is not a modified copy -- it's a reimplementation.
In theory, a license could be written to prevent its tests from being run against software not derived from the original, i.e. clean-room reimplementations. In practice, it remains dubious whether any court would uphold that. And it would also be trivial to get around: take advantage of fair use to re-express the tests in e.g. plain English (or any specification language), then re-implement those back into new test code. Because again, test behaviors are not copyrightable.
foresto | 21 hours ago
It does, about a dozen times.
Are you perhaps referring to LGPL3? I think the license under discussion here is LGPL2.1.
https://github.com/chardet/chardet/blob/6.0.0/LICENSE
I'm not well versed in copyright case law, so I won't argue with the rest of what you wrote. Thanks for elaborating on your thoughts.
crazygringo | 21 hours ago
heavyset_go | 15 hours ago
That was only one prong of the four fair use considerations in that case. Look at Breyer's opinion, it does not say that copying APIs is fair use if implementations are independent, just that Google's specific usage in that instance met the four fair use considerations.
There are likely situations in which copying APIs is not fair use even if function implementations are independent, Breyer looked at substantiality of the code copied from Java, market effects and purpose and character of use.
If your goal is to copy APIs, and those APIs make up a substantial amount of code, and reimplement functions in order to skirt licenses and compete directly against the source work, or replace it, those three considerations might not be met and it might not be fair use. Breyer said Google copied a tiny fraction of code (<1%), its purpose was not to compete directly with Oracle but to build a mobile OS platform, and Google's reimplementation was not considered a replacement for Java.
GardenLetter27 | 21 hours ago
Software patents would work as you describe, but not copyright.
animitronix | 23 hours ago
kanemcgrath | 23 hours ago
I downloaded both 6.0 and 7.0 and, based on only a light comparison of a few key files, nothing would suggest to me that 7.0 was copied from 6.0, especially for a 41x faster implementation. It is a lot more organized and readable in my amateur opinion, and the code is about 1/10th the size.
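For a quick numeric sanity check of that kind of comparison, Python's stdlib difflib can score how much two source files share. A minimal sketch (the snippets below are placeholders, not actual chardet code):

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of shared content between two sources."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Placeholder snippets standing in for files from the two releases.
v6 = "class CharSetProber:\n    def feed(self, byte_str):\n        pass\n"
v7 = "def detect(data: bytes) -> str:\n    return 'utf-8'\n"

print(f"similarity: {similarity(v6, v7):.2f}")  # low for unrelated code
```

Independently rewritten code tends to score far below near-identical copies, though a low ratio alone of course proves nothing about provenance.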
Gigachad | 23 hours ago
Aboutplants | 23 hours ago
amelius | 22 hours ago
VorpalWay | 22 hours ago
IshKebab | 22 hours ago
But also software patents and design patents are totally different things.
throw-qqqqq | 21 hours ago
https://en.wikipedia.org/wiki/Software_patents_under_the_Eur...
amelius | 21 hours ago
https://en.wikipedia.org/wiki/Design_patent
peacebeard | 22 hours ago
LPisGood | 22 hours ago
shagie | 21 hours ago
https://www.nolo.com/legal-encyclopedia/protecting-fictional...
https://en.wikipedia.org/wiki/Copyright_protection_for_ficti...
wvenable | 20 hours ago
When it comes to software, again it's the expression that matters -- literally the actual source code. Software that does the same thing but uses entirely different code to do it is not the same expression. Like with the tracing example above, if you read the original source code then it's harder to claim that it isn't the same expression. This is why clean room implementations are necessary.
alex1sa | 11 hours ago
robmccoll | 22 hours ago
Gigachad | 22 hours ago
reverius42 | 22 hours ago
I'm not sure there should be, but I think there is.
NewsaHackO | 22 hours ago
paxys | 22 hours ago
In all of these cases an AI model is taking a copyrighted source, reading it, jumbling the bytes and storing it in its memory as vectors.
Later a query reads these vectors and outputs them in a form which may or may not be similar to the original.
SatvikBeri | 21 hours ago
I don't know of any rulings on the context window, but it's certainly possible judges would rule that would not qualify as transformative.
derangedHorse | 17 hours ago
alpaca128 | 21 hours ago
NewsaHackO | 20 hours ago
RhythmFox | 20 hours ago
Gigachad | 18 hours ago
phendrenad2 | 20 hours ago
NiloCK | 22 hours ago
AI 1: - (reads the source), creates a spec + acceptance criteria
AI 2: - implements from spec
AI 1 is in the position of the maintainer who facilitated the license swap.
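A minimal sketch of that two-stage setup (the model calls are stubbed out; in a real pipeline each function would prompt a separate LLM, and the key property is that the second stage never sees the original source):

```python
def write_spec(original_source: str) -> str:
    """Stage 1 ("AI 1"): reads the original, emits a spec + acceptance
    criteria. Stubbed here; a real version would call an LLM with the source."""
    return "SPEC: detect(data) -> name of the most likely character encoding"

def implement_from_spec(spec: str) -> str:
    """Stage 2 ("AI 2"): sees only the spec, never the original code."""
    return "def detect(data):\n    ...  # fresh implementation\n"

original = "def detect(data): ...  # LGPL-licensed original"
spec = write_spec(original)
rewrite = implement_from_spec(spec)

# The second stage's inputs and output contain no line of the original:
assert original not in spec and original not in rewrite
```

Whether the spec itself counts as a derivative work of the original is exactly the open question the parent raises.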
smsm42 | 20 hours ago
yunnpp | 19 hours ago
As far as I know, you can as long as you own a copy of the original. In other words, you can't redistribute the assets, but you can distribute the code that works with them. This is literally how every free/libre game remake works. The copyright of your new, from-scratch code, is in no way linked to that of the assets.
martin-t | 22 hours ago
The original implementation would still have the upper hand here. OTOH if I as a nobody create something cool, there's nothing stopping a huge corporation from "reimplementing" (=stealing) it and using their huge advertising budget to completely overshadow me.
And that's how they like it.
Gigachad | 22 hours ago
u1hcw9nx | 22 hours ago
> That question is this: does legal mean legitimate?
Just because something is legal does not mean it's the moral thing to do.
larodi | 22 hours ago
is it legitimate for millions of people to exploit, expound on knowledge that was perhaps, to begin with, not legitimate to use? well they did already, who's to judge the commons now?
mirashii | 21 hours ago
larodi | 19 hours ago
To me it is superbly ridiculous to shun the comment though. But we'll be having this split for a while, that's for sure.
fruitworks | 21 hours ago
On top of all of this, there are the attempts at binary decompilation using LLMs and other new tools that have been discussed on this site recently.
GuB-42 | 20 hours ago
gowld | 20 hours ago
miggol | 22 hours ago
When I first read about the chardet situation, I was conflicted but largely sided on the legal permissibility side of things. Uncomfortably I couldn't really fault the vibers; I guess I'm just liberal at heart.
The argument from the commons has really awakened my belief in the inherent morality of a public good. Something being "impermissible" sounds bad until you realize that otherwise the arrow of public knowledge suddenly points backwards.
Seeing this example play out in real life has had retroactive effects on my previously BSD-aligned brain. Even though the argument itself may have been presented before, I now understand the morals that a GPL license text underpins better.
crdrost | 20 hours ago
BSD-type stuff is very simple because it says "here is this stuff. you can use it as long as you promise not to sue me. I promise not to sue you too."
Very simple.
GPL-type stuff is intrinsically more complex because it's trying to use the threatening power of lawsuits, to reduce overall IP lawsuits. So it has to say "Here is this stuff. You can use it as long as you promise not to sue me. I am only going to sue you, if you start pretending like you have the right to sue other folks over this stuff or anything you derive from it. You don't have the right to sue others for it, I made it, so please stop pretending and let's stop suing each other over this sort of thing."
Getting the entire legal nuance around that sort of counterfactual "I will only sue you if you try to pretend that you can sue others" is why they're more complex. And the simplest copyleft licenses like the Mozilla Public License have a very rigid notion of what "the software" is, so like for MPL it's "this file is gonna never be used in a lawsuit, you can edit it ONLY as long as you agree that this file must never be used by you to sue someone else, if you try to mutate it in a way that lets you sue someone else then that's against our agreement and we reserve the right to sue you."
Whereas for GPL it's actually kind of nebulous what "the software" is -- everything that feeds into the eventual compiled binary, basically -- and so the license itself needs to be a little bit airy-fairy, "let's first talk about what conveying the software means...", in various ways.
The interesting thing here is that as far as the courts are initially ruling, these from-scratch reimplementations are not human works and therefore are not copyrightable, which makes them all kind of public domain. Slapping the MIT license on it was an overstep. If that's how things go then Free Software has actually won its greatest sweep with LLM ubiquity.
derangedHorse | 17 hours ago
lukev | 22 hours ago
But a point that was not made strongly, which highlights this even more, is that this goes in every direction.
If this kind of reimplementation is legal, then I can take any permissive OSS and rebuild it as proprietary. I can take any proprietary software and rebuild it as permissive. I can take any proprietary software and rebuild it as my own proprietary software.
Either the law needs to catch up and prevent this kind of behavior, or we're going to enter an effectively post-copyright world with respect to software. Which ISN'T GOOD, because that will disincentivize any sort of open license at all, and companies will start protecting/obfuscating their APIs like trade secrets.
integralid | 22 hours ago
Companies can take open-source software and make a proprietary reimplementation. You can't take a proprietary software and make an open source GPL version.
I am absolutely certain that if you tried you would be sued into oblivion. But a big company screwing over open source is not even news anymore. In fact, I (still) believe that LLMs being trained on tons of GPL, AGPL, or even unlicensed software, while it's considered OK to use LLM code in proprietary projects, is an example of just that.
lukev | 22 hours ago
martin-t | 22 hours ago
Crazy that only now we're seeing a bunch of articles coming to the same conclusion.
I think copyright should still apply, but if it doesn't, we need new laws - ones which protect all human work, creative or not. Laws should serve and protect people, not algorithms and not corporations "owning" those algorithms.
I put owning in quotes because ownership should go to the people who did the work.
Buying/selling ownership of both companies and people's work should be illegal just like buying/selling whole humans is. Even if it took thousands of years to get here.
Money should not buy certain things because this is the root cause of inequality. Rich people are not getting richer at a faster rate by being more productive than everyone else but by "owning" other people's work and using it as leverage to extract even more from others.
Maybe LLM and mass unemployment of white collar workers will be the wakeup call needed for a reform. Or revolution.
Last time this happened was during the second industrial revolution and that's how communism got popular. We should do better this time because this is the last revolution which might be possible.
winstonwinston | 22 hours ago
That’s a weird statement while releasing the new version of the same project. Maybe just release it as a new project, chardet-ai v1.0 or whatever.
martin-t | 22 hours ago
2) Copyright was the wrong mechanism to use for code from the start, LLMs just exposed the issue. The thing to protect shouldn't be creativity, it should be human work - any kind of work.
The hard part of programming isn't creativity, it's making correct decisions. It's getting the information you need to make them. Figuring out and understanding the problem you're trying to solve, whether it's a complex mathematical problem or a customer's need. And then evaluating solutions until you find the right one. (One constraint being how much time you can spend on it.)
All that work is incredibly valuable, but once the solution exists, it's much easier to copy without replicating or even understanding the thought process which led to it. But that thought process took time and effort.
The person who did the work deserves credit and compensation.
And he deserves it transitively, if his work is used to build other works - proportional to his contribution. The hard part is quantifying it, of course. But a lot of people these days benefit from throwing their hands up and saying we can't quantify it exactly so let's make it finders keepers. That's exploitation.
3) Both LLM training and inference are derivative works by any reasonable meaning of those words. If LLMs are not derivative works of the training data then why is so much training data needed? Why don't they just build AI from scratch? Because they can't. They just claim they found a legal loophole to exploit other people's work without consent.
I am still hoping the legal people take the time to understand how LLMs work and how other algorithms, such as synonym replacement or c2rust, work; decide that calling it "AI" doesn't magically remove copyright; and force the huge AI companies to destroy their existing models and train new ones which respect the licenses.
wvenable | 20 hours ago
If you went to school for 12-16 years, that's a lot of training. Does that mean anything you produce is a derivative work?
martin-t | 5 hours ago
1) People phrase it as a question even when they've already made up their mind (whether that's your case or not).
2) It implicitly assumes that humans and algorithms are the same. They are not - humans have rights and free will, algorithms don't. Humans cannot be bought or sold, etc.
To your question:
a) If you're asking whether teachers should get compensated according to how good a job they do, I think so. They are very often undervalued, especially the good ones - but of course that means the job attracts people who do it because they enjoy it (and are therefore more likely to be good at it) rather than those who chose jobs according to money and then do the bare minimum.
b) There's a critical difference - consent. Teachers consented to their knowledge being used by those they taught. I did not consent to my code being used for training LLMs. In fact I purposefully chose a licence (AGPL) which in any common sense interpretation prohibits this use unless the resulting model is licensed under the same license - you can use my work only if you give back. Maybe there's a hole in the law - then it should be closed.
I am now gonna pose a question to you in turn.
Do you think people should be compensated for the full transitive value of their work?
wvenable | 4 hours ago
I don't think that's a necessary condition for that argument. You're making the implicit assumption that humans are special snowflakes and anything that we do cannot be replicated by computers, in any form. That's a very strong position to take without evidence. Is an LLM even an algorithm in the traditional sense? Is human cognition not an algorithm of some sort? I studied cognitive science decades ago and these questions weren't clear then; they're certainly even less clear now.
It also somewhat begs the question of whether this is even relevant to what we are talking about.
Teachers are not relevant to the conversation. You can learn by reading books, watching TV, using and reading software. Basically all of copyrighted and non-copyrighted human expression is available for you to consume and then creatively produce your own works using that knowledge.
> Do you think people should be compensated for the full transitive value of their work?
The short answer is no. Not everything that someone simply dreams up can or should be monetized. That sounds like a radical position but actually the current state of "intellectual property" has only existed for an extremely brief bit of human history. What has most greatly shaped our culture and knowledge has been effectively free for anyone to use, modify, and reproduce for hundreds of years.
That's not to say I don't support copyright as a means to support creative works but I would argue that it's an imperfect system. We're starving human minds of modern culture and knowledge often not even for someone's monetary gain but simply because the system demands it. It's ironic that artificial intelligence might actually free us from these constraints.
I purposefully choose a license (Apache) for my open source work to make it widely and freely available.
palata | 22 hours ago
If we protect API under copyright, it makes it easier to prevent interoperability. We obviously do NOT want that. It would give big companies even more power.
Now in the US, the Supreme Court has ruled that the output of an LLM is not copyrightable. So even a permissive licence doesn't work for that reimplementation: it should be public domain.
Disclaimer: I am all for copyleft for the code I write, but already without LLMs, one could rewrite a similar project and use the licence they please. LLMs make them faster at that, it's just a fact.
Now I wonder: say I vibe-code a library (so it's public domain in the US), I don't publish that code but I sell it to a customer. Can I prevent them from reselling it? I guess not, since it's public domain?
And as an employee writing code for a company. If I produce public domain code because it is written by an LLM, can I publish it, or can the company prevent me from doing it?
josalhor | 22 hours ago
makerofthings | 21 hours ago
paxys | 21 hours ago
makerofthings | 21 hours ago
pphysch | 20 hours ago
makerofthings | 13 hours ago
j-bos | 21 hours ago
Nothing was stolen, not even copied, lamest piracy I've heard of.
makerofthings | 21 hours ago
wvenable | 20 hours ago
https://pbs.twimg.com/media/ENE01g6X0AA7w5r?format=jpg
Are they copies? Can all these car companies sue each other?
jrochkind1 | 21 hours ago
Our foreparents fought for the right to implement work-alikes to corporate software packages, even if the so-called owners did not like it. We're ready to throw it all away, and let intellectual property owners get so much more control.
The implications will not end up being anti-large-corporation or pro-sharing. If you can prevent someone from re-implementing a spec, or building a client that speaks your API, or building a work-alike, it will be the large corporations that exercise this power, as usual.
alterom | 21 hours ago
Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they're-a-changin'.
You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.
If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.
I would do no justice to it by reiterating its points here.
salawat | 21 hours ago
Pretty sure no one (but me, anyway) saw overt theft of IP by ignoring IP law through redefinition coming. Admittedly, I couldn't have articulated for you how capital would skill-transfer and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.
hathawsh | 21 hours ago
It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.
alterom | 18 hours ago
The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.
The corporations today have the resources to purchase AI compute to produce AI-laundered work, which wouldn't be possible without the commons the AI got its training data from, and give nothing back to the commons.
This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.
Share-alike clause of the GPL was the price that was set for benefitting from the commons.
Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.
This means that the current iteration of GPL isn't doing its intended job.
GPL had to grow and evolve. The Internet services using GPL code to provide access to software without, technically, distributing it was a similar legal (but not moral) workaround which was addressed with an update in GPL.
The author argues that we have reached another such point. They don't argue what exactly needs to be updated, or how.
They bring up a suggestion to make copyrightable the input to the LLM which is sufficient to create a piece of software, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.
You can't have your cake and eat it too.
A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:
— Derivative work, and therefore, subject to the requirement to be shared under the GPL license (something the legal system disagrees with)
— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).
In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.
The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").
Personally, I don't see a difference between making a photocopy by scanning and printing, and by "reimplementing" API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.
But the courts see it differently.
Consequently, the legal system currently decided that writing the prompt is where all the originality and creative value is.
Consequently, de facto, the API is the only part of an open source program that can be protected by copyright.
The author argues that perhaps it should be — to start a conversation.
As for who the benefactors are from a change like that — that, too, is not clear-cut.
The entities that benefit the most from LLM use are the corporations which can afford the compute.
It isn't that cheap.
What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.
The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.
It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.
Without an open compiler, that's not possible. The engineer having access to the compiler at work would have an infinite advantage over an engineer who doesn't have it at home.
The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.
And the LLMs are neither open nor cheap. Both creating them and using them at scale are privileges that only wealthy corporations can afford.
So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.
Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.
This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.
Finally, looking back at the old days ignores the reality. Back in the day, the proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.
The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.
And the value of the open source software is currently not in the fact that you can hot-swap UNIX components with open source equivalents, but in the entire interoperable ecosystem existing.
The APIs of open source programs are where the design of this enormous ecosystem is encoded.
We can talk about possible negative outcomes from pricing it.
Meanwhile, the already happening outcome is that a large corporation like Microsoft can throw a billion dollars of compute on "creating" MSLinux and refabricating the entire FOSS ecosystem under a proprietary license, enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.
It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete with free labor of open source contributors on cost. It would not be a justifiable expenditure.
What GPL had accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing, by a Midas touch clause. Once you embrace open source, you are it.
The author of the article asks us to think about how GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to commons being extinguished.
Which is exactly what happened in the case of the formerly-GPL library in question.
sobellian | 20 hours ago
tpmoney | 20 hours ago
If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and Connectix Virtual Gamestation was very much NOT an open source or free product.
But to go a step further, the larger idea of AI-assisted re-writes being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene is sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which was why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.
So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very very important now that it's the open source community's turn to see their work commoditized and used in ways they don't approve of is off-putting and in my opinion a dangerous road to tread that will hand back years of hard fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.
matheusmoreira | 19 hours ago
autoexec | 18 hours ago
alterom | 18 hours ago
[citation needed]
Where does your confidence come from?
GPL itself was precisely the "intellectual property nonsense" adding which made FOSS (free as in freedom) software possible.
The copyright law was awfully broken in the 1980s too. Adding "nonsense" then was the only solution that proved viable.
Historically, nothing but adding "more IP nonsense" has ever worked.
>The real solution is to force AI companies to open up their models to all.
Sure. Pray tell how you would do that without some "intellectual property nonsense".
We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
>We need free as in freedom LLMs that we can run locally on our own computers
Oh, on that note.
LLMs take a fuckton of compute to train and to even run.
Even if all models were open, we're not at the point where it would create an equal playing field.
My home computer and my dev machine at work have the same specs. But I don't have a compute farm to run a ChatGPT on.
matheusmoreira | 17 hours ago
From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.
> GPL itself was precisely the "intellectual property nonsense"
Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.
> Pray tell how you would do that without some "intellectual property nonsense".
Intellectual property is irrelevant to AI companies.
Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to totalitarian dictatorship.
If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.
AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.
> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.
> Even if all models were open, we're not at the point where it would create an equal playing field.
Hopefully we will be, in the future.
salawat | 17 hours ago
matheusmoreira | 16 hours ago
pixl97 | 5 hours ago
throwaway290 | 13 hours ago
Respectfully, you have no idea what you are talking about here.
matheusmoreira | 12 hours ago
scheeseman486 | 11 hours ago
pixl97 | 5 hours ago
Copyright is a gigantic fucking mess that the US has forced over a large chunk of the world.
throwaway290 | 3 hours ago
How did they turn out?
trinsic2 | 18 hours ago
If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.
I think the focus needs to shift completely off of for-profit companies. Although I'm not sure how that is going to happen... lol
RobRivera | 21 hours ago
halJordan | 21 hours ago
dimitrios1 | 21 hours ago
(side bar: the phrase "anti-<whatever> luddites" is way, way overused, especially here. Let's get more creative, people!)
MrDarcy | 20 hours ago
There isn’t much of a middle ground anymore.
fc417fc802 | 16 hours ago
There's also some environmentalist concerns which the term luddite again fits perfectly. You just have to generalize, transferring laterally from economic wellbeing to environmental wellbeing.
So I don't think GP's comment qualified as an ad hominem dismissal, but rather as an accurate description of the situation. Take what's being discussed (restrictions on specifications and interoperability), project it backwards in history, and imagine what an alternate present day would look like. I think it would be pretty bad.
Qwertious | 12 hours ago
Pffft no. Most of us think that AI is being used as a political trick - like firing unionized workers "to replace them with AI" and then hiring new un-unionized workers to replace them, 2 weeks later. Replace the AI with an empty cardboard box labeled "AI" in black marker, and nothing changes.
See also: using AI to launder pirated material, for big businesses.
pixl97 | 5 hours ago
1. Since when have companies needed trillions of dollars of AI to do that? In the US they've been able to get away with getting rid of unions for decades now.
2. Since when has HN given a shit about unions? Posting about unions, at least until recently, has been a great way of getting your comment downvoted to [dead] in one easy step. For longer than LLMs have existed, the HN answer to unions was "They are just there to keep me as an SWE from making as much money as I can". Only now do we see a little pushback, now that their heads may be next on the chopping block.
pyrale | 7 hours ago
Who doesn't enjoy interesting times™
dogcomplex | 18 hours ago
Nor should we be treating AI models themselves as respected IP. They're built on everyone else's data. Throw away this whole class of law, it's irrelevant in this new world.
teaearlgraycold | 17 hours ago
Well we could try fixing the forever part. Copyright is out of control. I’d like to see a world with much less power given to IP. Sometimes I even say I want it eradicated entirely. But realistically we should start by cutting things back. Maybe give software an especially short copyright period.
fc417fc802 | 16 hours ago
There are always going to be downsides and edge cases when granting any party a monopoly over anything. At least if it's limited to two decades, any unintended consequences, philosophical objections, etc. are hopefully kept within reason.
Qwertious | 12 hours ago
Meanwhile, there are cases where copyright of more than 2 years is overkill.
I don't know what it would look like, but it seems like we need some sort of mechanism for variable-length IP duration.
fc417fc802 | 12 hours ago
I could understand for medical devices maybe but even then it seems like the software is a tiny part of the overall cost of a given design. A competitor could already do a clean room reimplementation in that case.
But I guess it wouldn't be all that bad if there were a carefully crafted extension for government certified software that was explicitly tied to the length of the certification process.
pixl97 | 5 hours ago
PowerElectronix | 5 hours ago
If you do something that requires stealing the code (publishing it, selling it, etc) the company can legally fuck you up.
Now, once it's in the wind, it becomes almost impossible to pursue from a practical point of view, as any implementer can claim trade secrets to avoid showing you the code.
ncruces | 10 hours ago
Xirdus | 8 hours ago
fc417fc802 | 7 hours ago
Consider, if you will, that if some guy flew a drone the size of a car that he knocked together in his garage over a residential area, people would not accept it. Yet private pilots in Cessnas fly over neighborhoods constantly.
marcus_holmes | 17 hours ago
grensley | 17 hours ago
lurk2 | 17 hours ago
dataflow | 15 hours ago
terminalshort | 15 hours ago
marcus_holmes | 14 hours ago
LtWorf | 14 hours ago
Dylan16807 | 13 hours ago
The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.
dataflow | 12 hours ago
I think 18 U.S.C. § 1832 (a) (3) might answer your question? https://www.law.cornell.edu/uscode/text/18/1832
dataflow | 13 hours ago
So now consider two questions:
1. You actually didn't use an LLM, but they believe & claim you did. Who has the burden of proof to show that you actually own the copyright, and how do they do so?
2. They write new code that you feel is based on yours. They claim they washed it through an LLM, but you don't believe so. Who has the burden of proof here and how do they do so?
greyface- | 10 hours ago
AnthonyMouse | 13 hours ago
bdowling | 8 hours ago
wk_end | 4 hours ago
vbarrielle | 11 hours ago
xyzal | 10 hours ago
It would be interesting to see a court ruling that the output of LLMs trained on copyleft code is licensed under the GPL ... and all other viral licenses simultaneously.
Frieren | 9 hours ago
It is quantum legality: using copyrighted input is legal or illegal depending on the observer.
N7lo4nl34akaoSN | 6 hours ago
Saline9515 | 8 hours ago
taneq | 8 hours ago
direwolf20 | 8 hours ago
cj | 7 hours ago
No one knows.
dec0dedab0de | 5 hours ago
It would take two stubborn businesses with a lot of money deciding that it is better to battle it out than focus on their business. Something like IBM v SCO or Oracle v Google.
conartist6 | 5 hours ago
red_admiral | 10 hours ago
raggi | 7 hours ago
raxxorraxor | 9 hours ago
LtWorf | 14 hours ago
jongjong | 13 hours ago
If we remove IP laws, we should remove all private property laws!
giancarlostoro | 7 hours ago
For movies and shows, charge an increasing fee to renew the copyright. Eventually studios will give up certain movies. The older the movie, the more you pay.
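An escalating renewal schedule like this is easy to sketch. The numbers below (a 20-year free window, a $100 base fee, 50% growth per decade) are entirely made up for illustration; the point is only the shape of the incentive, which eventually makes holding an old work more expensive than releasing it:

```python
def renewal_fee(age_years: int, base: float = 100.0, growth: float = 1.5) -> float:
    """Hypothetical schedule: free for the first 20 years, then a
    renewal fee due each decade that grows 50% per renewal."""
    if age_years < 20:
        return 0.0
    renewals = (age_years - 20) // 10
    return base * growth ** renewals

# The fee curve: modest at first, punishing for century-old works.
for age in (10, 20, 50, 100):
    print(age, renewal_fee(age))
```

At some age the fee exceeds the work's remaining commercial value, and the rational move is to let it lapse into the public domain.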
dec0dedab0de | 5 hours ago
I personally think we should have shorter limits for non-creator owners of copyright, and for creators it should be like 20 years or death whichever comes last. I also think compulsory licensing should exist for everything.
thayne | 17 hours ago
devonkelley | 15 hours ago
AnthonyMouse | 13 hours ago
dTal | 9 hours ago
What is the difference between an "agent" and a "compiler"?
For that matter, what is the difference between "I got an agent to provide a high level description" and a decompiler?
What is the difference between ["decompiling" a binary, editing the resulting source, recompiling, and redistributing] and [analyzing the behavior of a binary, feeding that description into an LLM, generating source code that replicates that behavior, editing that, recompiling and redistributing]?
Takeaway: we are now in a world where software tools can climb up and down the abstraction stack willy nilly and independently of human effort. Legal tools that attempt to track the "provenance" of "source code" were already shaky but are now crumbling entirely.
direwolf20 | 7 hours ago
thayne | 17 hours ago
Although I think the chance of that happening is effectively zero.
zelphirkalt | 10 hours ago
I am for keeping the licenses in place, as long as there is any copyright at all on software. If we get rid of that, then we can get rid of copyleft licenses and all others too. But of course businesses and greedy people want to have their cake and eat it too. They want copyleft to disappear, but _their_ software, oh no, no one may copy that! Double standards at their best.
rcxdude | 10 hours ago
(the paradox of copyleft is that it does tend to push free software advocates in a direction of copyright maximalism)
noemit | 9 hours ago
red_admiral | 9 hours ago
az226 | 7 hours ago
mikkupikku | 6 hours ago
randyrand | 21 hours ago
stagger87 | 21 hours ago
panny | 21 hours ago
>The U.S. Copyright Office (USCO) and federal courts have consistently ruled that AI-generated works—where the expressive elements are determined by the machine, even in response to a human prompt—lack the necessary human creative input and therefore cannot be copyrighted.
All this code is public domain. Your employees can publish "your" AI generated code freely and it won't matter how many tokens you spent generating it. It is not covered by copyright.
api | 21 hours ago
A lot of SaaS too, especially if AI can run a simple deploy.
We might be approaching a huge deflationary catastrophe in the cost of a lot of software. It’s not a catastrophe for the consumer but it is for the industry.
hungryhobbit | 21 hours ago
However, I take issue with his version of history:
>The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL.
GPLv3 set open source back: it wasn't an evolution that protected anything, it was an overly paranoid failure. Don't believe me? Just count how many GPLv3 vs. GPLv2 projects have been started since GPLv3 dropped.
Again, I'm very pro-OSS, but let's not pretend the community has always had a straight line of progress forward; some stuff is crazy Stallman stuff that set us back.
ajross | 20 hours ago
This isn't a problem, this is the goal. GNU was born when RMS couldn't use a printer the way he wanted because of an unmodifiable proprietary driver. That kind of thing just won't happen in the vibe coded future.
duskdozer | 17 hours ago
ball_of_lint | 20 hours ago
I'd argue it's more free. EULAs and restrictions on how and for what software can be used, like DRM, typically use copyright as their legal backing. GPL licenses turn that on its head, but that doesn't redeem the original, flawed law.
This seems to follow the letter but not the spirit of the license. If this does pass legal muster, we can do the same to whatever proprietary software we wish, which makes a dramatically different but IMO better ecosystem in the end.
duskdozer | 17 hours ago
humannutsack | 20 hours ago
It’s not and never has been.
It’s not illegal for me to draw The Simpsons - whether or not I used AI. It’s illegal for me to sell it as my own.
To ban the very ability to produce it at all would be a dystopia. It would extend copyright to mean things it was never intended to mean - it would prevent you from physically uttering statements or depicting images, if these luddites who haven’t thought it through had their way.
jongjong | 20 hours ago
Firstly, an AI agent is not a person. Secondly, the MIT license doesn't offer any rights to the code itself; it says a 'copy of the software' - That's what people are given the right to. It says nothing about the code and in terms of the software, it still requires attribution. Attribution of use and distribution of the software (or parts) is required regardless of the copyright aspect. AI agents are redistributing the software, not the code.
The MIT license makes a clear distinction between code and software. It doesn't cede any rights to the code.
And then there's the spirit of copyright: it was designed to protect the financial interests of authors. The 'fair use' carve-out was meant for cases that do not have an adverse market impact on the author, and the uses highlighted in this article clearly do have such an impact.
effank | 20 hours ago
Our legal and ethical frameworks including both copyleft and permissive licenses operate under the illusion of discrete, bounded attribution. They assume we can draw a clean perimeter around 'the code' and its 'author.' In reality, software production is a highly complex socio-technical network characterized by deep epistemic opacity. We are arguing over who holds the title to the final output while completely ignoring the vast, distributed network of inputs that made it possible.
Furthermore, because end-users face massive transaction costs and a general lack of incentive to evaluate the granular utility of their consumption, we have no reliable market mechanism to signal value back up the supply chain. Consequently, we fail to effectively compensate the true chain of biological and artificial contributors that facilitate downstream consumption.
In a rigorously mapped value-system, attribution would not stop at the keyboard; it would extend to all nodes of enablement. This includes what sociologists and economists term 'reproductive labor' or 'invisible labor' such as the developer’s partner who cooked them breakfast, thereby sustaining the biological and cognitive infrastructure necessary for the developer to contribute to the repository in the first place. The AI model is merely another node of aggregated external labor in this exact same web - both by its upward 'training' and downward utilization.
Until we develop an economic and technological ontology capable of tracing and rewarding this entire ecosystem of adjacent contributions, our debates over LGPL versus MIT will remain myopic. We are trying to govern a distributed, interconnected web of collective labor using property tools designed for solitary craftsmen.
joshjob42 | 20 hours ago
zakki | 20 hours ago
duskdozer | 17 hours ago
dataflow | 15 hours ago
0x457 | 20 hours ago
Morally - yes, technically - no. I think it's odd to be mad at someone doing the exact thing you praise in another case just because license isn't copyleft within license allowance. Make a better copyleft license?
smsm42 | 20 hours ago
I don't see how it matters what he looked at. If I took copyrighted code and ran it through a script that replaces all the variable names, and then claimed copyright on the result because it's an entirely new work and I did not look at the original, I'd be ridiculed and sued, and would lose that lawsuit. AI is a more complex machine, but still a machine. If you feed somebody's work into a machine, what comes out is a derivative work.
The test suite is part of the copyrighted code, is it not? If he had used just the API description, preferably from a copyright-clean source, then we could claim a new work (regardless of how it was produced: by Claude, trained pigeons, or magic mushrooms). But once parts of the copyrighted code had been used, it becomes a derivative work.
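For what it's worth, the "script that replaces all variable names" is a few lines of Python. This sketch is my own illustration (not anyone's actual tool): it mechanically renames identifiers while leaving behavior untouched, which is exactly why nobody would mistake its output for an independent work:

```python
import io
import keyword
import tokenize

def rename_identifiers(source: str) -> str:
    """Mechanically rename every identifier to an opaque name.

    The result compiles and behaves identically to the input,
    which is exactly why it is still a derivative work.
    """
    mapping: dict[str, str] = {}
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            new_name = mapping.setdefault(tok.string, f"v{len(mapping)}")
            tok = tok._replace(string=new_name)
        tokens.append(tok)
    return tokenize.untokenize(tokens)

original = "def area(width, height):\n    return width * height\n"
laundered = rename_identifiers(original)
print(laundered)  # same logic, every name replaced
```

Structure, control flow, and semantics all survive the transformation; only the surface spelling changes.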
metalcrow | 20 hours ago
I'm not sure that's true, legally speaking. If you fed it into a PRNG, the output seems to me like it would not be an obviously derivative work (I doubt you could copyright it, but that's a separate question). So we have one machine that can transform something into a non-derivative work, and another that leaves the result derivative. The line isn't likely to be drawn at "did a machine do it or not", but along a fuzzy human judgment of how close the output seems to be to the original (IANAL).
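The PRNG thought experiment is easy to make concrete. In this purely illustrative sketch, the input text only ever seeds the generator, so the output is fully determined by the input and yet carries nothing expressive from it:

```python
import hashlib
import random

def launder_through_prng(source: str, n: int = 8) -> list[int]:
    """Seed a PRNG with a digest of the input text.

    The output depends on every byte of the input, but no
    expressive content from the input survives into it.
    """
    digest = hashlib.sha256(source.encode("utf-8")).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return [rng.randrange(100) for _ in range(n)]

# Same input, same output; different input, unrelated output.
print(launder_through_prng("GPL source text"))
print(launder_through_prng("MIT source text"))
```

Both machines are deterministic functions of the copyrighted input; the difference a court would care about is whether anything recognizable of the original survives, not whether a machine was involved.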
justinclift | 19 hours ago
Sure, but neither of those is an IP Lawyer.
The actual IP Lawyer who turned up and tried to engage, Richard Fontana, had his issue closed:
https://github.com/chardet/chardet/issues/334
Richard's point was this (quoted below):
---
FWIW, that case is not really relevant to what we are/were talking about here.
The question is whether you are truly an "author", or whether there was no (human) author.
The general legal consensus has been that generative AI output is not copyrightable (without some special facts of some sort, perhaps).
> If all of this code was somehow not copyrightable because someone wrote a prompt instead of directly editing the code, that would have pretty huge implications.
That's exactly it. Your act of applying the MIT license with your copyright notice to code that you did not "directly edit" has enormous implications.
RcouF1uZ4gsC | 19 hours ago
I think it is more like photography.
The case law is that a camera can't own a copyright, but a human can, even though all the pixels were produced by the camera with very little involvement at the pixel level by the human.
waterTanuki | 19 hours ago
ryukoposting | 13 hours ago
https://www.reuters.com/legal/government/us-supreme-court-de...
Prompting generally does not constitute authorship under US law.
RcouF1uZ4gsC | 7 hours ago
Sleaker | 19 hours ago
Edit: looks like an IP lawyer had this exact issue on the GitHub and it was closed.
internet2000 | 19 hours ago
> The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.
How is this not just relitigating GPL vs MIT? By now you know which side of that argument you are in. The AI component is orthogonal.
tzs | 19 hours ago
Offering software as a networked service is not distribution. That is why they had to create the AGPL to put conditions on use in networked services.
tzs | 17 hours ago
keeda | 18 hours ago
One of those things is that we assumed the code embodied most of the value it offered. That it was the code that contained the creativity and expressiveness and usefulness. And we thought only we could write code. And so we thought we only needed to protect the code to protect our efforts and investments. Which is also why we accepted copyright as an appropriate legal protection for software, or as a means of enforcing an ethos of sharing, as with copyleft.
But the code itself was never the valuable aspect; it was the functionality it provided.
And now AI is making that starkly apparent, while undermining a lot of other presumptions. Including about copyright.
Copyright protection for software is a historical hack because people didn’t want to figure out an appropriate legal framework from scratch. You “wrote” books, you "wrote" code, let’s shoehorn software into copyright and go get lunch! Completely overlooking the fact that copyright explicitly does not cover functional aspects (that is the realm of patents) which is the entire raison d'etre of code.
Sure, copyright covers “expressive elements”, but again those are properties of the source code, not the functionality. In fact, expressiveness is BAD for code (cf “code should be boring”)! Copyright will protect whether you used a streams API or a for-loop for iteration, which is absolutely irrelevant to the technical functionality that actually solves user problems, which has always been the only thing users really cared about.
In fact, if you look at significant copyright-related cases for software now (e.g. Oracle vs Google), you'll realize they have twisted themselves into knots trying to apply laws intended for expressive creativity to issues that were essentially about technical creativity.
I have no hopes that we will figure out an appropriate IP framework for software, so I expect people will move towards other things like patents, trade secrets and trademarks. Which have their own problems, but at least they already exist and are more suitable than copyright, especially in the age of AI.
daemin | 18 hours ago
This can also apply to people: either they have seen the code previously and are therefore ineligible to write the code for a clean-room implementation, or it gets murky when the same person writes the same code twice from their own knowledge, as in the Oracle Java case.
Coming from a professional programming perspective, I can totally see the desire to have more libraries under permissive licenses like BSD or MIT, as they allow someone like myself to include them in commercial closed-source products without needing to open-source the entire codebase.
However I find myself agreeing with the article in so far as this LLM generated implementation is breaking the social contract for a GPL/LGPL based library. The author could have easily implemented the new version as a separate project and there would not have been an outcry, but because they are replacing the GPL version with this new one it feels scummy to say the least.
derangedHorse | 18 hours ago
Ridiculous. I don't want specifications for proprietary APIs to be protected, and I don't want the free ones to be either. The software community seemed pretty certain as a whole that this would be very bad for competition [1].
Morally, I don't think there's anything wrong with re-implementing a technology with the same API as another, or running a test suite from a GPL licensed codebase. The code wasn't stolen, it was capitalized on. Like a business using a GPL code editor to write a new one.
> This is not a restriction on sharing. It is a condition placed on sharing
Also this doesn't make any logical sense. A condition on sharing cannot exist without corresponding restrictions.
[1] https://www.reddit.com/r/Android/comments/mklieg/supreme_cou...
ori_b | 17 hours ago
Proving this is going to be hard with current "open source" models.
eschaton | 17 hours ago
Put the programmer’s reference for the Digital Equipment DEQNA QBus Ethernet adapter in your favorite slop tool and tell it to make a C or C++ implementation for an emulator, and you know what you get? Code from SIMH. That’s not “generating,” that’s “copying.”
colmmacc | 16 hours ago
1. The freedom to run the program as you wish
2. The freedom to study how it works and modify it (which requires access to source code)
3. The freedom to redistribute copies to help others
4. The freedom to distribute modified versions, so the whole community benefits from your improvements
To my mind ... GenAI coding make all of these far more realizable, especially for "normal people", than CopyLeft ever has. Let's go through them ...
Want to run a program as you wish? Great! It's easier than ever to build a replacement. Proprietary or non-free software is just as vulnerable to reimplementation as Copyleft is.
Want to study how a program works and modify it? This is now much more achievable.
Want the freedom to redistribute copies to help others? Build your own version! It may not even be copyrightable if it's 100% generated (IANAL).
Want to distribute modified versions? yes! see previous.
I dunno; seems like generative coding can be as much a liberator as any kind of problem.
alterom | 16 hours ago
But I'll try nevertheless.
- >Want to run a program as you wish? Great! It's easier than ever to build a replacement.
Non-sequitur. Building a replacement does nothing for being able to run a program as you wish.
Nobody else is able to run your program as they wish unless you release it with a Copyleft license.
- >Want to study how a program works and modify it? This is now much more achievable.
Reverse engineering is more achievable.
Modifying a program when you have its source code, documentation, and a legal right to do so guaranteed by the license is (and always will be) easier than modifying one without those things.
- >Want the freedom to redistribute copies to help others? Build your own version! It may not even be copyrightable if it's 100% generated (IANAL).
So, that's not about redistributing copies. That's about building an alternative option.
I can download an Ubuntu image and get Libre Office on it with a click.
Go vibe-code me a Microsoft Excel running on Windows 11, please, and tell me it's easier.
- >Want to distribute modified versions? yes! see previous.
You're not even trying here.
One can't legally modify and redistribute copyrighted works without explicit permission to do so.
You keep saying "...but vibe coding allows anyone to create something else entirely instead and do whatever with it!" as if that is a substitute for checking out a repo, or simply downloading FOSS software to use as you wish.
- >I dunno; seems like generative coding can be as much a liberator as any kind of problem.
Now, that statement I fully agree with.
Generative coding is a liberator as much as any kind of problem is.
Headache, for example, is generally a problem. It's not a great liberator.
Neither is generative coding.
Now, you probably didn't intend to say what you wrote. And that's exactly why generative coding is not a panacea: the only way to say things that you mean to say is to write precisely what you mean to say.
Vibe-coding (like any vibe-writing) simply can't accomplish that, by design.
skydhash | 16 hours ago
People will still pay for Matlab, SolidWorks, and Maya, because no one who needs those will vibe-code a solution. And there are plenty of good OSS versions for the others.
tw1984 | 16 hours ago
krater23 | 8 hours ago
antonio-mello | 16 hours ago
This creates an odd situation where the "reimplementation via AI" concern cuts both ways. If someone feeds my MIT repo to an LLM and gets a copyleft-violating derivative, that's one problem. But if I use an LLM trained on copyleft code to write my MIT-licensed tool, am I the one laundering licenses without knowing it?
I think the article's core point holds: legitimacy and legality are diverging fast. The open source community built norms around intent and reciprocity, and those norms are now being stress-tested by tools that can reimplement anything from a spec. No license text can fully encode "don't be a free rider."
blurbleblurble | 15 hours ago
jillesvangurp | 15 hours ago
So, you could argue that people are using double standards here a bit. It's fine when people take proprietary software and create GPL versions of it. But it's not OK when people take GPL software and create permissively licensed or proprietary versions of it. That's of course not how copyright actually works. The reason all of this is OK is that copyright allows you to do this thing. This isn't some kind of loophole that needs closing but an essential feature of copyright.
The friction here, and common misunderstanding about how copyright works is that you don't copyright ideas but the form or expression of something. Making a painting of a photograph is not a copyright violation. Same idea, different expression. Patents are for protecting ideas. Trademarks are for protecting brands. Some companies have managed to trademark certain color codes even, which is controversial.
There's a lot of legal history for interpretation of what is and isn't "fair use" under copyright of course. It gets much more complicated if you also consider international law and how copyright works in different countries. But people being able to make reasonable use of copyrighted material always was essential to the notion of having it to begin with.
The reason we can have music that uses samples from other people's music without that being a copyright violation is exactly this fair use. In the same way, you can quote from books and create funny memes based on movie fragments. Or create new theater plays, movies, etc. reinterpreting works of others. All legal, up to a point. If you copy too much it stops being fair use and starts being plagiarism.
With software copyright violations, you have to prove that substantial parts of the software were lifted verbatim. Lawyers and judges look at this in terms of how they would apply it to a plagiarism case. Literally - software doesn't get special treatment under copyright. Copyright long predates the existence of software and computers and did not change in any material way after that was invented.
anonnon | 14 hours ago
Has anyone else lost almost all respect for Antirez because of stuff like this?
eduction | 14 hours ago
Here we see three engineers writing — at length! — about a hugely complicated matter of law.
No one outside your bubble cares what you think. You are unqualified and your opinions irrelevant. You might as well be debating open heart surgery techniques.
waterproof | 14 hours ago
xbar | 14 hours ago
vbarrielle | 9 hours ago
jongjong | 14 hours ago
Already, the IP protections which exist for software suck. Patents are expensive and you can't even use them for software most of the time anyway. Copyright doesn't protect innovative ideas or architectures; if someone can just copy your code, mix it with a bunch of other code (no functionality changes) and then use it as their own, then copyright provides no protection at all...
If this is the case, then why should anyone bother to write quality software at all? It has no value, since anyone can just appropriate any essential functionality they didn't create for themselves. What's to prevent an employee from taking their employer's source code, rewriting it with an LLM (same functionality), and generating a clone of their company's software to use as their own to compete against their employer?
Without any IP protections, anyone who writes software becomes a complete loser. There's 0 benefit. One software developer would be doing all the work and then some marketing expert or someone with good social connections could just steal their work and sell it for billions... The software developer gets NOTHING.
niemandhier | 13 hours ago
For SQLite, the actual product is the test suite and the audits.
Sure you can use the code all you like, but you only ever get past quality gates if you use the audited and provably tested version.
This becomes even more relevant in the age of AI coding, where an agent might be able to reimplement your specs.
Keep your code open, but consider moving your tests.
primenum | 12 hours ago
motbus3 | 11 hours ago
You can copy the idea and not use the source code. This has been ruled ok many times already and would be quite dangerous if that was not the case.
But that is not what this is. To generate the new program, another program, the AI, must take an input which then becomes part of the output. It does not matter much that the result contains neither the source code itself nor a close reimplementation. One could rewrite the entire Lord of the Rings changing all the words but keeping the same elements, and it would still be plagiarism. There's no reason to think this is not the case here. It is evident that the source code was the basis; hence, this is a derived work.
pu_pe | 10 hours ago
Our legal framework wasn't built for a situation where reimplementing complex software is trivial, much less almost completely automated.
rob74 | 10 hours ago
Another question which, as far as I can see, isn't addressed in the article: even if you accept that the AI-driven reimplementation is an independent new work, can you (even as a maintainer) simply "hijack" the old LGPL-licensed project and overwrite it with your MIT-licensed code (if the new code is 98.7% different from the existing code, it's essentially overwriting)? You're free to start a new MIT-licensed project with your reimplementation, but putting the new code into the old project like some kind of cuckoo's egg seems wrong to me...
wiz21c | 9 hours ago
If you are 50 years old or more, the computing you were born with (you own the computer, you own the programs) will be gone. Copyleft only makes sense if you own the computer.
That makes me sad.
krater23 | 8 hours ago
orthoxerox | 7 hours ago
spiffyk | 5 hours ago
pie_flavor | 6 hours ago
jFriedensreich | 6 hours ago
youknownothing | 5 hours ago
jpauline | 5 hours ago
waffletower | 4 hours ago