I think this mixes up the 'how' with the 'why.' FOSS isn't the end in itself; for most people it's just the tool that lets us work together, share what we've built, and get something back from the community.
If this is suddenly being weaponised against us, I don't see how that's not a problem.
For a lot of people, FOSS is also very much the why. It’s not just a practical tool—it represents core principles like freedom, transparency, and collaboration. Those values are the reason many contribute in the first place.
Emphasis on the freedom, especially the freedom to use by anyone for any purpose.
If it took some people in the FOSS space this long to realize that it also includes people, companies, or purposes they disagree with, then I don't know what to tell them.
You are correct, but in the context of free software, the FSF has been explicit about this ("The freedom to run the program as you wish, for any purpose"). Publishing software under a FOSS license implies that you agree with this definition of freedom.
That's like saying "I have the freedom to kill you".
Saying that because you created something, you reserve the 'freedom' to limit what everyone else does with it really doesn't fall under the word 'freedom' at all.
The interpretation is simple and the complete opposite of "I have the freedom to kill you".
The software creator (human or AI) must give the user of its software the same freedoms it has received.
If it has received the freedom to view the original, readable, source code, then users should have the freedom to view the original, readable, source code.
If it has received the freedom to modify the source code, then users should have the freedom to modify the source code.
Etc.
It's not hard to follow for people who want to do the moral thing.
It's VERY hard to follow for people who want to make money (and ideally lots of it, very quickly).
Have you actually read a Free/Open-Source license? Take, for example, the MIT[1] license:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software [...] to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software [...]
Or the FSF's definition[2] of Free Software
The freedom to run the program as you wish, for any purpose (freedom 0).
Or the OSI's definition[3] of open source.
5. No Discrimination Against Persons or Groups
6. No Discrimination Against Fields of Endeavor
It's almost as if this concept is at the very core of FOSS.
If you grant that the people weaponizing code are not honest actors, then I, as a FOSS producer, am unworried. There may not be a lot of people out there able to use my code compared to LLMs scraping it, but I'm giving a leg up to other humans trying to do what I do.
If what I'm doing is interesting or unusual, LLMs will firstly not recognize that it's different, secondly will screw up when blindly combining it with stuff that isn't different, and thirdly if it's smart enough to not screw that up, it will ignore my work in favor of stealing from CLOSED source repos it gains access to, on the rationale that those are more valuable because they are guarded.
And I'm pretty sure that they're scraping private repos already because that seems the maximally evil and greedy thing to do, so as a FOSS guy I figure I'm already covered, protected by a counterproductive but knowingly evil behavior.
These are not smart systems, but even more they are not wise systems, so even if they gain smarts that doesn't mean they become a problem for me. More likely they become a problem for people who lean on intellectual property and privacy, and I took a pretty substantial pay cut to not have to lean on those things.
I think you'll find, especially within the tech community, that people struggle with purity and semantics. They see that supporting and promoting FOSS is to be okay with its use for war, oppression, or whatever else, and they perform whatever mental gymnastics they need to just not care or to not see themselves as promoting bad things. They will argue about what "free and open" means and get mixed up in definitions, political alignments, etc.
It is pretty obvious to me that being blasé about whoever is using FOSS for adversarial reasons is not very "open" or "free". Somewhere in the thread there is an argument about the paradox of intolerance, and I don't really care to argue with people on the internet about it because it is hard to assume the debate is in good faith.
My point is this: throw away all your self-described nuance and ask yourself whether you think any malicious, war-mongering, authoritarian, or hyper-capitalist state would permit a free and open-source software environment. If the objective of a business, government, or billionaire is power, control, and/or exclusivity then, well, your lofty ideals behind FOSS have completely collapsed.
No, I am not. Your response proves my point with regard to getting bogged down in semantics. In a nutshell, my point is that if we do not care, or do nothing, when it comes to malicious use of FOSS, we very well may lose FOSS, or at least the ability to develop in a FOSS environment. It is the paradox of intolerance in a different flavor.
Nothing wrong with a GPL-like viral license for the AI era.
Training on my code / media / other data? No worries, just make sure the weights and other derived artifacts are released under a similarly permissive license.
Wouldn't you want the code generated by those models to be released under those permissive licenses as well? Is that what you mean by other derived artifacts?
If model training is determined to be fair use under US copyright law—either legislated by Congress or interpreted by Federal courts—then no license text can remove the right to use source code that way.
RMS is probably well behind the technical news at this point. I mean, he's surfing the web via an email summary of some websites. Even if he doesn't approve of how the internet is evolving, he can't really keep up with technology if he doesn't "mingle".
He's also 72, we can't expect him to save everyone. We need new generations of FOSS tech leaders.
I am gen-z and I am part of the FOSS community (I think), and one of the issues with new generations of FOSS tech leaders is how hard it is, even if one tries.
Something about Richard Stallman really is out of this world, in the way he made people care about open source in the first place.
I genuinely don't know how people can replicate it. I even tried and went through such a phase once, but the comments on Hacker News weren't really helpful back then.
As much as RMS meant for the world, he’s also a pretty petty person. He’s about freedom, but mostly about user freedom, not creators’ freedom. I also went through such a phase, but using words like “evil” is just too black and white. I don’t think he is a nice person to be around, judging from some podcasts and videos.
If there is one thing Stallman knows well, it is the way he uses words, and I can assure you that if he calls something "evil", that is exactly the word he meant to use.
> user freedom, not creators freedom
In his view users are the creators and creators are the users. The only freedom he asks you to give up is the freedom to limit the freedom of others.
RMS asks you to give something up: your right to share a thing you made under your own conditions (which may be conditions even the receiving party agrees to; nobody is forced in this situation), and then he calls that evil. I think that is wrong.
I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.
Again, imho the GPL is a blessing for humanity, and bless the people that choose it freely.
You can follow him on https://stallman.org/
What is he doing? I believe he is still giving talks and taking stances on current-day political issues.
Additionally, I believe the last few years were quite turbulent, so I assume he is taking life at his own pace.
Well, I would say it should be like that already, and no new license is needed. Basically, if an LLM was ever based on GPL code, its output should also be GPL licensed. As simple as that.
That is a complete fool's errand. If it ever passed, it would just mean the death of open-source AI models. All the big companies would just continue to collect whatever data they like, license it if necessary, or pay the fine if illegal (see Anthropic paying $1.5 billion for books), while every open-source model would be starved for training data within its self-enforced rules, and easy to shut down if an incorrectly licensed bit ever slipped into the model.
The only way forward is the abolishment of copyright.
I don't follow. If the model was open-sourced under this GPL-like license (or a compatible license), then it would follow the GPL-like license. If the model was closed, it would violate the license. In other words, it would not affect open-source models at all.
Similarly, I could imagine carving out an exception for training on copyrighted material without a license, as long as the resulting model is open-sourced.
> If the model was closed, it would violate the license.
Training is fair use. The closed models wouldn't be impacted. Even if we assume laws get changed and lawsuits happen, they just get settled, and the closed-source models would progress as usual (see Bartz v. Anthropic).
Meanwhile if somebody wants to go all "GPL AI" and only train their models on GPL compatible code, they'd just be restricting themselves. The amount of code they can train on shrinks drastically, the model quality ends up being garbage and nothing was won.
Further, assuming laws got changed, those models would now be incredibly easy to attack, since any slip-up in the training means the models need to be scrapped. Unlike the big companies with their closed models, open-source efforts do not have the money to license data nor the billions needed to settle lawsuits. It would mean the end of open models.
Licenses like GPL are built on top of an enforcement mechanism like copyright. Without an enforced legal framework preventing usage unless a license is agreed to, a license is just a polite request.
Essentially, LLMs are recontextualizing their training data. So on one hand, one might argue that training is like a human reading books and then inference is like writing something novel, (partially) based on the reading experience. But the contract between humans considers it plagiarism when we recite some studied text and then claim it as our own. So, for example, books attribute citations with footnotes.
With source code, we used to either reuse a library as-is, in which case the license terms would apply, or write our own implementation from scratch. While this LLM recontextualization purports to be like the latter, it is sometimes evident that the original license, or at least some attribution, comment, or footnote, should apply. If only to help with future legibility and maintenance.
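If attribution is to travel with generated code, one lightweight convention would be an SPDX-style file header. This is a hypothetical sketch: `SPDX-License-Identifier` is an established convention, but the `Derived-From` and `Generation-Note` fields here are illustrative inventions, not any real standard.

```python
# SPDX-License-Identifier: GPL-3.0-or-later
# Derived-From: https://example.org/original-project (GPL-3.0-or-later)
# Generation-Note: portions adapted with LLM assistance; see the NOTICE file.

# A machine-readable mirror of the header, so tooling could audit provenance.
ATTRIBUTION = {
    "license": "GPL-3.0-or-later",
    "derived_from": "https://example.org/original-project",
}
```

The comment lines carry the attribution for human readers; the dictionary is one way a build step could check it automatically.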
> But the part about FOSS being used in a project not aligned with the creator's values seems hypocritical
I agree with you.
Imagine a parallel Earth where there was a free OS that the majority in the world used called GNU/Felix.
Felix (it/its), who wrote GNU/Felix and who was the project’s strong but kind leader, one day had a head injury that somehow decreased its empathy but raised its IQ.
Subordinates of Felix on the council of leadership noticed that it was adding features that would track all user data to use in some nefarious plan.
In this case, most would agree that for both the freedom and good of all, Felix should no longer lead this effort.
However, they would want to be sure that even the Will Bates’ great company Bikerosoft didn’t lead the project either, because despite its wonderful and ubiquitous Bikerosoft Office apps and Ezure cloud tools and infrastructure, it was a profit-based company.
Reading this felt like the official obituary for the 90s techno-optimism many of us grew up on.
The "end of history" hangover is real. We went about building the modern stack assuming bad actors were outliers, not state-sponsored standard procedure. But trying to legislate good use into licenses? I don't know how you would realistically implement it and to what extent? That solution implies we have to move toward zero-trust architectures even within open communities.
As an example: formal proofs and compartmentalization are unsexy but they're a solid way we survive the next decade of adversarial noise.
I remember reading a quote somewhere that stuck with me. Paraphrasing, "If the architecture of my code doesn't enforce privacy and resistance to censorship by default, we have to assume it will be weaponized".
I am out of practical ideas; lots sound good on paper and in theory. It's a bit sad, tbh. Always curious to hear more on this issue from smarter people.
It's also questionable to what extent restrictive licenses for open-source software remain relevant in the first place, as you can now relatively easily run an AI code generator that just imitates the logic of a FOSS project with newly generated code, so that you don't need to adhere to a license's restrictions at all.
> That solution implies we have to move toward zero-trust architectures even within open communities
Zero trust cannot exist as long as you interact with the real world.
The problem wasn't trust per se, but blind trust.
The answer isn't to eschew trust (because you can't) but to organize it with social structures, like what people did with “chain of trust” certificates back then before it became commoditized by commercial providers and cloud giants.
The Internet was the “Wild West”, and I mean that in the most kind, brutal, and honest way, both like a free fantasy (everyone has a website), genocide (replacement of real world), and an emerging dystopia (thieves/robbers, large companies, organizations, and governments doing terrible things).
Which, if you think about it, is a mostly uplifting timeline.
Back in 1770 there were basically 0 democracies on the planet. In 1790 there were 2. Now there are about 70 with about 35 more somewhere in between democracy and autocracy. So most of the world's population is living under a form of democracy. I know that things are degrading for many big democracies, but it wouldn't be the first time (the period between WW1 until the end of WW2 was a bad time for democracies).
I have no idea how we get from here to a civilized internet, though.
How does one make sure the implementation is sufficient and complete? It feels like assuming total knowledge of the world, which is never true. How many false positives and false negatives do we tolerate? How does it impact a person?
I'm not sure. We can use LLMs to try out different settings/algorithms and see what it is like to have it on a social level before we implement it for real.
Perhaps, but I am not entirely optimistic about LLMs in this context. Then again, the freedom to do this, and then actually doing it, might make a dent after all; one can never know until they experiment, I guess.
Fair, I don't know how valuable it would be. I think LLMs would only get you so far. They could be tried in games or small human contexts. We would need a funding model that rewarded this, though.
Things like that should not be handled at the software level; you will always lose and run out of resources. You basically have to force politicians (fat chance).
Politicians aren't generally leaders, but rather followers. To force politicians to do something, lead where people follow you. But of course, paradoxically, this will by definition make you a practitioner of politics yourself... To quote from The Hunt for Red October, "Listen, I'm a politician, which means I'm a cheat and liar. When I'm not kissin' babies I'm stealin' their lollipops. But! It also means I keep my options open."
> If the architecture of my code doesn't enforce privacy and resistance to censorship by default
which is impossible.
- No code is feasibly guaranteed to be secure
- All code can be weaponized, though not all feasibly; password vaults, privacy infrastructure, etc. tend to show holes.
- It’s unrealistic to assume you can control any information; case in point, the garden of Eden test: “all data is here; I’m all-powerful and you should not take it”.
I’m not against regulation and protective measures. But you have to prioritize carefully. Do you want to spend most of the world’s resources mining cryptocurrency and breaking quantum cryptography, or do you want to develop games and great software that solves hunger and homelessness?
No code architecture will enforce privacy or guarantee security.
Some code architectures make privacy and security structurally impossible from the beginning.
As technologists, we should hold ourselves responsible for ensuring the game isn't automatically lost before the software decisions even leave our hands.
I don't get why you conflate privacy and resistance to censorship.
I think privacy is essential for freedom.
I'm also fine with lots of censorship, on publicly accessible websites.
I don't want my children watching beheading videos, or being exposed to extremists like (as an example of many) Andrew Tate. And people like Andrew Tate are actively pushed by YouTube, TikTok, etc. I don't want my children to be exposed to what I personally feel are extremist Christians in America, who infest children's channels.
I think anyone advocating against censorship is incredibly naive about how impossible it's become for parents. Right now it's a binary choice:
1. No internet for your children
2. Risk potential, massive, life-altering harm, as parental controls are useless, half-hearted, or non-existent. Even companies like Sony or Apple make it almost impossible to have a choice in what your children can access. It's truly bewildering.
And I think you should have to identify yourself. You should be liable for what you post to the internet, and if a company has published your material but doesn't know who you are, THEY should be liable for the material published.
Safe harbor laws and anonymous accounts should never have been allowed to co-exist. It should have been one or the other. It's a preposterous situation we're in.
A hangout for 11-16 year olds often seems to devolve into a bunch of kids all watching their own phones. It's really depressing to watch, though they do seem to play as well.
We have had several arguments about no social media, and we're only about 1 out of 6-ish years into the too-naïve-to-look-after-yourself-on-the-internet phase, and the eldest already figured out how to download some chat app I'd never even heard of without permission.
Voluntary “censorship” (not being shown visceral media you don’t ask for) and censorship for children are very important.
Bad “censorship” is involuntarily denying or hiding from adults what they want to see. IMO, that power tends to get abused, so it should only be applied in specific, exceptional circumstances (and probably always temporarily, if only because information tends to leak, so there should be a longer fix that makes it unnecessary).
I agree with you that children should be protected from beheading and extremism; also, you should be able to easily avoid that yourself. I disagree in that, IMO, anonymous accounts and “free” websites should exist and be accessible to adults. I believe that trusted locked-down websites should also exist, which require ID and block visceral media; and bypassing the ID requirement or filter (as a client) or not properly enforcing it (as a server operator) should be illegal. Granting children access to unlocked sites should also be illegal (like giving children alcohol, except parents are allowed to grant their own children access).
This is a horrible comment and is exactly what we're trying to avoid on HN. The guidelines make it clear we're trying for something better here. HN is only a place where people want to participate because enough people take care to make their contributions far more substantive than this. Please do your part by reading the guidelines and making an effort to observe them in future.
These ones in particular are relevant:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Eschew flamebait. Avoid generic tangents. Omit internet tropes.
Please don't use Hacker News for political or ideological battle. It tramples curiosity.
Yeah, reminds me of the "Security" xkcd (https://xkcd.com/538/) - a threat from a good ol' 5-dollar wrench defeating state-of-the-art encryption.
Never underestimate how state actors can use violence (or merely the threat of it) to force people to do things. The only way to respond to that is not through code or algorithms or protocols, but through political action (whether violent or non-violent).
Soatok Dreamseeker is working on a more xkcd-538-proof system: https://soatok.blog/2025/08/09/improving-geographical-resili... and https://github.com/soatok/freeon. Fundamentally, though, it's built on the assumption that geographical resilience is possible – that a group can be distributed such that no one organisation can perform $5-wrench attacks against enough of them to break the cryptography. (Given that the attack's impossible, a sensible attacker would avoid tipping their hand by attempting it, thus sparing contributors from violence.)
Shamir's secret sharing. In that scenario, capturing me alone isn't going to get you anything even if I divulge my piece of the secret. You'd still need to find out who has the other pieces, find them, and convince them to divulge as well.
Maybe there's 3 of us, and the 4th part of the password/secret/private key is on a server of mine somewhere. If I don't check in for x duration, it wipes itself.
Yeah it means my Monero is gone now, but at least my attacker didn't get it.
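The k-of-n splitting described above can be sketched in a few lines. This is a minimal, illustrative Shamir's secret sharing implementation (hand-rolled for clarity; a real deployment should use a vetted library, not homemade field arithmetic):

```python
# Illustrative Shamir's secret sharing over a prime field: split a secret
# into n shares so that any k reconstruct it, while k-1 reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def split_secret(secret, k, n):
    """Sample a random degree-(k-1) polynomial with constant term `secret`
    and hand out its value at x = 1..n as the shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule, everything mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# "Maybe there's 3 of us, and the 4th part is on a server": k=3 of n=4.
shares = split_secret(123456789, k=3, n=4)
assert reconstruct(shares[:3]) == 123456789  # any three shares suffice
assert reconstruct(shares[1:]) == 123456789  # a different three also work
```

With this layout, a wrench attack on one holder yields one point on the polynomial, which constrains nothing about the constant term.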
You can design in ways such that there isn't a password to give up in the first place. Maybe the key is distributed and you need all x number of people to decrypt. Sure, maybe the state can capture everyone but it becomes significantly harder than targeting a single person and threatening them with torture.
Combine that with rate limiting and a dead man's switch.
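A dead man's switch like the one mentioned is also simple to sketch. Here is a hypothetical server-side watchdog; the file names and the 7-day deadline are made-up illustration, not any real tool's behavior:

```python
# Hypothetical dead man's switch: a watchdog that wipes a stored key share
# if the owner hasn't checked in within the deadline.
import os
import time

CHECKIN_FILE = "last_checkin.txt"  # touched by the owner's periodic check-in
SHARE_FILE = "key_share.bin"       # the extra secret share held by the server
DEADLINE = 7 * 24 * 3600           # seconds of silence before wiping

def check_in():
    """Record that the owner is still alive and free, as a timestamp."""
    with open(CHECKIN_FILE, "w") as f:
        f.write(str(time.time()))

def watchdog_tick(now=None):
    """Run periodically (e.g. from cron); returns True if the share was wiped."""
    now = time.time() if now is None else now
    try:
        with open(CHECKIN_FILE) as f:
            last = float(f.read())
    except FileNotFoundError:
        last = 0.0  # never checked in: treat as overdue
    if now - last > DEADLINE and os.path.exists(SHARE_FILE):
        os.remove(SHARE_FILE)  # irreversible: without this share, k-of-n fails
        return True
    return False
```

Combined with a threshold scheme, losing this share after a missed check-in makes the secret unrecoverable, which is exactly the trade-off described above: the Monero is gone, but so is the attacker's prize.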
Hard power still matters. It gets access to things like fiber closets, upstream dependencies, subtle flaws in encryption schemes that take years to figure out, information linking your networks, and more than I can think of.
Cute tech can slow them down until they go through the effort of controlling most of TOR's exit nodes and point the Eye of Sauron at you.
Power. Real power. The power to kill you, take your property, harm your family, tell lies about you on the news, etc.
I've always been surprised by the naivety of tech people with respect to this question. The only possible solution to power is power itself. Software can be a small part of that, but the main part of it is human organization: credible power to be used against other organized holders of power. No amount of technology will let you go it alone safely. At best, you may hope to hide away from power with the expectation that its abuse will just skip over you. That is the best you could hope for if all you want are software solutions.
Text files don't have power. Appealing to old power institutions to give them power is not the way to create new power either. Legacy systems with entrenched power have tended to insulate those at the top, killing social mobility and enabling those institutions to act against broad interests.
Open source has always been a force of social mobility. You could learn from reading high quality code. Anyone could provide service for a program. You could start a company not bound by bad decision makers who held the keys.
Open source always outmaneuvers inefficiency. Those who need to organize are not beholden to legacy systems. We need technically enabled solutions to organize and create effective decision making. The designs must preserve social mobility within, to avoid becoming what they seek to replace. I'm building technically enabled solutions for this at https://positron.solutions
This is the real issue. FOSS was born out of a utopian era, from the 1960s to the 2000s, when the US was still a beacon of hope. That is fundamentally impossible in today's world of ultra-shark-world-eat-you capitalism and a global race to the bottom.
If it didn't already exist, FOSS couldn't start and survive today; even now, its survival is in jeopardy.
FOSS was born because the cost of sharing information rapidly approached nothing. BBS and Usenet were loaded with shared software, simply because it was easy to share and there was incredible demand for it.
FOSS doesn't need the US or 1980s counterculture to succeed. It just needs cheap disk space and someone willing to share their code. The price of storage and internet continues to fall, and I think FOSS will be fine as long as that continues.
Sure, and there will continue to be hackers that love programming and want to share.
But the article was kind of about how to control access, the licensing, and bad actors. And that is out the window: anybody can steal your code regardless of the license, and North Korea can use it in missiles if they want; nothing can stop it if it is openly shared.
"As an example: formal proofs and compartmentalization are unsexy but they're a solid way we survive the next decade of adversarial noise."
I think you are on to something there. Noise is really signal divorced from tone. Our current consensus protocols are signal based. They demonstrate control, but not rightful ownership. Pairing a tone keypair with a matching signal keypair in a multisig configuration would be compatible with current networks, but also allow a bottom-up federated trust network to potentially emerge?
> NGI Zero, a family of research programmes including NGI0 Entrust, NGI0 Core and NGI0 Commons Fund, part of the Next Generation Internet initiative.
with the Next Generation Internet thing at the end receiving money/financing from the political supra-state entity called the EU[1]. So I guess said speech-holder is not happy because political entities which are seen by the EU as adversarial are also using open-source code? Not sure how war plays into this, as I’m sure he must be aware of the hundreds of billions of euros the EU has allocated for that.
One way war plays into FOSS is that enemy nations are no longer supposed to be contributing to the same projects, being from nationality XYZ is now as relevant as programming skills one has to offer, likewise open source software from specific countries might no longer be allowed.
I imagine anytime the people that control the war resources decide to use them, there are plenty of other people not interested or involved in the destruction. If the UK declares war on an African nation tomorrow, since the US is an ally, would you say people in the US should disallow devs from the target African nation from contributing to their projects?
Michiel is indeed one of the driving forces behind NLNet's NGI0 program. That said, just because they're distributing money they received from the EU, that doesn't mean that they're intimately aware of the full EU budget.
> that doesn't mean that they're intimately aware of the full EU budget.
There's no "intimate" knowledge required in order to be aware of the EU spending tens to hundreds of billions of euros on the war close to its Eastern border, it has been one of the main topics of discussion in the media for a good time now. Unless this speech holder has lived under a rock since February 2022, which doesn't seem to be the case (he was the one mentioning the "war" thing).
I suppose this is relevant to a subset of HN audience who attend FOSDEM. Even the talk abstract is worth discussion as it highlights an important side effect of FOSS goals and the current state of the world.
>AI in its current form has no actual sense of truth or ethics.
This is untrue. It does have a sense of truth and ethics. Although it does get a few things wrong from time to time, you can't reliably get it to say something blatantly incorrect (at least with thinking enabled). I would say it is more truthful than any human on average. Ethically, I don't think you can get it to do or say something unethical.
The burden of proof lies on the one making a positive assertion. Why do you think it's possible for "AI" to be either of those things at this point in time (let alone whether it's possible at all)?
"if AI tells the truth, please give an example of AI telling the truth blatantly."
You can't. An LLM chatbot has no concept of truth, or reasonableness, or blatantness. You're willing to accept a distillation of tokens that may or may not form a truthful statement based on probability, with no intent or understanding behind it.
I'm not saying there's no point to this discussion, but it would bring focus if this was decoupled from the broader topic of open source.
There has never been any inherent political or economic value in open source software. Those things come from deliberate decisions by authors and users such as licensing and mass adoption.
Open source is not synonymous with the GPL and most businesses try to avoid open source software when implementing their core competency.
throwfaraway135 | 17 hours ago
But the part about FOSS being used in a project not aligned with the creator's values seems hypocritical:
IMO FOSS is a gift to humanity and as such:
"A gift should be given freely, without obligation or expectation, as a true expression of love and kindness"
[1]: https://mit-license.org/
[2]: https://www.gnu.org/philosophy/free-sw.html#four-freedoms
[3]: https://opensource.org/osd
Applejinx | 16 hours ago
If what I'm doing is interesting or unusual, LLMs will firstly not recognize that it's different, secondly will screw up when blindly combining it with stuff that isn't different, and thirdly if it's smart enough to not screw that up, it will ignore my work in favor of stealing from CLOSED source repos it gains access to, on the rationale that those are more valuable because they are guarded.
And I'm pretty sure that they're scraping private repos already because that seems the maximally evil and greedy thing to do, so as a FOSS guy I figure I'm already covered, protected by a counterproductive but knowingly evil behavior.
These are not smart systems, but even more they are not wise systems, so even if they gain smarts that doesn't mean they become a problem for me. More likely they become a problem for people who lean on intellectual property and privacy, and I took a pretty substantial pay cut to not have to lean on those things.
dannersy | 14 hours ago
It is pretty obvious to me that being blasé about anyone using FOSS for adversarial reasons is not very "open" or "free". Somewhere in the thread there is an argument about the paradox of intolerance, and I don't really care to argue with people on the internet about it because it is hard to assume the debate is in good faith.
My point is this: throw away all your self-described nuance and ask yourself whether any malicious, war-mongering, authoritarian, or hyper-capitalist state would permit a free and open-source software environment. If the objective of a business, government, or billionaire is power, control, and/or exclusivity, then your lofty ideals behind FOSS have completely collapsed.
breezykoi | 13 hours ago
dannersy | 13 hours ago
Palmik | 17 hours ago
Training on my code / media / other data? No worries, just make sure the weights and other derived artifacts are released under similarly permissive license.
breezykoi | 16 hours ago
s1mplicissimus | 16 hours ago
teekert | 15 hours ago
twoodfin | 14 hours ago
RobotToaster | 10 hours ago
At least in the US.
Quite what happens if another country ordered, say, ChatGPT to be released under the AGPL because it was trained on AGPL code, who knows.
oblio | 14 hours ago
He's also 72, we can't expect him to save everyone. We need new generations of FOSS tech leaders.
Imustaskforhelp | 13 hours ago
Something about Richard Stallman really is out of this world: he made people care about open source in the first place.
I genuinely don't know how people can replicate it. I had even tried and gone through such a phase once, but the comments on Hacker News weren't really helpful back then:
https://news.ycombinator.com/item?id=45558430 (Ask HN: Why are most people not interested in FOSS/OSS and can we change that)
teekert | 13 hours ago
trashb | 13 hours ago
> user freedom, not creators freedom
In his view users are the creators and creators are the users. The only freedom he asks you to give up is the freedom to limit the freedom of others.
teekert | 12 hours ago
I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.
Again, imho the GPL is a blessing for humanity, and bless the people that choose it freely.
trashb | 13 hours ago
[OP] maelito | 15 hours ago
m4rtink | 15 hours ago
rubymamis | 14 hours ago
grumbel | 14 hours ago
The only way forward is the abolishment of copyright.
Palmik | 10 hours ago
Similarly, I could imagine carving out an exception when training on copyrighted material without licence, as long as the resulting model is open-sourced.
grumbel | 10 hours ago
Training is fair use. The closed models wouldn't be impacted. Even if we assume laws gets changed and lawsuits happened, they just get settled and the closed source models would progress as usual (see Bartz v. Anthropic).
Meanwhile if somebody wants to go all "GPL AI" and only train their models on GPL compatible code, they'd just be restricting themselves. The amount of code they can train on shrinks drastically, the model quality ends up being garbage and nothing was won.
Further, assuming laws got changed, those models would now be incredibly easy to attack, since any slip-up in the training means the model needs to be scrapped. Unlike the big companies with their closed models, open source efforts do not have the money to license data nor the billions needed to settle lawsuits. It would mean the end of open models.
BlarfMcFlarf | 5 hours ago
gentooflux | 16 hours ago
pipo234 | 14 hours ago
Essentially, LLMs are recontextualizing their training data. On one hand, one might argue that training is like a human reading books and that inference is like writing something novel, (partially) based on the reading experience. But the convention between humans considers it plagiarism when we recite some studied text and then claim it as our own. That is why, for example, books attribute citations with footnotes.
With source code we used to either re-use a library as-is, in which case the license terms would apply, OR write our own implementation from scratch. While this LLM recontextualization purports to be like the latter, it is sometimes evident that the original license, or at least some attribution, comment, or footnote, should apply. If only to help with future legibility and maintenance.
fweirdo | 16 hours ago
I agree with you.
Imagine a parallel Earth where there was a free OS that the majority in the world used called GNU/Felix.
Felix (it/its), who wrote GNU/Felix and who was the project’s strong but kind leader, one day had a head injury that somehow decreased its empathy but raised its IQ.
Subordinates of Felix on the council of leadership noticed that it was adding features that would track all user data to use in some nefarious plan.
In this case, most would agree that for both the freedom and good of all, Felix should no longer lead this effort.
However, they would want to be sure that even Will Bates's great company Bikerosoft didn't lead the project either, because despite its wonderful and ubiquitous Bikerosoft Office apps and Ezure cloud tools and infrastructure, it was a profit-based company.
Fiveplus | 16 hours ago
The "end of history" hangover is real. We went about building the modern stack assuming bad actors were outliers, not state-sponsored standard procedure. But trying to legislate good use into licenses? I don't know how you would realistically implement it and to what extent? That solution implies we have to move toward zero-trust architectures even within open communities.
As an example: formal proofs and compartmentalization are unsexy but they're a solid way we survive the next decade of adversarial noise.
I remember reading a quote somewhere that stuck with me. Paraphrasing, "If the architecture of my code doesn't enforce privacy and resistance to censorship by default, we have to assume it will be weaponized".
I am out of ideas, practical ones, lots sound good on paper and in theory. It's a bit sad tbh. Always curious to hear more on this issue from smarter people.
elcapitan | 16 hours ago
It's also questionable to what extent restrictive licenses for open source software stay relevant in the first place, as you can now relatively easily run an AI code generator that just imitates the logic of a FOSS project, but with newly generated code, so that you don't need to adhere to the license's restrictions at all.
littlestymaar | 16 hours ago
Zero trust cannot exist as long as you interact with the real world. The problem wasn't trust per se, but blind trust.
The answer isn't to eschew trust (because you can't) but to organize it with social structures, like what people did with “chain of trust” certificates back then before it became commoditized by commercial providers and cloud giants.
pooyan99 | 16 hours ago
It’s changing but not completely.
oblio | 14 hours ago
Back in 1770 there were basically 0 democracies on the planet. In 1790 there were 2. Now there are about 70 with about 35 more somewhere in between democracy and autocracy. So most of the world's population is living under a form of democracy. I know that things are degrading for many big democracies, but it wouldn't be the first time (the period between WW1 until the end of WW2 was a bad time for democracies).
I have no idea how we get from here to a civilized internet, though.
rando77 | 16 hours ago
It would require it not being easy to farm (entropy detection on user behaviour, perhaps, and clique detection).
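The clique-detection half of that idea can be sketched in a few lines. This is purely illustrative (a brute-force toy with made-up account names, not how any real platform does it; production systems would use proper graph algorithms and behavioural features alongside it):

```python
from itertools import combinations

def farming_cliques(interactions, k):
    """Find groups of k accounts that ALL interact with one another.
    Dense mutual-interaction cliques among accounts with little other
    activity are one possible signature of a vote-farming ring.
    Brute force over all k-subsets: only suitable for small graphs."""
    adj = {}
    for a, b in interactions:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return [set(group) for group in combinations(sorted(adj), k)
            if all(v in adj[u] for u, v in combinations(group, 2))]

# Hypothetical interaction log: who upvoted/replied to whom.
log = [("ann", "bob"), ("bob", "cat"), ("ann", "cat"),  # mutual ring
       ("cat", "dan"), ("dan", "eve")]                  # normal activity
print(farming_cliques(log, 3))  # finds the one ring: ann, bob, cat
```

Real farms evade exactly this kind of check by avoiding dense mutual interaction, which is why the entropy-on-behaviour side matters too.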
loa_in_ | 14 hours ago
rando77 | 14 hours ago
Imustaskforhelp | 13 hours ago
rando77 | 13 hours ago
That is hard too though.
FpUser | 15 hours ago
conartist6 | 14 hours ago
wereknat | 15 hours ago
which is impossible.
- No code is feasibly guaranteed to be secure
- All code can be weaponized, though not all feasibly; password vaults, privacy infrastructure, etc. tend to show holes.
- It's unrealistic to assume you can control any information; case in point, the garden of Eden test: "all data is here; I'm all-powerful and you should not take it".
I'm not against regulation and protective measures. But you have to prioritize carefully. Do you want to spend most of the world's resources mining cryptocurrency and breaking quantum cryptography, or do you want to develop games and great software that solves hunger and homelessness?
RodgerTheGreat | 11 hours ago
Some code architectures make privacy and security structurally impossible from the beginning.
As technologists, we should hold ourselves responsible for ensuring the game isn't automatically lost before the software decisions even leave our hands.
mattmanser | 14 hours ago
I think privacy is essential for freedom.
I'm also fine with lots of censorship, on publicly accessible websites.
I don't want my children watching beheading videos, or being exposed to extremists like (as an example of many) Andrew Tate. And people like Andrew Tate are actively pushed by YouTube, TikTok, etc. I don't want my children to be exposed to what I personally feel are extremist Christians in America, who infest children's channels.
I think anyone advocating against censorship is incredibly naive to how impossible it's become for parents. Right now it's a binary choice:
1. No internet for your children
2. Risk potential, massive, life-altering harm, as parental controls are useless, half-hearted, or non-existent. Even companies like Sony or Apple make it almost impossible to have a choice in what your children can access. It's truly bewildering.
And I think you should have to identify yourself. You should be liable for what you post to the internet, and if a company has published your material but doesn't know who you are, THEY should be liable for the material published.
Safe harbor laws and anonymous accounts should never have been allowed to co-exist. It should have been one or the other. It's a preposterous situation we're in.
loa_in_ | 14 hours ago
mdavid626 | 14 hours ago
Doomscrolling or porn is just too "appealing" to children, like sugar. Children don't have their minds fully developed to be able to say "no" to them.
If in school everybody has a smartphone and does doomscrolling, your children will do as well. Or they'll be ostracised.
mattmanser | 14 hours ago
We have had several arguments about no social media, and we're only 1 out of 6-ish years into the too-naïve-to-look-after-yourself-on-the-internet phase, and the eldest has already figured out how to download some chat app I'd never even heard of without permission.
mattmanser | 14 hours ago
armchairhacker | 13 hours ago
Bad “censorship” is involuntarily denying or hiding from adults what they want to see. IMO, that power tends to get abused, so it should only be applied in specific, exceptional circumstances (and probably always temporarily, if only because information tends to leak, so there should be a longer fix that makes it unnecessary).
I agree with you that children should be protected from beheading and extremism; also, you should be able to easily avoid that yourself. I disagree in that, IMO, anonymous accounts and “free” websites should exist and be accessible to adults. I believe that trusted locked-down websites should also exist, which require ID and block visceral media; and bypassing the ID requirement or filter (as a client) or not properly enforcing it (as a server operator) should be illegal. Granting children access to unlocked sites should also be illegal (like giving children alcohol, except parents are allowed to grant their own children access).
welferkj | 12 hours ago
tomhow | 2 hours ago
These ones in particular are relevant:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Eschew flamebait. Avoid generic tangents. Omit internet tropes.
Please don't use Hacker News for political or ideological battle. It tramples curiosity.
https://news.ycombinator.com/newsguidelines.html
praptak | 14 hours ago
This is still techno-optimism. The architecture of your code will not do that. We are long past the limits of what you can fix with code.
The only action that matters is political and I don't think voting cuts it.
cyber_kinetist | 13 hours ago
Never underestimate how state actors can use violence (or merely the threat of it) to force people to do things. The only way to respond to that is not through code or algorithms or protocols, but through political action (whether it be violent or non-violent).
dlahoda | 12 hours ago
wizzwizz4 | 11 hours ago
some_furry | 10 hours ago
Nothing is xkcd-538-proof, in absolute terms. Violence is always possible.
But having a tool that is more resistant to authoritarian overreach by being geographically distributed does make it harder to pull these attacks off.
dlahoda | 12 hours ago
example of what is not possible to fix with code?
cyjackx | 12 hours ago
thewebguyd | 8 hours ago
Maybe there's 3 of us, and the 4th part of the password/secret/private key is on a server of mine somewhere. If I don't check in for x duration, it wipes itself.
Yeah it means my Monero is gone now, but at least my attacker didn't get it.
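The all-or-nothing split described here, where every share is required and wiping one destroys the secret, can be sketched with plain XOR. This is a hypothetical toy, not the poster's actual setup; it assumes an n-of-n scheme, and a real deployment wanting an m-of-n threshold (so losing one holder isn't fatal) would use Shamir's Secret Sharing instead:

```python
import secrets

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; ALL n are required to recover it.
    Any n-1 shares are statistically indistinguishable from random,
    so wiping one share (the dead man's switch) destroys the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)  # last share = secret XOR all random shares
    return shares

def recover_secret(shares: list[bytes]) -> bytes:
    """XOR every share back together to reconstruct the secret."""
    acc = bytes(len(shares[0]))
    for share in shares:
        acc = bytes(a ^ b for a, b in zip(acc, share))
    return acc

# Three people hold one share each; the fourth lives on the server.
seed = b"hypothetical wallet seed"
parts = split_secret(seed, 4)
assert recover_secret(parts) == seed
assert recover_secret(parts[:3]) != seed  # server wiped its share: gone
```

The security rests on the shares coming from a cryptographic RNG (`secrets`), not `random`; with a weak RNG the missing share could be predicted.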
Espressosaurus | 12 hours ago
thewebguyd | 8 hours ago
Combine that with rate limiting and a dead man's switch.
Espressosaurus | 7 hours ago
Cute tech can slow them down until they go through the effort of controlling most of TOR's exit nodes and point the Eye of Sauron at you.
nathan_compton | 12 hours ago
Power. Real power. The power to kill you, take your property, harm your family, tell lies about you on the news, etc.
I've always been surprised by the naivety of tech people with respect to this question. The only possible solution to power is power itself. Software can be a small part of that, but the main part of it is human organization: credible power to be used against other organized holders of power. No amount of technology will let you go it alone safely. At best, you may hope to hide away from power with the expectation that its abuse will just skip over you. That is the best you could hope for if all you want are software solutions.
dlahoda | 10 hours ago
some exact piece of hardware or some exact activity of power?
think of it as tdd. we check few simple exact cases before generalising.
positron26 | 12 hours ago
Text files don't have power. Appealing to old power institutions to give them power is not the way to create new power either. Legacy systems with entrenched power have tended to insulate those at the top, killing social mobility and enabling those institutions to act against broad interests.
Open source has always been a force of social mobility. You could learn from reading high quality code. Anyone could provide service for a program. You could start a company not bound by bad decision makers who held the keys.
Open source always outmaneuvers inefficiency. Those who need to organize are not beholden to legacy systems. We need technically enabled solutions to organize and create effective decision making. The designs must preserve social mobility within, to avoid becoming what they seek to replace. I'm building technically enabled solutions for this at https://positron.solutions
FrustratedMonky | 12 hours ago
This is the real issue. FOSS was born out of a utopian era in 60's-2000s' where the US was still a beacon of hope. That is fundamentally impossible in todays world of ultra-shark-world-eat-you capitalism and global race to the bottom.
If it didn't already exist, FOSS would not be able to get off the ground today, and even its continued survival is in jeopardy.
bigyabai | 10 hours ago
FOSS doesn't need the US or 1980s counterculture to succeed. It just needs cheap disk space and someone willing to share their code. The price of storage and internet continues to fall, and I think FOSS will be fine as long as that continues.
FrustratedMonky | 9 hours ago
But the article was kind of about how to control access, the licensing, and bad actors. And that is out the window, anybody can steal your code regardless of the license, and North Korea can use it in missiles if they want, nothing can stop it if it is openly shared.
AndrewKemendo | 12 hours ago
We reached the limits of societal coherence and there’s no way to bridge the gap
retrocog | 11 hours ago
I think you are on to something there. Noise is really signal divorced from tone. Our current consensus protocols are signal based. They demonstrate control, but not rightful ownership. Pairing a tone keypair with a matching signal keypair in a multisig configuration would be compatible with current networks, but also allow a bottom-up federated trust network to potentially emerge?
paganel | 16 hours ago
> NGI Zero, a family of research programmes including NGI0 Entrust, NGI0 Core and NGI0 Commons Fund, part of the Next Generation Internet initiative.
with the Next Generation Internet thing at the end receiving money/financing from the political supra-state entity called the EU [1] . So I guess said speech-holder is not happy because political entities which are seen by the EU as adversarial are also using open-source code? Not sure how war plays into this, as I’m sure he must be aware of the hundreds of billions of euros the EU has allocated for that.
[1] https://ngi.eu/
pjmlp | 16 hours ago
pluralmonad | 14 hours ago
pjmlp | 14 hours ago
Vinnl | 14 hours ago
(Disclosure: I once received NGI0 funding.)
paganel | 11 hours ago
There's no "intimate" knowledge required in order to be aware of the EU spending tens to hundreds of billions of euros on the war close to its Eastern border, it has been one of the main topics of discussion in the media for a good time now. Unless this speech holder has lived under a rock since February 2022, which doesn't seem to be the case (he was the one mentioning the "war" thing).
sebtron | 14 hours ago
hashtag-til | 14 hours ago
I suppose this is relevant to a subset of HN audience who attend FOSDEM. Even the talk abstract is worth discussion as it highlights an important side effect of FOSS goals and the current state of the world.
net01 | 12 hours ago
simianwords | 13 hours ago
This is untrue. It does have a sense of truth and ethics. It does get a few things wrong from time to time, but you can't reliably get it to say something blatantly incorrect (at least with thinking enabled). I would say it is more truthful than any human on average. Ethically, I don't think you can get it to do or say something unethical.
simianwords | 12 hours ago
GuinansEyebrows | 11 hours ago
simianwords | 10 hours ago
The lie should be a clearly blatant one, something that a reasonable person would never tell.
GuinansEyebrows | 6 hours ago
you can't. an LLM chatbot has no concept of truth, or reasonableness, or blatantness. you're willing to accept a distillation of tokens that may or may not form a truthful statement based on probability, with no intent or understanding behind it.
sublinear | 11 hours ago
There has never been any inherent political or economic value in open source software. Those things come from deliberate decisions by authors and users such as licensing and mass adoption.
Open source is not synonymous with the GPL and most businesses try to avoid open source software when implementing their core competency.
MaxBarraclough | 8 hours ago
What do you mean here? Businesses often implement their own core code, but they don't deliberately favour closed-source software.