If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers this curriculum in a lot more depth, introduces a lot of theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!
And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!
I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.
But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?
This is about learning concepts, and the rest of this is mostly moot.
On the pedantic or wrong notes--What is the documented cut-off for a "large" language model? Because GPT-2 was and is described as a "large" language model. It had 1.5B parameters. You can just about get a consumer GPU capable of training that for about $400 these days.
Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Conversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?
In my own very humble opinion, it becomes "Large" when it's out of reach of non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).
And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
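Rough numbers, for anyone who wants to sanity-check that (a back-of-envelope sketch: the 6·N·D rule is the usual approximation for training FLOPs, and the throughput figures are my guesses, not benchmarks):

    # training compute ~= 6 * params * tokens (standard approximation)
    params = 10e6        # a ~10M-param toy GPT like the one in the article
    tokens = 1e9         # assumed: ~1B training tokens
    flops  = 6 * params * tokens              # ~6e16 FLOPs

    cpu = 5e11           # guess: ~0.5 TFLOP/s sustained on a many-core CPU
    gpu = 5e13           # guess: ~50 TFLOP/s on a modern training GPU

    print(f"CPU: {flops / cpu / 3600:.0f} h")   # ~33 hours
    print(f"GPU: {flops / gpu / 3600:.1f} h")   # ~0.3 hours

Scale params up to GPT-2's 1.5B and the CPU estimate blows out to months.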
Calling anything "large" in computing is problematic since hardware keeps improving. GPT-1 was an LLM in 2018 and had 117M parameters; when did it stop being large?
GPT would have been a better term than LLM, but unfortunately became too associated with OpenAI. And then, what about non-transformer LLMs? And multimodal LLMs?
Maybe we should just give up, shrug and call it "AI".
I'm not sure. Microsoft calls Phi-4 a small language model, so the distinction is considered meaningful to some people working in the space. My own view is that the term "LLM" implies something about the capabilities of the model in 2026. Maybe there's not a hard definition of the term, but whatever the definition is, the model in the article wouldn't make it.
> Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Conversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?
Sure, we could do it like we did radio frequencies! Most of what we use are "High Frequency" and above... Very High Frequency, Ultra High Frequency, Super High Frequency, Extremely High Frequency.
> In my own very humble opinion, it becomes "Large" when it's out of reach of non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).
So the definition shifts over time based on the market availability of RAM? And can also go backwards? I can't really see anyone bothering to look up the state of the GPU market in order to determine correct terminology whenever they want to talk about this stuff (or interpret old comments, or...).
That also decouples the terminology from the actual capabilities, which is what people are generally more interested in. GPT-3 was a "large" language model at the time. Meanwhile the seemingly much more capable Gemma 4 would have been a large language model back when GPT-3 was in use, but isn't a large language model right now.
I kinda question the arbitrary line drawn here too--32GB VRAM? Where I am that's a ~$5-6k problem. I'm not sure I'd call that a "consumer" product any more than the $20k data center cards regardless of the OEM intent, but we could argue semantics on that one too.
Fundamentally, defining it this way just seems kind of... useless? It's borderline a meaningless modifier already. This just defines it in a way that's so complex to use or interpret that it's just meaningless in a different way.
For what it's worth, I'd vote to use "large" to mean "big enough to be general purpose", differentiating it from the small, specialized models that came before.
> And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
Yeah, was mostly being silly--tried to allude to that with the "intergenerational project" comment toward the end there.
Though I _did_ try doing some inference on CPU, which is how I found out that these Xeons I have don't implement AVX512. Surprisingly Gemma 4 (2B) was able to spit out a solid 13-14 tok/s! Was expecting more like... 0.13.
Good point, and I'm actually not sure that there is a clear dividing line. I expect that once we achieve capable world models and are able to analyze their internals, we'll find that the prediction mechanisms for purely physical and for verbal/behavioral responses to the agent's actions are at least partially colocated.
As a particular motivation for my intuition: I expect we were under evolutionary pressure to adapt our defense mechanisms for predicting the movements of predators and prey to handle human opponents.
...nanoGPT targets reproducing GPT-2 (124M params) and covers a lot of ground. This project strips it down to the essentials and scales it to a ~10M param model that trains on a laptop in under an hour...
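If anyone's wondering where a figure like ~10M comes from, most of a small GPT's parameter count falls out of a short formula. A sketch (the config values are illustrative guesses, not the project's actual hyperparameters, and biases/layer norms are ignored):

    def gpt_params(n_layer, d_model, vocab, seq_len):
        embed = vocab * d_model + seq_len * d_model  # token + position embeddings
        attn  = 4 * d_model ** 2                     # Q, K, V and output projections
        mlp   = 8 * d_model ** 2                     # two linear layers, 4x expansion
        return embed + n_layer * (attn + mlp)

    print(gpt_params(n_layer=6, d_model=320, vocab=8192, seq_len=256))  # 10076160, ~10M

Shrinking d_model or the vocabulary is the quickest way to move that number down.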
It was not my best (nor normal) behavior, but the point in this case is that the OP offered very little in his rebuttal. A more contextualized reply would have improved mine as well. I actually believe the person that published this LLM course on GitHub works at ElevenLabs, as Google shows. So the reply could have been: "Are you sure? I googled it and apparently he works for ElevenLabs". That would have triggered a different reply. So I was not polite enough, and I said sorry, but given the exchange, saying "google it" was not terrible; it was exactly how I thought I had found it (I googled the wrong name, but citing MLX, plus X, and Google returned the wrong result). So it was a matter of "this is how I did it".
Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it is a good problem to have, having to decide which learning resource to use.
I did it back in the day when fast.ai was relatively new, with ULMFiT. This must have been when BERT was SOTA. The architecture allows you to train a base and specialize it with a head. I used the entire Wikipedia for the base and then some GBs of tweets I had collected through the firehose. I had access to a lab with 20 game dev computers, which must have been roughly RTX 2080s. One training cycle took about half a day for the tokenized Wikipedia, so I hyperparameter-tuned by running a different setting on each computer and then moving on with the winner as the starting point for the next day. It was always fun to come to work the next morning and check the results.
The engineering was horrible and very ad-hoc but I learned a lot. Results were ok-ish (I classified tweets) but it gave me a good perspective on the sheer GPU power (and engineering challenges) one would need to do this seriously. I didn't fully grasp the potential of generating output but spent quite some time chuckling at generated tweets (was just curious to try it).
I know it's a bit of a joke, but "I Built a Neural Network from Scratch in SCRATCH" gave me, a complete outsider, a lot of insight into how neural networks work.
I would start with linear algebra, some calculus and statistics, and understand how a neural network works - which really is just one type of ML - then learn the basics of CNNs and RNNs, then transformers and LLMs.
But that is just me. I think it is more useful to understand the how and whys before training an LLM.
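For the "how and whys", a network small enough to hold in your head goes a long way. A minimal sketch (NumPy only, backprop written out by hand) of a 2-layer net learning XOR:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
        d = (p - y) / len(X)                     # grad of BCE loss wrt logits
        dW2 = h.T @ d; db2 = d.sum(0)            # chain rule through layer 2
        dh = d @ W2.T * (1 - h ** 2)             # tanh derivative
        dW1 = X.T @ dh; db1 = dh.sum(0)          # chain rule through layer 1
        for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
            param -= 0.5 * grad                  # plain gradient descent

    print(p.round(2).ravel())  # should end up near [0, 1, 1, 0]

Once every line of that makes sense, CNNs, RNNs and transformers are largely the same machinery with different wiring.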
I'm not sure using pytorch counts as "from scratch" anymore. I'm not saying you should avoid the stdlib or anything crazy, but at the point where you're pulling in for-purpose libraries it really doesn't seem like "from scratch" to me.
Can anyone suggest or come up with viable "use cases" for a custom LLM like this? I wouldn't mind giving it a try, but ideally I'm looking for something that is not just a toy.
> A hands-on workshop where you write every piece of a GPT training pipeline yourself, understanding what each component does and why.
I see torch in the dependencies, so most likely tensors and backpropagation are not implemented but rather taken for granted. Does it count then as writing "from scratch"?..
I did something similar (in Rust, AI-assisted), but I restricted myself to not use any dependencies, only the standard library. As a result, I had to implement much more: a tensor design, a kernel concept, a simple gradient-descent optimizer, even a custom JSON parser, CPU data-parallelism abstractions similar to rayon, etc. It was quite fun when I got everything wired and working - soo sloooow, but working.
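For anyone curious what the smallest core of that looks like, here's a scalar reverse-mode autodiff sketch (micrograd-style, in Python for brevity; not the commenter's actual Rust design, just the idea):

    class Value:
        def __init__(self, data, parents=()):
            self.data, self.grad = data, 0.0
            self._parents, self._backward = parents, lambda: None

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def backward():
                self.grad += other.data * out.grad   # d(a*b)/da = b
                other.grad += self.data * out.grad   # d(a*b)/db = a
            out._backward = backward
            return out

        def backprop(self):
            order, seen = [], set()
            def visit(v):                  # topological sort of the graph
                if v not in seen:
                    seen.add(v)
                    for parent in v._parents:
                        visit(parent)
                    order.append(v)
            visit(self)
            self.grad = 1.0                # seed d(out)/d(out) = 1
            for v in reversed(order):      # chain rule, back to front
                v._backward()

    a, b = Value(3.0), Value(4.0)
    c = a * b
    c.backprop()
    print(a.grad, b.grad)  # 4.0 3.0

Add __add__, a couple of activations and an optimizer loop, and you've rebuilt the kernel of what torch.autograd does for you.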
baalimago | a day ago
I doubt you have a machine big enough to make it "Large".
improbableinf | a day ago
And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.
utopiah | a day ago
I'm not saying it's worth it but you don't need to buy a GPU yourself to be able to train.
lynx97 | a day ago
Runs on a Blackwell 6000 Max-Q, using 86GB VRAM. Training supposedly takes 3h40m.
thrww26 | 21 hours ago
If you want to be snarky, it helps if you are right.
Maxatar | 18 hours ago
He could have done that initially instead of saying "Google the name of the author."
JoeDaDude | a day ago
[0] https://github.com/rasbt/LLMs-from-scratch
[1] https://www.manning.com/books/build-a-large-language-model-f...
[2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...
rithdmc | a day ago
https://www.youtube.com/watch?v=5COUxxTRcL0
y42 | a day ago
A series of Jupyter notebooks explaining the whole machine-learning mechanism from the beginning:
https://github.com/nickyreinert/DeepLearning-with-PyTorch-fr...
and of course also how to build an LLM from scratch:
https://github.com/nickyreinert/basic-llm-with-pytorch/blob/...