This is my weekend project. I am building up my pattern recognition in machine learning. By that I mean: see problem X, instantly think of solution Y. The primer markdown file is the artifact of that exploration.
Read it from top to bottom, or better, have your favorite language model read it and then explore the space with a strong guided syllabus.
Framing a business problem in terms of ML is indeed important: where does classification come in, where does regression, when to use retrieval, when to use generative solutions? That would be a good section to add, IMO.
I personally think it is much more important to have strong statistical intuitions rather than intuitions about what neural networks are doing.
The latter isn’t wrong or useless. It’s simply not something a typical software engineer will need.
On the other hand, wiring up LLMs into an application is very popular and may be an engineer’s first experience with systems that are fundamentally chaotic. Knowing the difference between precision and recall and when you care about them will get you a lot more bang for your buck.
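The precision/recall distinction above can be made concrete with a small sketch (made-up labels and predictions, just to illustrate the definitions):

```python
import numpy as np

# Precision: of the items we flagged positive, how many were right?
# Recall: of the truly positive items, how many did we catch?
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # hypothetical ground truth
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical model output

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

precision = tp / (tp + fp)  # high when false alarms are costly (spam filters)
recall = tp / (tp + fn)     # high when misses are costly (medical screening)
print(precision, recall)    # → 0.75 0.75
```

Which one you optimize for depends on whether a false positive or a false negative hurts more in your application.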
I would suggest the gateway drug into ML for most engineers is something like: we have a task and it can currently be done for X dollars. But maybe we can do it for a tenth of the price with a different API call. Or maybe there’s something on Huggingface that does the same thing for a fixed hourly cost, hundreds of times cheaper in practice.
I'm just trying to develop the lens where I can see a problem and know what properties of it are meaningful from an ML standpoint.
Coming from a specific domain where I have a sharpened instinct for how things work hasn't really given me the ability to decompose problems using ML primitives. That's what I'm working on.
Just read a good textbook instead of this LLM-written stuff. For example those by Murphy or Prince or Bishop. Or one of many YouTube lecture series from MIT or Stanford. There are many primer 101 tutorials and Medium posts. But if you actually want to learn instead of procrastinating, pick up a real textbook or work through a course.
I've bounced off of many good textbooks. Even Karpathy's YouTube series was too dense for me. I'm trying to come in at a more palatable level.
This was a two day exploration where I provided the syllabus and ran through it with Claude Code, asking questions, trying to anchor it to stuff I understand well. I feel like the artifact has value.
I think chatting with an LLM alongside a textbook can be helpful, but producing learning material when you yourself are a novice is not really that valuable.
Can we also ban anything where the second line is "let that sink in"? And anything claiming that "X is a masterclass in Y" (especially for (tweet, empathy))?
My apologies it wasn't up to your standards. In fairness to me, that line is exactly what my effort was. I wasn't trying to "learn ML"; I am trying to build a mental model that lets me decompose real problems into ML primitives.
It's unclear to me if you think the resource has no value or if it bothers you that I wrote it using a coding agent.
I wrote the syllabus and worked through each section. Where my understanding was weak I explored the space, pulled in research, referred the model to other sources, and just generally tried to ground the topic in something I understood.
What resulted was something that helped a lot of subjects click together for me. Especially when to reach for a particular activation function and the section on gating.
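For anyone unfamiliar with the gating idea mentioned above: the pattern (as in LSTMs or gated linear units) is a learned sigmoid "gate" that multiplies a candidate signal elementwise, deciding how much of it passes through. A minimal sketch with made-up random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))         # batch of 4 hypothetical feature vectors
W_gate = rng.normal(size=(8, 8))    # weights for the gate
W_value = rng.normal(size=(8, 8))   # weights for the candidate signal

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

gate = sigmoid(x @ W_gate)   # each entry in (0, 1): how much to let through
value = np.tanh(x @ W_value) # candidate signal, squashed to (-1, 1)
out = gate * value           # elementwise: the gate decides what passes
```

Because the gate is differentiable, training can learn *when* to pass information, which is the core trick behind LSTM cells and modern gated feed-forward layers.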
This entire survey was motivated by an ML experiment I ran with associative memories that just failed horribly. So rather than post-mortem it, I set about understanding why it failed.
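(For context, a classic associative memory is a Hopfield-style network: store binary patterns with a Hebbian outer-product rule, then recall one from a corrupted cue. A minimal sketch with two made-up patterns; such networks fail badly once you exceed their small capacity, roughly 0.14 patterns per neuron.)

```python
import numpy as np

# Two orthogonal +/-1 patterns to store in an 8-neuron Hopfield network.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n  # Hebbian outer-product rule
np.fill_diagonal(W, 0)                          # no self-connections

def recall(cue, steps=5):
    # Iterate the update rule until the state settles on a stored pattern.
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

cue = patterns[0].copy()
cue[0] *= -1                 # corrupt one bit
restored = recall(cue)       # converges back to the stored pattern
```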
Anyhow, thank you for the feedback. I submitted this in good faith that it may help others.
Nice weekend project! Even though there are copious resources out there (textbooks, videos, etc.), those may not appeal to everyone. People have different preferred modalities for consuming information and there is always value in (correctly) reframing concepts in a way that can be better understood by people who don’t resonate with traditional textbooks and YouTube videos. I’m glad you found a formulation that works for you, and judging by the number of upvotes, it resonated with others as well. At the very least, I’m sure that working on this improved your understanding as well!
Please stop trying to trick us into reading AI generated text.
"This isn't a textbook or a tutorial. It's a mental model — the abstractions you need to reason about ML systems the way you already reason about software systems."
It's not a trick, bud. The GitHub page shows my username and Claude. The content is intended to be read by an AI agent and explored through a text interface. That is explicit in the readme and the primer itself.
If you think you can generate this artifact with a prompt then show me. This was 2 days of exploration and research.
If you think that "2 days" makes it sound like a lot... you'd be surprised how long it takes to actually make learning materials. I don't want to be too harsh, in case you're a high school student, etc. I see it's good faith, but do note the reaction here.
[OP] jmatthews | 12 hours ago
janalsncm | 11 hours ago
[OP] jmatthews | 11 hours ago
TheTaytay | 7 hours ago
oleggromov | 11 hours ago
[OP] jmatthews | 11 hours ago
janalsncm | 11 hours ago
[OP] jmatthews | 11 hours ago
bonoboTP | 10 hours ago
[OP] jmatthews | 9 hours ago
bonoboTP | 9 hours ago
NewsaHackO | 8 hours ago
antonvs | 9 hours ago
profsummergig | 3 hours ago
mememememememo | 8 hours ago
whoamii | 10 hours ago
ggambetta | 5 hours ago
thirtygeo | 5 hours ago
[OP] jmatthews | 4 hours ago
2sb | 21 minutes ago
zar1048576 | 9 hours ago
hilliardfarmer | 5 hours ago
"This isn't a textbook or a tutorial. It's a mental model — the abstractions you need to reason about ML systems the way you already reason about software systems."
thirtygeo | 5 hours ago
[OP] jmatthews | 4 hours ago
bonoboTP | 3 hours ago
bonoboTP | 3 hours ago