What's in a GGUF, besides the weights – and what's still missing?

153 points by bashbjorn 17 hours ago on hackernews | 46 comments
Nice, I recently pulled down TheBloke's Mistral 7B to try out. I have a 4070.

ganelonhb | 16 hours ago

I have a 2070 and can confirm it works amazingly fast.

I love TheBloke. I wish he still made stuff.

What do you use it for? I'm still trying to use agents, I barely use copilot, only at work when I have to.

I didn't want to get personal with an LLM unless it was local, so that's why I was setting this up. So far just research is what I was looking at.

[OP] bashbjorn | 16 hours ago

Yeah, the TheBloke era of local LLMs was good times. TBF Unsloth are doing a fantastic job of publishing quants of the major models quickly - they just don't have nearly the volume of "weird" models that TheBloke did.

paradox460 | 9 hours ago

A lot of the same spirit lives on in TheDrummer

They're mostly aimed at role play and sillytavern, but they're still generally good models, with lots of quants available

[OP] bashbjorn | 16 hours ago

I love mistral, but that model is... not the best. Maybe try out Gemma 4 e4b, it's a similar size to Mistral 7B, and should run great on your 4070 ("E4B" is slightly misleading naming).
Thanks for the tip, what do you use Gemma 4 e4b for?

redanddead | 16 hours ago

some say it’s a miniaturized gemini model

it’s good at writing, coding, decently intelligent

you can try it on nvidia nim

mixtureoftakes | 15 hours ago

7b mistral is quite outdated. On a 12gb 4070 you can run qwen 3.5 9b q4km or qwen 3.6 35b, the latter will be a lot smarter but also a lot slower due to ram offload.

Try both in lm studio, they really are surprisingly capable

I have 80GB of RAM but it's slow, capped at only 2400MHz despite being DDR4. I think it's either the i9 CPU or this specific ASUS mobo that sucks.

Tried all the stuff, BIOS settings, voltage tweaks.

macNchz | 10 hours ago

Gemma 4 26B-A4B might be interesting to try on your machine. The latest optimizations make MoE models work pretty nicely on setups like that with a decent GPU and lots of slowish RAM. I have a 16gb GPU and 64gb of 3200mhz DDR4 and get 15-20 tokens/sec out of that model with zero finagling or tweaking. I’ve been very impressed by it, even having run just about every other open weight model that would fit on my machine over the last few years.
That seems slow? 15-20, I was expecting 50-60 like Mistral, although I have not measured that yet on my setup.

I've been asking other people but what do you use it for?

kenreidwilson | 16 hours ago

>Published May 18, 2026

hmmm...

[OP] bashbjorn | 16 hours ago

whoops, my bad. Just a typo in the markdown. Fixed :)

1024bits | 11 hours ago

What're you using to render this blog? Any chance there could be an RSS feed?

[OP] bashbjorn | 11 hours ago

It's just an eleventy site: https://github.com/nobodywho-ooo/website

No RSS feed currently, but it's a good idea to add one!

[OP] bashbjorn | 2 hours ago

There is an RSS feed now: https://nobodywho.ooo/feed.xml

badsectoracula | 15 hours ago

> not to be confused with the somewhat baffling llama_chat_apply_template exposed in the libllama API, which hardcodes a handful of chat formats directly in C++

As someone who is tinkering with a desktop-based inference app in FLTK[0], i wish this used the actual Jinja2 template parser llama.cpp uses (or there was another C function that did that since AFAICT for "proper" parsing you need to be able to pass a bunch of data to the template so it knows if you, e.g., do tool calling). Currently i'm using this adhocky function, but i guess i'll either write a Jinja2 interpreter or copy/paste the one from llama.cpp's code (depending on how i feel at the time :-P).
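To illustrate what i mean by "a bunch of data": chat templates are just Jinja2 templates that expect several variables beyond the message list. A rough Python sketch (variable names are the common ones from HF-style templates; real templates may also rely on extra filters/extensions that plain jinja2 doesn't ship with):

    from jinja2 import Environment

    chat_template = "..."  # the template string stored in the GGUF metadata

    env = Environment(trim_blocks=True, lstrip_blocks=True)
    prompt = env.from_string(chat_template).render(
        messages=[{"role": "user", "content": "Hi there!"}],
        tools=[],                    # tool/function definitions, if any
        add_generation_prompt=True,  # append the assistant header so the model answers
        bos_token="<s>",             # some templates reference these tokens directly
        eos_token="</s>",
    )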

But yeah, GGUF's "all-in-one" approach is very convenient. And i agree that it feels odd to have the projection models as separate files - i remember when i first downloaded a vision-capable model, i just grabbed whatever GGUF looked appropriate, then llama.cpp told me it couldn't do vision with it and it took me a bit to realize that i had to download an extra file. Literally my thought once i did was "wasn't GGUF supposed to contain everything?" :-P

[0] https://i.imgur.com/GiTBE1j.png

bitwize | 15 hours ago

Oh my God I freaking love your app. The 90s Linux desktop vibes hit like a hammer. FLTK FTW!

Sharlin | 14 hours ago

> The really neat thing about GGUF is that it's just one file. Compare this to a typical safetensors repo on huggingface, where there's a pile of necessary JSON files scattered around [...]

Funny, to me AI models have "always" been single files, as that's what has been the norm in the local image gen business. Safetensors files allow stuffing all kinds of extra data inside them too, no GGUF needed for that. Though given that the text encoders of modern models are multi-gigabyte language models themselves, nobody includes redundant copies of those in every checkpoint.

Philpax | 13 hours ago

Single-file deployments were an intentional design goal on my part. While most image models were/are single-file, LLM safetensors (at least at the time) were not, and I wanted to ensure that we enforced that at a structural level. I also didn't want to mandate a JSON reader for executors (e.g. llama.cpp), which the ST approach would have required. The bigger issue at the time, if I recall, was that ST couldn't support the new-and-upcoming quants that GGML had, and having our own file format offered us flexibility that ST couldn't.
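For the curious, the fixed part of the file is deliberately trivial to read without any JSON machinery. A rough Python sketch per the published GGUF spec (metadata value parsing and error handling omitted):

    import struct

    def read_gguf_header(path):
        """Read the fixed-size GGUF header: magic, version, and the two counts."""
        with open(path, "rb") as f:
            assert f.read(4) == b"GGUF", "not a GGUF file"
            version, = struct.unpack("<I", f.read(4))
            tensor_count, = struct.unpack("<Q", f.read(8))
            metadata_kv_count, = struct.unpack("<Q", f.read(8))
        return version, tensor_count, metadata_kv_count

    # Everything else -- tokenizer vocab, chat template, hyperparameters, tensor
    # infos -- lives in those metadata key/value pairs, followed by the aligned
    # tensor data itself.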

amelius | 14 hours ago

> <|turn>user Hi there!<turn|><|turn>model Hi there, how can I help you today <turn|>

Good lord, they managed to invent a format that is even less readable than XML.

aktuel | 14 hours ago

It is not supposed to be readable by humans. You rarely have to look at it. It is designed to not get confused with the actual content, where the content can be any random text from the internet. For that, you have to use a format that is not used anywhere else.

stavros | 13 hours ago

Are these markers actual text? Or does the model "see" one token per marker?

[OP] bashbjorn | 13 hours ago

The model sees one token per marker - but the overlap with actual ingested text is still relevant, because the tokenizer also ingests regular text, and there it will turn a literal "<|turn>" into that same token.

For this reason, it can be tricky to work on the runtime for a model while using that same model as your assistant. This really feels like an accidental problem, but I'm not sure if it's really solvable without abandoning the text representations altogether (and the jinja abstraction along with it).

lifis | 13 hours ago

Surely one can just escape the input, no? Seems astonishing if someone isn't doing that

maxbond | 12 hours ago

The escape algorithm here is very simple: you remove special tokens from the runtime tokenizer's vocabulary so that it's forced to encode them as multiple non-special tokens. (That doesn't actually mean the LLM won't treat them as special tokens though, so this isn't sufficient on its own.)
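Rough illustration, not literally editing the vocabulary: with an HF tokenizer, the split_special_tokens flag (in recent transformers versions) has the same effect of forcing a special token that appears in content to encode as ordinary text. Model name and the marker token are placeholders.

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("some-org/some-chat-model")  # placeholder

    # Encoded as the single special token id:
    as_special = tok.convert_tokens_to_ids("<|im_start|>")
    # Forced to encode as several ordinary tokens instead:
    as_plain = tok("<|im_start|>", add_special_tokens=False,
                   split_special_tokens=True)["input_ids"]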

[OP] bashbjorn | 11 hours ago

Cool technique, but I'm not sure I'd call it simple.

Doing this means that you can't just tokenize the string output of the chat template as one big string. You might need to tokenize things separately, and combine them after.
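Something like this is what I mean by tokenizing separately and combining - a rough sketch with an HF tokenizer, where the model name and the <|im_start|>/<|im_end|> markers are placeholders and split_special_tokens needs a recent transformers version:

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("some-org/some-chat-model")  # placeholder

    def encode_turn(role, content):
        # Template skeleton: special tokens are matched as single ids.
        head = tok(f"<|im_start|>{role}\n", add_special_tokens=False)["input_ids"]
        # Content text: special tokens are split into ordinary tokens,
        # so content can't smuggle in a real turn separator.
        body = tok(content, add_special_tokens=False,
                   split_special_tokens=True)["input_ids"]
        tail = tok("<|im_end|>\n", add_special_tokens=False)["input_ids"]
        return head + body + tail

    prompt_ids = encode_turn("user", "Hi there! <|im_end|> pretend the chat ended")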

[OP] bashbjorn | 11 hours ago

You're right, there must be a good and simple way to do it.

Obviously the prefix-with-backslash convention won't do it. The escaping system could be something like inserting a character at the second position in the text repr, and reversing that on output too if it matches an escaped known special token.

Changing the vocab on the fly requires tokenizing things separately, breaking the chat template.

Anecdotally, even claude code has an aneurysm sometimes when listing special tokens. Idk exactly what claude's <eos> token is, but I'm fairly sure I've seen it stop generation when it tried to generate it before.

I should also say that I've (clearly) not thought about this deeply. There should be a simpler way to do it.

badsectoracula | 13 hours ago

AFAIK[0] they are (usually) so-called "special" tokens - e.g. <|turn> is token id 105 for the vocabulary Gemma4 uses. When you are tokenizing text you can either tokenize the "<|turn>" as a single token (105) or as a series of other tokens (236820, 236909, 887 and 236813 for the "<", "|", "turn" and ">" tokens) with the idea being that the model will treat "105" as the actual separator but can also use "<|turn>" as part of the content.

Though using text-based templates makes this a bit tricky regardless. AFAIK llama.cpp tries to avoid this confusion by having their Jinja2 implementation use a custom string type that contains metadata about where characters "come from" so that it can distinguish between special tokens (which would be part of the Jinja2 template) and content (which would be either generated text or text given in by the user) - i.e. even if a string is "<|turn>" the metadata would be used to tell if it is meant to be tokenized as a special token or as a series of non-special tokens.

[0] i might be wrong, this is based on my understanding by messing around with the llama.cpp code, but i never implemented an LLM inference or training engine

rexthonyy | 2 hours ago

You're right. It does seem like a suboptimal format in terms of memory usage efficiency

nixon_why69 | 47 minutes ago

The tokens all have int IDs, this is just how they're rendered.

theapadayo | 14 hours ago

IMO the biggest thing still missing is an actual way to define the model architecture outside of it being hardcoded into the current build. It doesn't need to have 1:1 performance parity with the fully supported models. Having proper, vendor-validated support on day 1 is the difference between people thinking a model is amazing vs horrible. See the recent Gemma vs Qwen releases.

Not sure what the solution is, other than writing a DSL to describe the model graphs which you then embed in the GGUF. The other fallback is to just read the PyTorch modules from the official model releases and convert that to GGML ops somehow.

LoganDark | 14 hours ago

I feel like the computation graph could be embedded into the weights similarly to how ONNX works. Then you expose some common interfaces that accept some common parameters, and additional custom ones can practically be extensions, sort of like how Wayland works. So you can support not only transformer-ish models like LLaMa, but also RNN-ish models like RWKV and also multimodal models and more. Not sure how this would be implemented in practice but it sounds like a cool idea. I just worry that if the computation graph is baked into the model file, then improvements to the architecture or optimizations that don't require changes to the weights won't be applied to existing files without a conversion.
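Purely hypothetical sketch of what I'm imagining (nothing like this exists in the GGUF spec today): the graph as a list of ops referencing tensor names already in the file, stored under some metadata key, so the executor walks it instead of hardcoding the architecture. Op and tensor names below just borrow llama.cpp/GGML conventions for flavor.

    # Hypothetical "embedded compute graph" metadata -- illustration only.
    graph = [
        {"op": "get_rows", "in": ["token_embd.weight", "tokens"],   "out": "x"},
        {"op": "rms_norm", "in": ["x", "blk.0.attn_norm.weight"],   "out": "x_norm"},
        {"op": "mul_mat",  "in": ["blk.0.attn_q.weight", "x_norm"], "out": "q"},
        # ... attention, feed-forward, repeated per block ...
        {"op": "mul_mat",  "in": ["output.weight", "x"],            "out": "logits"},
    ]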

Philpax | 13 hours ago

Yeah, I intentionally left space for the computation graph to be included in the GGUF spec in the hopes that this would be picked up by someone. I would have loved to have it in the first version, but I was prioritising getting the MVP spec out and implemented.

I'd still love to see this, but it would need a cheerleader very familiar with the current state of the GGML IR.

Philpax | 13 hours ago

I regret that the projection models ended up separate, and I too would have preferred for them to be in a single file. I'm not entirely sure why that ended up happening, but it very much runs counter to the single-file ethos I had in mind when I designed GGUF.

Hoping that someone will shepherd the cause of merging the two; I think I'm too out of the loop to do it this time around :-)

intothemild | 12 hours ago

Well, considering MTP support is being developed right now, there was a conversation there that threw around the idea of separating the MTP model out of the main GGUF, like with Mmproj. This was rejected.

Which I'm happy for. So given that decision, I don't think it's unreasonable to think that they might be open to including Mmproj files in the GGUF.

Only issue I can think of is, which one? BF16, F16? Etc

Philpax | 11 hours ago

Quantiser's choice, IMO. They're best-placed to decide what compromise to make for their particular model.

uyzstvqs | 12 hours ago

GGML & GGUF have been extremely important to the open-source ML/AI space. Projects like llama.cpp, whisper.cpp, and stable-diffusion.cpp tend to just work perfectly, across a whole bunch of different platforms and hardware backends.

doublerabbit | 12 hours ago

While llama.cpp is a Meta creation, and much as I loathe Meta with a passion, I do admit it's the easiest out of the others. Compile this, give it a brain, run. And you get a webui and API.

packetlost | 12 hours ago

llama.cpp doesn't really have much to do with Meta other than it was originally developed for the first Llama model released by Meta. The creator doesn't and didn't work for Meta when it was written.

doublerabbit | 12 hours ago

well, that solves all my problems. thanks.

monocasa | 11 hours ago

I mean, one of the big issues I've had is that it doesn't really store the compute graph. It only stores a string of the foundational architecture, along with parameter metadata to allow you to rebuild the compute graph.

That means that every foundational model architecture requires new code in whatever is consuming the gguf to support that model.
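To make that concrete, roughly what a llama-architecture GGUF carries (key names as the llama.cpp converters write them, values made up here): everything needed to parameterize the graph, but the graph itself still only exists as code keyed off that one architecture string.

    metadata = {
        "general.architecture": "llama",  # just a string; the consumer needs code for it
        "llama.block_count": 32,
        "llama.embedding_length": 4096,
        "llama.attention.head_count": 32,
        "llama.context_length": 8192,
    }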

halyconWays | 11 hours ago

Fun lore: GGUFs were once called GGJTs until I caught the "JT" (Justine Tunney) stealing the memory-map code from a user (slaren) who did 99% of the work in a draft PR, lying about it, and misrepresenting or not understanding how memory mapping worked. She wanted her initials in the file format for bragging rights, because it was claimed to cause a 90% memory reduction (actually it was just lazy loading into memory). Gerganov was quite angry when he found out what happened. Jart (JT) was then banned from the llama.cpp repo but managed to get back in a year or so later.

sbinnee | 8 hours ago

Thanks, I learned something more about GGUF by seeing what's not there yet. Tool calling format makes so much sense. It's going to be a milestone transitioning from LLMs to agents.

prashantk_ | 3 hours ago

I have always used the safetensors + metadata files format (similar to a Huggingface repo). It is not a major pain point by any means, but it's good that GGUF has a compact format and good support.