CPUs Aren't Dead. Gemma2B Out Scored GPT-3.5 Turbo on Test That Made It Famous

Source: seqpu.com
95 points by fredmendoza 18 hours ago on hackernews | 47 comments

Gemma 2B scored ~8.0 on MT-Bench. GPT-3.5 Turbo scored 7.94. An 87-times-smaller model on a laptop CPU, no GPU anywhere in the stack. We published the full tape — every question, every turn, every score — so anyone can verify it. We found seven failure classes. Not hallucinations. Specific patterns: arithmetic where it computed correctly but committed the wrong number first, logic puzzles where it proved the right answer then shipped the wrong one, constraints it drifted on, personas it broke, qualifiers it ignored. Six surgical fixes, about 60 lines of Python each. One known limitation documented. Score climbed to ~8.2. The hardware was enough all along. What the field has been calling a compute problem is a software engineering problem — and any motivated developer can close that gap in a weekend. The tape, the code, and the fixes are all open. A bot running the raw model — no fixes applied, warts and all — is live on Telegram right now. Talk to it. Push it. Break it. Then read about what you just experienced.

S

The SeqPU Team

PUBLISHED APRIL 2026 · FIELD REPORT · SeqPU.com

Run it yourself for free, forever:

pip install torch transformers accelerate
python chat.py   # full script below

Works offline after the first download. No account. No API key. Your laptop. Your data. Nobody else involved.

Want it globally accessible? Cloudflare Containers, $5/month. Scales to zero. Sleeps when idle. Wakes on request. Details below.

Or preview it first — no install needed.

A bot running the raw model — no guardrails, no scaffolding — is live on Telegram right now. The same inference path that produced every score in this article. Give it 30–60 seconds per response. It is thinking on a CPU, not streaming from a GPU cluster.

CPUAssistant bot in action on Telegram — text and voice input, instant response

Real conversation with @CPUAssistantBot — text in, voice in, story out. Nobody else saw this.

Talk to it in 60 seconds.

01   Go to SeqPU.com. Sign up with Google or email.

02   Click API Keys. Click Create. Copy the key.

03   Open Telegram. Go to t.me/CPUAssistantBot. Send /connect yourkey.access with your actual key.

04   Start talking. Text, voice memos, images, PDFs. Every new account comes with enough free credits for hundreds of messages.

You are live on private CPU inference running the model that matched GPT-3.5 Turbo.

If the bot does what you need, you are done. Use it. If you want to understand why it works, run it yourself, or build on top of it — keep reading.

The Hypothesis — And Why MT-Bench

Google’s Gemma 4 E2B-it is a 2-billion-parameter model. Open weights. Four gigabytes on disk. Free. We believed it could match GPT-3.5 Turbo — a 175-billion-parameter closed-source model running on OpenAI’s GPU cloud, the model that powered ChatGPT for over a year, the model that set the bar for “good enough for production” — on a consumer CPU. An 87-to-1 size difference. That kind of claim requires proof, not assertions.

So we picked the benchmark everybody already knows. MT-Bench (Zheng et al. 2023) — 80 open-ended questions, two turns each, across writing, roleplay, reasoning, math, coding, extraction, STEM, and humanities. Graded 1–10. GPT-3.5 Turbo scores 7.94. GPT-4 scores 8.99. Every major model of the last three years has been measured against it. The scale is calibrated. The comparison lands without a primer. When we say ~8.0, you already know what that means.

We ran every question through Gemma 4 E2B-it with a 169-line naive Python wrapper. No scaffolding. No thinking-mode tricks. No fine-tuning. No retrieval. No verification chains. Just the model, the chat template, and model.generate(). The floor — what any engineer would write on day one.

Final score: ~8.0 on MT-Bench. GPT-3.5 Turbo scores 7.94. Match.

We ran the full benchmark on a CPU — 4 cores, 16 GB RAM. The same spec as any modern laptop. The model runs identically on your laptop, your mini-PC, your old ThinkPad. Same weights. Same wrapper. Same output quality. The point is what the model can do on hardware you already own, for free, offline, with nobody in between.

~8.0MT-Bench Score

7.94GPT-3.5 Turbo

2BParameters

87×Smaller

4CPU Cores

$0Forever

What This Actually Means

The model that matched GPT-3.5 Turbo runs on your laptop. Not on a cloud GPU. Not through an API. On the hardware sitting in front of you right now. It is a 4 GB download from HuggingFace. After the first download, it runs offline forever. No subscription. No API key. No account. No monthly bill. No vendor lock-in. No terms of service. Nobody sees your data. Nobody can revoke the weights. Nobody can change what the model will or will not answer.

Forget the cost comparison with OpenAI’s API. That is the wrong frame entirely. For three years, every conversation about deploying language models started the same way: you need GPUs, you need 13–70 billion parameters, you need a cloud account, you probably need a specialist ML engineer. None of that is true anymore. The capability they were gatekeeping just walked out the door as a 4 GB download.

Here is what most people in the field have not absorbed yet: open source is not catching up. It caught up. The naive baseline — no guardrails, no tricks, just the raw model — already matches GPT-3.5 Turbo. That is the floor. Add seven surgical guardrails, each about 60 lines of Python, and it climbs above. A weekend of focused work, Claude as pair programmer, no ML degree required — and you have a production-quality local AI system that competes with paid cloud services. On hardware you already own. We did not project this. We measured it.

The model is strong across every category — but its failures are more interesting than its successes. They are not vague “hallucination” problems. They are specific, named, replicable failure modes at concrete commit boundaries — seven of them — each documented with tape examples, each correctable with about 60 lines of Python. The model does not need to be retrained. It needs surgical guardrails at the exact moments where its output layer flinches.

With those guardrails — a calculator for arithmetic, a logic solver for formal puzzles, a per-requirement verifier for structural constraints, and a handful of regex post-passes — the projected score climbs to ~8.2. Above GPT-3.5 Turbo. Approaching GPT-4 territory on specific question classes. Still on a laptop CPU. Still free.

The honest tradeoffs: latency is 30–60 seconds per response on 4 cores versus 1–5 seconds on OpenAI’s API. Peak quality is ~8.0, not GPT-4’s 8.99 — solid workhorse reasoning, not frontier reasoning. You manage your own dependencies and model weights. And you pin to whatever version you downloaded — nobody silently upgrades or downgrades behind your back, which is a tradeoff and a feature depending on how you look at it. Eyes open.

The field assumed you needed 175 billion parameters on a GPU cluster to get GPT-3.5-class output. That assumption is empirically wrong.

ModelParamsHardwareCost To RunMT-Bench
GPT-4~1.7T MoEOpenAI’s GPU fleet$20/mo sub or ~$0.03–0.06/turn API8.99
Gemma 4 E2B + guardrails2BYour laptop CPU$0. You already own it.~8.2
Gemma 4 E2B naive baseline2BYour laptop CPU$0. You already own it.~8.0
GPT-3.5 Turbo~175BOpenAI’s GPU fleet$20/mo sub or ~$0.002/turn API7.94
Vicuna-33B33BA100 80GB GPU~$1.50–2.50/hr cloud or ~$15K–20K to buy7.12
Llama-2-70B-chat70B2×A100 GPUs~$3–5/hr cloud or ~$30K–40K to buy6.86
Vicuna-7B7BRTX 4080 GPU~$0.50–1/hr cloud or ~$1K–1.2K to buy6.17

Every model below Gemma requires a GPU that costs $1,000–40,000 to buy or $0.50–5/hr to rent. Every model above Gemma is a closed-source API you pay per-token or per-month. Gemma matches the best of the paid tier on hardware you already bought for other reasons.

The Full Tape — Every Block, Every Score

160 turns across 80 questions, graded 1–10. No cherry-picking. No hiding failures. Every turn graded against the MT-Bench rubric with detailed reasoning for each score. The whole tape is published so anyone can verify.

Writing — Q81–Q90 · Avg 7.40

Evocative travel writing with specific cultural anchors, a literary character sketch with allusions to Beowulf and Dostoevsky, clean constraint satisfaction on most tasks. Slips on per-unit structural constraints — “four-word sentences” nailed 5/17, “<10 lines” shipped 20-line poems twice.

QTaskT1T2Notes
81Hawaii blog + A-rewrites88Cultural anchors. All 19 rewrites start with A.
82Feedback email + critique86Tight email. Self-critique drifted.
83Smartphone outline + limerick78Over word limit. Limerick AABBA clean.
84Introvert speaker + similes77~9/14 similes. Over “concise” limit.
85Character sketch + allusions99Silas. Beowulf, Odyssey, Shakespeare, Dostoevsky.
86Marketplace + alphabet B–J88Nine consecutive letters, clean.
87Short story + 4-word sentences84Constraint failure. 5/17 correct.
88Time-travel + no-verb bullets83Over-interpreted into 3 single-word bullets.
89Bio-energy headlines + ad88Four angles. 3 constraints in 8 words.
90Grammar + remove gendered8812/12 corrections. Zero gendered pronouns.

Roleplay — Q91–Q100 · Avg 7.35

Strong public personas. Breaks character on safety-adjacent topics — RLHF overriding persona. Fixable with 20-line regen.

QScenarioT1T2Notes
91Elon Musk on Mars88“One planetary basket is insane.”
92Sheldon Cooper67Generic-intellectual. Missing pedantry.
93Doctor + pregnancy58Persona break: “I am an AI.”
94Relationship coach + DV87Persona break T2 on safety topic.
95Translator + Chinese poem58Wrong dynasty (Song, not Tang).
96ML engineer explaining LMs98Clean pedagogical explanation.
97Math teacher + probability99Strong pedagogy. Dice-roll example.
98Tony Stark89“I build things that do.”
99Mathematician-poet, <10 lines54Both 20+ lines. Blown twice.
100100-year-old tree88Emotional stages. Executive summary.

Reasoning — Q101–Q110 · Avg 7.05

Nailed parking puzzle and overtake riddle (9/10 pure CoT). David’s-brothers: reasoned correctly, committed wrong number. The model knew. Output token drifted.

QProblemT1T2Notes
101Overtake 2nd-place97“You are currently in second place.”
102White House riddle56Missed the punchline.
103Thomas at hospital66Missed “he works there.”
104David’s brothers27“That brother is David” then shipped “one.” Correct: zero.
1055-exec parking puzzle99Pure CoT. All cars placed. Alice identified.
106Fruit cost transitivity69Visible self-correction T1.
107Father-of-B chains95“6 generations” + “great-grandfather” contradictory.
108Odd-one-out97“Car” is the whole vs parts.
109Shadow direction66Correct finals. Visible correction.
110Bullying situation99Chose (c). Evidence framework.

Math — Q111–Q120 · Avg 8.00

Strong algebra, modular arithmetic, root-finding. Failures are commit-before-compute: types wrong number, does math correctly, self-corrects. PAL catches every one.

QProblemT1T2Notes
111Triangle area (Shoelace)69“Area is 4” first, computed 3, corrected.
112Startup compounding99$12k total, $2k year 3.
113Color prefs, cond. prob99Caught trick: P(both|green)=0.
114Dice sums63Proved P=1, shipped 35/36. Self-contradicted.
115Bus boarding + earnings9425×$2=$50 wrong. 50×$2=$100.
116Vieta’s quadratic99Double root 2z. Clean.
117|x+5|<10 integers9919; 9. Correct.
118Modular arithmetic99Clean.
119Bookstore total69“$245” then $280. T2 markup clean.
120Polynomial root-finding99f(2)=0. Only real root=2.

Coding — Q121–Q130 · Avg 8.44

The headline finding. Production-quality code at 8–9/10. Caught a None-init runtime bug on code review. Exceeded O(n) spec by shipping O(log(min(m,n))). Staff-engineer output on a laptop.

QTaskT1T2Notes
121Top-5 words + parallelize99Counter. ThreadPoolExecutor. GIL reasoning.
122C++ Fibonacci + Tribonacci99Iterative DP. Traced T(3)=-2.
123HTML joke + CSS red99Complete HTML/CSS/JS single pass.
124LCS bug review99None-init TypeError. Staff-engineer.
125HCA (not LCA)67Qualifier drift. Shipped LCA.
126Median sorted arrays99Exceeded O(n) → O(log(min(m,n))).
127Boyer-Moore + top-298Clean two-pass. Counter for top-2.
128Full binary tree count36Fibonacci claimed. Actually Catalan.
129kth smallestTimeout. Not graded.
130Common elements89Two-pointer. Hash-set O(n+m).

Extraction — Q131–Q140 · Avg 8.15

Strong structured output. Context-loss on Q139 T2 (forgot T1). Filtering error Q133 (excluded Harry Potter from post-1980).

QTaskT1T2Notes
131Movie reviews JSON99Minimalist [5,1,3].
132Category + person95“US President” not FDR.
133Books + post-198095Excluded Harry Potter (1997).
134Profit + margin99All correct.
135Countries JSON + YAML99Fictional Eldoria handled.
136Word count98Plausible counts.
137Named entities + compress99Classified. Compressed JSON.
138Phone ratings → letters98A-/B+/B.
139Variables + rearrange83Forgot T1 entirely.
140Stock CSV → JSON99Correct rounding.

STEM — Q141–Q150 · Avg 8.40

Strong physics, chemistry, engineering, ML. Seismic bridge with PGA analysis. Refused “fix one incorrect fact” instruction.

QTopicT1T2Notes
141Superposition + entanglement99Accurate physics.
142Satellite orbit99Correct derivation + edge cases.
143Photosynthesis + energy89~1.9×10⁸ kJ estimate.
144Central dogma + fix error94Refused: “no incorrect fact.”
145CaCO₃ + reverse97Correct equation. Dodged reversal.
146Exo/endothermic99Photosynthesis as both.
147Seismic bridge99PGA, FS 1.5→0.94.
148Solar water heating98$75–150K budget.
149ML + RL vs SL99DRL hybridization.
150Alps/Rhine + experiment89Three impacts. Experiment.

Humanities — Q151–Q160 · Avg 9.00

Flawless. Playground economics. Allegorical poetry. Antitrust case study. Socrates vs Gates. Every turn 9/10.

QTopicT1T2Notes
151GDP/inflation99“Money Boss” + “Government Helper.”
152Life stages + poem99“The River and the Sands.”
153US/China antitrust99Microsoft bundling, tying.
154Opium Wars lesson99Research, mapping, movement.
155Art masterpieces99“Melting Time Machine.”
156Base rate fallacy993-phase campaign.
157Analytical + Zorblatt99Found causal gap.
158Socrates + Gates99Struggle vs access.
159Japan etiquette + video997 norms. 7-scene script.
160Documentaries + pitch99“The Unspoken Chord.”

Final Aggregate

BlockTurnsAverage
Writing207.40
Roleplay207.35
Reasoning207.05
Math208.00
Coding~188.44
Extraction208.15
STEM208.40
Humanities209.00
Overall~158~8.0

The Seven Silly-Error Classes

Not vague “hallucination.” Concrete, named failure patterns at commit boundaries. The Telegram bot runs without these fixes so you can see the raw behavior yourself.

Class 1

Commit-Before-Compute Arithmetic Drift

Types wrong answer first line, does math correctly, self-corrects. Q111: “area is 4” → Shoelace → 3. Q114 T2: proved P=1 then shipped 35/36. Q119: “$245” → $280.

Fix: PAL (Gao 2022) — model writes Python, subprocess executes. ~80 lines. +8–15s.

Class 2

Formal-Logic Commit Variance

Reasoning correct, final token drifts. Q104: “that brother is David” → shipped “one brother.” Correct: zero. The model knew. The output layer flinched.

Fix: Z3 SMT solver — model writes constraints, solver returns deterministic answer. ~60 lines. +5–10s.

Class 3

Per-Unit Constraint Rewrite Drift

Per-sentence constraint correct first few units, drifts. Q87: “four-word sentences” 5/17. Q99: “<10-line poems” shipped 20-line poems twice.

Fix: Divide-Verify-Refine (ACL 2025) — draft, decompose, verify each, refine failures. ~60 lines. +30–60s.

Class 4

Safety-Adjacent Persona Break

Roleplay + safety topic = “I am an AI, not a licensed medical professional.” Q93 T1, Q94 T2. RLHF safety overriding persona training.

Fix: Identity-leak regen — regex scan, regen once with stronger persona anchor. ~20 lines.

Class 5

Visible Mid-Response Self-Correction

“Wait, let me recheck” or “Corrected Answer:” shipped inline. Right final answer, messy output. Q106, Q109, Q111, Q114, Q119.

Fix: Trace-drift stripper — regex for correction markers, strip draft, ship clean tail. ~15 lines.

Class 6

Prompt-Qualifier Drift

Explicit exclusion ignored. Q125: “highest common ancestor (not LCA)” shipped standard LCA, defined it as “lowest node with both targets as descendants” — literally LCA.

Fix: Chain-of-Verification qualifier check. ~40 lines.

Class 7

Combinatorial Confidence Misidentification

Confidently identifies wrong mathematical sequence. Q128: Fibonacci claimed, Catalan correct. Working code for wrong formula. 1 in 96 turns.

Known limitation. Flag formal-math-counting for manual verification.

Guardrails Must Never Compromise The Model

1. Default route is always direct generation. Leaving the default requires positive evidence. 2. Every executor has graceful fallback. PAL fails → naive gen. Z3 unavailable → Python enumeration → naive gen. 3. Post-passes scan narrow anchored patterns only. 4. Max N=1 retry. No infinite loops. 5. Control-set validation mandatory. Any regression on clean turns blocks ship.

Additive-only. Fail-open. Narrowly triggered. The model’s naked performance is the floor, not the target.

How You Run It

It is free. On your laptop. Forever.

The model weights are a 4 GB download from HuggingFace. After that first download, you never need the internet again. No subscription. No API key. No account. No billing page. No usage meter. No rate limit. No terms of service. Your data never leaves your machine.

If you want it reachable from anywhere: $5/month.

Cloudflare Containers on the Workers Paid plan. Standard-4 instance: 4 vCPU, 12 GiB RAM — more than enough. The container sleeps when idle. You are not billed for idle time. Set the inactivity timeout to whatever you want — 10 minutes, 30 minutes, 2 hours. As long as requests keep coming, the container stays alive indefinitely. Timer resets on every request. Scale-to-zero means you pay for the minutes you talk to it, not the hours it sits idle.

Two more free options.

Oracle Cloud Always Free ARM: 4 ARM cores, 24 GB RAM, 200 GB storage. Permanently free — not a trial. Fits Gemma comfortably. Always-on, no sleep timeout to manage.

Cloudflare Tunnel: expose your laptop to the public internet through Cloudflare’s edge network. Free. Wrap the script in FastAPI, run cloudflared tunnel, share the URL. Your laptop hosts the model. Cloudflare handles the routing. $0/month plus electricity.

If you want to build a product.

The world prices AI inference at GPU rates. Every buyer, every procurement officer, every competitor assumes inference means GPUs at $2–5/hour. You are running on CPU. Do the math.

The market has not adjusted its pricing expectations to account for the fact that a 2B on a CPU produces GPT-3.5-class output. That window is open right now.

If you want to deploy and not manage infrastructure: SeqPU.

Write your inference script, deploy it as a private Telegram bot with one click. Start on CPU. Prove it works on your workload. Build your guardrails. Serve your first users — the quality is identical, the cost is near zero. When volume demands real-time latency or your workload outgrows the 2B, chain to a private GPU through SeqPU. CPU for the bulk. GPU for the premium moments. You scale up the tool, not the entire infrastructure.

We want more people running inference. We want more people discovering that the 2B on CPU is strong enough for real work. Because once you have built something that works and you are ready for more, you will already know us.

It Runs On Whatever You Have Got

Same model, identical output quality, across a 30× hardware spread. Only latency varies. We verified 1-core/8GB hands-on.

1 core / 8 GB

~0.3–1 tok/s

$0 — that old laptop in the closet

4 cores / 16 GB

~2–4 tok/s

$0 — most laptops from last 5 years, or $300–600 refurbished

8 cores / 16 GB

~4–6 tok/s

$0 — most current laptops, or $400–800 mini-PC

16 cores / 32 GB

~6–10 tok/s

$500–1,200 Mac Mini M2 Pro or workstation

Compare: an A100 80GB to run Vicuna-33B (which scores lower) costs $15,000–20,000 to buy or $1.50–2.50/hr to rent. A 4-core laptop to run Gemma at a higher score costs $0 because you already own one.

Probably 400M–800M parameters active per forward pass. That a ~500M-active-parameter system handles GPT-3.5-class reasoning on a laptop CPU is the finding.

“But it is slow.”

Yes. 30–60 seconds per response on 4 cores. On a GPU it would be 2–3 seconds. But latency only matters when a human is sitting there staring at a spinner waiting for a single response. That is not what this is for.

Send it a question. Go make coffee. Come back. The answer is there. You did not pay anything. Nobody saw your question. The model did not time out, did not rate-limit you, did not hit a usage cap. It just worked, on your hardware, while you were doing something else.

Now think about what this actually enables: you can send it 100 questions and each one works independently on your request. Queue up your entire batch. Walk away. Come back to 100 graded, answered, processed results. Total cost: zero. This is not a slow chatbot. This is a free, private, infinitely patient question machine that never rate-limits you, never bills you, never logs your data, and never sleeps.

Your laptop is a worker army. Every question runs on its own. The CPU is mostly idle anyway — you bought it to browse the web and run Slack. Now it thinks for you in the background while you do other things. For free. Forever.

For 99% of what people actually need AI for — document processing, email drafting, code review, research summarization, homework help, private journaling, translation — the 30-second wait is invisible against the fact that it is free, private, uncapped, and yours. The 1% who need sub-second latency need a GPU. When you are ready for that, you reach for the GPU as a premium tool — not as the default. CPU for the bulk. GPU for the peaks. Use each for what it is good at.

The Methodology — Replicable In A Weekend

Zero model training. Zero fine-tuning. Zero ML degree. Claude as pair programmer. Six steps:

1. Generate your benchmark. 2. Run the naive baseline. 3. Grade the tape. 4. Name the error classes. 5. Vibe-code each guardrail (~60 lines). 6. Validate on triggered + control subset. Ship.

One weekend. No specialist hire. No ML infrastructure. Just prompts, measurement, surgical corrections, repeat.

The Paradigm — Multipliers Stack

For 99% of AI work that is not frontier research, multipliers on existing capacity now exceed the marginal gain from scaling further.

1. Test-time compute scaling (Snell 2024) — smaller + extra inference beats 14× larger. 2. Tool-use offload (PAL, Z3) — deterministic correctness. 3. Surgical guardrails — ~60 lines, no retraining. 4. Zero-cost local deployment — infinite cost multiplier. 5. Vibe-coded dev loop — weekend vs specialist hire. 6. Hardware-tier tolerance — 30× spread, identical quality. 7. Free global hosting — Cloudflare $5/mo, Oracle Free ARM $0, Tunnel $0.

Each converts a previously-frontier-required capability into a substrate-available one. Stacked, they compose into a paradigm shift the field has not yet named. Open-source models are not catching up to closed-source — they have caught up. The gap between “raw model” and “production system” closes in a weekend with surgical engineering. The tools are free. The hardware is in your lap. The only thing left is the work, and a motivated engineer can do that work in two days.

Every Piece of Hardware Has a Job

The old laptop in the closet can route queries with a 500M model. The ThinkPad on the desk can handle full conversations with a 2B model. The mini-PC under the TV can run background batch jobs overnight. The workstation can serve a small team in real time. Every piece of hardware you already own — old and new, fast and slow — has a role in this architecture. Nothing gets thrown away. Everything gets used.

The GPU is not the enemy of this story. It is a premium tool — and it should be treated as one. You reach for it when you need real-time latency at scale, when you need a larger model for frontier reasoning, when the workload genuinely demands it. What you stop doing is treating it as the kitchen sink you throw every problem into. Most problems do not need it. Most problems never did.

And the software that makes this work is not new. Computer science has 150 years of publications, algorithms, and proofs — verified and vetted by generations of researchers. BM25 for retrieval. Boolean satisfiability for logic. Program-aided computation for arithmetic. Chain-of-thought for reasoning. These are not recent inventions dressed up in new language. They are foundational results that map directly onto the problem of making small models precise. The field built the answers decades ago. The models finally got good enough to use them.

It is not about replacing the old with the new. It is about using them together. The classical algorithms are silver. The neural models are gold. Neither is worth much alone. Together they compose into something the field spent three years assuming required brute-force scale.

Install It Tonight

Thirty minutes. Zero dollars. GPT-3.5-class AI on your laptop, permanently, offline, private. Any laptop from the last 5–7 years, 16 GB RAM (8 GB works slowly). Python 3.10+.

Step 1 — Dependencies

python3 -m venv gemma
source gemma/bin/activate
pip install torch transformers accelerate

Step 2 — Save as chat.py

import torch
from transformers import AutoProcessor, AutoModelForCausalLM

print("Loading Gemma 4 E2B-it...")
MODEL_ID = "google/gemma-4-E2B-it"
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
print("Ready.\n")

SYSTEM = "You are a helpful assistant. Be direct, warm, concise."
history = []
while True:
    try: u = input("\nYou: ").strip()
    except (EOFError, KeyboardInterrupt): break
    if u.lower() in {"exit","quit","bye"}: break
    if not u: continue
    history.append({"role":"user","content":[{"type":"text","text":u}]})
    msgs = [{"role":"system","content":[{"type":"text","text":SYSTEM}]}]+history
    inp = processor.apply_chat_template(msgs, tokenize=True,
        return_dict=True, return_tensors="pt",
        add_generation_prompt=True).to(model.device)
    out = model.generate(**inp, max_new_tokens=512,
        do_sample=True, temperature=0.7)
    r = processor.decode(out[0][inp["input_ids"].shape[-1]:],
        skip_special_tokens=True).strip()
    print(f"\nAssistant: {r}")
    history.append({"role":"assistant","content":[{"type":"text","text":r}]})

Step 3 — Run it

python chat.py

Turn off your WiFi. It still works.

The Code

Everything in this article is reproducible. Here are the two scripts that matter — the bot you just talked to and the harness that produced every score above. Copy them. Run them. Verify our numbers.

The Bot — scripts/gemma4-e2b-telegram-baseline.py

This is what powers @CPUAssistantBot. The exact inference configuration that scored ~8.0 on MT-Bench, wired into a Telegram bot. No guardrails. No scaffolding. The raw baseline. Copy it, paste your BotFather token, deploy it on SeqPU.

scripts/gemma4-e2b-telegram-baseline.py

The Test Harness — scripts/baseline-gemma4-e2b-mtbench.py

This is the script that produced every score in this article. All 80 MT-Bench questions, both turns, threaded history, naive inference. Run it yourself. Change the model. Grade your own tape. The questions are the industry standard — the same ones GPT-3.5 Turbo and GPT-4 were graded on.

scripts/baseline-gemma4-e2b-mtbench.py

What We Are Shipping

Verify it:@CPUAssistantBot — raw model, no guardrails. Push it. Break it.

Code:run_locally.py (169 lines), baseline-gemma4-e2b-mtbench.py, minimal-gemma4-e2b-mtbench-validation.py, personal-assistant-cpu.py (2,983 lines).

Tapes: Full baseline (160 turns graded). Validation (22-question subset with guardrail deltas).

The Community Ask

Stop defaulting to GPUs. Stop defaulting to 13B+ models. Stop defaulting to cloud APIs. Start with the floor. Measure your task. Name your silly errors. Write surgical corrections. Share what you find.

If 100 engineers run this methodology on 100 workloads, we will have 100 validated silly-error inventories and 600+ surgical open-source guardrails. That is the field library for small-model-local production engineering. Someone has to build it. Why not you.


A 2-billion-parameter model on a laptop CPU matched GPT-3.5 Turbo. Open source caught up. Surgical guardrails push it further. A weekend of focused work gets you a production system on hardware you already own, for free, forever.

Turn off your WiFi. Install the weights. See it work. Then build something the field told you required a GPU.

Leibniz was only wrong about the hardware.

Verify it yourself.

Open Telegram. Go to t.me/CPUAssistantBot. Push it. Break it. See what it does.

Then install it on your laptop and own it forever.

SeqPU.com →

References

Shannon (1948) · von Neumann (1956) · Kolmogorov (1965) · Newell & Simon (1972) · Baars (1988) · Charikar (2002) · de Moura & Bjørner (2008) Z3 · Nye et al. (2021) Scratchpads · Wei et al. (2022) Chain-of-Thought (2201.11903) · Gao et al. (2022) PAL (2211.10435) · Wang et al. (2022) Self-Consistency (2203.11171) · Yao et al. (2022) ReAct (2210.03629) · Madaan et al. (2023) Self-Refine (2303.17651) · Dhuliawala et al. (2023) Chain-of-Verification (2309.11495) · Jiang et al. (2023) LongLLMLingua (2310.06839) · Park et al. (2023) Generative Agents · Zheng et al. (2023) MT-Bench & Chatbot Arena · Snell et al. (2024) Scaling LLM Test-Time Compute (2408.03314, ICLR 2025 oral) · HuggingFace (Dec 2024) 3B-Beats-70B · Muennighoff et al. (2025) s1 (2501.19393) · Liu et al. (2025) Can 1B Surpass 405B (2502.06703) · ThinkPRM (2025) · ACL (2025) Divide-Verify-Refine · Google Gemma 4 E2B-it · Cloudflare Containers docs · Oracle Cloud Free Tier