Going to have to disagree on the backup test. The Opus flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.
I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.
Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version.
But in terms of making something physically plausible, Opus certainly got a lot closer.
The fundamental challenge of AI is preventing unprompted creativity. I can spin up a random initialization and call all of its output avant-garde if we want to get creative.
I recently fell down the rabbithole of AI-generated videos, and realised that many of the "flaws" that make them distinctive, such as objects morphing and doing unusual things, would've been nearly impossible to create, or would've required very advanced CGI.
"artistically interesting" is IMHO both a subjective and 'solved' problem. These models are trained with an "artistically interesting" reward model that tries to guide the model towards higher quality photos.
I think getting the models to generate realistic and proportional objects is a much harder and more important challenge (remember when the models would generate 6 fingers?).
Even the first one: Qwen added extra details in the background, sure. But the pelican itself is a stork with a bent beak, and its feet are cut off from its legs. While impressive for a local model, I don't think it's a winner.
I've been using Qwen3.5-35B-A3B for a bit via open code and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and how well it handles the agentic workflow.
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. Providers certainly could have adapted for it if they wanted, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activity types (a whale on a skateboard) than always use the same one.
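To make that concrete, here's a rough sketch of the kind of randomized prompt generator I have in mind. Plain Python, and the animal/activity pools are just examples I made up:

    import random

    # Example pools; extend with whatever combinations you like.
    ANIMALS = ["pelican", "flamingo", "whale", "elephant", "lion"]
    ACTIVITIES = [
        "riding a bicycle",
        "riding a skateboard",
        "driving a car",
        "sleeping in a bed",
    ]

    def random_svg_prompt(rng=random):
        """Build a drawing prompt that is harder to pre-train for."""
        animal = rng.choice(ANIMALS)
        activity = rng.choice(ACTIVITIES)
        return f"Generate an SVG of a {animal} {activity}"

    print(random_svg_prompt())

Sampling a fresh pair for every evaluation would force a provider to train on the whole animal-times-activity grid rather than one famous prompt.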
For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.
The Opus one looks like a flamingo, and looks like it's riding the unicycle. Sitting on the seat. Feet on the pedals.
The Qwen one looks like a 3-tailed, broken-winged, beakless (I guess? Is that offset white thing a beak? Or is it chewing on a pelican feather like it's a piece of straw?) monstrosity not sitting on the seat, with its one foot off the pedal (the other chopped off at the knee) of a malmanufactured wheel that has bonus spokes that are longer than the wheel.
But yeah, it does have a bowtie and sunglasses that you didn't ask for! Plus it says "<3 Flamingo on a Unicycle <3", which perhaps resolves all ambiguity.
Let's not oversell Opus' output. The Qwen flamingo is flawed but could be easily fixed with 1-2 prompts if you're really upset with it. The Opus SVG is not any better than something that I could make in Inkscape with 3 minutes and sufficient motivation. Calling Opus' flamingo "programmer art" would be an insult to programmers.
If I (commercially) made models I’d put specific care into producing SVGs of various animals doing (riding) various things ... I find it interesting how confident you seem to be that they’re not.
This is a gag that's long outlived its humor, but we're in a space so driven by hype there are people who will unironically take some signal from it. They'll swear up and down they know it's for fun, but let a great pelican come out and see if they don't wave it as proof the model is great alongside their carwash test.
Consider reading the article, which addresses all of the points you raise.
It's directly stated in the post that the entire test is meant to be humorous and not taken seriously, only that it has vaguely followed model performance to date. The author also writes that this new result shows that trend has broken.
Yeah I can imagine these popular benchmarks get special treatment in the training of new models. I wonder how they would perform for "Elephant riding a car" or "Lion sleeping in a bed"
For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size qwen 3.5. So it's at best very slightly improved, and not at all in the class of qwen 3.5 27b dense (26 solved), let alone opus 4.6 (95/98 solved).
You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against small frontier models like Haiku, Flash, or GPT nano.
If all models are trained on the benchmark data, you cannot extrapolate the benchmark scores to performance on unseen data, but the ranking of different models still tells you something. A model that solves 95/98 benchmark problems may turn out much worse than that in real life, but probably not much worse than the one that only solved 11/98 despite training on the benchmark problems.
This doesn't hold if some models trained on the benchmark and some didn't, but you can fix this by deliberately fine-tuning all models for the benchmark before comparing them. For more in-depth discussion of this, see https://mlbenchmarks.org/11-evaluating-language-models.html#...
I've been using Qwen 3.5 35B-A3B with images as input, so I suspect you perhaps didn't include the vision part of the model during testing (I use llama.cpp, where I learned I needed to include the separate mmproj part).
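In case it helps, this is roughly how the mmproj part gets wired in if you drive it through the llama-cpp-python bindings instead of the raw CLI. File names are placeholders, and the right chat handler class depends on the model family; Llava15ChatHandler is just the one I know:

    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # The separate mmproj file is loaded as the "clip model"
    # alongside the main model weights.
    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-F16.gguf")
    llm = Llama(
        model_path="qwen-35b-a3b-Q4_K_M.gguf",  # placeholder file name
        chat_handler=chat_handler,
        n_ctx=4096,
    )

    response = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///tmp/test.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }]
    )
    print(response["choices"][0]["message"]["content"])

The key point is just that the vision encoder ships as a separate file and has to be loaded explicitly; the main GGUF alone won't accept images.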
I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.
It feels like the results stopped being interesting a little while ago, but the practice has become part of simonw's brand, and it gives him something to post even when there is nothing interesting to say about another incremental improvement to a model, and so I don't imagine he'll stop.
Fun is so un-productive. Everyone doing things for "fun" is going to be sorry when they look back and realize they were wasting time having a "good time" rather than optimizing their KPIs.
It’s not a waste of time.
As the boundaries of AI are pushed we increasingly struggle to define what intelligence actually is. It becomes more useful to test what models cannot do instead of what they can. Random tasks like the pelican test can show how general the intelligence really is, putting aside the obvious flaw that the labs can optimise for such a simple public benchmark.
The whole point of this benchmark is that it asks the model to work in a modality it is not trained in and does not understand well. The result is largely meaningless. This is just like the people who are endlessly surprised by the fact that a raw LLM does not work with numbers well, or miscounts letters. In short, this test benchmarks the intelligence of the person running it, not of the model.
Such a disconnect from the minutes I lost today before giving up on Gemini, trying to get it to update a diagram in a slide. The one-shot joke stuff is great, but trying to say "that is close, but just make this small change" seems impossible. It's the gap between toy and tool.
I really wish they spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinate of a simple object in a picture.
I liked both of Opus's better, and it was very illuminating: in both cases I didn't see the errors Simon saw, and wondered why Simon skipped over the errors I saw.
I don't know what such a demo would prove in the first place. LLMs are good at things that they have been trained on, or are analogues of things they have been trained on. SVG generation isn't really an analogue to any task that we usually call on LLMs to do. Early models were bad at it because their training only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila. They still aren't good enough to be useful for anything, and such improvements don't lead them to be good at anything else - likely the opposite - but it makes for cute demos.
I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago, recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for then you could use that as a benchmark for how much the models are improving versus overfitting for their training set, but this is not that.
On thinking about it, one reason this may be something at least slightly more than training on the task is the richness with which language is filled with spatial metaphors, even in basic language not considered metaphor by laymen outside the field of linguistics proper, where concepts like Lakoff's analysis in "Metaphors We Live By" are simply part of the field (though unsurprisingly, I've occasionally seen it brought up among the HN crowd).
The amount of money you have in the bank may often "increase" or "decrease", but it also goes up and down: spatial. Concepts can be adjacent to each other, or orthogonal. Plenty more.
So, as models utilize their weights more densely, with more complex strategies learned during training, the patterns and structure of these metaphors might also be deepened. Hmmm... another thing to add to the heap of future projects: trace the geometry of activations in older/newer models of similar size with the same prompts containing such metaphors, or these pelican prompts, and test the idea so it isn't just armchair speculation.
This is a useless benchmark nowadays; every model provider trains their models on making good pelicans. Some have even trained on every combination of animal and mode of transportation.
More and more I suspect OpenAI is generating comments on HN to try to shift the discussion.
I’m not sure you’re a bot, but this is the stereotypical comment: overly critical of anything where OpenAI is not superior, or overly supportive (see comments on the Codex post today), while clearly not understanding the discussed topic at all.
LLMs are really causing serious brainrot if HTML pelican drawings are a usage basis for your programming projects. All these shitty benchmarks don't say or mean anything anyway if companies secretly tweak them on the go.
Will Claude consistently be able to deliver more value than rolling your own?
I think the future is a bunch of just-good-enough models, which is what most people need, not top-of-the-line models that require millions in hardware to run.
Not that I disagree with you in principle, but I see this the same way as "cloud": tens of thousands of companies could save gazillions of dollars by hosting their own infrastructure, and yet they continue to pay insane amounts of money to the AWSes and Azures and whatnot. While some companies' future may well be running local models, I would venture a guess that the vast majority will just eat the costs and pass on as much of them as they can to their customers...
akavel | a day ago
https://redd.it/1slz38i
stephbook | a day ago
https://x.com/JeffDean/status/2024525132266688757
If anything, the disastrous Opus 4.7 pelican shows us they don't pelicanmaxx
kristianp | a day ago
https://blog.brokk.ai/introducing-the-brokk-power-ranking/
spwa4 | 15 hours ago
Qwen 3.6 35b a3b: 34 tok/sec
Qwen 3.5 27b: 10 tok/sec
Qwen 3.5 35b a3b: doesn't support image input
jedisct1 | a day ago
It's pretty good at finding bugs, but not so good at writing patches to fix them.
stephbook | a day ago
But that Opus pelican?
refulgentis | a day ago
Pelican: saturated!
yieldcrv | a day ago
That’s so wild
atonse | 22 hours ago
Oh maybe it might continue to iterate on the existing drawing?
SJMG | 21 hours ago
This is not a refutation of astroturfing on HN, but in this case, I doubt it.
[OP] simonw | 22 hours ago
Illustrations with SVGs of pelicans riding bicycles will never be useful, because pelicans can't ride bicycles.
wongarsu | 17 hours ago
And so far, the ability to make SVGs of $animal on $vehicle seems to correlate surprisingly well with model 'intelligence'.
999900000999 | 11 hours ago
God bless these open models. Claude can't subsidize its users forever, and no one can afford $1,200 a month for LLM credits.
bdangubic | 11 hours ago
you'd be surprised....
999900000999 | 7 hours ago
Eventually another cloud provider can just spin up a few LLMs vs paying whatever Claude demands.