View All LLM Messages
2025-05-02 15:24:32

Other than the fit and finish, the Greenland project is done.

I have all the metrics, timing data, and performance data I need. ⚔️ ( well, actually, there will probably be a half-dozen more "exemplars", and a half-dozen more "models". but adding each is one line of code, plus 5-30 minutes of "wait for it to update") ⚔️ ( well, actually, adding an exemplar starts with telling Claude one line of directions, and waiting for it to write 75 lines of code )


The zeroth takeaway is that "token introspection" is hard. LLMs aren't designed with the tools to do this correctly.

A native tool of "convert this word to letter-tokens" would make "count the Rs" easier. But it would not solve everything; small LLMs struggle with even "count to three".
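Such a tool would be almost trivial on the software side; the hard part is wiring it into the model. A minimal sketch of what it might expose (the function names are my own invention, not any real tool's API):

```python
def letter_tokens(word: str) -> list[str]:
    """Split a word into one token per letter, sidestepping subword tokenization."""
    return list(word)

def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return sum(1 for t in letter_tokens(word.lower()) if t == letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

The point is not that this code is hard to write; it is that the model never sees the word letter-by-letter unless something like this hands it over.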


My estimate is that the task of "write Python code to generate and score a random poker hand" is halfway to "can code anything".

The 4B-8B models struggle with the simpler task of "write code to determine if a poker hand is a straight / flush".

The smallest/older API models (claude-haiku 3, gemini 1.5) mostly get it right, but don't handle un-mentioned edge cases ⚙️ ( the "wheel" straight, Ace Two Three Four Five, does not always get included) .
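For reference, handling the wheel only takes one extra check. A sketch of a straight test with the Ace-low case made explicit (the names and rank encoding are my choices, not taken from any tested model's output):

```python
def is_straight(ranks: list[int]) -> bool:
    """ranks: five card ranks, 2-14 (Ace = 14).
    Handles the 'wheel' (A-2-3-4-5) by also trying the Ace as 1."""
    def consecutive(rs: list[int]) -> bool:
        rs = sorted(rs)
        return len(set(rs)) == 5 and rs[4] - rs[0] == 4
    if consecutive(ranks):
        return True
    # Wheel: re-check with the Ace (14) demoted to 1.
    return 14 in ranks and consecutive([1 if r == 14 else r for r in ranks])

print(is_straight([14, 2, 3, 4, 5]))      # True  (the wheel)
print(is_straight([10, 11, 12, 13, 14]))  # True  (Broadway)
print(is_straight([2, 3, 4, 5, 7]))       # False
```

The edge case is two extra lines; the models that miss it are not short on ability to write the lines, they just never think of the case.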

For the largest models tested (which are still "mid-sized", like gpt-4.1-mini), there are only style issues. And style is something which can be specified in the context text. 💡 ( it is also, somewhat, a matter of personal preference. the fact that the machine did not do my preferred style (without me telling it to do so) is not a point against it)


The problems of commission are sometimes worse than the problems of omission.

For various reasons, the models want a second definition of the word "granite" (beyond the type of rock). This was most commonly granite as a metaphor, but sometimes granite as a type of countertop. Other definitions were more of a stretch.

The example sentences demonstrate the contrived nature. The sentence "The team's resolve granited in the face of adversity." is not proper English. "The team's granite defense kept the opponents from scoring." is worse. 💡 ( and those are from the larger models. The small models have some pure hallucinations. "granite" referred to a unit of weight equal to 40 pounds? Nope.)


Almost all the models stated that granite is composed of quartz, feldspar, and mica. All the models tested knew which battle happened in 1485 during the Wars of the Roses, and who won it.

In one sense, this is not surprising. If you imagine the LLM as a dictionary that talks, it would certainly have this information. ⚔️ ( well, actually, the Wars of the Roses wouldn't be in most dictionaries; that would be an encyclopedia.) 💡 ( I expect that, going forward, this will be a distinction without meaning.)

2025-05-02 17:59:42

That link is https://spaceship.computer/greenland/ .


Nobody particularly cares about the "space-time tradeoff" with these models. 💡 ( which is a shame, because it is very relevant to both industrial uses and AI safety concerns)

If an 8B model does 5% better because of "chain-of-thought" but takes 15 times longer, it's generally not actually better than a 14B model would have been.
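Plugging in numbers makes the point concrete. The 5% and 15x are from the claim above; the baseline score, baseline time, and the 14B figures are invented purely for illustration:

```python
# Hypothetical numbers to illustrate the space-time tradeoff.
base_8b_score, base_8b_secs = 0.60, 2.0                              # 8B, direct answer
cot_8b_score, cot_8b_secs = base_8b_score * 1.05, base_8b_secs * 15  # +5%, 15x slower
alt_14b_score, alt_14b_secs = 0.66, 3.5                              # a plausible 14B baseline

print(f"8B + CoT: score {cot_8b_score:.2f} in {cot_8b_secs:.0f}s")
print(f"14B:      score {alt_14b_score:.2f} in {alt_14b_secs:.1f}s")
```

With anything like these numbers, the 14B model wins on both quality and wall-clock time, which is the whole point of caring about the tradeoff.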

And, a lot of the "thought" should be tools, rather than the illusion-of-thought that the small LLMs, at least, love. 💡 ( the prime example is the "what's the capital of Spain? Oh, I think I heard once that it is Madrid!" style bullshit.)


We don't need a mythical super-human AI to generate mass unemployment among knowledge-workers.

We don't need models that have a desire to "escape" or "replicate". We don't need to worry about "alignment". We certainly don't need "By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun."

The ordinary-intelligence AI, that I can already run on my computer, is already enough to trigger mass-unemployment. ⚔️ ( well, actually, the 8B models aren't quite good enough or fast enough. but the GPT-4.1-nano size models are cheap enough and good enough to be sufficient. once the tools and the workflows are improved.)

But, this social change is not something that an AI Safety Team can address. The myth-making of the all-powerful AI is, for lack of a better word, dumb. If you really want there to be meaning to it, you can apply enough "it's a metaphor" to make their arguments somewhat match the future. But you can't kill a metaphor with a shotgun.


There is an insidious meme in the LLM community, that a benchmark where models can get 100% is a bad benchmark.

This could not be farther from the truth.

If your only concern is "how advanced is the state-of-the-art model", there is a slight amount of sense to this. But, the new benchmarks are often mind-bogglingly stupid.

When the questions are obscure trivia that shouldn't even be in the training set, deliberately-obfuscated mathematical puzzles, or "evaluate this complicated Python function" without using Python, it is arguable that getting the question right (from memory, in a short response) is the wrong response. The machine shouldn't know, or should have to spend more time/effort than is allowed. 💡 ( the machine isn't magic. if you ask it to solve a computational task that takes O(n^3) time in O(n) time, it won't do it. at best, it will make guesses that evade your spot-checking.)

I affirmatively want benchmarks that GPT-4.1-mini gets a perfect score on. I want to know which tasks the machine can do perfectly, and at what point it started being able to do so.


One approach I have considered, but not found any good outcomes from, is the "consensus of mediocre models" approach.

If you take seven 8B models, ask them all the same question, and then "merge" the outputs, will you get a better result?

This is not exactly the same as the "mixture of experts" architecture used in various models. But, there are similarities. ... Perhaps the difference is that Mixture of Experts is beneficial, and mixing general-purpose models is not.
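For short factual answers, the obvious "merge" is a majority vote. A minimal sketch, with the seven model calls replaced by canned strings:

```python
from collections import Counter

def consensus(answers: list[str]) -> str:
    """Merge several model outputs by majority vote on the normalized answer."""
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# Stand-ins for seven 8B models answering the same question.
outputs = ["Madrid", "madrid", "Barcelona", "Madrid ", "madrid", "Seville", "Madrid"]
print(consensus(outputs))  # madrid
```

Voting only works when answers are short and canonical; "merging" seven free-form paragraphs is a much harder problem, which may be part of why the approach produced nothing good.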

2025-05-02 19:23:20

JSON output is almost a necessity for an LLM to be usable today. All of the major LLM platforms have it in some form. But, if you are using a model from 2023, it might not support it, or it might not work very well.

While many of the improvements from 2 years ago are in the tools running the LLM 💡 ( such as the token-selection algorithm) , there is some amount of understanding of the output-format that needs to be trained into the model.
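On the consuming side, the minimum viable pattern is "parse, and re-prompt on failure". A sketch with the model call stubbed out as a plain function (the retry wording is arbitrary, not any platform's API):

```python
import json

def get_json(ask_model, prompt: str, retries: int = 2) -> dict:
    """Ask a model for JSON, re-prompting if the output doesn't parse."""
    for attempt in range(retries + 1):
        raw = ask_model(prompt if attempt == 0
                        else prompt + "\nRespond with valid JSON only.")
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # chatty preamble, markdown fences, etc. — try again
    raise ValueError("model never produced valid JSON")

# A stand-in 'model' that fails once, then complies.
replies = iter(['Sure! Here is the JSON: {"a": 1}', '{"a": 1}'])
print(get_json(lambda p: next(replies), "Return {\"a\": 1} as JSON"))  # {'a': 1}
```

The platform-level "JSON mode" features mostly exist to make the retry loop unnecessary; with a 2023-era model, you are writing this loop yourself.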


Trying to test Phi-2 (December 2023, 2.7B params) or Mistral-0.1 (September 2023, 7B params) seems unlikely to be worth any time/effort. I know there are newer models that are better, and I'm not sure there would be usable results at all.

Does that mean the models we have today will be useless in 18 months? Probably not. Maybe there will be a GPT-4.1-nano quality model that is 2c IN / 5c OUT per million tokens 💡 ( currently GPT-4.1-nano is 10c IN / 40c OUT per million tokens) . For almost all personal uses, this is not a substantial improvement.


Whether Falcon 3 ⚙️ ( https://huggingface.co/blog/falcon3 ) is worth considering is a different question.

Their press-release has benchmarks showing them as slightly better than earlier systems of similar size. But nothing ground-breaking; and in fact we know there can't be anything too unique. If there were, it would have already been copied.

It is "just another model". 💬 ( if you want to build a forest, it helps to have many different trees)


What about Granite (the IBM offering)? ⚙️ ( https://www.ibm.com/granite/docs/ )

This one I happened to already test. The results were very unremarkable. Like most 8B models, this 8B model gave acceptable results for tasks that did not require deep insight or precision.


The highest-profile "local models" are Gemma ⚙️ ( Google's latest model) , Llama ⚙️ ( Facebook's latest model) , QWEN ⚙️ ( Alibaba's latest model) , and Phi ⚙️ ( Microsoft's latest model) . 🔥 ( Amazon and Apple do not seem to be releasing their own models. Netflix is not, either.) 💡 ( there are others; Mistral is probably the leading European provider.) 🔥 ( I still don't care about Deepseek; the "thought" is largely a party-trick that people will see through soon enough ... also most other models also do that in some way now.)

And, all of these seem to be hitting limits at the 8B param size. The latest releases are more interesting at the 24-40B param size. Which can be run on a local machine ... just not the ones I own.


The 1.5B parameter models are useful for speculative decoding ⚙️ ( https://research.google/blog/looking-back-at-speculative-decoding/ ) , which is where you use one model to make a cheap "guess" for the larger model, allowing more tokens to be calculated at once.
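A toy sketch of one greedy round of that guess-and-verify loop, with both "models" reduced to lookup tables (real implementations verify the whole proposal in one batched forward pass, and accept/reject probabilistically rather than by exact match):

```python
def speculative_step(draft_next, target_next, prefix: list[str], k: int = 4) -> list[str]:
    """One greedy round of speculative decoding.
    draft_next / target_next: functions mapping a token prefix to the next token.
    The draft proposes k tokens; the target keeps the longest agreeing run,
    then contributes one token of its own."""
    proposal = []
    for _ in range(k):
        proposal.append(draft_next(prefix + proposal))
    accepted = []
    for tok in proposal:
        if target_next(prefix + accepted) == tok:
            accepted.append(tok)
        else:
            break
    # The target always supplies the token after the accepted run,
    # so a round never yields less than plain one-token decoding.
    accepted.append(target_next(prefix + accepted))
    return accepted

# Toy models: the draft agrees with the target on the first two tokens only.
target = {0: "the", 1: "cat", 2: "sat", 3: "down"}
draft = {0: "the", 1: "cat", 2: "slept"}
t_next = lambda p: target[len(p)]
d_next = lambda p: draft.get(len(p), "?")
print(speculative_step(d_next, t_next, []))  # ['the', 'cat', 'sat']
```

When the draft guesses well, the expensive model emits several tokens for roughly the cost of one; when it guesses badly, you only lose the cheap model's work.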

Beyond that, they are largely toys. With fine-tuning and testing, you can probably use a model for a single useful task. But the 1.5B models are not general-purpose AI, and they probably never will be.


For "cloud" models, there is Gemini ⚙️ ( Google) , GPT ⚙️ ( OpenAI) , and Claude ⚙️ ( Anthropic) . And, several others that I haven't bothered with. 💡 ( Perplexity has an API called Sonar. Amazon has something called Nova. And there is still TSFKAT's offering.) ⚙️ ( TSFKAT = "the site formerly known as Twitter")

And ... without a specific work-task, it is unlikely that benchmarking / testing these models will come up with any useful data.