Channel: LLM - Large Language Model discussion
Other than the fit and finish, the Greenland project is done.
I have all the metrics, timing data, and performance data I need. ⚔️ ( well, actually, there will probably be a half-dozen more "exemplars", and a half-dozen more "models". but adding each is one line of code, plus 5-30 minutes of "wait for it to update") ⚔️ ( well, actually, adding an exemplar starts with telling Claude one line of directions, and waiting for it to write 75 lines of code )
The zeroth takeaway is that "token introspection" is hard. LLMs aren't designed with the tools to do this correctly.
A native tool of "convert this word to letter-tokens" would make "count the Rs" easier. But it wouldn't solve everything; small LLMs struggle even with "count to three".
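To make the idea concrete, here is a minimal sketch of what such a tool could look like. The function names and the idea of exposing this as a tool are my own illustration, not any vendor's actual tool-use API:

```python
# Hypothetical "letter-tokens" tool -- names and shape are illustrative only,
# not any provider's real API.
def letter_tokens(word: str) -> list[str]:
    """Break a word into single-character tokens so the model never has to
    introspect its own subword tokens."""
    return list(word)

def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return sum(1 for ch in letter_tokens(word.lower()) if ch == letter.lower())

print(letter_tokens("strawberry"))       # ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']
print(count_letter("strawberry", "r"))   # 3
```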
My estimate is that the task of "write Python code to generate and score a random poker hand" is halfway to "can code anything".
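For reference, the "generate" half of that task is only a few lines. This is my own minimal sketch (the two-character card representation is an assumption, not what any particular model produced):

```python
import random

RANKS = "23456789TJQKA"
SUITS = "cdhs"

def deal_hand(n: int = 5) -> list[str]:
    """Deal n cards from a shuffled 52-card deck, e.g. ['Ah', '7c', ...]."""
    deck = [rank + suit for rank in RANKS for suit in SUITS]
    return random.sample(deck, n)

print(deal_hand())  # e.g. ['Qs', '9d', '2c', 'Ah', '9h']
```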
The 4B-8B models struggle with the simpler task of "write code to determine if a poker hand is a straight / flush".
The smallest/older API models (claude-haiku 3, gemini 1.5) mostly get it right, but don't handle unmentioned edge cases ⚙️ ( the "wheel" straight, Ace Two Three Four Five, does not always get included).
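Here is roughly the check the smaller models tend to miss, sketched under the same hypothetical card representation as above; the wheel is the one special case that has to be handled explicitly:

```python
def is_flush(hand: list[str]) -> bool:
    """True if all five cards share a suit (cards are strings like 'Ah')."""
    return len({card[1] for card in hand}) == 1

def is_straight(hand: list[str]) -> bool:
    """True for five consecutive ranks, including the A-2-3-4-5 'wheel'."""
    order = {r: i for i, r in enumerate("23456789TJQKA", start=2)}  # 2..14
    values = sorted(order[card[0]] for card in hand)
    if values == [2, 3, 4, 5, 14]:  # the wheel: Ace plays low
        return True
    return len(set(values)) == 5 and values[-1] - values[0] == 4

print(is_straight(["Ah", "2c", "3d", "4s", "5h"]))  # True (the wheel)
print(is_flush(["Ah", "2h", "3h", "4h", "5h"]))     # True
```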
For the largest models tested (which are still "mid-sized", like gpt-4.1-mini), there are only style issues. And style is something which can be specified in the context text. 💡 ( it is also, somewhat, a matter of personal preference. the fact that the machine did not use my preferred style (without me telling it to do so) is not a point against it)
The problems of commission are sometimes worse than the problems of omission.
For various reasons, the models want to give a second definition of the word "granite" (beyond the type of rock). This was most commonly granite as a metaphor, but sometimes granite as a type of countertop. Other definitions were more of a stretch.
The example sentences demonstrate the contrived nature. The sentence "The team's resolve granited in the face of adversity." is not proper English. "The team's granite defense kept the opponents from scoring." is worse. 💡 ( and those are from the larger models. The small models have some pure hallucinations. "granite" referred to a unit of weight equal to 40 pounds? Nope.)
Almost all the models stated that granite is composed of quartz, feldspar, and mica. All the models tested knew which battle happened in 1485 during the Wars of the Roses, and who won it.
In one sense, this is not surprising. If you imagine the LLM as a dictionary that talks, it would certainly have this information. ⚔️ ( well, actually, the Wars of the Roses wouldn't be in most dictionaries; that would be an encyclopedia.) 💡 ( I expect that, going forward, this will be a distinction without meaning.)