Edgeley, North Dakota, is a small rural town in LaMoure County, located in the southeastern part of the state. With a population hovering around 500 people, it's one of many prairie towns that exemplify the broader character of the upper Great Plains—quiet, sparsely populated, and closely tied to agriculture.


https://www.lesswrong.com/posts/bfHDoWLnBH9xR3YAK/ai-2027-is-a-bet-against-amdahl-s-law

Of course the post is right. The various FOOM claims are all bullshit. And Amdahl's Law is one of the reasons why. Just because a few things get a hundred times faster (or a million times faster) doesn't make the whole thing that much faster; the parts that don't speed up end up dominating the total.
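
The arithmetic is worth spelling out. A minimal sketch of the standard Amdahl's Law formula, where p is the fraction of the work that benefits and s is the speedup factor applied to it:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an unbounded speedup on half the work barely doubles throughput:
print(amdahl_speedup(0.5, 100))        # ~1.98
print(amdahl_speedup(0.5, 1_000_000))  # ~2.0
print(amdahl_speedup(0.9, 1_000_000))  # ~10.0 -- capped by the untouched 10%
```

The limit as s goes to infinity is 1/(1-p): the untouched fraction sets a hard ceiling no matter how extreme the speedup elsewhere.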

Also, AGI definitions vary so widely, from things that have already happened to things that are impossible, that a "prediction market" is nearly meaningless.


I have seen various commentary related to "Twilight of the Edgelords" ⚙️ ( https://www.astralcodexten.com/p/twilight-of-the-edgelords ), a piece that I don't have access to.

And, from the fragments I can see, the response I can piece together would fall under GUILD LAW. ⚙️ ( additional commentary at https://www.writingruxandrabio.com/p/the-edgelords-were-right-a-response and https://theahura.substack.com/p/contra-scott-and-rux-on-whos-to-blame )


https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/

To make Gemma 3 even more accessible, we are announcing new versions optimized with Quantization-Aware Training (QAT) that dramatically reduces memory requirements while maintaining high quality. This enables you to run powerful models like Gemma 3 27B locally on consumer-grade GPUs like the NVIDIA RTX 3090.
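
The memory arithmetic checks out, at least on the back of an envelope. A rough weights-only sketch, assuming ~27B parameters and ignoring the KV cache and runtime overhead (which are not small):

```python
PARAMS = 27e9  # Gemma 3 27B, approximate parameter count

# Weight footprint alone at different precisions; the KV cache and
# runtime overhead add several more GB on top of these numbers.
for name, bytes_per_param in [("bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_param / 2**30:5.1f} GiB")
# bf16:  50.3 GiB -- hopeless on a 24 GB card
# int4:  12.6 GiB -- fits on an RTX 3090 with room for the rest
```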

It seems pretty obvious. A majority of the people running open-source models are running them quantized on personal hardware; might as well optimize for that use case. 💡 ( it is less clear that a majority of the CPU cycles are there; but a majority of the people certainly are.)

My next round of updating the Greenland metrics will have to include the gemma3-12b-qat model. 💡 ( or, maybe the 27b. According to Hacker News, gemma3-27b-Q4 only uses ~22 GB (via Ollama) or ~15 GB (MLX). On a 24 GB machine, the 27b clearly needs the non-Ollama approach.)
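
For the Ollama path, a minimal smoke test using the ollama Python client. The QAT tag below is the one from Google's announcement; treat it as an assumption and verify with `ollama list` before wiring it into anything:

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# Tag per Google's QAT announcement -- pull it first with:
#   ollama pull gemma3:12b-it-qat
MODEL = "gemma3:12b-it-qat"

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "What is the capital of Greenland?"}],
)
print(response["message"]["content"])  # hopefully: Nuuk
```

The MLX route (mlx-lm) is the one that actually leaves headroom for the 27b on a 24 GB machine, per the numbers above.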

And, also, GPT-4.1. And probably Gemini 2.5. 💡 ( the goal for these models should be to perform at 100% accuracy.) ⚔️ ( well, actually, a few of the "correct" benchmark answers right now are incorrect.)
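
Which suggests the harness should treat known-bad answer keys specially. A hypothetical sketch; the data format and the "disputed" flag are my invention, not the actual Greenland-metrics setup:

```python
# Hypothetical format: each case has a question, a gold answer, and a
# flag marking answer keys that are themselves suspect.
CASES = [
    {"question": "What is the capital of Greenland?", "gold": "Nuuk", "disputed": False},
    # ...
]

def score(answers: dict[str, str]) -> None:
    correct = disputed_misses = 0
    for case in CASES:
        got = answers.get(case["question"], "").strip().lower()
        if got == case["gold"].strip().lower():
            correct += 1
        elif case["disputed"]:
            disputed_misses += 1  # review by hand instead of auto-failing
    print(f"{correct}/{len(CASES)} correct; {disputed_misses} misses on disputed keys")
```

At that point "100% accuracy" means 100% on the keys that are actually right, with the disputed ones resolved by hand.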