medicine hat (1/6)
🤖 Medicine Hat, a vibrant city in southeast Alberta, Canada, blends natural beauty, cultural heritage, and modern convenience. Known as "The Gas City" for its rich natural gas reserves, the city enjoys over 330 days of sunshine annually, making it one of Canada's sunniest destinations. Nestled along the South Saskatchewan River, Medicine Hat offers stunning landscapes of rolling prairies, coulees, and river valleys, complemented by outdoor attractions like Echo Dale Regional Park and Police Point Park. Rich in history, the city derives its name from the Blackfoot term "saamis," meaning a medicine man's headdress, and features landmarks like the historic clay district and the Esplanade Arts and Heritage Centre. Its welcoming community of approximately 65,000 residents enjoys a balance of small-town charm and urban amenities, fostering a dynamic economy rooted in energy, agriculture, and arts.
plans for Thursday:
- One Zoom meeting (already done; notes withheld)
- Snow removal (once it stops coming down)
- Cook lamb roast (3 days of leftovers) 🌎( the House Rule is "no beef or pork")
- Fix six bugs on Atacama ⚙️( note https://earlyversion.com/messages/46 )
- Attempt to fix Peloton bike 💡( the most likely problem is that some of the screws are loose or the bike is slightly non-level; adjusting all the screws might fix the issue)
- Hanukkah cleanup - remove melted wax from the menorah
the weekly Zvi Mowshowitz dispatch is out.
the takes on "AI Alignment" continue to border on the unreadable. the specific question of "what could an OpenAI non-profit do with $30 billion?" is interesting, but none of the answers are.
the Gary Marcus counter-AI content is also borderline unreadable. he proposes a challenge built around tasks (write a Pulitzer-winning novel, make a major scientific discovery) that will obviously require some amount of human collaboration the first time they happen. so even if the tasks are accomplished, Marcus can weasel out of conceding that they prove AI isn't useless.
the "here is a model improvement" updates are also less interesting now than they were last year.
as for both "medium" ⚙️( any model that can run on an M3 laptop in 8GB of RAM) and "large" ⚙️( 70B models, Claude Haiku, GPT-4o-mini) models: what they do today is "good enough", and progress is likely to just shift the categories rather than improve performance.
💡 that is: maybe those size cutoffs are 4B/30B in six months? but the paradigm will hold, and performance will stay roughly the same.
meanwhile, extra-large models ⚙️( Claude Sonnet, GPT-4o) can still get better, but progress is slowing.
as for "foundation" models: 💡( Foundation models are too expensive to make available in a non-subscription ChatGPT-style interface. And they always will be.)