Three of the Trakaido fixes are done: the hover issues, the multiple-choice answer-count issues, and the improved word-selection algorithm 🔥 forgotten in the last dispatch, but also important.
The word lists are somewhat polished, but they haven't been updated in prod yet.
The React re-draw bug is not fixed. I may need a smarter LLM to fix the bug; the ones I am using aren't finding it. 💡 the solution is probably to define a "canvas" area for the question, and redraw it completely for each question. This will require fixing the fact that the "question type" takes up an excessive amount of screen real estate.
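A minimal sketch of that "canvas" idea, assuming a hypothetical QuestionCanvas component and a question object with a stable id (none of these names are the app's real ones): keying the canvas on the question id forces React to unmount and remount it, so nothing from the previous question can linger.

```jsx
// Sketch only: component and prop names here are hypothetical, not Trakaido's actual code.
import React from "react";

function QuestionCanvas({ question }) {
  // Everything about the current question renders inside this one area.
  return <div className="question-canvas">{question.prompt}</div>;
}

export function ActivityArea({ question }) {
  // Keying on question.id makes React discard the old canvas and mount a fresh
  // one whenever the question changes, so no stale UI survives the transition.
  return <QuestionCanvas key={question.id} question={question} />;
}
```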
💡 There is also a plan to re-generate the word lists entirely. But, a few key issues remain before I can run the "ask the LLM to regenerate data" step. For example: "are these two definitions of the same word?"
Additional "short" features, for tomorrow, include:
have a way of showing/counting the available words not yet "exposed" in Journey Mode.
have "Drill Mode" expose all words in a corpus if you do well enough
a more-linear path through the lessons. 💡 a full "tech tree" style path could be done. Duolingo got rid of that, for some reason.
make some of the "You're doing great" interstitials remind you about app features ⚙️ such as "don't type the text in parentheses" and grammar ⚙️ such as "Here is the pronoun table, with audio"
per-day stats: questions answered, words reviewed, words at each "level" of comprehension
track "last exposed" and "last correct answer"
an option to "always match pronoun" on two-word compounds like "He ate".
We live in a world that expects everyone to be specialized in politics. This is a worrisome thing. How can a democracy work when a majority of the people are disengaged from the national issues that drive voting?
Perhaps the answer is that it can't; a more staged system, where the people elect ward/city chiefs, and the chiefs vote on a candidate, might be better. 💡 but the flaw there is obvious: the same capture that the various political parties have experienced at their National Conventions regarding the Presidential Candidate. ⚙️ it becomes polarized. 100 years ago, people could be selected for the convention before anyone knew the candidates for the Presidential nomination. Today, each faction selects the luminaries who will be required to vote for its candidate.
The idea of "governance by per-profession representatives" has its own flaws.
We can imagine a "House of Lords" style setup, with (say) 200 seats, 12 of which are for medical doctors. Initially, they may be selected based on their skill in medicine 🔥 and at getting elected. But, after the passage of time, they will be selected based on their loyalty to a faction. 💡 this is how Dr. Oz is in charge of Medicare.
I have not yet found any flaw in "governance by per-birth-year representatives".
For example, under an STV system, all people born 1970-1975 select 8 representatives who are members of that cohort, with a provision for alternates should a member suffer an untimely death or (in the case of an inferior assembly) leave the jurisdiction.
The technical problems are addressable. No representation until the age of 10, "parental representation" (where the candidates are not members of the cohort, but parents of those members) until the age of 20. At the age of 70, they elect "permanent members"; no further elections, but no replacement on death.
Monango, North Dakota is a small rural town located in Dickey County, in the southeastern part of the state. With a population of under 50 residents, it's one of the least populous communities in North Dakota. The town is surrounded by wide stretches of farmland and prairie, reflecting its strong agricultural roots.
I have some more ambitious tasks for Trakaido planned. New modes, new languages, new webdomains, new spaced repetition algorithms. 🔥 I don't think of it as "spaced repetition". It is just "studying the material you don't know".
There is also "set up a pipeline to check audio file quality". Which is not ambitious as it is defined above, but it is perhaps equally difficult.
But, there is a short-term list for today:
Do one final "cleanse" of the wordlist
Re-enable the ability to have 6 or 8 multiple choice options available
Re-analyze the "audio" logic; on mobile devices there are some hover/selection-focus related issues causing audio truncation or double-plays.
Find the reason that part of the screen isn't re-drawing properly after a "New Word" activity in Journey Mode. ⚙️ all projects develop a lexicon. The "Mode" is the type of activity / question-selection algorithm, and the "Activity" is the individual question.
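One possible shape for that double-play guard, as a sketch using plain browser Audio elements (not the app's actual audio code): keep a handle to the in-flight clip and stop it before starting another.

```javascript
// Sketch of one way to avoid overlapping plays; not Trakaido's actual audio code.
let currentClip = null;

export function playAudio(url) {
  // Stop whatever is already playing so repeated taps/hovers can't stack clips.
  if (currentClip) {
    currentClip.pause();
    currentClip.currentTime = 0;
  }
  currentClip = new Audio(url);
  // play() returns a promise in modern browsers; swallow the AbortError that
  // fires when one play is interrupted by the next.
  currentClip.play().catch(() => {});
  return currentClip;
}
```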
And, one additional question too complex for this list, but still in mind: how hard is it to convert this to a native app for iOS/Android? ⚙️ it would allow audio caching and better on-device stats, but it involves "interacting with the App Store". It could also generate revenue.
The first time trial is over, and the GC standings are recognizable.
They have been recognizable throughout, with cross-winds on day 1 and attacks by Pogacar and Vingegaard.
Tadej Pogacar - the odds-on favorite.
Remco Evenepoel - the odds-on favorite for third place.
Kevin Vauquelin
Jonas Vingegaard - the odds-on favorite for second place.
Matteo Jorgenson
Mathieu Van Der Poel
Joao Almeida
Primoz Roglic
Florian Lipowitz
Aside from Van Der Poel, those could well be the top 8 finishers. Other contenders for the top 8 include Oscar Onley (11th), Enric Mas (13th), and Tobias Johannessen (15th).
NBC has Phil Liggett, again. I had heard he had retired; apparently not. He makes viewing the race less enjoyable.
💡 and, I would prefer "listen-only" to "video-only", much of the time.
There have been no mountains of note, only hills. This is how a rouleur like Van Der Poel ends up at the top of the standings. ⚙️ a rouleur is a rider who excels at stages that are neither sprint finishes nor high mountains.
A new paradigm is emerging, one driven not by page rank, but by language models. We’re entering Act II of search: Generative Engine Optimization (GEO). ...
It’s no longer just about click-through rates, it’s about reference rates: how often your brand or content is cited or used as a source in model-generated answers. In a world of AI-generated outputs, GEO means optimizing for what the model chooses to reference, not just whether or where you appear in traditional search. That shift is revamping how we define and measure brand visibility and performance.
Already, new platforms like Profound, Goodie, and Daydream enable brands to analyze how they appear in AI-generated responses, track sentiment across model outputs, and understand which publishers are shaping model behavior. These platforms work by fine-tuning models to mirror brand-relevant prompt language, strategically injecting top SEO keywords, and running synthetic queries at scale. The outputs are then organized into actionable dashboards that help marketing teams monitor visibility, messaging consistency, and competitive share of voice.
Canada Goose used one such tool to gain insight into how LLMs referenced the brand — not just in terms of product features like warmth or waterproofing, but brand recognition itself. The takeaways were less about how users discovered Canada Goose, but whether the model spontaneously mentioned the brand at all, an indicator of unaided awareness in the AI era.
This kind of monitoring is becoming as important as traditional SEO dashboards. Tools like Ahrefs’ Brand Radar now track brand mentions in AI Overviews, helping companies understand how they’re framed and remembered by generative engines. Semrush also has a dedicated AI toolkit designed to help brands track perception across generative platforms, optimize content for AI visibility, and respond quickly to emerging mentions in LLM outputs, a sign that legacy SEO players are adapting to the GEO era.
This is a mix of cargo-cult marketing and pure bullshit. The theory behind these companies is fatally flawed for a few reasons:
the models use a dataset that is 9-12 months old; whatever changes these companies make won't show up immediately.
there are no "traffic stats". The traffic stats that they provide have to be fake.
the 30 "companies" listed include a lot that seem fake ⚙️https://www.limy.ai and https://relixir.ai are two that are just "somebody who started an idea at YCombinator 3 months ago, and don't actually have anything sellable yet. Key links like "pricing" and "features" don't exist.
LLMs are not indexes. They are statistical models of language, trained on enormous corpora to predict token sequences. There is no top 10 list inside GPT-4 or Claude. There is only a tangled web of parameter weights encoding the probability that, given a prompt, certain tokens will follow. Trying to optimize your brand’s presence in that is like trying to guarantee your reflection in a kaleidoscope. ...
What’s more, the entire underlying substrate is profoundly unstable. Even minor prompt rephrasings can dramatically alter which brands get mentioned. Change the context window by 10 tokens, or adjust the system prompt’s tone, and you might collapse entirely different parts of the model’s probability distribution.
If you want your "ChatGPT ranking" to be better, questions like How does temperature, top-p sampling, and prompt framing alter our probabilistic surface area across different LLMs? don't actually matter.
The entire argument is flawed. It is just "this is too complex to understand, random means anything can happen, oogie-boogie".
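For what it's worth, temperature and top-p are not incomprehensible magic; they are simple, well-defined operations on a next-token distribution. A toy sketch, with invented logits and brand names:

```javascript
// Toy illustration of temperature + top-p sampling; the numbers and "brands" are invented.
const logits = { brandA: 2.1, brandB: 2.0, brandC: 1.2, brandD: 0.3 };

// Softmax with temperature: lower temperature sharpens the distribution,
// higher temperature flattens it.
function softmax(logits, temperature) {
  const scaled = Object.entries(logits).map(([tok, l]) => [tok, Math.exp(l / temperature)]);
  const z = scaled.reduce((sum, [, v]) => sum + v, 0);
  return scaled.map(([tok, v]) => [tok, v / z]);
}

// Top-p (nucleus) truncation: keep the smallest set of tokens whose cumulative
// probability reaches p, renormalize, and give everything else probability zero.
function topP(probs, p) {
  const sorted = [...probs].sort((a, b) => b[1] - a[1]);
  const kept = [];
  let cumulative = 0;
  for (const [tok, pr] of sorted) {
    kept.push([tok, pr]);
    cumulative += pr;
    if (cumulative >= p) break;
  }
  const z = kept.reduce((sum, [, v]) => sum + v, 0);
  return kept.map(([tok, v]) => [tok, v / z]);
}

// At temperature 0.7 and top-p 0.9, brandD is cut from the kept set entirely;
// nudge the logits slightly (a different prompt) and the kept set changes.
console.log(topP(softmax(logits, 0.7), 0.9));
```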
Hopefully today (or possibly tomorrow) Trakaido will be at a "release milestone".
The three main issues I am dealing with:
LLMs are bad at state machines, and React has a lot of them.
LLMs are bad at succinct UIs, and mobile apps need them.
The app was architected so that most of the code was in Trakaido.jsx and passed down to the "Modes". But, with additional modes, it is now three layers (Trakaido to Mode to Activity), and the functions being passed (like playAudio) should be imported directly by the Activity. This refactor, somehow, is beyond the LLMs' unaided capabilities.
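A minimal sketch of the target shape, with hypothetical file paths and names: the Activity imports the helper itself instead of receiving it through two layers of props.

```jsx
// Sketch only; paths and names are illustrative, not the app's real layout.
import React from "react";
import { playAudio } from "../lib/audio"; // shared helper, imported directly

export function MultipleChoiceActivity({ question }) {
  // No playAudio prop threaded down Trakaido -> Mode -> Activity;
  // the activity reaches for the helper on its own.
  return (
    <button onClick={() => playAudio(question.audioUrl)}>
      {question.prompt}
    </button>
  );
}
```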
So, after 90 minutes of trying and failing to get it to work, I reverted and am taking a more hands-on approach to the refactoring.
Yesterday I did a Google search to identify the most successful movie of the last 5 years. Instead I got an AI response—which I didn’t ask for. AI identified a film from 2019 as the most popular movie in the last 5 years. And it even specified that the movie came out in 2019.
AI always serves up this garbage—it makes mistakes a child would easily avoid. And now tech CEOs want AI to rewrite history? It won’t even get the dates correct.
It gets worse, Google AI seems to think that the period from 2019 through 2024 is five years in duration. It can’t even count to five without making a mistake.
The period from 2019 to 2024 IS five years. Yes, counted in whole calendar years it would be six. But a period of 5 years that ends in May 2024 starts in May 2019, so it spans 2019-2024.
That Samaritan's entire schtick seems to be saying that AIs are garbage because they didn't make the same mistake that he made. And, that the solution is for more people to get an Oxford education like he did.
🔥 It is both a humblebrag and a completely stupid idea. "Use more Latin! Don't even allow typewriters!" His solution is to stick one's hand into the sands of the past and assume that this will fix everything. Because he's not a technologist; he's an old musician who thinks he knows everything but very clearly does not.
The PSJVTA’s personal jurisdiction provision does not violate the Fifth Amendment’s Due Process Clause because the statute reasonably ties the assertion of jurisdiction over the PLO and PA to conduct involving the United States and implicating sensitive foreign policy matters within the prerogative of the political branches.
The primary reason why the statute should fall is that the United States inherently doesn't have jurisdiction over a quasi-state entity halfway around the world. But, this isn't based in a concept of "due process". If anything, by asserting a right to due-process, the PLO does implicitly consent to jurisdiction, in a way that "not ceasing a policy" does not.
The "foreign policy" note is more concerning. There is a 🔥 far-right theory of government that would hold that the Constitution only governs how the federal government interacts with US citizens, and does not bind its external actions 💡 other than a few enumerated exceptions, such as "participating in the slave trade". I disagree with this; and I generally feel that the blanket exception various courts are working towards is a loophole large enough to drive a truck through.
But, the limits of American power will remain evident. A court can issue as many universal injunctions as it wants, but the PLO will not act based on them. And the ever-increasing fines based on foreign activity by a kangaroo-court will impugn the United States more than they will ever punish the PLO. When Russia fines Google an amount so large the TV announcer cannot pronounce it ⚙️ https://www.bbc.com/news/articles/cdxvnwkl5kgo, it is dismissed as the folly of a rogue state. 💡 Clarence Thomas, in his concurrence, spells it out more thoroughly. That Congress may override general principles of international law does not imply that it should, but instead that the relevant considerations are not constitutional ones. If you view international law as superseding the constitution on certain matters, it should not be surprising that the Constitution does not incorporate these restrictions.
This set is primarily "gameplay" focused, with getting the "audio" right as a primary goal. The secondary goal is getting the "tools to generate wordlists" working. ⚙️ those are "greenland" projects, where the language corpus and the LLM queries for translations live.
There are several glaring factual or interpretation errors that cloud his analysis of the space.
But first, his analysis is that "tech platforms" go through a three-stage cycle:
Identify the moat
Open the gates
Close for monetization
This is, on some level, accurate. There are a lot of people trying this on Substack, or elsewhere. Give it away to get viral growth when you are small, charge and profit when you are big. Business 101.
His analysis of how this worked for multiple large tech companies, unfortunately, is fatally flawed.
Facebook - to start with, several of the "added features" like Marketplace and Photos existed years before the ill-fated "Facebook Platform" ever launched. But, the real problem is that Zynga-style gamification cluttering activity feeds was a disaster 💡 although, in hindsight, it was probably better than the political takes that followed it. They could not allow it forever, so it stopped. And, without a limitless reservoir of free advertising, Zynga crumbled.
Also, constantly throughout this post, the author is nostalgic for what can best be described as awful crap. The "quizzes" and "vampire bite" apps were not something to be defended; the people who ostensibly "made millions" from them are not to be celebrated.
Apple - first off, the "70-30" revenue distribution is treated as both "open" and "closed". Once again, people being able to make millions off the "iFart" app is not to be celebrated. And the demands that the platform never include features that already exist in apps built on the platform are unreasonable.
But the more serious concern is the specter of government regulation. Most of the "privacy restrictions" he criticizes are imposed by the government. And the anti-trust concerns govern pricing more than anything else.
Google - the SEO ⚙️ search engine optimization industry is largely the scourge of the earth, and should be destroyed. 💡 there seems to be a trend that it is bottom-feeders, creators of frivolities, and abusers of the public commons that suffer from this "contraction". Unlike the author, overall I would prefer those changes.
The problem here is that the moat is contrived and unrelated to the changes. The increase in oneboxes was certainly a change from the 2004 philosophy ... but it's unrelated to market-share growth against Yahoo. ⚙️ in Google parlance, a "onebox" is a structured-content response, other than organic search or an ad; for example, a "weather" onebox for a search for the weather. They were called "oneboxes" because there would be only one per search-result page, but that seems to have gone by the wayside.
LinkedIn - I'm not even sure what he's talking about. I don't believe there was a "2 year window" where LinkedIn encouraged content creators. I think it was either a COVID effect, or a personal observation of this guy extrapolated to the whole site.
And, once again, the people creating "B2B Marketing" content are people I do not want to succeed.
As a reminder, viruses are bad. These people act like cockroaches, and then are offended that people want to make them go away.
So the conclusion, that platform cycles are accelerating, cannot be supported by this data. And the attempt to project this onto ChatGPT is bad.
The moat has to be chat memory ... because this guy can't think of anything else. And, even if it is ... the platform that can access your email history is more valuable than the chat history.
Also, ChatGPT can't shut down the API. For one, it's profitable. For two, the alternative of "self-hosted models" is too good for this to meaningfully harm the (now-)competitors.
The ChatGPT "market share" advantage is driven by higher name recognition among non-technical people, and people who haven't tried competitors. They are in the position of Yahoo! and MySpace. I'm not saying that this means they can't be successful ... but there is not such an advantage that Google and Facebook are irrelevant.
And, finally, the complaints that ChatGPT is competing with enterprise players like Glean are pathetic. Presumably the feature is "connectors" ⚙️ https://techcrunch.com/2025/03/17/openai-to-start-testing-chatgpt-connectors-for-google-drive-and-slack/: being able to access data in your Google Drive account. This is a feature that Claude has had for a long time. Once again, this is a ludicrous demand that a platform remain incomplete because someone else implemented a necessary feature.