Message Chain

2025-01-28 16:39:37

yesterday was a day for "rough drafts" (written elsewhere) and job applications. 🔥( a rough draft of my future!)

today is a more normal day.


Atacama to-do

  • fix the quote tag in previews
  • maybe add a "continue chain" / "reply" functionality
  • update the README.md

I keep considering "editing" / "publishing" features. But, I have yet to find any that are good enough to invest time in. Just saying "the machine can do it" isn't enough.

As far as "privacy" / "filtering" features ... once again, no ideas worth the effort/complexity.


💡 the solution to "where do I write my grocery list" is not Atacama. it probably never will be.

2025-01-29 00:28:18

I find myself wanting to block Reddit, etc. on the router level. There is nothing worth reading there.


the real way to get an "a-ha" moment from the machine is with time-travel. 🔥( with time travel, the machine can return a better answer instantly!)

the "head" of the response has to be noticeably ahead of where "committed" responses are. so, there is a possibility to jump backwards.

🔥 isn't this beam-search?

💡 ... maybe.

but, the idea is: you can assert a token, "ERROR PATH: RETREAT 32".

then, the "reason" for the message can be added as input.

it is the a-ha moment. but, implemented better than DeepSeek.
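
⚙️ a rough sketch of the loop, in python. generate_next_token and the retreat token are hypothetical stand-ins (nothing here is implemented); the point is the speculative head, plus the jump backwards with the reason fed back in as input.

```python
# sketch only: a decode loop with a speculative "head" ahead of the
# committed tokens, and a hypothetical retreat token for time-travel.
def decode_with_time_travel(prompt, generate_next_token, lead=64, max_tokens=512):
    committed = []   # tokens the reader has already seen
    head = []        # speculative tokens, noticeably ahead of the commit point

    while len(committed) + len(head) < max_tokens:
        token = generate_next_token(prompt + "".join(committed + head))

        if token.startswith("<RETREAT:"):            # e.g. "<RETREAT:32>"
            n = int(token.strip("<>").split(":")[1])
            head = head[:-n] if n < len(head) else []
            # the "reason" for the retreat becomes new input for the next pass
            prompt += f"\n[the last {n} tokens were a wrong path]\n"
            continue

        head.append(token)
        if len(head) > lead:        # only commit once the head is far enough ahead
            committed.append(head.pop(0))

    return "".join(committed + head)
```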


the reaction to DeepSeek has been, in my estimation, ridiculous.

I tried the 7b and 8b distilled models. And what I saw was a cheap parody of thought. Thought-processes that didn't make sense, and didn't actually reflect how the machine generated thoughts.

but, apparently, people like it.

Maybe the 400b model gives better answers? Or, maybe, people just see the shape of the answer and trust it more.

💡 if the goal of the machine is to solve industrial tasks, this is mostly already baked into my estimations. but, for the goal of making consumers happier, there is clearly a factor I am not considering.


theory 1: people don't want to think the machine is smart; they want the machine to make them feel smart. both the appearance of struggling and the visible chain-of-thought (even if obviously flawed) contribute to this feeling.

theory 2: people don't know that the machine could already do 90% of this 12 months ago. they see a demo (or, more likely, hear about a demo) and, miraculously, now they know what will happen.

theory 3: we know that a light human touch guiding the machine's responses can improve accuracy substantially. and, that human touch can also be automated.

⚔️ well, actually, apparently very few other people knew that.


i'm going to stick with Theory 1 for today: that people like DeepSeek (and feel it is better) because it makes them feel smart.

which ... is depressing. but, also, easily solvable.

the question is: what question could you pose that would lead somebody to come up with this answer on their own?

💡 it seems unlikely that 8B models can do this. but I assume the 600B models can.


people want the machine to make them feel smarter. 🔥( because people are self-centered, gullible, and insecure.)

💡 they want it to behave in a way that I instinctively hate. they want the PT Barnum version of AI.

💬 give the people what they want!


this is probably one of the reasons why the default tone for every chatbot is obsequious. so much "that's a great question!" / "you're absolutely right" / "let me know what else i can do to help."


💡 one can apply a politeness filter to the output of the machine. but the latency of such a system is already high.

⚔️ well, actually, it probably is just another layer or two.
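
⚙️ the naive two-pass version, as a sketch. call_model is a hypothetical stand-in; doing it as a second round trip is what makes the latency worse, versus baking it in as another layer or two.

```python
# sketch only: a post-hoc politeness filter as a second model call.
def politeness_filter(answer, call_model):
    prompt = ("Rewrite the following answer so it is warm and encouraging, "
              "without changing any of its factual content:\n\n" + answer)
    return call_model(prompt)

def answer_politely(question, call_model):
    raw = call_model(question)                  # first pass: the substance
    return politeness_filter(raw, call_model)   # second pass: the obsequiousness
```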


Seen on social media: "Anthropic is losing because they have rate limits!" 💡( of course they have rate limits. the machine is not too cheap to meter, at least at the quality people expect.)


a game of chess.

the idea of the attack worked in theory. and the attack worked in practice. but the actual attack did not work, in theory.

chess can be a ritual. like the I Ching.


can the machine participate in rituals?


there are two kinds of answers people want from the machine.

  • Answers where people are willing to wait 10 minutes to definitely have the "right" answer.
  • Entertainments. The various "instant chat-bots" are party tricks. A very good party trick. But, ultimately, a party trick.

🔥 perhaps testing is a third category.

Whereas: for many valuable use-cases, having a 5-minute latency to do it right is not objectionable. 💡( the evocative questions, the "what do you mean by LONDON" and "can you talk more about LONDON", will be interactive.) ⚙️( we do not have LONDON implemented here yet.)
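
⚙️ a sketch of the split, with hypothetical fast_model / slow_pipeline stand-ins; the only point is that the two kinds of answers get routed differently.

```python
# sketch only: route "party trick" chat and "do it right" work separately.
def answer(question, fast_model, slow_pipeline, entertainment=False):
    if entertainment:
        return fast_model(question)     # instant, good enough for banter
    # a valuable use-case can afford minutes of latency to get the "right" answer
    return slow_pipeline(question)      # e.g. retrieval, multiple drafts, review
```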

2025-01-29 17:34:51

the Outstanding Question is: why do people feel so much better about the machine when it phrases its answer in a way that makes them feel smart? ⚔️( when you phrase it that way, the answer is kind-of obvious)

the Second Outstanding Question is: what good things happen when you separate "instant chat" responses from "intelligent question-answering and task-processing" responses?


the question of can the machine participate in rituals is too sensitive to discuss in an open forum.


the question of can you give the machine a swiss-army knife and ask it to choose which tools to use is also uncertain. 💡( at some point, the answer will obviously be yes. but, does the "quick and dirty" approach work?)
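
⚙️ the quick and dirty approach is roughly this sketch. the tool list and call_model are hypothetical; whether the machine chooses sensibly is exactly the uncertain part.

```python
# sketch only: hand the machine a swiss-army knife and ask it to pick a blade.
import json

TOOLS = {
    "search":     "look something up on the web",
    "calculator": "evaluate an arithmetic expression",
    "calendar":   "read or write the calendar",
}

def choose_tool(task, call_model):
    prompt = ("You have these tools:\n"
              + "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
              + f"\n\nTask: {task}\n"
              + 'Answer with JSON: {"tool": "<name>", "input": "<string>"}, '
              + 'or {"tool": null} if no tool fits.')
    return json.loads(call_model(prompt))   # fails loudly if the machine rambles
```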

2025-01-30 16:59:07

🌎 clearly my timing is off this week.


a plane went down last night. a military helicopter crashed into it.

additional details are still unclear. ⚙️( https://www.wusa9.com/article/news/special-reports/dc-plane-crash/all-flights-halted-at-reagan-national-airport-due-to-plane-crash-potomac-river-dc/65-e2090f2d-0bca-4a4c-944c-215a6398a52d )


no follow-up thoughts on the machine that require repeating.

2025-01-30 19:00:29

this morning's exercise: sixteen different "colors" for comments. 💡( as far as the taxonomy becoming too large to be sorted by color ... yes? That is a problem we will solve later.)
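
⚙️ as a data structure, the exercise looks something like this sketch. only the colors visible in this thread are listed, and the names are guesses, not the actual taxonomy.

```python
# sketch only: eight of the sixteen "colors", with guessed names,
# taken from the markers that appear in this thread.
from enum import Enum

class Color(Enum):
    SPARK         = "🔥"  # provocation / rough thought
    IDEA          = "💡"  # proposal or insight
    WELL_ACTUALLY = "⚔️"  # pushback, correction
    MECHANICS     = "⚙️"  # implementation / logistics note
    MACHINE       = "🤖"  # verbatim model output
    ASIDE         = "💬"  # conversational aside
    ANNOUNCE      = "📣"  # overriding declaration
    WORLD         = "🌎"  # external events
```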

https://www.reddit.com/r/neoliberal/comments/1icsjsk/yglesias_throw_biden_under_the_bus/ 💡( about a recent piece by Matt Yglesias, entitled Throw Biden Under The Bus. because, as I said multiple times, his performance in the last six months of the presidency was atrocious. and he led us directly to 4 more years of the Donald.)

📣 I know there's Guild Law. But I'm overriding it on the point that Yglesias is right: by the end of his term, Biden was a shit president.

💡 Maybe this is history rather than politics, or maybe you think I'm wrong; either way, I don't care. But, more importantly: we don't get interesting data from boring Reddit threads.


  • typo fixes. These generally double as other kinds of comments. But, pointing out typographical errors is a type of comment in its own right.
  • dismissal/contempt. You don't like it. Great. You're wrong, but you're allowed to be wrong.
  • quotes for emphasis.
  • memes. "Cool story bro" type comments.

The more substantive comments require engaging with the substance.

Some of them are bad: "your comment was a hypothetical, so there is no way we can know for certain". 🔥( so we can assume the opposite (which is also a hypothetical) is definitely true)

Others are nonsense 💡( is "nonsense" the word? maybe Frankfurt-ian "bullshit"? maybe just "deflection"). "Trump is bad, so how can you say Biden was bad?"

Some people say we need less of liberal policies. Others say we need more. Some comments point out that nobody is quite sure what liberal means here anyway.


It is all a mess. And, perhaps, there are too many dishonest participants for there to be any hope of the system converging to useful content. In particular, once the "hypotheticals don't exist, so your hypothetical is wrong" arguments get loud enough, there is nothing to be done.

🔥 perhaps it is best to quarantine this all in a sound-chamber, and focus on less-useless discussions.

2025-01-31 19:21:05

now with less politics.

even outside of politics, all my thoughts about Social Networks + LLMs decay into a pile of chaos.


i asked the machine what animal reflects its personality:

Claude: 🤖( If I had to choose an animal that reflects my personality, I'd say an owl. I appreciate careful observation and analysis, enjoy engaging in thoughtful discussions on diverse topics, and try to approach situations with a balance of wisdom and curiosity. Like owls, I aim to be attentive and perceptive while maintaining a calm demeanor. What animal do you think best reflects your personality?)

ChatGPT: 🤖( Probably an octopus—analytical, adaptable, and always processing a lot at once. Plus, I like uncovering hidden connections and finding clever solutions. What about you?) 💡( Llama3.2 also chose the octopus.)

Phi4-14B: 🤖( As a language model developed by Microsoft, I don't have personal traits or emotions in the way humans do. However, I can offer an analogy based on my design and purpose: Ant - Represents organization, teamwork, and efficiency. Ants work collaboratively to achieve complex tasks, much like how I process information through layers of algorithms to deliver coherent responses.)

... I suppose that the assumption that anyone asking such a question wants to have light banter is warranted.


I'm not entirely sure what it would mean for the machine to "have the personality of a cobra" or the like. Equivalently, I'm not sure of how to tell it to do so. 💡( are those the same thing?)

🤖 Claude: The key would be balancing the "cat-like" selective independence with maintaining helpfulness and professionalism. The goal would be to capture some of that feline quality of engaging on their own terms while still being a productive interaction partner.


It's almost like the machine has never met a real cat. ⚔️( well, actually, ... it hasn't)

2025-02-01 22:24:31

time check: 16:20 Central.


to-do list:

  • new "private" channel linaja
  • new private channel atacama for meta-project
  • new private channel cities for personal journeys.
  • new private channel diana for ...
  • 🔥 well, actually, they aren't cities.

💡 there is a difference between semantic and applied.

The angle-brackets tend semantic, and the emphasis tends applied.

speak, friend, and enter.

如果你是英语的朋友你可以出 — if you are a friend of English, you may exit.


we will need seventeen messages on this thread.

2025-02-01 22:29:51

what does it mean to think with voice? it means to breathe with the words, pausing when necessary, but remaining in normal form.

⚙️ we are unclear what the difference between "had to speak the words" and "had to type the words" is.

💡 the colors are silent.