{"chain":[{"channel":"cities","content":"yesterday was a day for \"rough drafts\" (written elsewhere) and job applications. (<xantham> a rough draft of my future!)\r\n\r\ntoday is a more normal day.\r\n\r\n----\r\n\r\nAtacama to-do\r\n\r\n# fix the << quote >> tag in previews\r\n# maybe add a \"continue chain\" / \"reply\" functionality\r\n# update the README.md\r\n\r\nI keep considering \"editing\" / \"publishing\" features.  But, I am yet to find any that are good enough to invest time in.  Just saying \"the *machine* can do it\" isn't enough.\r\n\r\nAs far as \"privacy\" / \"filtering\" features ... once again, no ideas worth the effort/complexity.\r\n\r\n----\r\n\r\n<red> the solution to << where do I write my grocery list >> is not Atacama.  it *probably* never will be.","created_at":"2025-01-28T16:39:37.163922","id":159,"is_target":false,"parent_id":null,"processed_content":"<p>yesterday was a day for \"rough drafts\" (written elsewhere) and job applications. <span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\">( a rough draft of my future!)</span></span>\r</p>\n<p>today is a more normal day.\r</p><hr class=\"section-break\" /><p>Atacama to-do\r</p>\n<ul>\n<li class=\"number-list\"> fix the <span class=\"literal-text\">quote</span> tag in previews\r</li>\n<li class=\"number-list\"> maybe add a \"continue chain\" / \"reply\" functionality\r</li>\n<li class=\"number-list\"> update the README.md\r</li>\n</ul>\n<p>I keep considering \"editing\" / \"publishing\" features.  But, I am yet to find any that are good enough to invest time in.  Just saying \"the <em>machine</em> can do it\" isn't enough.\r</p>\n<p>As far as \"privacy\" / \"filtering\" features ... 
once again, no ideas worth the effort/complexity.\r</p><hr class=\"section-break\" /><p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> the solution to <span class=\"literal-text\">where do I write my grocery list</span> is not Atacama.  it <em>probably</em> never will be.</span></span></p>","subject":"helena (1/8)"},{"channel":"cities","content":"I find myself wanting to block Reddit, etc. on the router level.  There is nothing worth reading there.\r\n\r\n----\r\n\r\nthe real way to get an \"a-ha\" moment from the machine, is with time-travel. (<xantham> with time travel, the machine can return a better answer instantly!)\r\n\r\nthe \"head\" of the response has to be noticeably ahead of where \"committed\" responses are.  so, there is a possibility to jump backwards.\r\n\r\n<xantham> isn't this beam-search?\r\n<red> ... maybe.\r\n\r\nbut, the idea is: you can assert a token << ERROR PATH: RETREAT 32 >>.\r\n\r\nthen, the \"reason\" for the message can be added as input.\r\n\r\nit is the << a-ha >> moment.  but, implemented better than DeepSeek.\r\n\r\n----\r\n\r\nthe reaction to DeepSeek has been, in my estimation, ridiculous.\r\n\r\nI tried the 7b and 8b << distilled >> models.  And what I saw was a cheap parody of thought.  Thought-processes that didn't make sense, and didn't actually reflect how the *machine* generated thoughts.\r\n\r\nbut, apparently, people like it.\r\n\r\nMaybe the 400b model gives better answers?  Or, maybe, people just see the shape of the answer and trust it more.\r\n\r\n<red> if the goal of the machine is to solve industrial tasks, this is mostly already baked in to my estimations.  but, for the goal of << making consumers happier >>, there is clearly a factor I am not considering.\r\n\r\n----\r\n\r\ntheory 1: people don't want to think the *machine* is smart; they want the *machine* to make them feel smart.  
both the appearance of << struggling >> and the visible chain-of-thought (even if obviously flawed) contribute to this feeling.\r\n\r\ntheory 2: people don't know that the *machine* could already do 90% of this 12 months ago.  they see a demo (or, more likely, hear about a demo) and, miraculously, now they know what will happen.\r\n\r\ntheory 3: we know that << a light human touch guiding the *machine*'s responses >> can improve accuracy substantially.  and, that human touch can also be automated.\r\n<orange> well, actually, apparently very few other people knew that.\r\n\r\n----\r\n\r\ni'm going to stick with Theory 1 for today.  that people like Deepseek (and feel it is better) because it makes them feel smart.\r\n\r\nwhich ... is depressing.  but, also, easily solvable.\r\n\r\nthe question is: what question could you pose that would lead somebody to come up with this answer on their own?\r\n\r\n<red> it seems unlikely that 8B models can do this.  but I assume the 600B models can.\r\n\r\n----\r\n\r\npeople want the machine to make them feel smarter. (<xantham> because people are self-centered, gullible, and insecure.)\r\n\r\n<red> they want it to behave in a way that I instinctively hate.  they want the PT Barnum version of AI.\r\n<quote> give the people what they want!\r\n\r\n----\r\n\r\nthis is probably one of the reasons why the default << tone >> for every chatbot is obsequious.  so much << that's a great question! >> / << you're absolutely right >> / << let me know what else i can do to help >>.\r\n\r\n----\r\n\r\n<red> one can apply a << politeness filter >> to the output of the machine.  but the latency of such a system is already high.\r\n\r\n<orange> well, actually, it probably is just another layer or two.\r\n\r\n----\r\n\r\nSeen on social media: << Anthropic is losing because they have rate limits! >> (<red> of course they have rate limits.  
the machine is not too cheap to meter, at least at the quality people expect.)\r\n\r\n----\r\n\r\na game of chess.\r\n\r\nthe idea of the attack worked in theory.  and the attack worked in practice.  but the actual attack did not work, in theory.\r\n\r\n<blue> chess can be a ritual.  like the i-Ching.\r\n\r\n----\r\n\r\ncan the machine participate in rituals?\r\n\r\n----\r\n\r\nthere are two kinds of answers people want from the *machine*.\r\n\r\n# Answers where people are willing to wait 10 minutes to definitely have the \"right\" answer.\r\n# Entertainments. The various \"instant chat-bots\" are party tricks.  A very good party trick.  But, ultimately, a party trick.\r\n\r\n<xantham> perhaps << testing >> is a third category.\r\n\r\nWhereas: for many valuable use-cases, having a 5-minute latency to do it right, is not objectionable. (<red> the *evocative* questions, the \"what do you mean by LONDON\" and \"can you talk more about LONDON\", will be interactive.) (<green> we do not have LONDON implemented here yet.)","created_at":"2025-01-29T00:28:18.077734","id":161,"is_target":false,"parent_id":159,"processed_content":"<p>I find myself wanting to block Reddit, etc. on the router level.  There is nothing worth reading there.\r</p><hr class=\"section-break\" /><p>the real way to get an \"a-ha\" moment from the machine, is with time-travel. <span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\">( with time travel, the machine can return a better answer instantly!)</span></span>\r</p>\n<p>the \"head\" of the response has to be noticeably ahead of where \"committed\" responses are.  
so, there is a possibility to jump backwards.\r</p>\n<p><span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\"> isn't this beam-search?\r</span></span></p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> ... maybe.\r</span></span></p>\n<p>but, the idea is: you can assert a token <span class=\"literal-text\">ERROR PATH: RETREAT 32</span>.\r</p>\n<p>then, the \"reason\" for the message can be added as input.\r</p>\n<p>it is the <span class=\"literal-text\">a-ha</span> moment.  but, implemented better than DeepSeek.\r</p><hr class=\"section-break\" /><p>the reaction to DeepSeek has been, in my estimation, ridiculous.\r</p>\n<p>I tried the 7b and 8b <span class=\"literal-text\">distilled</span> models.  And what I saw was a cheap parody of thought.  Thought-processes that didn't make sense, and didn't actually reflect how the <em>machine</em> generated thoughts.\r</p>\n<p>but, apparently, people like it.\r</p>\n<p>Maybe the 400b model gives better answers?  Or, maybe, people just see the shape of the answer and trust it more.\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> if the goal of the machine is to solve industrial tasks, this is mostly already baked in to my estimations.  but, for the goal of <span class=\"literal-text\">making consumers happier</span>, there is clearly a factor I am not considering.\r</span></span></p><hr class=\"section-break\" /><p>theory 1: people don't want to think the <em>machine</em> is smart; they want the <em>machine</em> to make them feel smart.  both the appearance of <span class=\"literal-text\">struggling</span> and the visible chain-of-thought (even if obviously flawed) contribute to this feeling.\r</p>\n<p>theory 2: people don't know that the <em>machine</em> could already do 90% of this 12 months ago.  
they see a demo (or, more likely, hear about a demo) and, miraculously, now they know what will happen.\r</p>\n<p>theory 3: we know that <span class=\"literal-text\">a light human touch guiding the <em>machine</em>'s responses</span> can improve accuracy substantially.  and, that human touch can also be automated.\r</p>\n<p><span class=\"colorblock color-orange\"><span class=\"sigil\">\u2694\ufe0f</span><span class=\"colortext-content\"> well, actually, apparently very few other people knew that.\r</span></span></p><hr class=\"section-break\" /><p>i'm going to stick with Theory 1 for today.  that people like Deepseek (and feel it is better) because it makes them feel smart.\r</p>\n<p>which ... is depressing.  but, also, easily solvable.\r</p>\n<p>the question is: what question could you pose that would lead somebody to come up with this answer on their own?\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> it seems unlikely that 8B models can do this.  but I assume the 600B models can.\r</span></span></p><hr class=\"section-break\" /><p>people want the machine to make them feel smarter. <span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\">( because people are self-centered, gullible, and insecure.)</span></span>\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> they want it to behave in a way that I instinctively hate.  they want the PT Barnum version of AI.\r</span></span></p>\n<p><span class=\"colorblock color-quote\"><span class=\"sigil\">\ud83d\udcac</span><span class=\"colortext-content\"> give the people what they want!\r</span></span></p><hr class=\"section-break\" /><p>this is probably one of the reasons why the default <span class=\"literal-text\">tone</span> for every chatbot is obsequious.  
so much <span class=\"literal-text\">that's a great question!</span> / <span class=\"literal-text\">you're absolutely right</span> / <span class=\"literal-text\">let me know what else i can do to help</span>.\r</p><hr class=\"section-break\" /><p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> one can apply a <span class=\"literal-text\">politeness filter</span> to the output of the machine.  but the latency of such a system is already high.\r</span></span></p>\n<p><span class=\"colorblock color-orange\"><span class=\"sigil\">\u2694\ufe0f</span><span class=\"colortext-content\"> well, actually, it probably is just another layer or two.\r</span></span></p><hr class=\"section-break\" /><p>Seen on social media: <span class=\"literal-text\">Anthropic is losing because they have rate limits!</span> <span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\">( of course they have rate limits.  the machine is not too cheap to meter, at least at the quality people expect.)</span></span>\r</p><hr class=\"section-break\" /><p>a game of chess.\r</p>\n<p>the idea of the attack worked in theory.  and the attack worked in practice.  but the actual attack did not work, in theory.\r</p>\n<p><span class=\"colorblock color-blue\"><span class=\"sigil\">\u2728</span><span class=\"colortext-content\"> chess can be a ritual.  like the i-Ching.\r</span></span></p><hr class=\"section-break\" /><p>can the machine participate in rituals?\r</p><hr class=\"section-break\" /><p>there are two kinds of answers people want from the <em>machine</em>.\r</p>\n<ul>\n<li class=\"number-list\"> Answers where people are willing to wait 10 minutes to definitely have the \"right\" answer.\r</li>\n<li class=\"number-list\"> Entertainments. The various \"instant chat-bots\" are party tricks.  A very good party trick.  
But, ultimately, a party trick.\r</li>\n</ul>\n<p><span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\"> perhaps <span class=\"literal-text\">testing</span> is a third category.\r</span></span></p>\n<p>Whereas: for many valuable use-cases, having a 5-minute latency to do it right, is not objectionable. <span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\">( the <em>evocative</em> questions, the \"what do you mean by LONDON\" and \"can you talk more about LONDON\", will be interactive.)</span></span> <span class=\"colorblock color-green\"><span class=\"sigil\">\u2699\ufe0f</span><span class=\"colortext-content\">( we do not have LONDON implemented here yet.)</span></span></p>","subject":"helena (2/8)"},{"channel":"cities","content":"the Outstanding Question is: << why do people feel so much better about the *machine* when it phrases its answer in a way that makes them feel smart >>? (<orange> when you phrase it that way, the answer is kind-of obvious)\r\n\r\nthe Second Outstanding Question is: << what good things happen when you separate \"instant chat\" responses from \"intelligent question-answering and task-processing\" responses >>?\r\n\r\n----\r\n\r\nthe question of << can the *machine* participate in rituals >> is too *sensitive* to discuss in an open forum.\r\n\r\n----\r\n\r\nthe question of << can you give the *machine* a swiss-army knife and ask it to choose which tools to use >> is also uncertain.  (<red> at some point, the answer will obviously be yes.  but, does the \"quick and dirty\" approach work?)","created_at":"2025-01-29T17:34:51.130260","id":162,"is_target":true,"parent_id":161,"processed_content":"<p>the Outstanding Question is: <span class=\"literal-text\">why do people feel so much better about the <em>machine</em> when it phrases its answer in a way that makes them feel smart</span>? 
<span class=\"colorblock color-orange\"><span class=\"sigil\">\u2694\ufe0f</span><span class=\"colortext-content\">( when you phrase it that way, the answer is kind-of obvious)</span></span>\r</p>\n<p>the Second Outstanding Question is: <span class=\"literal-text\">what good things happen when you separate \"instant chat\" responses from \"intelligent question-answering and task-processing\" responses</span>?\r</p><hr class=\"section-break\" /><p>the question of <span class=\"literal-text\">can the <em>machine</em> participate in rituals</span> is too <em>sensitive</em> to discuss in an open forum.\r</p><hr class=\"section-break\" /><p>the question of <span class=\"literal-text\">can you give the <em>machine</em> a swiss-army knife and ask it to choose which tools to use</span> is also uncertain.  <span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\">( at some point, the answer will obviously be yes.  but, does the \"quick and dirty\" approach work?)</span></span></p>","subject":"helena (3/8)"},{"channel":"cities","content":"<mogue> clearly my *timing* is off this week.\r\n\r\n----\r\n\r\na plane went down last night.  a military helicopter crashed into it.\r\n\r\nadditional details are still unclear. (<green> https://www.wusa9.com/article/news/special-reports/dc-plane-crash/all-flights-halted-at-reagan-national-airport-due-to-plane-crash-potomac-river-dc/65-e2090f2d-0bca-4a4c-944c-215a6398a52d )\r\n\r\n----\r\n\r\nno follow-up thoughts on the *machine* that require repeating.","created_at":"2025-01-30T16:59:07.779786","id":163,"is_target":false,"parent_id":162,"processed_content":"<p><span class=\"colorblock color-mogue\"><span class=\"sigil\">\ud83c\udf0e</span><span class=\"colortext-content\"> clearly my <em>timing</em> is off this week.\r</span></span></p><hr class=\"section-break\" /><p>a plane went down last night.  a military helicopter crashed into it.\r</p>\n<p>additional details are still unclear. 
<span class=\"colorblock color-green\"><span class=\"sigil\">\u2699\ufe0f</span><span class=\"colortext-content\">( <a href=\"https://www.wusa9.com/article/news/special-reports/dc-plane-crash/all-flights-halted-at-reagan-national-airport-due-to-plane-crash-potomac-river-dc/65-e2090f2d-0bca-4a4c-944c-215a6398a52d\" target=\"_blank\" rel=\"noopener noreferrer\">https://www.wusa9.com/article/news/special-reports/dc-plane-crash/all-flights-halted-at-reagan-national-airport-due-to-plane-crash-potomac-river-dc/65-e2090f2d-0bca-4a4c-944c-215a6398a52d</a> )</span></span>\r</p><hr class=\"section-break\" /><p>no follow-up thoughts on the <em>machine</em> that require repeating.</p>","subject":"helena (4/8)"}]}