{"channel":"cities","content":"I find myself wanting to block Reddit, etc. on the router level.  There is nothing worth reading there.\r\n\r\n----\r\n\r\nthe real way to get an \"a-ha\" moment from the machine, is with time-travel. (<xantham> with time travel, the machine can return a better answer instantly!)\r\n\r\nthe \"head\" of the response has to be noticeably ahead of where \"committed\" responses are.  so, there is a possibility to jump backwards.\r\n\r\n<xantham> isn't this beam-search?\r\n<red> ... maybe.\r\n\r\nbut, the idea is: you can assert a token << ERROR PATH: RETREAT 32 >>.\r\n\r\nthen, the \"reason\" for the message can be added as input.\r\n\r\nit is the << a-ha >> moment.  but, implemented better than DeepSeek.\r\n\r\n----\r\n\r\nthe reaction to DeepSeek has been, in my estimation, ridiculous.\r\n\r\nI tried the 7b and 8b << distilled >> models.  And what I saw was a cheap parody of thought.  Thought-processes that didn't make sense, and didn't actually reflect how the *machine* generated thoughts.\r\n\r\nbut, apparently, people like it.\r\n\r\nMaybe the 400b model gives better answers?  Or, maybe, people just see the shape of the answer and trust it more.\r\n\r\n<red> if the goal of the machine is to solve industrial tasks, this is mostly already baked into my estimations.  but, for the goal of << making consumers happier >>, there is clearly a factor I am not considering.\r\n\r\n----\r\n\r\ntheory 1: people don't want to think the *machine* is smart; they want the *machine* to make them feel smart.  both the appearance of << struggling >> and the visible chain-of-thought (even if obviously flawed) contribute to this feeling.\r\n\r\ntheory 2: people don't know that the *machine* could already do 90% of this 12 months ago.  
they see a demo (or, more likely, hear about a demo) and, miraculously, now they know what will happen.\r\n\r\ntheory 3: we know that << a light human touch guiding the *machine*'s responses >> can improve accuracy substantially.  and, that human touch can also be automated.\r\n<orange> well, actually, apparently very few other people knew that.\r\n\r\n----\r\n\r\ni'm going to stick with Theory 1 for today.  that people like DeepSeek (and feel it is better) because it makes them feel smart.\r\n\r\nwhich ... is depressing.  but, also, easily solvable.\r\n\r\nthe question is: what question could you pose that would lead somebody to come up with this answer on their own?\r\n\r\n<red> it seems unlikely that 8B models can do this.  but I assume the 600B models can.\r\n\r\n----\r\n\r\npeople want the machine to make them feel smarter. (<xantham> because people are self-centered, gullible, and insecure.)\r\n\r\n<red> they want it to behave in a way that I instinctively hate.  they want the PT Barnum version of AI.\r\n<quote> give the people what they want!\r\n\r\n----\r\n\r\nthis is probably one of the reasons why the default << tone >> for every chatbot is obsequious.  so much << that's a great question! >> / << you're absolutely right >> / << let me know what else i can do to help >>.\r\n\r\n----\r\n\r\n<red> one can apply a << politeness filter >> to the output of the machine.  but the latency of such a system is already high.\r\n\r\n<orange> well, actually, it probably is just another layer or two.\r\n\r\n----\r\n\r\nSeen on social media: << Anthropic is losing because they have rate limits! >> (<red> of course they have rate limits.  the machine is not too cheap to meter, at least at the quality people expect.)\r\n\r\n----\r\n\r\na game of chess.\r\n\r\nthe idea of the attack worked in theory.  and the attack worked in practice.  but the actual attack did not work, in theory.\r\n\r\n<blue> chess can be a ritual.  
like the I-Ching.\r\n\r\n----\r\n\r\ncan the machine participate in rituals?\r\n\r\n----\r\n\r\nthere are two kinds of answers people want from the *machine*.\r\n\r\n# Answers where people are willing to wait 10 minutes to definitely have the \"right\" answer.\r\n# Entertainments. The various \"instant chat-bots\" are party tricks.  A very good party trick.  But, ultimately, a party trick.\r\n\r\n<xantham> perhaps << testing >> is a third category.\r\n\r\nWhereas: for many valuable use-cases, having a 5-minute latency to do it right, is not objectionable. (<red> the *evocative* questions, the \"what do you mean by LONDON\" and \"can you talk more about LONDON\", will be interactive.) (<green> we do not have LONDON implemented here yet.)","created_at":"2025-01-29T00:28:18.077734","id":161,"llm_annotations":{},"parent_id":159,"processed_content":"<p>I find myself wanting to block Reddit, etc. on the router level.  There is nothing worth reading there.\r</p><hr class=\"section-break\" /><p>the real way to get an \"a-ha\" moment from the machine, is with time-travel. <span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\">( with time travel, the machine can return a better answer instantly!)</span></span>\r</p>\n<p>the \"head\" of the response has to be noticeably ahead of where \"committed\" responses are.  so, there is a possibility to jump backwards.\r</p>\n<p><span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\"> isn't this beam-search?\r</span></span></p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> ... maybe.\r</span></span></p>\n<p>but, the idea is: you can assert a token <span class=\"literal-text\">ERROR PATH: RETREAT 32</span>.\r</p>\n<p>then, the \"reason\" for the message can be added as input.\r</p>\n<p>it is the <span class=\"literal-text\">a-ha</span> moment.  
but, implemented better than DeepSeek.\r</p><hr class=\"section-break\" /><p>the reaction to DeepSeek has been, in my estimation, ridiculous.\r</p>\n<p>I tried the 7b and 8b <span class=\"literal-text\">distilled</span> models.  And what I saw was a cheap parody of thought.  Thought-processes that didn't make sense, and didn't actually reflect how the <em>machine</em> generated thoughts.\r</p>\n<p>but, apparently, people like it.\r</p>\n<p>Maybe the 400b model gives better answers?  Or, maybe, people just see the shape of the answer and trust it more.\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> if the goal of the machine is to solve industrial tasks, this is mostly already baked into my estimations.  but, for the goal of <span class=\"literal-text\">making consumers happier</span>, there is clearly a factor I am not considering.\r</span></span></p><hr class=\"section-break\" /><p>theory 1: people don't want to think the <em>machine</em> is smart; they want the <em>machine</em> to make them feel smart.  both the appearance of <span class=\"literal-text\">struggling</span> and the visible chain-of-thought (even if obviously flawed) contribute to this feeling.\r</p>\n<p>theory 2: people don't know that the <em>machine</em> could already do 90% of this 12 months ago.  they see a demo (or, more likely, hear about a demo) and, miraculously, now they know what will happen.\r</p>\n<p>theory 3: we know that <span class=\"literal-text\">a light human touch guiding the <em>machine</em>'s responses</span> can improve accuracy substantially.  and, that human touch can also be automated.\r</p>\n<p><span class=\"colorblock color-orange\"><span class=\"sigil\">\u2694\ufe0f</span><span class=\"colortext-content\"> well, actually, apparently very few other people knew that.\r</span></span></p><hr class=\"section-break\" /><p>i'm going to stick with Theory 1 for today.  
that people like DeepSeek (and feel it is better) because it makes them feel smart.\r</p>\n<p>which ... is depressing.  but, also, easily solvable.\r</p>\n<p>the question is: what question could you pose that would lead somebody to come up with this answer on their own?\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> it seems unlikely that 8B models can do this.  but I assume the 600B models can.\r</span></span></p><hr class=\"section-break\" /><p>people want the machine to make them feel smarter. <span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\">( because people are self-centered, gullible, and insecure.)</span></span>\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> they want it to behave in a way that I instinctively hate.  they want the PT Barnum version of AI.\r</span></span></p>\n<p><span class=\"colorblock color-quote\"><span class=\"sigil\">\ud83d\udcac</span><span class=\"colortext-content\"> give the people what they want!\r</span></span></p><hr class=\"section-break\" /><p>this is probably one of the reasons why the default <span class=\"literal-text\">tone</span> for every chatbot is obsequious.  so much <span class=\"literal-text\">that's a great question!</span> / <span class=\"literal-text\">you're absolutely right</span> / <span class=\"literal-text\">let me know what else i can do to help</span>.\r</p><hr class=\"section-break\" /><p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> one can apply a <span class=\"literal-text\">politeness filter</span> to the output of the machine.  
but the latency of such a system is already high.\r</span></span></p>\n<p><span class=\"colorblock color-orange\"><span class=\"sigil\">\u2694\ufe0f</span><span class=\"colortext-content\"> well, actually, it probably is just another layer or two.\r</span></span></p><hr class=\"section-break\" /><p>Seen on social media: <span class=\"literal-text\">Anthropic is losing because they have rate limits!</span> <span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\">( of course they have rate limits.  the machine is not too cheap to meter, at least at the quality people expect.)</span></span>\r</p><hr class=\"section-break\" /><p>a game of chess.\r</p>\n<p>the idea of the attack worked in theory.  and the attack worked in practice.  but the actual attack did not work, in theory.\r</p>\n<p><span class=\"colorblock color-blue\"><span class=\"sigil\">\u2728</span><span class=\"colortext-content\"> chess can be a ritual.  like the I-Ching.\r</span></span></p><hr class=\"section-break\" /><p>can the machine participate in rituals?\r</p><hr class=\"section-break\" /><p>there are two kinds of answers people want from the <em>machine</em>.\r</p>\n<ul>\n<li class=\"number-list\"> Answers where people are willing to wait 10 minutes to definitely have the \"right\" answer.\r</li>\n<li class=\"number-list\"> Entertainments. The various \"instant chat-bots\" are party tricks.  A very good party trick.  But, ultimately, a party trick.\r</li>\n</ul>\n<p><span class=\"colorblock color-xantham\"><span class=\"sigil\">\ud83d\udd25</span><span class=\"colortext-content\"> perhaps <span class=\"literal-text\">testing</span> is a third category.\r</span></span></p>\n<p>Whereas: for many valuable use-cases, having a 5-minute latency to do it right, is not objectionable. 
<span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\">( the <em>evocative</em> questions, the \"what do you mean by LONDON\" and \"can you talk more about LONDON\", will be interactive.)</span></span> <span class=\"colorblock color-green\"><span class=\"sigil\">\u2699\ufe0f</span><span class=\"colortext-content\">( we do not have LONDON implemented here yet.)</span></span></p>","quotes":[{"text":"give the people what they want!","type":"reference"}],"subject":"helena (2/8)"}
