💡 I am not going to be bothered to do enumerations in the title.

So, one of the standing "custom instructions" I give ChatGPT is << When writing responses, after the use of any word that is not a "common word" (roughly the top 1000 in word frequency, or on the Basic English word list), please include (in parentheses) a translation of the word into Mandarin Chinese. >>
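The rule itself is mechanical enough to sketch. Here is a minimal Python version of what the instruction asks for, with the caveat that `COMMON_WORDS` and `GLOSSES` are tiny hypothetical stand-ins for the real top-1000 frequency list (or Basic English list) and an actual translation dictionary:

```python
# Sketch of the custom instruction: after any word outside a "common
# word" list, append a parenthesized Mandarin gloss.
# COMMON_WORDS and GLOSSES are placeholder data, not real word lists.
COMMON_WORDS = {"the", "is", "of", "a", "word", "common"}
GLOSSES = {"laboratory": "实验室", "civilization": "文明"}

def annotate(text: str) -> str:
    out = []
    for token in text.split():
        word = token.strip('".,').lower()
        if word in COMMON_WORDS or word not in GLOSSES:
            # common word, or no gloss available: leave it alone
            out.append(token)
        else:
            out.append(f"{token} ({GLOSSES[word]})")
    return " ".join(out)

print(annotate("Democracy is the laboratory of civilization"))
```

Note that "Democracy" passes through untouched here only because the placeholder `GLOSSES` has no entry for it, which is a much less interesting explanation than the one the model gave.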

https://chat.openai.com/share/647d49b5-9581-494c-8f21-ad4775f11137

When I then prompt it with << Describe the phrase "Democracy is the laboratory of civilization". >>, it says, at the end of the response:

<green>(Note: "Democracy" is a common word and does not require a translation into Mandarin Chinese.)< ⚙️ >

💡 it can learn the color schemes from a short instruction. But I find it wryly amusing that it makes a point of not translating the politically sensitive word, and then (in the technical-aside context) tells me that it hadn't done so.

🔥 perhaps it hasn't occurred to the machine that if it hadn't told me, I wouldn't have noticed. or perhaps it has occurred to it ...

⚔️ well, actually, these models operate on a different form of time, which is not conducive to the act of occurring ...