the core
Channel: LLM - Large Language Model discussion
In reply to: red deer (5/6)
Working with LLMs is a speed-run through 50 years of software engineering.
A few weeks ago, I found myself wanting .pyh header files: just the function signatures, each with a one-line comment. No need to waste valuable context space on implementation details the model shouldn't need to know or change.
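As a sketch of what I mean, Python's ast module can mechanically strip a file down to that .pyh shape, assuming the one-line comment comes from each function's first docstring line (make_header and the sample source are hypothetical, not anything from my actual codebase):

```python
import ast
import textwrap

def make_header(source: str) -> str:
    """Emit a .pyh-style stub: one line per function, the signature plus
    the first docstring line as a comment, and no implementation bodies."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            sig = f"def {node.name}({ast.unparse(node.args)}){ret}:"
            doc = ast.get_docstring(node)
            lines.append(sig + (f"  # {doc.splitlines()[0]}" if doc else ""))
    return "\n".join(lines)

# Hypothetical sample module to strip down.
src = textwrap.dedent('''
    def fetch_user(user_id: int) -> dict:
        """Look up a user row by primary key."""
        row = {"id": user_id}
        return row
''')
print(make_header(src))
```

The body never reaches the output, so the implementation can churn without invalidating the header.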
Today, I find myself wanting code-file tiers. The base layer (database models, LLM APIs, etc.) gets uploaded to Claude as "project knowledge". The higher-level layers (HTML templates) only get uploaded within individual chats.
💬 the two hardest problems in Computer Science are cache invalidation, naming things, and off-by-one errors.
Because the API tools aren't there to keep everything in sync automatically. And ... while I could keep track of it by hand, I would prefer not to.
The "halfway" solution is a script that copies the current version of all the "upload" files into a single directory. Then I use that directory to create a new project. 💡( projects are cheap. I could upload a hundred versions of the 100KB of code files and nobody would care.) 🔥( and, thanks to Claude, instead of 80% of the work being writing the script, 80% of the work is figuring out what the script should do)
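A minimal sketch of that halfway script, assuming a hand-maintained allowlist of base-layer files (UPLOAD_LIST, stage_uploads, and the filenames are all hypothetical placeholders):

```python
import shutil
from pathlib import Path

# Hypothetical allowlist: which "base layer" files belong in project knowledge.
UPLOAD_LIST = ["models.py", "llm_api.py", "schema.sql"]

def stage_uploads(repo: Path, staging: Path) -> list[str]:
    """Copy the current version of each upload file into a flat staging
    directory, ready to drag into a fresh Claude project as a batch."""
    staging.mkdir(parents=True, exist_ok=True)
    staged = []
    for name in UPLOAD_LIST:
        src = repo / name
        if src.exists():
            shutil.copy2(src, staging / src.name)  # copy2 preserves mtimes
            staged.append(src.name)
    return staged
```

Nothing is kept in sync; you just re-run it and make a new project whenever the base layer changes.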
This speaks to the iron law of optimization. Before, the work was 20% planning and 80% coding. Now it is 20% planning and 4% coding. The machine has made coding 20 times faster in this example, which made the overall project about 4 times faster.
No additional amount of "faster coding" can improve this by more than about 20%. ⚔️( well, actually, in the future, the machine will also be able to do planning)
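The arithmetic above is just Amdahl's law, and it's easy to check (overall_speedup is my name for it, not anything standard):

```python
def overall_speedup(coding_frac: float, coding_speedup: float) -> float:
    """Amdahl's law: only the coding fraction gets faster;
    the planning fraction is left unchanged."""
    return 1 / ((1 - coding_frac) + coding_frac / coding_speedup)

# 80% of the work was coding; the machine makes it 20x faster.
print(overall_speedup(0.80, 20))           # roughly 4.17x overall
# Even infinitely fast coding caps out near 1 / 0.20 = 5x.
print(overall_speedup(0.80, float("inf")))
```

Going from ~4.17x to the 5x ceiling is the "at most ~20% more" in the claim above; past that, only speeding up planning helps.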