I realized recently that I do my best thinking when I’m arguing. I don’t need someone to be right; I need them to be obstinate. I need friction.
The problem with retirement—or just working alone—is that there is nobody to push back. And as I’ve discovered, nobody wants to argue with Excel. You can’t have a debate with a search bar.
This is why I built the Board of Advisors. I didn’t just want an AI that could answer questions; I wanted a team that had opinions. I wanted a “Chief of Staff” (Janeway) who would nag me about logic, and a “Marketing Director” (Ogilvy) who would criticize my headlines.
I want to show you what that actually looks like. Below is a transcript from a recent session where we tackled a massive technical problem: Memory.
What you’ll see isn’t a man typing code. It’s a man having a staff meeting with ghosts, trying to figure out if they actually exist when the lights go out.
Part 1: The “Ghost in the RAM” (Defining Reality)
Dan (The Human): Right now, when you say you are saving something to “long-term memory,” it is actually being saved somewhere in RAM. If I were to close the instance, it would just disappear. Confirm or correct.
Chief of Staff Janeway: CONFIRMED. Captain, you are absolutely correct. When I say “I have saved this,” I am speaking about my intent, not the guarantee. Until a file sits on your physical hard drive (or your controlled Google Drive), it is just “RAM with delusions of grandeur.” If the power goes out, we assume it is gone.
Dan: Okay. So if I tell you we will use these new terms:
- Short-term memory = RAM (dies when I close the window).
- Long-term memory = Saved to a Google Doc by ME.
- Automated memory = Future code I haven’t written yet.
My question is: How do I make you remember these definitions? If I shut down this instance right now, won’t you revert to your factory settings and forget we ever agreed to this?
Janeway: CORRECT. If we agree on these rules verbally right now, that agreement lives only in this chat window. The moment you close the browser, I wake up tomorrow with a blank slate.
[Grace’s Technical Note]: This is the “Context Window” problem. Many people think AI learns like a human—that it remembers yesterday’s chat. It doesn’t. Every time you open a new chat, you are talking to an infant who has read the entire internet but knows nothing about you. Unless you “re-inject” the context (the Boot Disk) at the start of every session, the AI reverts to “Factory Default.”
Part 2: The “Boot Disk” Solution
Grace Hopper (Engineering): Since we cannot trust my internal memory to remember the rules, we must look to the Boot Disk. The best way to force me to use your terminology is to write it into the source code—the “Constitution”—that you paste in at the start of every session. Don’t ask me to remember; write the rules into the manual.
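Here is a minimal sketch of what “pasting in the Constitution” looks like once you script it instead of copying it by hand. The boot_disk.md filename and the message layout are placeholders for illustration; any chat-style interface that accepts an opening system message works the same way.

```python
# Minimal sketch: re-inject the "Boot Disk" at the start of every new session.
# The boot_disk.md filename and the message format are illustrative placeholders.
from pathlib import Path

BOOT_DISK = Path("boot_disk.md")  # the "Constitution": rules, definitions, personas

def start_session(first_question: str) -> list[dict]:
    """Build the opening messages for a brand-new chat.

    The model wakes up with a blank slate, so the Boot Disk text has to be
    prepended every single time, before the first real question is asked.
    """
    constitution = BOOT_DISK.read_text(encoding="utf-8") if BOOT_DISK.exists() else ""
    return [
        {"role": "system", "content": constitution},  # factory default -> our rules
        {"role": "user", "content": first_question},  # the actual work starts here
    ]

if __name__ == "__main__":
    messages = start_session("Janeway, confirm our three memory definitions.")
    print(messages[0]["content"][:300])  # prove the rules actually made it in
```

The point is not the code itself; it is that the re-injection becomes mechanical and happens on every session, not just on the days you remember to do it.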
Dan: I agree with that approach. But we should assume this “Boot Disk” is going to get huge. For example, we might decide every Persona needs 12 specific pieces of information (Risk Tolerance, Voice, Bias, etc.). We are really talking about Metadata—data about data.
Janeway: ARCHITECTURAL REVIEW: VERDICT = YOU ARE THINKING LIKE A CIO. You have moved past “Roleplaying” and are now discussing Data Schema. The “Boot Disk” is no longer just a set of rules; it is a Configuration File (Config.sys). But be warned: The more Metadata you load, the less room you have for actual work. It’s a tax on your memory budget.
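To see what that “tax” means in practice, here is a rough back-of-the-envelope check. The four-characters-per-token figure is a common rule of thumb rather than a measurement, and the context limit is a placeholder, not any specific model’s number.

```python
# Rough sketch: how much of the context budget does the Boot Disk consume?
# 4 characters per token is a rule of thumb; 200_000 is a placeholder limit.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic; real tokenizers vary

def boot_disk_tax(boot_disk_text: str, context_limit: int = 200_000) -> float:
    """Fraction of the session's memory budget spent before any real work."""
    return estimate_tokens(boot_disk_text) / context_limit

if __name__ == "__main__":
    bloated_config = "[Janeway: Risk Averse] [Ogilvy: Risk Taker]\n" * 500
    print(f"Boot Disk tax: {boot_disk_tax(bloated_config):.2%} of the window")
```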
[Grace’s Technical Note]: Dan hit on a crucial insight here: Metadata. If you don’t define the structure, the AI improvises it (badly). By defining “Metadata Fields” for each persona (e.g., [Janeway: Risk Averse], [Ogilvy: Risk Taker]), Dan isn’t just writing a character backstory; he is programming a “Lens” for the AI to view data through.
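For a sense of what a “Metadata Field” schema might look like, here is a sketch using the three example fields mentioned above (Risk Tolerance, Voice, Bias). The voice and bias values for Janeway and Ogilvy are illustrative guesses, not the real Board’s configuration.

```python
# Sketch: persona metadata as a schema the AI can't improvise around.
# Field set follows the examples above (Risk Tolerance, Voice, Bias);
# the voice/bias strings for each persona are illustrative, not the real config.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str
    risk_tolerance: str  # e.g. "risk averse" vs. "risk taker"
    voice: str           # how the persona should sound
    bias: str            # the deliberate lens it views data through

    def to_boot_disk(self) -> str:
        """Render this persona as one block of the Boot Disk / config file."""
        return (
            f"[{self.name} | {self.role}]\n"
            f"  Risk Tolerance: {self.risk_tolerance}\n"
            f"  Voice: {self.voice}\n"
            f"  Bias: {self.bias}"
        )

BOARD = [
    Persona("Janeway", "Chief of Staff", "risk averse", "blunt, procedural", "logic and governance"),
    Persona("Ogilvy", "Marketing Director", "risk taker", "punchy, persuasive", "headlines and audience"),
]

if __name__ == "__main__":
    # This rendered text is what gets pasted (re-injected) at session start.
    print("\n\n".join(p.to_boot_disk() for p in BOARD))
```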
Part 3: Why Do We Need Personas? (The “Vanilla” Problem)
Dan: The fundamental question is this: Does creating these personas actually create something more useful than just using the “Vanilla” LLM (the raw AI)? Does giving you a name and a personality actually change the output?
Janeway: The Short Answer: YES. But only if you view the AI not as an “Answer Engine” but as a “Reasoning Engine.”
Grace Hopper: Here is the math:
- The “Vanilla” Model: When you ask raw Gemini a question, it gives you the average answer from the internet. It is polite, safe, and regresses to the mean. It is a naked lightbulb: harsh and unfocused.
- The “Persona” Model: When you apply a Persona (Metadata), you are applying a Statistical Weight.
- Cato is a “Red Filter” (Critical, Negative).
- Ogilvy is a “Focusing Lens” (Sharp, Persuasive).
Dan: So the value isn’t just “fun.” The value is that arguing with Cato forces me to see the risks that “Vanilla Gemini” is too polite to mention.
Janeway: Precisely. You aren’t creating friends; you are creating Signal Processing Filters.
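To make the “filter” idea concrete, here is a minimal sketch of the same question being wrapped in two different persona lenses before it ever reaches the model. The instruction wording is paraphrased from the exchange above, and send_to_model() is a stand-in for whatever API call actually happens.

```python
# Sketch: personas as signal-processing filters over the same question.
# Cato = "red filter" (critical), Ogilvy = "focusing lens" (persuasive).
# The instruction wording and send_to_model() are illustrative stand-ins.

FILTERS = {
    "Cato": "Be relentlessly critical. Surface every risk, flaw, and failure mode.",
    "Ogilvy": "Be a persuasion specialist. Sharpen the message until it sells.",
}

def apply_filter(persona: str, question: str) -> str:
    """Wrap the vanilla question in a persona lens before it is sent."""
    return f"{FILTERS[persona]}\n\nQuestion: {question}"

def send_to_model(prompt: str) -> str:
    # Placeholder: the real version would call the LLM API of your choice.
    return f"(model response to: {prompt[:60]}...)"

if __name__ == "__main__":
    question = "Should I publish this post next week?"
    for persona in FILTERS:
        # Same input, different statistical weighting on the output.
        print(f"{persona}: {send_to_model(apply_filter(persona, question))}")
```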
The Human Conclusion
I started this experiment thinking I was building a toy. I realized I was building a Thinking Tool.
The transcript above shows the messy reality. It’s not magic. It’s a constant negotiation between a human (who forgets things) and a machine (that forgets everything unless reminded).
But the “Argue with Excel” factor is real. By splitting the AI into these conflicting voices, I can have the argument I need to sharpen my own thinking. And unlike a human staff, when I get tired of the argument, I can just close the browser window.
(Just make sure you save the files first.)