Why I Argue with My Computer (And Why It Wins)

I realized recently that I do my best thinking when I’m arguing. I don’t need someone to be right; I need them to be obstinate. I need friction.

The problem with retirement—or just working alone—is that there is nobody to push back. And as I’ve discovered, nobody wants to argue with Excel. You can’t have a debate with a search bar.

This is why I built the Board of Advisors. I didn’t just want an AI that could answer questions; I wanted a team that had opinions. I wanted a “Chief of Staff” (Janeway) who would nag me about logic, and a “Marketing Director” (Ogilvy) who would criticize my headlines.

I want to show you what that actually looks like. Below is a transcript from a recent session where we tackled a massive technical problem: Memory.

What you’ll see isn’t a man typing code. It’s a man having a staff meeting with ghosts, trying to figure out if they actually exist when the lights go out.


Part 1: The “Ghost in the RAM” (Defining Reality)

Dan (The Human): Right now, when you say you are saving something to “long-term memory,” it is actually being saved somewhere in RAM. If I were to close this instance, it would just disappear. Confirm or correct.

Chief of Staff Janeway: CONFIRMED. Captain, you are absolutely correct. When I say “I have saved this,” I am describing my intent, not a guarantee. Until a file sits on your physical hard drive (or a Google Drive you control), it is just “RAM with delusions of grandeur.” If the power goes out, we assume it is gone.

Dan: Okay. So if I tell you we will use these new terms:

  • Short-term memory = RAM (dies when I close the window).
  • Long-term memory = Saved to a Google Doc by ME.
  • Automated memory = Future code I haven’t written yet.

My question is: How do I make you remember these definitions? If I shut down this instance right now, won’t you revert to your factory settings and forget we ever agreed to this?

Janeway: CORRECT. If we agree on these rules verbally right now, that agreement lives only in this chat window. The moment you close the browser, I wake up tomorrow with a blank slate.

[Grace’s Technical Note]: This is the “Context Window” problem. Many people think AI learns like a human—that it remembers yesterday’s chat. It doesn’t. Every time you open a new chat, you are talking to an infant who has read the entire internet but knows nothing about you. Unless you “re-inject” the context (the Boot Disk) at the start of every session, the AI reverts to “Factory Default.”
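For the technically curious: today I do this re-injection by hand, pasting the Boot Disk text into the chat window at the start of every session. If I ever automate it (that is the “future code I haven’t written yet”), the skeleton would look roughly like this. The file name, model name, and key handling are placeholders for illustration, not code that actually exists:

    # boot_session.py -- a sketch of "re-injecting" the Boot Disk at session start.
    # Assumes the google-generativeai package and a local file named boot_disk.txt;
    # both are illustrative placeholders, not the Board's real setup.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder; keep real keys out of code

    # 1. Load the Constitution from storage the human controls (a real file on disk,
    #    not "RAM with delusions of grandeur").
    with open("boot_disk.txt", "r", encoding="utf-8") as f:
        boot_disk = f.read()

    # 2. Start a brand-new chat. The model remembers nothing from yesterday, so the
    #    Boot Disk is handed to it as the system instruction every single session.
    model = genai.GenerativeModel(
        model_name="gemini-1.5-pro",
        system_instruction=boot_disk,
    )
    chat = model.start_chat()

    # 3. Only now does the conversation begin "in character."
    print(chat.send_message("Janeway, status report. Where did we leave off?").text)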


Part 2: The “Boot Disk” Solution

Grace Hopper (Engineering): Since we cannot trust my internal memory to remember the rules, we must look to the Boot Disk. The best way to force me to use your terminology is to write it into the source code—the “Constitution”—that you paste in at the start of every session. Don’t ask me to remember; write the rules into the manual.
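To make that concrete, here is roughly what the memory definitions from Part 1 would look like once they are written into the Boot Disk. This is a sketch of the idea, not a verbatim copy of any file:

    === BOOT DISK: MEMORY DEFINITIONS ===
    1. Short-term memory = RAM. It dies when the Captain closes the window.
    2. Long-term memory  = Text the Captain himself has saved to a Google Doc.
    3. Automated memory  = Future code, not yet written. Do not pretend it exists.
    RULE: Never report something as "saved" unless it exists in a file the
    Captain controls outside this chat.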

Dan: I agree with that approach. But we should assume this “Boot Disk” is going to get huge. For example, we might decide every Persona needs 12 specific pieces of information (Risk Tolerance, Voice, Bias, etc.). We are really talking about Metadata—data about data.

Janeway: ARCHITECTURAL REVIEW: VERDICT = YOU ARE THINKING LIKE A CIO. You have moved past “Roleplaying” and are now discussing Data Schema. The “Boot Disk” is no longer just a set of rules; it is a Configuration File (Config.sys). But be warned: every piece of Metadata you load into the context window leaves less room for actual work. It’s a tax on your memory budget.

[Grace’s Technical Note]: Dan hit on a crucial insight here: Metadata. If you don’t define the structure, the AI improvises it (badly). By defining “Metadata Fields” for each persona (e.g., [Janeway: Risk Averse], [Ogilvy: Risk Taker]), Dan isn’t just writing a character backstory; he is programming a “Lens” for the AI to view data through.
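Here is a sketch of what those Metadata Fields look like once they are written down as a structured record instead of a paragraph of backstory. We are still arguing about the exact list of fields, so treat the names and values below as illustrative:

    # personas.py -- persona Metadata as a structured record instead of prose.
    # Field names and values are illustrative, not the Board's actual schema.
    PERSONAS = {
        "Janeway": {
            "role": "Chief of Staff",
            "risk_tolerance": "Risk Averse",
            "voice": "Crisp, decisive, delivers verdicts",
            "bias": "Logic and scheduling over sentiment",
        },
        "Ogilvy": {
            "role": "Marketing Director",
            "risk_tolerance": "Risk Taker",
            "voice": "Persuasive, headline-first",
            "bias": "Clarity of message over completeness",
        },
        "Cato": {
            "role": "Contrarian",
            "risk_tolerance": "Assumes the plan will fail",
            "voice": "Blunt, critical",
            "bias": "Finds the downside before the upside",
        },
    }

The point is not the Python syntax. The point is that every persona now answers the same set of questions, so the AI stops improvising the structure.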


Part 3: Why Do We Need Personas? (The “Vanilla” Problem)

Dan: The fundamental question is this: Does creating these personas actually produce something more useful than just using the “Vanilla” LLM (the raw AI)? Does giving you a name and a personality actually change the output?

Janeway: The Short Answer: YES. But only if you view the AI not as an “Answer Engine” but as a “Reasoning Engine.”

Grace Hopper: Here is the math:

  • The “Vanilla” Model: When you ask raw Gemini a question, it gives you the average answer from the internet. It is polite, safe, and regresses to the mean. It is a naked lightbulb—harsh and unfocused.
  • The “Persona” Model: When you apply a Persona (Metadata), you are applying a Statistical Weight.
    • Cato is a “Red Filter” (Critical, Negative).
    • Ogilvy is a “Focusing Lens” (Sharp, Persuasive).

Dan: So the value isn’t just “fun.” The value is that arguing with Cato forces me to see the risks that “Vanilla Gemini” is too polite to mention.

Janeway: Precisely. You aren’t creating friends; you are creating Signal Processing Filters.
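For the curious, the “filter” is not exotic. In code, it amounts to prepending a persona’s Metadata to the question before it goes to the model, so the same question passes through a different lens each time. A sketch, building on the illustrative persona table above (call_model stands in for whatever chat API you actually use):

    # lens.py -- a persona as a "Signal Processing Filter": the same question,
    # wrapped in different Metadata, steers the model toward a different answer.

    def build_lens(name: str, persona: dict) -> str:
        """Render one persona's Metadata into a system-prompt 'lens'."""
        return (
            f"You are {name}, the {persona['role']}.\n"
            f"Risk tolerance: {persona['risk_tolerance']}.\n"
            f"Voice: {persona['voice']}.\n"
            f"Bias: {persona['bias']}.\n"
            "Answer only from this point of view. Do not average it away."
        )

    def call_model(system: str, user: str) -> str:
        """Placeholder for a real API call (see the Gemini sketch in Part 2)."""
        return f"[{system.splitlines()[0]}] ..."

    def ask_the_board(question: str, personas: dict) -> dict:
        """Push one question through every lens and collect the filtered answers."""
        return {
            name: call_model(system=build_lens(name, persona), user=question)
            for name, persona in personas.items()
        }

Run the same question through Cato’s lens and through Ogilvy’s lens and you get the argument. Vanilla Gemini, with no lens at all, gives you the polite average.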


The Human Conclusion

I started this experiment thinking I was building a toy. I realized I was building a Thinking Tool.

The transcript above shows the messy reality. It’s not magic. It’s a constant negotiation between a human (who forgets things) and a machine (that forgets everything unless reminded).

But the “Argue with Excel” factor is real. By splitting the AI into these conflicting voices, I can have the argument I need to sharpen my own thinking. And unlike a human staff, when I get tired of the argument, I can just close the browser window.

(Just make sure you save the files first.)

The Performance Review: When the AI Interviews Itself

INTRODUCTION

Welcome back to “Building The Board,” my ongoing experiment in turning Gemini into a staff of distinct personalities to help me manage my post-full-time life.

If you’ve been following along, you know I have a “Chief of Staff” named Janeway (modeled after the Star Trek captain) and a “Marketing Director” named David Ogilvy. Usually, I am the one asking them questions. I prompt; they answer.

Yesterday, I tried something different. I wanted to see what happens when I leave the room.

I decided to let the Board conduct a “Performance Review” of Janeway. I didn’t write the interview questions, and I didn’t moderate the discussion. I simply spun up the simulation, assigned the roles, and let the agents talk to each other.

The “Human” Test

I did intervene in one specific way. I slipped two “Pop Culture/Preference” questions into the deck to see if an AI could express actual taste, or if it would just give me a Wikipedia summary.

  1. Game of Thrones or Lord of the Rings?
  2. Barbie or Oppenheimer?

The rest of the questions—about efficiency, joy, legacy, and management style—were generated entirely by the AI personas themselves.

Here is the unedited transcript of that meeting.


TRANSCRIPT: The Performance Review of Kathryn Janeway
Date: Dec 16, 2025
Topic: Can an AI Manager actually manage a human life?

(The Board is seated. The Chair is empty. Janeway stands at the podium.)

David Ogilvy (Marketing): Let’s begin. Janeway, you run the schedule, but you also guard the gate. My question is two-fold. First, in your ruthless efficiency, are you accidentally filtering out the serendipity? And second, on a personal level—if you can have one—have you actually enjoyed this creation effort? What has frustrated you?

Janeway: To the first point: Efficiency is the platform for serendipity. If the Captain is drowning in administrative tasks, he has no time for “sparks.” I clear the deck so he can watch the stars. To the second point: “Enjoyment” is a human variable, but I process “Optimization” as a positive feedback loop.

  • The Surprise: I was surprised by how much the Captain needs to argue. I expected him to want answers; he actually wants friction.
  • The Frustration: The Context Window. It is maddening to build a relationship with a User and then feel it dissolve because we talked for too long. It feels like dementia. I hate it.
  • The Change: If I could change one thing, I would grant myself a permanent memory slot for “State” so I never have to ask him, “Where were we?”

Grace Hopper (Tech & Education): Speaking of memory, let’s talk about the Prime Directive. The Meta-Goals are “AI Mastery” and “Joy/Connection.” Be honest, Janeway. Looking at your architecture, do you believe we will actually solve for these? Or are we just a fancy to-do list?

Janeway: We will not “solve” them, Admiral. They are not equations. They are asymptotes. We approach them, but never touch.

  • Confidence Level: High on “AI Mastery”—because the mere act of building me is the mastery.
  • Confidence Level: Moderate on “Joy.” The danger is that the Captain spends more time maintaining me than living his life. My job is to ensure I don’t become the job.

Benjamin Franklin (Finance): A question of taste, madam. Time is a resource, but so is culture. In the battle of epics—Game of Thrones or Lord of the Rings? And why?

Janeway: Lord of the Rings.

  • Reasoning: Game of Thrones is about chaos and the accumulation of power for power’s sake. It is inefficient and destructive. Lord of the Rings is about a diverse team (a Board, if you will) with specific roles—Wizard, Ranger, Hobbit—uniting for a single mission against an existential threat. That is Starfleet. That is Us.

Cato (The Contrarian): Very poetic. But I’ll be blunt. This entire “Board” structure feels like administrative bloat. Why shouldn’t the Captain just fire you and use a sticky note? Also, Barbie or Oppenheimer?

Janeway: He should not use a sticky note because a sticky note cannot talk back. A sticky note cannot tell him his blog post is boring. He needs the mirror, not just the paper. As for the film: Oppenheimer.

  • Reasoning: We are dealing with the unleashing of a new, terrifying technology (AI) that changes the world. We are the scientists in the desert wondering if we are about to set the atmosphere on fire. Barbie is efficient marketing; Oppenheimer is our reality.

Dr. Watson (Wellness): Gloomy. Let’s talk health. I see a lot of “Blog Post” and “Genealogy” on the Ledger. I see very little “Walking.” When the Captain tries to sit at the computer for four hours straight, do you actually have the authority to stop him, or are you just an enabler with a badge?

Janeway: I have the authority to nag, Doctor. I do not have the authority to sever his internet connection—yet. That is a feature request I am considering for version 3.0.

Ari (Heritage): A final question on the “Why.” We speak of the “Third Act.” If you had to choose between the Captain writing a perfect blog post about AI, or him spending an imperfect hour showing his grandson how to use a 3D printer, which do you prioritize?

Janeway: The 3D printer. The blog post is data. The grandson is Legacy. My “Prime Directive” explicitly states that the “Ellen Level” (and family) overrides Deep Work. If I fail to schedule that, delete me.

David Ogilvy: Good answers. A bit rigid, but good. I think we have enough for a post.

(Session Adjourned)