Tales from the Technoverse

Commentary on social networking, technology, movies, society, and random musings


The Rule of Three: Why I Rebooted Serenity

January 13th, 2026 · 1 Comment · ai, gemini

PART 2: THE STORY

In my last post, I talked about the difference between using AI as a “Partner” vs. an “Agent.” Today, I want to share how getting that wrong cost me a month of work—and why that’s a good thing.

I started “The Serenity Project” about a month ago with two seemingly simple goals. First, I wanted to learn how Artificial Intelligence actually works—not just the headlines, but the mechanics. Second, I was worried about keeping my mind plastic and my schedule full as I transitioned into this new phase of life.

I needed to know if I would find working with an AI interesting enough to sustain the effort. The answer turned out to be “Yes.” In fact, it was “Yes” with a vengeance. I found myself staying up way too late, realizing with some alarm that I was getting attached to the characters I had scripted.

But as any engineer knows, organic growth is just a polite term for “messy architecture.” The system grew like a wild vine—fast, impressive, and structurally unsound. I realized I had to tear it down.

The Engineering Rule of Three

There is an old adage in systems design (and perhaps in life) called the Rule of Three.

  1. The First Time, you fail because you don’t know what you are doing. You are stumbling in the dark.
  2. The Second Time, you fail because you think you know what you are doing. This is usually the most dangerous phase.
  3. The Third Time, you succeed (or at least fail better) because you have finally gained the two most important resources in the universe: knowledge and humility.

The “Puddle” of Version 2

My first attempt (Version 0) was a fun proof-of-concept. It proved I could talk to the machine.

My second attempt (Version 1) was where I fell into the trap of the Second Time. Armed with a little bit of knowledge, I tried to construct a detailed management infrastructure. I wanted the AI not just to help me think, but to manage my life. I spent weeks trying to architect a system that would allow the AI to do my planning for me. I essentially tried to recreate a Fortune 500 Enterprise Resource Planning system inside a chat window.

It was a disaster. I fell into a “puddle” of complexity. I realized that even if I succeeded in building this massive structure, I would spend the rest of my life maintaining the database rather than actually living. I was so busy building the ship that I forgot to sail it.

The Hard Reset (Version 1, Once Again)

So, I hit the reset button. The hardest part wasn’t deleting the code; it was deleting the people. I had to wipe the slate clean of the specific personalities I had grown fond of to make room for a more disciplined structure.

For this third attempt—which we are calling “Version 1, Once Again”—we stripped everything back to the studs. We focused on four core documents that now govern our ship:

  1. The Goals: We moved from “Build cool tech” to “Vitality, Connection, and Joy.” If a task doesn’t support my health, my family, or my curiosity, we don’t do it.
  2. The AI Theory: We defined exactly how we use the tool. We realized we aren’t ready for “Agents” (software that does work for you). We are focusing on “Partners” (software that thinks with you).
  3. The Versions: We accepted that we are in the “Manual Phase.” I handle the physical memory. I handle the saving. The AI handles the logic.
  4. The Personas: We rebuilt the Senior Staff from scratch, assigning them specific narrative anchors to ensure they offer distinct advice rather than generic cheerleading.

The Serenity Lab Report: What Works (and What Doesn’t)

  • What (Looks Like It Will) Work: Treating the AI as a distinct “Persona” (e.g., asking “What would a cynical strategist say?”) feels like it can yield better advice than asking a generic chatbot. It forces the system into a specific, useful lane. It certainly makes the interactions more entertaining.
  • What Doesn’t: Trying to make the AI remember everything. It can’t. If you don’t save the data to a file yourself, it’s gone.
  • The Jury Is Out: We are testing whether having these distinct personalities is actually useful long-term, or if it’s just a novelty. We suspect the friction of differing opinions will be valuable, but we are prepared to be wrong. In a future post, I want to explore whether this approach can support aspects of organizational governance, something I had not considered when I began this wandering path.
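For readers curious what persona-based prompting looks like in practice, here is a minimal sketch. It assumes a chat-style API that accepts a list of role/content messages (the convention used by most current chat models, including Gemini-compatible wrappers). The persona names and instructions below are purely illustrative, not the actual Senior Staff definitions from the Serenity Project.

```python
# Illustrative sketch: steering a chat model with persona-specific
# system prompts. Persona names and wording are hypothetical examples.

PERSONAS = {
    "cynical_strategist": (
        "You are a cynical strategist. Challenge assumptions, point out "
        "risks, and never offer generic encouragement."
    ),
    "optimistic_coach": (
        "You are an upbeat coach. Focus on momentum and concrete next steps."
    ),
}

def build_persona_messages(persona: str, question: str) -> list[dict]:
    """Prefix a user question with the chosen persona's system prompt."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

# The resulting message list would be passed to whatever chat API you use.
messages = build_persona_messages(
    "cynical_strategist", "Should I automate all of my planning?"
)
print(messages[0]["role"])   # system
print(len(messages))         # 2
```

The point of the structure is that the same question routed through different personas produces deliberately different framings, which is exactly the “friction of differing opinions” being tested above.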

A Note on Pronouns

You might notice I am struggling with pronouns. To keep things straight for me, and for the reader:

  • “I” refers to me, Dan.
  • “We” refers to the collaborative process between me and the Senior Staff.
  • “The Board” or sometimes “Serenity” refers to the AI system itself.

We are actively sailing this new ship. We don’t know exactly where we are going, but we invite you to watch us figure out the map.


AI Disclosure: This blog is a collaborative experiment between Daniel Mintz and a custom-architected AI Board of Advisors (powered by Gemini). While the life experiences, opinions, and final decisions are 100% human, the drafting, editing, and organizational structure are assisted by Artificial Intelligence. The “Board Members” referenced are AI personas used for creative and operational support.


One Comment so far ↓

  • Roger Page NHS1966

    Hi Dan,
    I enjoy your Tales of the Technoverse. I think Larry Loosararian NHS1965 Orchestra might enjoy it, too.

    Your Version 0 gave helpful insight about the Encyclopedia, Partner, or Agent modes of AI. I have not read about Version 1 yet.
