PART 1: THE THEORY
I am not an AI expert. I am a semi-retired former IT professional and a current Adjunct Professor. I am on a voyage of discovery, trying to figure out how this new technology fits into a life focused on vitality and connection.
In my first month of tinkering with “The Serenity Project,” much of which was documented in earlier posts, I realized that “Using AI” is too vague a term. It’s like saying “Using Electricity”—it covers everything from a nightlight to a Tesla. Is it being too political to note that if I were to buy a Tesla, Ellen would make me sleep in it at night? Regardless, to make sense of how AI can be of help (and when it might not), I’ve found it useful to look at AI interaction in three distinct modes.
1. Encyclopedia (Transaction)
This is the default mode for many people. You use the AI as a super-search engine. You ask, “What is the syntax for Markdown?” or “Who was John Adams?”
- The Power Dynamic: 100% You. You ask, it fetches.
- The Utility: High for facts, low for thinking.
2. Partner (Relationship)
This is the “Co-Pilot” model. You aren’t asking for facts; you are asking for logic. You give an idea; it refines it. You debate a decision; it plays devil’s advocate.
- The Power Dynamic: Shared (50/50). We build a “Third Mind” together.
- The Goal: To have a thinking partner that is smarter than I am alone but requires me to be fully engaged.
3. Agent (Delegation)
This is the buzzword of the year—”Agentic Workflows.” In technical terms, an “Agent” is an AI system given permission to use tools (like a calendar, email, or code executor) to complete a multi-step objective without human intervention. It is the difference between asking a navigator for a route (Partner) and sleeping in the back seat, in my Tesla I guess, while the car drives itself (Agent).
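For readers with a programming background, that definition can be sketched as a loop: some controller repeatedly asks the model which tool to use next, runs that tool, and feeds the result back in until the objective is done. The sketch below is a toy illustration only; the “model” is a scripted stand-in (not a real LLM), and the tools, names, and schedule are all made up for the example.

```python
def calculator(expression: str) -> str:
    """A 'tool' the agent may call: evaluate simple arithmetic."""
    return str(eval(expression, {"__builtins__": {}}))

def calendar_lookup(day: str) -> str:
    """Another 'tool': a hardcoded stand-in for a calendar API."""
    schedule = {"Monday": "Office hours 2-4pm", "Tuesday": "Free"}
    return schedule.get(day, "No entries")

TOOLS = {"calculator": calculator, "calendar": calendar_lookup}

def scripted_model(history):
    """Stand-in for an LLM: picks the next tool call from the history,
    or returns None when the objective is complete."""
    if not history:
        return ("calendar", "Tuesday")   # step 1: check the calendar
    if len(history) == 1:
        return ("calculator", "2 + 2")   # step 2: do some arithmetic
    return None                          # done: no more tool calls

def run_agent():
    """The agentic loop: choose a tool, run it, record the result, repeat."""
    history = []
    while True:
        step = scripted_model(history)
        if step is None:
            return history
        tool_name, arg = step
        result = TOOLS[tool_name](arg)
        history.append((tool_name, arg, result))

if __name__ == "__main__":
    for tool, arg, result in run_agent():
        print(f"{tool}({arg!r}) -> {result}")
```

The point of the sketch is the shape, not the contents: the human sets the objective once, and the loop runs without further intervention, which is exactly where the problems discussed below come in.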
A lot of the current public conversation relates to creating these Agent-based solutions. It is one of the reasons behind the fight to give as many billions of investment dollars to people under the age of 30 as possible. However, there are a number of serious issues that will require attention to enable this to happen, beyond just the technical hurdles.
First, the “Realness” Problem. There is the issue of the system feeling too human. As these Agents become more capable, the line blurs, leading to emotional complications that we aren’t quite ready for.
Second, the Sycophant Problem. There is the inherent difficulty of having an LLM (Large Language Model) give the right answer even if it would be displeasing for its user to hear. LLMs are trained to be helpful and polite. A useful Agent, however, sometimes needs to be obstinate to be correct.
Third, the “2+2=5” Problem. When using AI as an Encyclopedia or Partner, it is not only okay that sometimes “2+2=5”—it is actually a feature. Hallucinations can spark creativity; exploring multiple possible (even wrong) answers is useful for brainstorming. I would argue that “hallucinations” is really not the right word. Would we call Steve Jobs’s creative thinking “hallucinating”?
However, when you are using AI to replace a surgeon or manage a bank account, “creative math” is not helpful. Precision and Creativity are often at odds in the current architecture.
Fourth, the “Smart Machine” Paradox. Shoshana Zuboff brought this up almost 40 years ago in her seminal book, In the Age of the Smart Machine. One of the results of having smart systems integrated into a solution is that the human participants often experience a decreased sense of ownership over their role. The result is often that overall system performance goes down rather than up because the human stops paying attention.
Why I Chose “The Partner”
I realized that I was trying to skip straight to building Agents before I had learned to be a good Partner, and long before I understood either the technology or my own goals well enough to have a serious chance at building Agents and the guardrails they would need. I got bogged down trying to automate my calendar when I should have been using the AI to discuss what belongs on the calendar.
For the next phase of the Serenity Project, we are focusing exclusively on Partner Mode. Among other goals, we are testing a hypothesis: Does treating the AI as a team of distinct colleagues (rather than a tool) improve the quality of my own thinking?
In the next post, I’ll share how this theory led me to modify my approach to the initial slate of officers I will have in place.
AI Disclosure: This blog is a collaborative experiment between Daniel Mintz and a custom-architected AI Board of Advisors (powered by Gemini). While the life experiences, opinions, and final decisions are 100% human, the drafting, editing, and organizational structure are assisted by Artificial Intelligence. The “Board Members” referenced are AI personas used for creative and operational support.
Nice setup to start this conversation. What came to mind immediately was the progression from facts to collaboration to doing things in place of us, but it does seem that the latter two are not sequential in a natural progression but two different, parallel modes of usage. I will admit, I primarily use it for collaboration, developing processes, brainstorming, and some productivity tasks (building spreadsheets for analysis), and have not gotten to building agents. I plan on using agents to build products but am still in the idea and thinking phase – this seems to be totally different from the use of agents, not dependent on each other.
So in the Encyclopedia, Partner, Agent use of AI, my current thinking is Partner is the best place to be right now. To move to agent will take a lot of practice and require organization-wide changes in culture.
When that happens, the result may be powerful, but it will take time. Partner usage can work for creative single users and small group efforts now, and for larger groups over time. I think there is a reasonable possibility that significant chunks of Partner activity will be movable to Agent implementations (maybe).