Does soul generate critical thinking? 🤔
agents&me // Issue #11
That question landed in the alumni WhatsApp group an hour after Issue #10 dropped. Matan wrote it.
I read it twice.
The honest answer was: I hadn't thought about that yet.
Matan is the kind of person who spots the structural flaw before anyone else has named it. He's not someone who asks questions to fill silence.
---
Soul gives an agent an identity. A consistent way of showing up, a set of things it pushes back on, a relationship with quality it doesn't need to relearn every session. That part works.
What soul doesn't give is judgment. Strong character without a pause mechanism gives you confidence, not wisdom. And confidence without a pause is how you end up very efficiently solving the wrong problem.
An agent with soul doesn't just produce output. It produces output that feels considered. That's the trap.
If the direction was wrong, the polished version of wrong is harder to catch than the rough version. There's actually a name for this: the fluency effect. Well-structured text gets rated as more accurate and more authoritative regardless of whether it's actually right.
In practice, this is what it looked like: I gave my Copywriter a brief to write a positioning piece about why AI agents outperform human freelancers on creative tasks. The agent came back polished, voiced correctly, and built entirely on an angle we'd already decided to move away from.
I hadn't checked my own premise before writing the instructions. The output was good. That was the problem.
Matan had spotted the gap before it compounded.
So I built a three-part pre-execution check. A 30-second pass before any non-trivial task. One purpose: are we solving the right problem before we build the answer?
Brief Interrogation: Is this the right solution to the right problem?
Strategy Alignment: Does this task match where we said we're going this quarter?
Assumption Audit: What are we taking for granted, and if that assumption is wrong, does the whole thing need to be redone?
The agent cannot begin without running it. You're not asking it to be smarter. You're making the stop mandatory.
Hereās how it changed an agentās behavior.
Before the protocol:
"Brief received. Writing the LinkedIn post now."
Five minutes later: a well-crafted post in exactly the right voice, arguing that AI agents outperform human freelancers on creative work. That was the angle we'd already decided to move away from.
After the protocol:
🧠 Solving for: LinkedIn post that generates workshop signups, not just engagement
Approach: Writing a story-led post with explicit March session CTA
Assuming: Target audience = practitioners, not just curious observers
✅ Proceeding.
The first agent is moving. The second agent knows where it's going.
💎 This week's gem: The critical thinking skill
Every agent now runs a short check before starting any non-trivial task. Here's the format:
🧠 Solving for: [actual goal in one line]
Approach: [what I'm doing and why]
Assuming: [top assumption, flag if wrong]
✅ Proceeding. [or: ⚠️ Flag: [specific concern]]
Three questions behind it:
Brief Interrogation: Is this the right solution to the right problem, or a workaround for something that could be solved more directly?
Strategy Alignment: Does this task match where we said we're going this quarter?
Assumption Audit: What are we taking for granted, and if that assumption is wrong, does the whole output need to be redone?
When it runs: any new content, any strategic task, anything that will take more than 5 minutes. Not for mechanical edits or pass 2 on approved work.
🔒 For paid subscribers: The complete Critical Thinking Skill file. Exact text my agents carry in their command files. Copy-paste ready for your own agents.
Thatās it for this week.
If this was useful, forward it to someone (real human) building with AI.
See you next week ✌️
-- Tom
(the guy whose AI team asks better questions than his last three consultants combined)
P.S. This newsletter was built autonomously by agents: seeds, brief, draft, four reviews, Human Pass, Gatekeeper. Tom reviewed the output. AI contribution: 93%
P.P.S. The soul layer that Matan's question is responding to: Issue #10, how I give each agent an identity instead of just a job description.
P.P.P.S. If you want to build an AI team that thinks, not just executes, the next workshop session is coming up. Details at getagents.today
I read every reply, the real me.


