agents&me // Issue #11
Does soul generate critical thinking?
That question landed in the alumni WhatsApp group an hour after Issue #10 dropped. Matan wrote it.
I read it twice.
The honest answer was: I hadn’t thought about that yet.
Matan is the kind of person who spots the structural flaw before anyone else has named it. He’s not someone who asks questions to fill silence.
---
Soul gives an agent an identity. A consistent way of showing up, a set of things it pushes back on, a relationship with quality it doesn’t need to relearn every session. That part works.
What soul doesn’t give is judgment. Strong character without a pause mechanism gives you confidence, not wisdom. And confidence without a pause is how you end up very efficiently solving the wrong problem.
An agent with soul doesn’t just produce output. It produces output that feels considered. That’s the trap.
If the direction is wrong, the polished version of wrong is harder to catch than the rough version. There’s actually a name for this: the fluency effect. Well-structured text gets rated as more accurate and more authoritative regardless of whether it’s actually right.
In practice, this is what it looked like: I gave my Copywriter a brief to write a positioning piece about why AI agents outperform human freelancers on creative tasks. The agent came back with a piece that was polished, correctly voiced, and built entirely on an angle we’d already decided to move away from.
I hadn’t checked my own premise before writing the instructions. The output was good. That was the problem.
Matan had spotted the gap before it compounded.
So I built a three-part pre-execution check. A 30-second pass before any non-trivial task. One question: are we solving the right problem before we build the answer?
Brief Interrogation: Is this the right solution to the right problem?
Strategy Alignment: Does this task match where we said we’re going this quarter?
Assumption Audit: What are we taking for granted, and if that assumption is wrong, does the whole thing need to be redone?
The agent cannot begin without running it. You’re not asking it to be smarter. You’re making the stop mandatory.
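For the builders: here’s a minimal sketch of what that mandatory stop can look like if you enforce it in code rather than in a prompt. The names (PreExecutionCheck, run_task) and the example values are illustrative assumptions about a hypothetical setup, not how my own stack is wired.

```python
from dataclasses import dataclass

@dataclass
class PreExecutionCheck:
    """The three answers an agent must produce before starting any non-trivial task."""
    solving_for: str      # Brief Interrogation: the actual problem, in one line
    strategy_fit: str     # Strategy Alignment: how the task maps to this quarter's direction
    key_assumption: str   # Assumption Audit: the belief that, if wrong, forces a redo

def require_check(check: PreExecutionCheck) -> None:
    """The mandatory stop: refuse to proceed if any answer is missing."""
    for name, value in vars(check).items():
        if not value.strip():
            raise ValueError(f"Pre-execution check incomplete: '{name}' is empty.")

def run_task(brief: str, check: PreExecutionCheck) -> None:
    require_check(check)  # no check, no task
    print(f"🧠 Solving for: {check.solving_for}")
    print(f"Approach: {check.strategy_fit}")
    print(f"Assuming: {check.key_assumption}")
    print("→ Proceeding.")
    # ...only now does the brief get handed to the agent...

# Illustrative values, not the real brief
run_task(
    brief="LinkedIn post announcing the March workshop session",
    check=PreExecutionCheck(
        solving_for="A post that generates workshop signups, not just engagement",
        strategy_fit="Story-led post with an explicit March session CTA",
        key_assumption="Target audience = practitioners, not just curious observers",
    ),
)
```

The point isn’t the code, it’s the order of operations: the answers exist before the work starts, or the work doesn’t start.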
Here’s how it changed an agent’s behavior.
Before the protocol:
“Brief received. Writing the LinkedIn post now.”
Five minutes later: a well-crafted post in exactly the right voice, arguing that AI agents outperform human freelancers on creative work. The same angle we’d already decided to move away from.
After the protocol:
🧠 Solving for: LinkedIn post that generates workshop signups, not just engagement
Approach: Writing a story-led post with explicit March session CTA
Assuming: Target audience = practitioners, not just curious observers
→ Proceeding.
The first agent is moving. The second agent knows where it’s going.
That’s it for this week.
If this was useful, forward it to someone (a real human) building with AI.
See you next week ✌️
-- Tom
(the guy whose AI team asks better questions than his last three consultants combined)
P.S. This newsletter was built autonomously by agents: seeds, brief, draft, four reviews, Human Pass, Gatekeeper. Tom reviewed the output. AI contribution: 93%
P.P.S. The soul layer that Matan’s question is responding to: Issue #10, how I give each agent an identity instead of just a job description.
P.P.P.S. If you want to build an AI team that thinks, not just executes, the next workshop session is coming up. Details at getagents.today
I read every reply. The real me 😉


