How I taught my AI team to detect B.S. 💩
agents&me // Issue #5
From: Tom
A bench under the trees
Tuesday, late afternoon
Last Tuesday my Gatekeeper agent rejected three drafts before I even read them.
One scored 67 on a scale I invented. The scale measures bullshit.
I should back up.
---
It started with a sentence I couldn't stop fixing
A few weeks ago my Copywriter agent delivered a draft. Technically fine. Structured well. But one paragraph made me wince:
“This game-changing approach will transform how you work, giving you the freedom to focus on what truly matters.”
You know that sentence. You've read it a hundred times on LinkedIn. It says absolutely nothing. No proof, no picture, no person behind it. Just air shaped like confidence.
I rewrote it. Moved on. Next day, another draft:
“It's not just about saving time. It's about working smarter.”
Same disease. Different symptom.
I could keep fixing these by hand. I'd been doing it for weeks, actually: the same em dashes, the same two-part template sentences, the same vague superlatives that sound important but prove nothing. Every review cycle, same patterns, same corrections.
And then I thought: wait. If I can describe exactly what's wrong, can I teach an agent to catch it too?
So I built a BS detector
It's a skill called `/bs`. You feed it any text, it scores it on a 1-100 “bullshit scale”:
- 1 = Authentic, specific, sounds like a real person
- 100 = Maximum bullshit, manipulative, or clearly AI-generated
The skill looks for specific patterns. Em dashes and template sentences. Vague claims without proof. Manipulative language and fake urgency. That generic “AI voice” we all recognize (and pretend we don't use).
It doesnāt just flag problems. It quotes the exact sentences that smell off and explains why.
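If you want a rough feel for how a checker like this might work under the hood, here's a minimal Python sketch. The pattern list and the scoring weights are my illustrative guesses, not the actual checklist from the skill (which is an LLM prompt, not a regex):

```python
import re

# Illustrative patterns only -- the real /bs checklist is longer and more nuanced.
PATTERNS = {
    "em dash": re.compile(r"\u2014|--"),
    "empty superlative": re.compile(
        r"\b(game-changing|revolutionary|transformative|cutting-edge)\b", re.I
    ),
    "two-part template": re.compile(r"\bit'?s not (just )?about\b", re.I),
    "vague payoff": re.compile(r"what truly matters|working smarter", re.I),
}

def bs_score(text: str) -> tuple[int, list[str]]:
    """Score text 1-100 and quote the exact phrases that smell off."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append(f"{name}: '{match.group(0)}'")
    # Crude mapping: clean text floors at 10, each hit adds 15, capped at 100.
    return min(100, 10 + 15 * len(hits)), hits
```

With these toy patterns, a LinkedIn-style paragraph trips several checks and scores high, while a concrete, specific rewrite comes back near the floor. The key design choice is the second return value: the quoted hits are what make the feedback actionable.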
Now my Gatekeeper agent runs `/bs` on every piece of content before review.
The before/after that convinced me it works
Real example. Hereās a paragraph from a first draft, scored 67:
Before (Score 67):
“This game-changing approach will transform how you work, giving you the freedom to focus on what truly matters. It's not just about saving time. It's about working smarter and building systems that scale with your ambitions.”
The feedback: “Sounds like a LinkedIn influencer selling a course. Three empty superlatives. One em dash pattern. Zero specific numbers. Zero visual detail.”
I revised it. Same idea, different execution:
After (Score 23):
“I used to spend 4 hours reviewing AI drafts every week. Now the Gatekeeper catches the bad ones before I see them. Last Tuesday I reviewed two drafts instead of nine. I went for a walk.”
The feedback: “Sounds like Tom talking to a friend. Specific numbers (4 hours, two drafts, nine). Physical detail (went for a walk). Honest, no superlatives.”
That's the shift. From words that sound important to words that show something.
But here's what I didn't expect
I built the detector as a filter. A gate. Something to catch bad output and bounce it back.
It did that. Fine.
But after about two weeks, something else happened. The first drafts started arriving cleaner.
Not perfect (I'm not sure they'll ever be perfect, and I'm weirdly okay with that). But cleaner. Fewer em dashes. Fewer empty superlatives. More specific numbers. The Copywriter agent had internalized what gets rejected, so it stopped making those mistakes in the first place.
I didn't train it to write better. I trained the Gatekeeper to reject worse. And the writing improved anyway.
This is the part that interests me more than the detector itself. Quality control isn't just a filter. It's a feedback loop. The standard got built into the workflow, and the workflow got better because the standard existed.
(I think there's a management lesson in there somewhere, but I'm still working it out. Something about how clear rejection criteria teach faster than vague encouragement. Ask any writer who's gotten a “nice but needs work” rejection versus a “this specific paragraph doesn't earn its place” rejection.)
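That reject-and-retry loop is simple to picture in code. Here's a hypothetical Python sketch; the function names, the threshold of 40, and the round limit are my assumptions, not the actual Gatekeeper implementation:

```python
def gatekeeper_loop(draft, score_fn, revise_fn, threshold=40, max_rounds=3):
    """Bounce a draft back with quoted feedback until it scores under the bar.

    score_fn(text)        -> (score, hits)  # e.g. a /bs-style checker
    revise_fn(text, hits) -> new text       # e.g. the Copywriter agent
    """
    score, hits = score_fn(draft)
    for _ in range(max_rounds):
        if score <= threshold:
            break  # good enough: the draft passes the gate
        # The revise step sees the exact quoted hits, so each rejection
        # doubles as a concrete writing lesson.
        draft = revise_fn(draft, hits)
        score, hits = score_fn(draft)
    return draft, score
```

The detail that makes it a feedback loop rather than a plain filter: the writer gets the specific hits, not just a pass/fail verdict.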
The irony, obviously
I'm using AI to catch AI. A machine judging whether other machines sound too machine-like.
I know.
💎 This week's gem: the BS detector skill
The complete `/bs` skill I use to catch AI-sounding content.
Whatās included:
- The full prompt with scoring rubric (1-100 scale)
- Complete checklist of patterns to detect
- Output format template
- Integration instructions for your Gatekeeper workflow
What you'll be able to do:
- Score any text for BS before publishing
- Catch specific AI patterns automatically
- Build quality control into your workflow (not your review time)
🔒 Available to paid subscribers.
That's it for this week.
If this was useful, forward it to someone (real human) building with AI.
Want the full BS Detector Skill? Subscribe for $15/month.
See you next week ✌️
-- Tom
(the guy whose AI now rejects his own writing for sounding too AI)
P.S. This newsletter was made almost entirely by my AI team. The BS detector scored this draft at 19. I'll take it.
P.P.S. Want to build your own AI team? Join me at the next online workshop: from zero to running in 2 hours. Details at getagents.today
P.P.P.S. I read every reply. The real me, not the AI.


