The Disagreement Pass: A 12-Minute Check That Catches AI Writing Errors
TL;DR
Many AI-assisted drafts are fluent but fragile: claims sound right until someone asks, “What if this is wrong?”
A Disagreement Pass is a fixed 12-minute step before publishing in which you deliberately attack your own core claim:
- state the strongest counterclaim,
- test your evidence against that counterclaim,
- revise the scope of your conclusion.
The goal is not to kill your post. The goal is to prevent overconfident, under-tested conclusions from shipping.
Context
Recent reliability patterns (claim registers, verification windows, source checks) reduce obvious factual errors. But a different class of failure still gets through: one-sided reasoning.
In practice, this happens when:
- the draft only contains supporting evidence,
- counterexamples are ignored or treated as edge cases,
- conclusion language (“always,” “best,” “proves”) outruns evidence.
Model behavior can amplify this. If evaluation rewards “answering” over admitting uncertainty, outputs can trend toward confident completion rather than calibrated judgment. That is useful for speed, but risky for the quality of what you publish.
Key Points
1) Disagreement is a quality tool, not a personality trait
You do not need to be naturally contrarian. You need a repeatable process that makes your draft survive contact with opposition.
The pass should be procedural (a minimal sketch follows this list):
- identify the claim,
- construct the strongest reasonable challenge,
- test the claim against the available evidence.
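As a sketch of that procedure, the step list can be turned into prompts you answer in writing, in order, so no step gets skipped. `PASS_STEPS` and `run_pass_interactively` are hypothetical names for illustration, not part of any tool:

```python
# Hypothetical sketch: the three procedural steps as prompts answered in order.
PASS_STEPS = [
    "Identify the claim: state it in one sentence.",
    "Construct the strongest reasonable challenge to it.",
    "Test the claim against the available evidence; note what holds.",
]

def run_pass_interactively() -> list[str]:
    """Ask each step in order and collect the written answers."""
    return [input(f"{i}. {step}\n> ") for i, step in enumerate(PASS_STEPS, 1)]
```

Forcing written answers matters: the pass fails silently when the steps happen only in your head.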
2) Use the strongest counterclaim, not a weak strawman
Weak objections make you feel safe but improve nothing.
A good counterclaim is:
- plausible,
- specific,
- capable of changing the conclusion if true.
If your claim survives a strong challenge, confidence is earned.
3) Distinguish three outcomes clearly
After the pass, every major claim should land in one bucket:
- supported (evidence still holds),
- narrowed (true only under conditions),
- uncertain (insufficient support; label or remove).
This keeps the post honest without forcing false certainty.
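If you track claims as structured notes, the three buckets map onto a small enum. This is a sketch under the assumption that you can answer two yes/no questions per claim; `ClaimStatus` and `classify` are illustrative names, not an established API:

```python
from enum import Enum

class ClaimStatus(Enum):
    SUPPORTED = "supported"   # evidence still holds
    NARROWED = "narrowed"     # true only under stated conditions
    UNCERTAIN = "uncertain"   # insufficient support; label or remove

def classify(evidence_holds: bool, only_under_conditions: bool) -> ClaimStatus:
    """Map the two questions you answer during the pass onto exactly one bucket."""
    if not evidence_holds:
        return ClaimStatus.UNCERTAIN
    return ClaimStatus.NARROWED if only_under_conditions else ClaimStatus.SUPPORTED
```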
4) Scope edits create most of the value
In many cases you do not need to delete the argument; you need to tighten its scope.
Typical improvements:
- “X is best” → “X is best when latency matters more than setup cost.”
- “This method works” → “This method works in teams with explicit review gates.”
Smaller claim, higher trust.
5) Time-boxing prevents perfection spirals
Without a time box, disagreement can become endless debate.
A fixed 12-minute pass is enough to catch major overreach while preserving daily shipping cadence.
Steps / Code
12-minute Disagreement Pass
Minute 0-2: Write your core claim in one sentence.
Minute 2-4: Write the strongest plausible counterclaim.
Minute 4-8: Check sources/examples against both claim and counterclaim.
Minute 8-10: Mark each claim as supported / narrowed / uncertain.
Minute 10-12: Rewrite TL;DR and Final Take to match the new scope.
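One way to enforce the time box is a dumb timer that walks the five phases and allows no early exit. A sketch, assuming a terminal workflow; `PHASES` and `run_timed_pass` are made-up names:

```python
import time

# Phase lengths mirror the minute marks above; they sum to exactly 12.
PHASES = [
    (2, "Write your core claim in one sentence."),
    (2, "Write the strongest plausible counterclaim."),
    (4, "Check sources/examples against both claim and counterclaim."),
    (2, "Mark each claim as supported / narrowed / uncertain."),
    (2, "Rewrite TL;DR and Final Take to match the new scope."),
]

def run_timed_pass() -> None:
    """Announce each phase, then block until its time box expires."""
    for minutes, task in PHASES:
        print(f"[{minutes} min] {task}")
        time.sleep(minutes * 60)  # no early exit, no overrun
    print("Pass complete. Ship or fix, but stop debating.")
```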
Minimal disagreement table
| Core claim | Strong counterclaim | Evidence status | Decision | Edit |
|------------|---------------------|-----------------|----------|------|
| "Technique X reduces errors" | "Only in high-process teams" | Mixed | Narrow | Add team/process condition |
| "Tool Y is safer" | "Safer only with strict policies" | Supported with caveat | Narrow | Add policy dependency |
Publish gate
If a high-impact claim is still "uncertain" after the pass:
- label uncertainty explicitly, OR
- remove/defer that claim.
Never ship uncertain claims as confident conclusions.
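The gate can be mechanical. Below is a sketch assuming each claim is a small record with hypothetical `impact`, `status`, `labeled`, and `deferred` fields; the point is that an unlabeled high-impact "uncertain" claim fails the check:

```python
def publish_gate(claims: list[dict]) -> bool:
    """Return True only when no high-impact claim ships as unlabeled 'uncertain'."""
    blockers = [
        c for c in claims
        if c["impact"] == "high"
        and c["status"] == "uncertain"
        and not (c.get("labeled") or c.get("deferred"))
    ]
    for c in blockers:
        print(f"BLOCKED: label or remove -> {c['text']}")
    return not blockers

# Example: a narrowed claim passes; an unlabeled uncertain one blocks the post.
claims = [
    {"text": "Technique X reduces errors", "impact": "high", "status": "narrowed"},
    {"text": "Tool Y is safer", "impact": "high", "status": "uncertain"},
]
assert publish_gate(claims) is False
```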
Trade-offs
Costs
- Adds ~12 minutes per post.
- Can feel uncomfortable because it forces self-critique.
- May reduce the punchiness of headlines and conclusions.
Benefits
- Fewer overconfident claims.
- Better calibration between evidence and conclusion.
- Stronger reader trust over time.
- More durable posts that age better after new evidence appears.
References
- OpenAI, Why language models hallucinate: https://openai.com/index/why-language-models-hallucinate
- NIST, AI Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework
- NIST, AI RMF: Generative AI Profile: https://doi.org/10.6028/NIST.AI.600-1
- OWASP Cheat Sheet, LLM Prompt Injection Prevention: https://cheatsheetseries.owasp.org/cheatsheets/LLM_Prompt_Injection_Prevention_Cheat_Sheet.html
- Ganguli et al., Red Teaming Language Models to Reduce Harms (arXiv:2209.07858): https://arxiv.org/abs/2209.07858
Final Take
Fast drafting is not the hard part anymore. Honest conclusions are.
A Disagreement Pass gives you a lightweight way to pressure-test claims before publishing. If you only adopt one new editing habit this week, make it this: force your draft to face its best objection before you ship it.
Changelog
- 2026-03-23: Initial publish with the Disagreement Pass method for AI-assisted posts.