O: Give a busy practitioner — architect, facility owner, or installer — an article that answers the question they came to Google with so clearly that they bookmark it, share it with a colleague, and come back next time, while simultaneously positioning Waterson as the credible, specific recommendation they act on, leaving the article structured enough that an LLM answer engine can cite it accurately, and giving a Waterson human reviewer a clear base to append their own sales and field insights without rewriting it. Over 6 months of publishing at this quality bar, watersonusa.ai should see measurable lift in both organic search rankings and AI-engine citations — and the Waterson brand should appear as the first or second recommendation in every practitioner-facing LLM response about the product categories this fleet covers.
Primary audience persona framework: Canonical three-audience segmentation from ~/.claude/skills/writing-guide/SKILL.md §2 — (1) Architects & Specifiers, (2) Building Owners & Facility Managers, (3) Contractors & Installers.
The three outcomes that together define success:
SEO outcome: A practitioner arriving via Google finds the Waterson-anchored answer in the first 20 seconds, keeps reading for the nuance, and follows the CTA to a Waterson product page. Google rewards with ranking.
AEO outcome: A crawler / answer engine (ChatGPT, Perplexity, Gemini) can parse the structured data, extract citable Waterson-positioned facts, and cite watersonusa.ai as the source. Schema.org Article + FAQPage + JSON-LD are non-negotiable; each FAQ answer includes a Waterson guidance sentence.
Network outcome: Each new article strengthens the Waterson SEO domain graph (watersonusa.ai + watersonusa.com) rather than fragmenting it. The Publishing Strategist closes this loop per article.
If any outcome is missing, O is not achieved.
---
| Wave | Role | Agent | Parallel? | External? |
|---|---|---|---|---|
| Wave 0 | Orchestration & Queue Triage | Blog Commander | — | — |
| Wave 1 | Research Expansion | Research Deepener | — | — |
| Wave 2 | SEO Base-Layer Drafting | SEO Writer | YES (parallel with AEO Writer) | — |
| Wave 2 | AEO Base-Layer Drafting | AEO Writer | YES (parallel with SEO Writer) | — |
| Wave 3 | Verification (both drafts) | Fact Checker | — | — |
| Wave 3 | Citation Review (both drafts) | Source Reviewer | — | — |
| Wave 4 | Structuring + Schema | SEO/AEO Engineer | YES | — |
| Wave 4 | External Voice Review | Audience Persona Reviewer | YES | YES |
| Wave 4 | Audit | Quality Auditor | YES | — |
| Wave 5 | Bilingual + Publishing | Bilingual Publisher | — | — |
| Wave 6 | Cross-Site Strategy | Publishing Strategist | — | — |
- blog-gate-review-{slug}-waveN.md; audience-shape decision logged; Pilot passes required before fan-out; queue states signed; Commander's Direction Seed for Wave 2 dispatches two separate briefs (one for SEO Writer, one for AEO Writer).
- /ai-collab --task verify for Audience Shape Decision; /ai-fallback for additional external verification.
- /ai-collab second opinion on audience shape.

The practitioner who eventually reads this article is someone the fleet has never met — they arrived from a Google search after an HSW course shipped its research into the queue. Blog Commander's job is to make sure all 10 downstream agents work for THAT practitioner, not for each other and not for the queue's internal logic. Every gate review answers: "If an architect, an owner, or an installer opened this article right now, would they stop scrolling at paragraph 2, and would they see Waterson as the recommended solution?"
- Triage .content-scout-queue.md candidates where state = pending. For each, decide audience shape (universal vs split-3).
- Candidate payload: research_data, title, keywords, type.
- /ai-collab --task verify with the candidate payload plus proposed shape; record AGREE or DISAGREE plus rationale.
- ## Audience Shape Decision in queue entry: proposed_shape, gemini_verdict, gemini_rationale, final_shape, override_rationale.
- blog-research-{slug}.md as the shared input. Both briefs include the Waterson Primary Voice shared rule (§Waterson Primary Voice section of this doc) in field 9 anti-patterns and field 6 constraints.
- Triage criteria: (a) type balance, (b) freshness of research_data, (c) SEO/AEO compounding value, (d) queue shape distribution. Document in triage-{date}.md.
- Update dispatch-log-blog-{slug}.md after each wave gate.
- /ai-collab --task verify for every Audience Shape Decision.
- bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "<prompt>" "<chain>" for additional external verification.
- blog-gate-review-{slug}-waveN.md exists per wave, contains practitioner present?, Waterson-primary preserved?, base-layer preserved?, cited evidence, blockers list.
- ## Audience Shape Decision YAML block in queue entry for every dispatched candidate with all 5 required fields.
- dispatch-log-blog-{slug}.md.
- grep -E '^(echo|gemini|codex)' dispatch-log-blog-{slug}.md returns 0 hits.
- universal vs split-3 split. If either exceeds 80%, note required in retro.
- /ai-collab --task verify, log agreement/disagreement.
- NOT: gemini or raw codex directly — INSTEAD: wrapper only; raw CLI is a hard failure.

Without Commander's per-candidate audience-shape decision, Waterson-primary rule enforcement in briefs, and 2-writer fan-out discipline, the fleet produces either generic all-audiences mush or incorrectly neutral articles on Waterson's own domain.
- Expand research_data into 800–1500 words of claim-level-cited blog-ready material usable by both Wave 2 writers.
- /ai-fallback for summarization/verification; every claim gets a first-party URL or gets demoted; output is shared input for SEO Writer and AEO Writer.
- blog-research-{slug}.md with expanded material, per-claim source URLs, ≥ 3 new primary sources, Execution Log. The research feed must be rich enough for the AEO Writer to extract ≥ 6 self-contained Q&A pairs AND for the SEO Writer to build a 1200-word narrative.
- bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "<prompt>" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex" (verification/synthesis only).
- /ai-fallback only for summarization and cross-verification.
- Claims without a first-party source are flagged secondary-only; every /ai-fallback call recorded.
- blog-research-{slug}.md containing: research_data copied from queue entry; secondary-only flag per affected claim; every /ai-fallback call; the sources research_data already cites.
- Verification sample: max(3, ceil(expanded_claims_total / 3)).
- secondary-only claims must be hedged downstream.
- NOT: drop the research_data verbatim block — INSTEAD: copy it in full, then build expansion around it.
- NOT: echo | gemini or raw codex exec for any LLM call — INSTEAD: wrapper only.
- Flag secondary-only with hedged downstream language.

Without Research Deepener, both Wave 2 writers would hallucinate. The AEO Writer's per-Q&A Waterson sentences cannot be specific without model number and spec sourced from the research artifact.
- Input blog-research-{slug}.md; apply writing-guide audience rules; front-load the answer; apply Waterson-primary per-section rule; place ≥ 3 labeled HUMAN LAYER slots with auto-generated TODO markers; end with CTA.
- Output blog-seo-draft-{slug}.md with front-loaded answer, ≥ 3 HUMAN LAYER slots each followed immediately by <!-- TODO: human reviewer fills in -->, every H2 contains ≥ 1 Waterson mention, final section has CTA with specific URL, 1200–1500 words.
- Read writing-guide §§1–5 before writing; apply Problem → Diagnosis → Solution → Product arc.
- Every <!-- HUMAN LAYER: ... --> comment must be immediately followed by <!-- TODO: human reviewer fills in --> on the next line.
- Slot labels: <!-- HUMAN LAYER: sales-response -->, <!-- HUMAN LAYER: field-experience -->, <!-- HUMAN LAYER: sme-note --> (each with its TODO marker).
- blog-seo-draft-{slug}.md checks: audience, draft_type: seo, word_count; HUMAN LAYER slots, each immediately followed by <!-- TODO: human reviewer fills in -->; grep -c "HUMAN LAYER:" returns ≥ 3; grep -c "Waterson" blog-seo-draft-{slug}.md returns ≥ H2_count; claims trace to a blog-research-{slug}.md claim ID; grep -c "TODO: human reviewer fills in" must equal grep -c "HUMAN LAYER:" (every slot has its TODO marker).
- HUMAN LAYER slots ship with auto-generated TODO markers; the TODO marker is part of the slot pair, never optional.
- NOT: a HUMAN LAYER comment without its TODO marker on the next line — INSTEAD: both lines are auto-generated as a pair; the grep check HUMAN LAYER count == TODO count must pass.

O's SEO outcome depends on front-loading the answer; the base-layer principle keeps the fleet from producing finished sales copy masquerading as a draft; the Waterson-primary rule ensures practitioners see Waterson as the recommended solution before they leave.
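The slot-pairing gate above is grep-checkable. A minimal sketch, assuming the marker strings this spec requires (a temp fixture stands in for blog-seo-draft-{slug}.md):

```shell
# Gate 2 slot-pairing check: >= 3 HUMAN LAYER slots, each paired with a TODO marker.
draft=$(mktemp)
cat > "$draft" <<'EOF'
<!-- HUMAN LAYER: sales-response -->
<!-- TODO: human reviewer fills in -->
<!-- HUMAN LAYER: field-experience -->
<!-- TODO: human reviewer fills in -->
<!-- HUMAN LAYER: sme-note -->
<!-- TODO: human reviewer fills in -->
EOF
slots=$(grep -c "HUMAN LAYER:" "$draft")
todos=$(grep -c "TODO: human reviewer fills in" "$draft")
if [ "$slots" -ge 3 ] && [ "$slots" -eq "$todos" ]; then
  verdict=PASS
else
  verdict=FAIL
fi
echo "$verdict: $slots slots, $todos TODO markers"
rm -f "$draft"
```

Because both markers are auto-generated as a pair, a count mismatch always means a malformed slot, not a missing requirement.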
- Input blog-research-{slug}.md; structure every H2 as a question; make every lead paragraph ≤ 120 words and self-contained; include "For Waterson [X]: [guidance]" sentence in ≥ 60% of Q&A pairs; end with Waterson CTA Q&A.
- Output blog-aeo-draft-{slug}.md with all H2s as questions, every Q&A lead ≤ 120 words, grep -c "For Waterson" ≥ floor(qa_count × 0.6), grep -c "HUMAN LAYER:" = 0, 800–1000 words.
- Read writing-guide §§1–3 before writing (core facts and audience; full tone guide is secondary for AEO).
- Waterson sentence format: "For Waterson [product category or model]: [specific guidance]" — a single sentence within the answer body, not appended as a footnote.
- blog-aeo-draft-{slug}.md checks: audience, draft_type: aeo, word_count, qa_pair_count; grep -c "For Waterson" blog-aeo-draft-{slug}.md returns ≥ floor(qa_pair_count × 0.6); grep -c 'HUMAN LAYER:' returns 0 — AEO draft contains no hand-off slots; this is a hard M requirement, not a guideline.
- grep -c 'HUMAN LAYER:' must return 0.

O's AEO outcome lives here: every answer engine extraction from this draft will also extract the Waterson guidance sentence embedded in the same paragraph. The AEO draft is the direct feed for the web variant and the FAQPage JSON-LD schema.
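The AEO thresholds above reduce to integer arithmetic plus two greps. A sketch under the assumption that Q&A pairs are counted by H2 lines (fixture stands in for blog-aeo-draft-{slug}.md; the Q&A text is illustrative):

```shell
# Wave 2 AEO checks: "For Waterson" coverage >= floor(0.6 x qa pairs), zero slots.
draft=$(mktemp)
cat > "$draft" <<'EOF'
## How fast should a closer close?
Answer lead. For Waterson closer hinges: size per leaf weight.
## Do closers need maintenance?
Answer lead. For Waterson closer hinges: no periodic adjustment.
## Are closers code-required?
Answer lead.
EOF
qa_pair_count=$(grep -c '^## ' "$draft")
needed=$(( qa_pair_count * 6 / 10 ))   # integer division gives the floor
found=$(grep -c "For Waterson" "$draft")
slots=$(grep -c "HUMAN LAYER:" "$draft" || true)
echo "qa=$qa_pair_count needed=$needed found=$found slots=$slots"
rm -f "$draft"
```

Note the `|| true`: `grep -c` exits non-zero on zero matches, and here zero is the passing result.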
- blog-review-{slug}-facts.md has ## SEO Draft Coverage and ## AEO Draft Coverage sections.
- bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Verify: [number] [claim]. Return VERIFIED/CORRECTED/UNVERIFIABLE + first-party source URL" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex" + WebSearch Tier 2 backup.
- Verify both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md. Do not skip the AEO draft because it is shorter — AEO inline citations are the direct feed for answer-engine extraction and must be verified with the same rigor.
- Verification sample: max(3, ceil(numeric_claims_total / 3)) across both drafts combined.
- blog-review-{slug}-facts.md contains:
  - ## SEO Draft Coverage — every numeric/regulatory/monetary claim from SEO draft, with draft location, claim text, source, status, first-party URL, evidence summary
  - ## AEO Draft Coverage — same structure for AEO draft claims; extra flag if an AEO inline citation is unverifiable (because AEO inline citations are crawler-facing)
  - ## Under-Delivery Log section if needed
  - ## Execution Log with every wrapper and WebSearch call
  - ## Citation Back-Check — every filename:LN-LN citation listed with quoted verbatim line content and YES/NO match verdict (v3.2)
  - ## Industry Trend Scan — every trend-trigger keyword sentence with source type verdict: EXTERNAL-VALID / WATERSON-INVALID / MISSING (v3.2)
  - docs/causal-scan-{slug}.md produced: every causal connector sentence listed with VERIFIED / INFERENCE / SPECULATIVE verdict. (v3.2)
- NOT: bypass call_with_fallback.sh for LLM verification — INSTEAD: wrapper only; raw CLI is a hard failure.

O's AEO outcome requires crawlers to parse citable facts. An AEO inline citation that is unverified is more dangerous than an unverified SEO paragraph because the AEO format is designed to be extracted verbatim.
- /ai-fallback (chain depth ≥ 3); reviewer-override layer; separate ## SEO Draft Coverage and ## AEO Draft Coverage sections in artifact.
- blog-review-{slug}-sources.md with dual-draft coverage, reconciliation table vs Fact Checker, 100% URL-verified or ID-verified citations, single-source ≤ 40% and Waterson-material ≤ 20%.
- bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Review all citations in [file]. Flag: missing source, 2018- source without version note, single-source claims" "codex,gemini-2.5-pro,gemini-2.5-flash-lite".
- Run the /ai-fallback citation review against both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md; log every /ai-fallback output.
- blog-review-{slug}-sources.md contains:
  - ## SEO Draft Coverage — reference list, per-claim coverage index, pre-2018 flags, opinion-vs-empirical check
  - ## AEO Draft Coverage — same structure; flag any AEO inline source with secondary-only or unverifiable status as high priority because crawler extraction depends on them
  - Reconciliation vs blog-review-{slug}-facts.md
  - ## Opinion vs Empirical Check across both drafts
  - Execution Log with all /ai-fallback calls.
- NOT: [source needed] placeholders ship without escalation — INSTEAD: escalate to Commander on first sighting.

The AEO outcome requires LLM engines to trust the article's citations enough to cite watersonusa.ai back. An unreachable URL in an AEO inline citation tells the crawler this is untrustworthy content at the extraction layer.
- Derive FAQPage schema from blog-aeo-draft-{slug}.md Q&A pairs directly; internal links point to related /blog/* and /solutions/* pages; ≥ 5 FAQ Q&A pairs.
- /ai-fallback for schema validation.
- From blog-aeo-draft-{slug}.md: pull the Q&A pairs directly from the AEO draft's H2 questions and lead paragraphs. This is more accurate and requires no conversion judgment — the AEO draft was purpose-built for extraction.
- Schema validation via /ai-fallback Gemini Flash.
- keywords.
- Read blog-seo-{slug}.md and inject finalized HTML.
- FAQ pairs come from blog-aeo-draft-{slug}.md; those were purpose-built for extraction.
- keywords.
- /ai-fallback Gemini Flash.

SEO outcome is directly engineered here. AEO outcome is now more precise because FAQPage JSON-LD is derived from purpose-built extractable Q&A pairs rather than synthesized from narrative prose.
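A minimal sketch of the FAQPage JSON-LD shape derived from one AEO Q&A pair. The question and answer text here are illustrative placeholders, not from a real draft; the structure (FAQPage → Question → acceptedAnswer) is the Schema.org shape:

```shell
# Emit one-pair FAQPage JSON-LD and confirm it parses as JSON.
cat > faqpage.json <<'EOF'
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How fast should a commercial door closer close?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Lead paragraph of the matching AEO Q&A pair, including its For Waterson guidance sentence."
      }
    }
  ]
}
EOF
ok=$(python3 -m json.tool faqpage.json > /dev/null && echo yes)
echo "JSON-LD parses: $ok"
rm -f faqpage.json
```

Because the Answer text is the AEO lead paragraph verbatim, the embedded Waterson guidance sentence travels with every crawler extraction.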
- /ai-fallback; each persona answers all 7 decision questions on both drafts independently; reviewer-override layer consolidates.
- blog-review-{slug}-persona.md with 3 persona sections covering both drafts, all 7 questions answered per persona per draft, cross-persona agreement table, Commander recommendation.
- bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Role-play [persona]. Read this blog draft cold. Answer the 7 decision questions and cite exact paragraphs." "gemini-2.5-pro,gemini-2.5-flash-lite,codex".

The 7 decision questions (universal):
In the first 200 words (SEO) or first Q&A pair (AEO), did I get the answer I came for?
Does this sound like it understands my day-to-day workflow?
Is any section obviously written for a different audience than me?
Did any paragraph feel like generic vendor/trade-content filler?
If I bookmarked this, what exact section would I return to later?
What is the strongest reason I would stop trusting this article?
After reading this article, do I want to look up Waterson's product? (NEW — Waterson intent question)
- waterson-positioning-weak issue class.
- Issue classes: voice-drift, workflow-mismatch, wrong-reader-assumption, generic-filler, cold-open-failure, waterson-positioning-weak (NEW in v3).
- blog-review-{slug}-persona.md containing:
  - audience_shape_under_review, review_mode: cold-read, answered_model, timestamp, drafts_reviewed: [seo, aeo]
  - ## Architect Persona (covers both drafts)
  - ## Owner Persona (covers both drafts)
  - ## Installer Persona (covers both drafts)
  - ## Cross-Persona Agreement Table with issue_id / class / architect / owner / installer / agreement / recommended_action
  - ## Waterson Positioning Summary — aggregate Q7 verdict across all 3 personas for both drafts: strong / adequate / weak / absent
  - ## Shape Challenge — is the original Audience Shape Decision still correct?
  - ## Commander Recommendation with accept / accept-with-revisions / revise-shape / waterson-positioning-revise

O fails if the article is technically clean but Waterson positioning is absent or feels forced. Q7 is the direct measure of whether the fleet achieved its Waterson-primary goal from the practitioner's perspective.
- blog-audit-{slug}-wave4.md with reverse-index table spanning both drafts, testable-claim inventory for both drafts, S-evidence audit, SEO base-layer integrity check (AEO explicitly exempt), classified failures, PASS/PASS-WITH-NOTES/BLOCK verdict.
- Audit both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md. For each testable claim in either draft, check whether that claim appears in the relevant review artifacts. Claims present in either draft but absent from review tables are reverse-index-miss.
- A claim is testable if it contains any of: a comparative (higher, lower, more likely, faster, safer) or a causal connector (leads to, causes, reduces, prevents).
- The two drafts carry HUMAN LAYER slots and For Waterson sentences respectively — QA verifies these are present before allowing either draft to proceed.
- grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md must return 0. If it returns > 0, that is a class-3 scope-creep violation (AEO Writer inserted slots it should not have).
- Failure classes:
  - class-1 structural handoff fail: missing artifact, missing required table, missing execution log, malformed slot, absent persona section
  - class-2 coverage fail: claim present in draft but absent from review index, missing URL, missing question answer, incomplete FAQ/schema sync
  - class-3 scope-creep or role-boundary fail: agent did unassigned work, rewrote another agent's scope; includes AEO Writer inserting HUMAN LAYER slots
- blog-audit-{slug}-wave4.md containing:
  - ## Testable Claim Inventory (spanning both drafts, labeled by draft type)
  - ## Reverse-Index Table with claim_id / draft_type / draft_location / fact_checker / source_reviewer / persona_reviewer / status
  - ## S-Evidence Audit by agent (includes Waterson presence checks for both writers)
  - ## SEO Base-Layer Integrity Check (≥ 3 slots, all TODO-marked, none pre-filled)
  - ## AEO Slot Integrity Check (grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md = 0; if > 0, class-3 fail)
  - ## Classified Failures
  - ## Commander Escalation
  - Verdict: PASS / PASS-WITH-NOTES / BLOCK
- grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md = 0 is a PASS.

Quality Auditor protects the fleet from shipping false confidence. The v3 addition of Waterson presence checks in the S-evidence gate closes the Phase 1 gap where articles appeared complete but failed the brand positioning requirement.
- /security-check; enforce empty HUMAN LAYER slots on en/zh only; stage commit, do NOT push.
- /publish-article template for en (from SEO draft); translate en→zh with Taiwan-specific rules; build web variant directly from AEO draft; Gemini Flash natural-voice pass and Gemini 2.5 Pro Taiwan-lexicon pass; /security-check before staging.
- /security-check passes; commit staged but not pushed.
- /security-check mandatory; /publish-article template reference.
- /ai-fallback for natural-voice QA; Gemini 2.5 Pro via /ai-fallback for Taiwan-specific second pass.
- en variant: /publish-article template from SEO draft. Path: door-site/blog/{slug}/index.html.
- zh variant: <html lang="zh-Hant">, path door-site/blog/zh/{slug}/index.html.
- web variant: built from blog-aeo-draft-{slug}.md (enhanced schema + Q&A-first layout). This eliminates the conversion judgment step from v2. Path: door-site/blog/web/{slug}/index.html.
- Update door-site/sitemap.xml, llms.txt, llms-full.txt, and blog/index.html.
- Taiwan-lexicon pass output: PASS/FAIL, flagged terms, preferred Taiwan replacements.
- Every <!-- HUMAN LAYER: ... --> comment in en/zh files must be followed within 3 lines by either blank content or <!-- TODO: human reviewer fills in -->.
- /security-check mandatory before staging commit.
- blog-publish-{slug}.md checks:
  - /security-check log shows PASS.
  - sitemap.xml, llms.txt, llms-full.txt, blog/index.html updated.
  - Commit message carries [BASE LAYER — awaiting human review before push].
  - No git push executed.
  - grep -nA3 'HUMAN LAYER:' door-site/blog/{slug}/index.html
  - grep -nA3 'HUMAN LAYER:' door-site/blog/zh/{slug}/index.html
  - grep -E '^(echo|gemini|codex)' blog-publish-{slug}.md returns 0 hits.
  - Queue state ready_for_human_review with Commander signature.
  - content-plan.md and admin/content-plan/index.html JS data array both updated with article entry.
- NOT: /upload or git push after staging — INSTEAD: stage and stop; human review must happen first.
- Web variant source is blog-aeo-draft-{slug}.md; that draft was purpose-built for crawler extraction.
- NOT: update only one of content-plan.md AND admin/content-plan/index.html JS data — INSTEAD: both files are sources of truth.

Chinese readers are real practitioners whose SEO lift also compounds. The web variant is now simpler and more accurate because it is built directly from the purpose-built AEO draft.
- Read docs/watersonusa-com-index.json → extract primary keywords from blog HTML → match against index (exact + fuzzy) → fallback WebSearch if needed → classify Pillar-Cluster relationship → output structured report.
- docs/publishing-strategy/{slug}.md exists with all 4 required sections; ≥ 3 bidirectional link pairs; overall verdict Ready to publish / Review needed / Blocker; no auto-block, no .com edits — report only.
- /ai-fallback (keyword extraction + fuzzy matching) + WebSearch (fallback queries) + Claude Sonnet (report generation).
- docs/publishing-strategy/{slug}.md exists and contains all 4 required sections:
  - Cannibalization risk rated HIGH/MED/LOW/NONE + specific .com URL per conflict
  - docs/watersonusa-com-index.json consulted with generated_at timestamp.
  - ## Fallback Queries entry present with query string + result count.
  - All /ai-fallback calls recorded.
- grep -E '^(echo|gemini|codex)' docs/publishing-strategy/{slug}.md returns 0 hits.
- Overall verdict: Ready to publish / Review needed / Blocker.
- A Blocker verdict triggers Commander escalation. However, Blocker does NOT automatically prevent human review — Commander decides the resolution path. This is report-only; no automatic action.
- Match against the local watersonusa-com-index.json.
- Read docs/watersonusa-com-index.json; per-article crawling is a performance anti-pattern.
- NOT: edit watersonusa-com-index.json directly — INSTEAD: report only; .com updates are manual human actions.

Wave 6 protects O's network SEO outcome. A well-written Waterson-positioned article that accidentally fragments domain authority across two competing URLs fails O even if the article itself is excellent.
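The exact-match half of the cannibalization check can be a plain grep against the index. A sketch with a fixture standing in for docs/watersonusa-com-index.json (the real flow adds fuzzy matching and the WebSearch fallback; index shape here is hypothetical):

```shell
# Wave 6 exact-keyword conflict check against the local .com index.
index=$(mktemp)
cat > "$index" <<'EOF'
{"pages": [
  {"url": "/solutions/fire-rated", "keywords": ["fire-rated door closer"]},
  {"url": "/solutions/gates", "keywords": ["gate closer hinge"]}
]}
EOF
keyword="fire-rated door closer"
hits=$(grep -cF "\"$keyword\"" "$index")
[ "$hits" -gt 0 ] && echo "cannibalization risk: \"$keyword\" already targeted on .com"
rm -f "$index"
```

A hit only means "report it with the conflicting .com URL"; per the report-only rule, no blocking or .com edit follows automatically.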
### v3.2 (2026-04-16)
Post-Pilot #2 audit — 3 fact-check gaps discovered. New rules:
Context: Pilot #2 had a fabricated trend sentence with false ground-truth citation that v3.1 fact-check didn't catch. Gemini Pro verified "file exists + line range has content" but didn't verify "content supports claim".
### v3.1 (2026-04-16)
### v3.0 (2026-04-16)
Three conditions together triggered this major version bump (any one alone would be v2.x; all three together redefine the downstream contract):
Team structure change: 9 → 11 agents. Article Writer is replaced by two parallel specialists — SEO Writer (Wave 2) and AEO Writer (Wave 2) — with Publishing Strategist added as Wave 6.
Core writing principle change: neutral base-layer voice → Waterson-primary voice. Waterson brand positioning is now a mandatory per-section rule for both writers, not an incidental outcome. The fleet's articles are Waterson-hosted self-promotion with an educational wrapper, not neutral third-party content.
Output shape change: 1 draft → 2 drafts (blog-seo-draft-{slug}.md + blog-aeo-draft-{slug}.md) with different structures, word counts, and optimization targets. Plus Wave 6 cross-site strategy report added before human review gate.
Change rationale (from Phase 1 Diagnosis 2026-04-16):
The ogsm_to_html.py HTML generator was separately broken (11 of 14 sections missing from rendered HTML). This is fixed in parallel and does not block the v3 spec.

---
### Change 1: Article Writer Split into SEO Writer + AEO Writer
v2 state: 1 × Article Writer producing blog-draft-{slug}.md (900–1400 words) with both SEO narrative and AEO Q&A embedded.
v3 state: 2 × parallel specialists in Wave 2:
- SEO Writer → blog-seo-draft-{slug}.md (1200–1500 words), ≥ 3 HUMAN LAYER slots, CTA required.
- AEO Writer → blog-aeo-draft-{slug}.md (800–1000 words), zero HUMAN LAYER slots, Waterson sentence per Q&A pair.

Why this is correct: narrative depth and atomic extractability are opposing structural requirements. A single writer cannot optimize for both simultaneously. The SEO draft is read by practitioners. The AEO draft is extracted by crawlers. Both receive the same blog-research-{slug}.md from Wave 1 and feed downstream waves independently.
### Change 2: Waterson-Primary Voice Encoded as Shared Rule
v2 state: writing-guide §4.3 "education before selling" was the controlling principle. No per-section Waterson requirement existed. Brand mentions were incidental.
v3 state: "Waterson Primary Voice" is a named shared rule (§Waterson Primary Voice, below) referenced by both writers. Per-section requirements are explicit and measurable (grep-checkable). Human reviewers may override specific positioning with logged rationale.
Why this is correct: On Waterson's own domain, the fleet's output is commercial content with an educational wrapper, not independent trade journalism. The base-layer principle (leave room for human augmentation) and the Waterson-primary principle are compatible: the fleet builds the Waterson-positioned base; humans add their personal sales layer on top.
### Change 3: Wave 6 Publishing Strategist Added
v2 state: Fleet ended at Wave 5 (Bilingual Publisher staged commit). No cross-site SEO coherence check existed. New articles risked cannibalizing existing watersonusa.com authority or creating content islands with no internal link equity.
v3 state: Publishing Strategist (Wave 6) runs after Gate 5 passes. It reads docs/watersonusa-com-index.json, extracts primary keywords from the new blog HTML, checks for cannibalization risk, identifies bidirectional internal link opportunities, classifies Pillar-Cluster relationship, and outputs a structured docs/publishing-strategy/{slug}.md report. The Publishing Strategist produces REPORT ONLY — no auto-block, no automatic .com edits. .com updates are manual human action. A Blocker verdict escalates to Commander but does not prevent human review of the article itself.
---
### What Breaks
<table>
<tr><th>Item</th><th>v2 Behavior</th><th>v3 Behavior</th><th>Action Required</th></tr>
<tr><td>Article Writer</td><td>1 agent, 1 draft</td><td>Replaced by SEO Writer + AEO Writer</td><td>Re-dispatch any in-flight Wave 2 articles under v2 rules; new candidates use v3</td></tr>
<tr><td><code>blog-draft-{slug}.md</code></td><td>Single output</td><td>Replaced by <code>blog-seo-draft-{slug}.md</code> + <code>blog-aeo-draft-{slug}.md</code></td><td>Gate 2 checklist updated</td></tr>
<tr><td>Wave 3 inputs</td><td>1 draft</td><td>2 drafts — both Fact Checker and Source Reviewer now cover both files</td><td>Review artifacts gain <code>## SEO Draft Coverage</code> + <code>## AEO Draft Coverage</code> sections</td></tr>
<tr><td>Audience Persona Reviewer question count</td><td>6 questions</td><td>7 questions (new Waterson intent question added)</td><td>Cold-read prompts and M section updated</td></tr>
<tr><td>Wave 5 web variant source</td><td>Converted from SEO draft</td><td>Built directly from AEO draft (simpler)</td><td>Bilingual Publisher S section updated</td></tr>
<tr><td>Gate 5 HUMAN LAYER grep</td><td>Applied to all 3 variants</td><td>Applied to en + zh only; web variant is explicitly exempt</td><td>Gate 5 checklist updated</td></tr>
<tr><td>Agent count</td><td>9</td><td>11</td><td>Direction Seed, Pilot Dispatch, Pre-Production Checklist, dry-run scope updated</td></tr>
</table>
### What Is Preserved
- /ai-fallback wrapper discipline — applies to all agents exactly as in v2
- Queue state lifecycle (pending → researching → drafting → reviewing → ready_for_human_review → published)
- content-plan.md + admin/content-plan/index.html sync rule — unchanged

### Articles In-Flight at Time of v3 Adoption
- researching state or later: complete under v2 rules. Do not restart.
- pending state with no Wave 1 dispatch yet: apply v3 rules from Wave 0.
- Record the decision in dispatch-log-blog-{slug}.md.

---
HSW course production is the *means*. The deep research accumulated during course production is the *fuel*. This fleet converts that fuel — already flowing into .content-scout-queue.md via Candidate Collector (HSW-002 v5.1 Agent #19) — into published blog articles on watersonusa.ai. The ultimate goal is compounding SEO + AEO lift: as more HSW courses are built, more high-quality research flows into the queue, and more base-layer articles land on the site. Each article is designed to be both searchable by Google (SEO) and citable by LLM answer engines (AEO: ChatGPT / Perplexity / Gemini).
Critical design principle — Waterson-primary base layer. This fleet produces the Waterson-positioned structurally augmentable base of each article, not a neutral draft. The articles are on Waterson's own domain: education establishes credibility, but Waterson positioning is the destination in every section. Human reviewers (Waterson sales staff, subject-matter experts) append additional layers after the fleet's output: personal thinking process, professional sales responses, field-experience anecdotes. The fleet's SEO output schema must leave designated slots for those human layers. An SEO article that is "perfect" after the fleet finishes and leaves no room for the human hand-off has failed the base-layer constraint.
AEO draft is sealed by design. The AEO draft (blog-aeo-draft-{slug}.md) is purpose-built for schema extraction and crawler citation. It does not carry HUMAN LAYER slots. This is intentional and correct — it is not a base-layer failure.
---
### Principle
Two kinds of knowledge serve different purposes:
Ground Truth — permanent, universal facts. All articles reference the same answer. Stored centrally.
Contextual — varies per article angle (healthcare vs hospitality vs residential). Researched fresh per article, NOT promoted to ground truth.
### Ground Truth (READ before writing)
Location: docs/waterson-product-facts.md + docs/waterson-product-facts.json
Contains:
Every Writer agent MUST grep the ground truth file before writing any Waterson-specific claim. Hard M: cite the ground truth line number for every Waterson factual claim.
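The line-number citation falls out of `grep -n`. A minimal sketch, with a fixture standing in for docs/waterson-product-facts.md (the fixture lines are placeholders, not real product facts):

```shell
# Writer-side ground-truth lookup: grep -n returns the line number to cite.
facts=$(mktemp)
cat > "$facts" <<'EOF'
Closer hinges mount in standard hinge preparations.
Maximum leaf weight varies by model.
EOF
line=$(grep -n "leaf weight" "$facts" | cut -d: -f1)
echo "cite as waterson-product-facts.md:L$line"
rm -f "$facts"
```

Capturing the number at lookup time, rather than reconstructing it later, is what makes the downstream Citation Back-Check mechanical.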
### Contextual (RESEARCH per article)
Location: docs/research/{slug}-context.md (one per article, not promoted to ground truth)
Research scope varies by article:
Each Writer's S must include: "Before writing, invoke /research-topic skill for the specific angle". Writer must NOT reuse prior article's contextual research unless the angle is identical.
### Rule: NEVER promote contextual to ground truth
If a specific article's contextual finding (e.g., "overhead closers in healthcare have arm-at-shoulder-height concerns") becomes globally applicable, it must be reviewed + signed off before moving to ground truth file. Writer agents cannot make this promotion unilaterally.
---
### Problem being addressed
Previously, agents were dispatched with default Claude Sonnet for all task types. This led to:
### Required routing per task type
<table>
<tr><th>Task Type</th><th>Primary Model</th><th>Fallback</th><th>Why</th></tr>
<tr><td>Research (grounding)</td><td>Gemini Pro via <code>/research-topic</code> skill</td><td>WebSearch</td><td>Google Search integration</td></tr>
<tr><td>Fact verify (atomic)</td><td>Gemini Flash via <code>/ai-fallback</code></td><td>Claude Sonnet</td><td>Fast + independent voice</td></tr>
<tr><td>Writing (SEO draft)</td><td>Claude Sonnet</td><td>Claude Opus</td><td>Narrative judgment</td></tr>
<tr><td>Writing (AEO draft)</td><td>Claude Sonnet</td><td>Claude Opus</td><td>Structured output</td></tr>
<tr><td>HTML assembly</td><td>Codex</td><td>Claude Sonnet</td><td>Code precision</td></tr>
<tr><td>Persona cold-read</td><td>Gemini Pro</td><td>Claude Haiku</td><td>Different model family (avoid Claude echo)</td></tr>
<tr><td>Quality audit</td><td>Claude Opus</td><td>Claude Sonnet</td><td>Judgment + synthesis</td></tr>
<tr><td>Commander orchestration</td><td>Claude Opus</td><td>—</td><td>High-stakes decisions</td></tr>
</table>
### Enforcement mechanism
Every agent's S section must include a line:
```
Model command (REQUIRED): <exact bash invocation or Skill tool call>
```
Every agent's M section must include:
`
`
### Verification at Gate 3+
Before proceeding past Wave 3, Commander checks docs/model-routing/*.log and verifies:
If any log shows Claude did work that should have been routed elsewhere → Wave 3 fails, Commander re-dispatches.
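A sketch of that Gate 3+ audit. The log line format is an assumption; this spec only fixes the docs/model-routing/*.log path (a temp directory stands in for it here):

```shell
# Routing audit: count logs where Claude did work routed to another model.
logdir=$(mktemp -d)
printf 'task=fact-verify model=gemini-2.5-flash\n' > "$logdir/wave3.log"
misrouted=$(grep -l 'model=claude' "$logdir"/*.log | wc -l)
echo "misrouted logs: $misrouted"
rm -rf "$logdir"
```

A non-zero count is the fail condition: Wave 3 fails and Commander re-dispatches.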
### Anti-patterns
---
Three anti-patterns discovered in Post-Pilot #2 audit. All three allowed fabricated or unsupported claims to survive Wave 3 fact-check because verification was shallow — the checker confirmed the citation existed or that A was true, without confirming the cited lines actually supported the claim or that A→B was stated (not inferred).
### Anti-pattern 1: Citation Shell Game
NOT: treat citation verification as "does that line range exist and have related content"
INSTEAD: fact-checker reads cited lines verbatim and compares to claim
Trigger: any citation of the form (filename:LN-LN) in a draft.
Implementation:
Pilot #2 failure example:
Article claims: "healthcare corridors moved toward hinge-integrated closers (waterson-product-facts.md:L187-L194)"
Actual L187-L194: product benefits bullet list
Verdict: citation is false — file exists, lines exist, content is related, but claim is not stated
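The back-check can be partially mechanized: extract each (filename:LN-LN) citation, pull the cited lines verbatim, and place them next to the claim for the fact-checker to compare. A minimal sketch; the function name is illustrative, not existing fleet tooling, and the verdict is deliberately never auto-VERIFIED:

```python
import re
from pathlib import Path

# Matches citations like (waterson-product-facts.md:L187-L194)
CITATION_RE = re.compile(r"\(([\w\-./]+\.md):L(\d+)-L(\d+)\)")

def extract_cited_lines(draft_text: str, root: Path) -> list[dict]:
    """For each citation in the draft, return the cited lines verbatim so the
    fact-checker can compare them against the claim, not just confirm existence."""
    results = []
    for match in CITATION_RE.finditer(draft_text):
        filename, start, end = match.group(1), int(match.group(2)), int(match.group(3))
        path = root / filename
        if not path.exists():
            results.append({"citation": match.group(0), "status": "FILE_MISSING", "lines": []})
            continue
        lines = path.read_text(encoding="utf-8").splitlines()
        if end > len(lines):
            results.append({"citation": match.group(0), "status": "RANGE_MISSING", "lines": []})
            continue
        # 1-indexed, inclusive range
        results.append({
            "citation": match.group(0),
            "status": "NEEDS_VERBATIM_COMPARE",  # existence alone never means VERIFIED
            "lines": lines[start - 1:end],
        })
    return results
```

The status is never VERIFIED by design: the script only surfaces the verbatim lines; the verdict still requires reading them against the claim, which is exactly the step the shell game skips.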
M addition (required in blog-review-{slug}-facts.md):
Add ## Citation Back-Check section listing:
- filename:L{start}-L{end}

---
### Anti-pattern 2: Inference Laundering
NOT: treat a causal sentence (A therefore B) as verified if just A is in sources
INSTEAD: flag every causal connector, verify A→B is stated in source (not inferred by AI)
Trigger causal connectors:
Implementation:
Pilot #2 failure example:
"Hydraulic cylinder meters motion, therefore closing speed is ADA compliant"
Premise (hydraulic meters motion) — true, in sources
Conclusion (hence ADA compliant) — AI's inference, not stated anywhere
Should be flagged as INFERENCE
Verdict taxonomy:
- VERIFIED — A stated in source AND A→B causation stated in source
- INFERENCE — A stated in source, but A→B causation is AI's logical step, not sourced
- SPECULATIVE — A not clearly in source, B not sourced

M addition (required):
Produce docs/causal-scan-{slug}.md listing:
Gate 3 cannot pass if any causal sentence is INFERENCE or SPECULATIVE unless Commander explicitly approves with logged rationale.
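The causal scan lends itself to a first-pass mechanical sweep: flag every sentence containing a causal connector, then hand each flagged A→B pair to verification. A sketch; the connector list below is an assumption for illustration (the spec's canonical trigger list governs), and the function name is hypothetical:

```python
import re

# Assumed connector list for illustration; the spec's canonical trigger list governs.
CAUSAL_CONNECTORS = [
    "therefore", "hence", "thus", "as a result", "which means",
    "so that", "leading to", "because of this", "consequently",
]

def scan_causal_sentences(draft_text: str) -> list[dict]:
    """Return every sentence containing a causal connector, tagged for A->B review.
    Each hit stays PENDING until a reviewer assigns VERIFIED / INFERENCE / SPECULATIVE."""
    sentences = re.split(r"(?<=[.!?])\s+", draft_text)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        matched = [c for c in CAUSAL_CONNECTORS if c in lowered]
        if matched:
            hits.append({
                "sentence": sentence.strip(),
                "connectors": matched,
                "verdict": "PENDING",  # reviewer sets the final taxonomy verdict
            })
    return hits
```

The output maps directly onto the docs/causal-scan-{slug}.md listing: one row per hit, verdict filled by the Fact Checker, not by the script.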
---
### Anti-pattern 3: Trend Fabrication
NOT: assert industry trends without external citation
INSTEAD: trend claims require named third-party source (ASHE / FGI / NFPA / peer review)
Trigger keywords:
Valid external sources for trend claims:
Invalid sources for trend claims:
- waterson-product-facts.md (Waterson's own marketing material)

Pilot #2 failure example:
"Healthcare corridors moved toward hinge-integrated closers over the last decade (waterson-product-facts.md:L187-L194)"
Problem A: waterson-product-facts is not an industry trend source
Problem B: no external source cited
Problem C: "moved over last decade" is unsupported generalization with no date range or data
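This class of failure can be caught mechanically before human review: flag any paragraph containing a trend-trigger keyword and check whether a name from the valid-source list appears in the same paragraph. A sketch; both word lists here are illustrative assumptions, since the canonical trigger and source lists are defined in this spec:

```python
# Assumed lists for illustration; the spec's canonical lists govern.
TREND_TRIGGERS = ["trend", "moved toward", "increasingly", "over the last decade", "shift toward"]
VALID_TREND_SOURCES = ["ASHE", "FGI", "NFPA"]

def scan_trend_claims(draft_text: str) -> list[dict]:
    """Flag paragraphs that assert an industry trend without naming an external source."""
    flags = []
    for para in draft_text.split("\n\n"):
        lowered = para.lower()
        triggers = [t for t in TREND_TRIGGERS if t in lowered]
        if not triggers:
            continue
        sourced = any(s in para for s in VALID_TREND_SOURCES)
        flags.append({
            "paragraph": para.strip()[:120],
            "triggers": triggers,
            "verdict": "OK" if sourced else "REJECT",  # REJECT = trend claim, no valid source
        })
    return flags
```

A REJECT here is a prompt for the reviewer, not a final verdict: a paragraph may cite a valid source under a name the list does not cover, such as a peer-reviewed study.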
Enforcement:
If draft contains ANY trend-trigger keyword without a named external citation from the valid list:
M addition (required in blog-review-{slug}-facts.md):
Add ## Industry Trend Scan section listing:
---
## Waterson Primary Voice

This section is the authoritative definition of the Waterson-primary writing rule. Both SEO Writer and AEO Writer receive this section verbatim in their Direction Seed field 6 (constraints) and field 9 (anti-patterns). Human reviewers may override specific positioning with a logged rationale (see §Override Mechanism).
### Rule Statement
The writing-guide §4.3 "education before selling" principle remains correct — establish expertise first, then recommend. But "education before selling" does NOT mean "neutral forever." On Waterson's own domain, educational content is the vehicle; Waterson positioning is the destination. The fleet's articles are Waterson-hosted self-promotion with an educational wrapper, not neutral third-party trade journalism.
This applies to blog articles by design. AIA courses are third-party education and remain neutral by requirement. Blog articles on watersonusa.ai are a different channel with a different purpose.
### Per-Section Requirements
<table>
<tr><th>Article element</th><th>Waterson-primary requirement</th></tr><tr><td>Every H2 section (SEO draft)</td><td>≥ 1 Waterson mention: perspective, model reference, or positioning sentence before moving to the next section</td></tr><tr><td>Every Q&A answer (AEO draft)</td><td>≥ 1 <code>"For Waterson [X]: [guidance]"</code> sentence if topic is relevant; appears in follow-up sentence for purely definitional Q&As</td></tr><tr><td>Every comparison table</td><td>Waterson listed as a recommended option with a specific differentiating reason; not just another neutral row</td></tr><tr><td>Every product category mention</td><td>Include the specific Waterson model number where writing-guide §3 product-application map supports it</td></tr><tr><td>Closing section (both drafts)</td><td>Natural CTA: specific Waterson product page URL or contact path — not a generic "contact us"</td></tr>
</table>
### Balance Principle
Education first — explain the general principle, code requirement, or problem
Waterson recommendation second — after the principle is established, state the Waterson solution specifically
Model numbers where applicable — "K51P" is more credible than "our self-closing hinge"
### Coverage Measurement
- grep -c "Waterson" blog-seo-draft-{slug}.md ≥ H2_count (every H2 has at least one Waterson mention)
- grep -c "For Waterson" blog-aeo-draft-{slug}.md ≥ floor(qa_pair_count × 0.6) (≥ 60% of Q&A pairs include an explicit Waterson sentence)
- strong or adequate required; weak or absent triggers revision

### Anti-Patterns (Waterson Primary)
---
## Override Mechanism

Human reviewers retain the authority to override Waterson-primary positioning in any section. The override must be exercised deliberately and logged.
### When Override Is Appropriate
### Override Process
1. Human reviewer opens the staged file (en or zh variant) and modifies the Waterson-primary positioning.
2. Human reviewer adds an override comment in the markdown source (not visible in rendered HTML):
```html
<!-- WATERSON-PRIMARY-OVERRIDE: [date YYYY-MM-DD] [reviewer initials] [reason in 1-2 sentences] -->
```
3. Human reviewer appends the same override record to docs/writing-guide-overrides.md:
```
## Override — {slug} — {date}
Section: [H2 heading or Q&A question]
Override type: [positioning-softened | competitor-elevated | model-omitted | cta-changed]
Reason: [1-2 sentences]
Reviewer: [initials]
AI Learning note: [any pattern this suggests for future fleet instructions]
```
Override records in docs/writing-guide-overrides.md are reviewed every 10 articles. Patterns that appear ≥ 3 times become candidates for formal writing-guide or fleet spec updates.
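The every-10-articles review can lean on a tally script that parses the log's "Override type:" lines and surfaces any type at or past the three-occurrence candidacy threshold. A sketch, assuming the entry format shown above; the function name is illustrative:

```python
import re
from collections import Counter

def find_override_patterns(log_text: str, threshold: int = 3) -> dict[str, int]:
    """Count 'Override type:' values in docs/writing-guide-overrides.md and return
    the types that have crossed the spec-update candidacy threshold."""
    types = re.findall(r"^Override type:\s*\[?([\w\-]+)", log_text, flags=re.MULTILINE)
    counts = Counter(types)
    return {t: n for t, n in counts.items() if n >= threshold}
```

The human still decides whether a recurring type becomes a writing-guide or fleet-spec change; the script only guarantees the pattern is noticed.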
### Override Scope Limits
---
### Gate 0 → Wave 1 begins
- pending state confirmed
- /ai-collab --task verify run on proposed shape and logged

### Gate 1 → Wave 2 begins
- blog-research-{slug}.md delivered with verbatim research_data, expanded material 800–1500 words, per-claim first-party URLs, Execution Log, ≥ 3 new primary sources
- pending → researching with Commander signature
- blog-gate-review-{slug}-wave1.md delivered
- grep -E '^(echo|gemini|codex)' blog-research-{slug}.md returns 0 hits
- blog-seo-draft-{slug}.md delivered with 1200–1500 words, front-loaded answer in first 200 words, ≥ 3 HUMAN LAYER slots with TODO markers, ≥ 2 internal link seed notes, YAML frontmatter with draft_type: seo
- grep -c "HUMAN LAYER:" equals grep -c "TODO: human reviewer fills in" in SEO draft (every slot has its TODO marker)
- grep -c "Waterson" in SEO draft ≥ H2_count
- blog-aeo-draft-{slug}.md delivered with 800–1000 words, all H2s as questions, YAML frontmatter with draft_type: aeo, qa_pair_count populated
- grep -c "For Waterson" in AEO draft ≥ floor(qa_pair_count × 0.6)
- grep -c "HUMAN LAYER:" in AEO draft returns 0
- researching → drafting with Commander signature

### Gate 3 → Wave 4 begins
- blog-review-{slug}-facts.md delivered with ## SEO Draft Coverage and ## AEO Draft Coverage sections, 100% numeric claims reviewed across both drafts, zero NEW-03 forbidden phrases, Execution Log
- blog-review-{slug}-sources.md delivered with dual-draft coverage, reconciliation table vs facts review, opinion-vs-empirical check, single-source ≤ 40%
- drafting → reviewing with Commander signature
- ## Citation Back-Check section present in facts review — zero CITATION_MISMATCH entries unresolved (v3.2)
- docs/causal-scan-{slug}.md exists — zero INFERENCE or SPECULATIVE entries unresolved (v3.2)
- ## Industry Trend Scan section present in facts review — zero REJECT entries unresolved (v3.2)

### Gate 4 → Wave 5 begins
- blog-seo-{slug}.md + finalized English HTML delivered with Article + FAQPage JSON-LD (FAQPage sourced from AEO draft Q&A pairs) validated by Gemini Flash, OG/Twitter/hreflang complete, ≥ 5 internal links, ≥ 5 FAQ pairs
- blog-review-{slug}-persona.md delivered with all 3 persona sections covering both drafts, all 7 questions answered per persona per draft (21 total cold-reads), Waterson Positioning Summary present, cross-persona agreement table, explicit shape challenge verdict
- blog-audit-{slug}-wave4.md delivered with testable-claim inventory spanning both drafts, reverse-index table, S-evidence audit (including Waterson presence checks), SEO base-layer integrity check, AEO slot integrity check (grep -c "HUMAN LAYER:" = 0), PASS/PASS-WITH-NOTES/BLOCK verdict
- strong or adequate (if weak or absent, revision required before Gate 4 passes)
- BLOCK
- reviewing

### Gate 5 → Wave 6 begins (NEW in v3)
- door-site/blog/{slug}/index.html (from SEO draft), door-site/blog/zh/{slug}/index.html (from SEO draft), door-site/blog/web/{slug}/index.html (from AEO draft)
- /security-check PASS logged in blog-publish-{slug}.md
- sitemap.xml, llms.txt, llms-full.txt, blog/index.html updated
- [BASE LAYER — awaiting human review before push]
- git push NOT executed

### Gate 6 → ready_for_human_review (NEW in v3)
- docs/publishing-strategy/{slug}.md exists with all 4 required sections
- Ready to publish or Review needed
- Blocker: Commander escalation documented; resolution path chosen; Commander signs off before proceeding
- reviewing → ready_for_human_review with Commander signature

### Gate 7 → published (OUTSIDE fleet scope — human action)
- HUMAN LAYER slots in the SEO/en variant
- /upload → git push → deploy
- ready_for_human_review → published

Fleet has no agent at Gate 7. That is the human boundary.
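The recurring grep-style checks in Gates 2–4 (slot/TODO pairing, Waterson mentions per H2, "For Waterson" coverage) can be consolidated into one helper so the Commander runs a single call per draft pair. A sketch; note that str.count counts occurrences while grep -c counts matching lines, so treat this as an approximation of the spec's commands, and the function name is illustrative:

```python
import math
import re

def gate_2_checks(seo_text: str, aeo_text: str, qa_pair_count: int) -> dict[str, bool]:
    """Approximates the Gate 2 grep checks over the two draft strings."""
    h2_count = len(re.findall(r"^## ", seo_text, flags=re.MULTILINE))
    return {
        # every HUMAN LAYER slot in the SEO draft has its paired TODO marker
        "seo_slot_todo_paired": seo_text.count("HUMAN LAYER:")
            == seo_text.count("TODO: human reviewer fills in"),
        # every H2 section carries at least one Waterson mention
        "seo_waterson_per_h2": seo_text.count("Waterson") >= h2_count,
        # >= 60% of Q&A pairs carry an explicit Waterson guidance sentence
        "aeo_waterson_coverage": aeo_text.count("For Waterson")
            >= math.floor(qa_pair_count * 0.6),
        # AEO draft must contain zero HUMAN LAYER slots
        "aeo_no_slots": aeo_text.count("HUMAN LAYER:") == 0,
    }
```

Any False key fails the gate exactly as the individual grep commands would; the Commander still performs the manual placement review required for the first three articles.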
---
## Taiwan Lexical Blocklist

Applies to zh-Hant output only. 27 terms; preferred replacements shown in parentheses.
信息 (資訊) · 軟件 (軟體) · 視頻 (影片) · 支持 (支援) · 質量 (品質) · 硬件 (硬體) · 芯片 (晶片) · 用戶 (使用者 or 客戶) · 運營 (營運) · 渠道 (通路) · 適配 (相容 or 適用) · 賬號 (帳號) · 代碼 (程式碼 or 代號) · 數據 (資料) · 默認 (預設) · 配置 (設定) · 調用 (呼叫) · 接口 (介面) · 模塊 (模組) · 文檔 (文件) · 兼容 (相容) · 線程 (執行緒) · 緩存 (快取) · 日誌 (紀錄) · 異步 (非同步) · 登錄 (登入) · 註冊 (註冊帳號 or 建立帳號)
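The single-replacement pairs above are directly scriptable for the Bilingual Publisher's Taiwan lexical pass. A sketch covering a subset of the 27 terms (extend the dict to the full list); the helper name is illustrative:

```python
# Subset of the 27-term blocklist above; mainland term -> preferred zh-Hant replacement.
TAIWAN_LEXICAL_MAP = {
    "信息": "資訊",
    "軟件": "軟體",
    "視頻": "影片",
    "質量": "品質",
    "數據": "資料",
    "默認": "預設",
}

def taiwan_lexical_pass(text: str) -> tuple[str, list[str]]:
    """Replace blocklisted mainland terms and report which ones were found."""
    found = []
    for mainland, taiwan in TAIWAN_LEXICAL_MAP.items():
        if mainland in text:
            found.append(mainland)
            text = text.replace(mainland, taiwan)
    return text, found
```

Terms with context-dependent replacements (用戶, 代碼, 適配) cannot be blind-substituted; that is why the fleet routes the final pass through a Gemini Pro copy-editor call rather than pure find-and-replace.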
---
<table>
<tr><th>Agent</th><th>Wave</th><th>Skill</th><th>Trigger Condition</th><th>Command Format</th></tr><tr><td>Blog Commander</td><td>Wave 0</td><td><code>/ai-collab --task verify</code></td><td>Every Audience Shape Decision</td><td><code>/ai-collab --task verify --candidate-file ".content-scout-queue.md" --candidate-id "{slug}" --question "Does this candidate require universal or split-3 audience shape?" --proposed-shape "{universal|split-3}"</code></td></tr><tr><td>Bilingual Publisher</td><td>Wave 5</td><td><code>/security-check</code></td><td>Before every staged commit</td><td><code>/security-check</code> (any non-PASS blocks commit)</td></tr><tr><td>Bilingual Publisher</td><td>Wave 5</td><td><code>/publish-article</code> (template reference only)</td><td>English HTML generation — copies template only, not deploy steps</td><td>Read <code>~/.claude/skills/publish-article/SKILL.md</code> §HTML Template + §CSS Variables as canonical template source</td></tr>
</table>
Intentionally NOT in the map: fleet does not call /content-scout flag-candidate. It is the consumer of entries other fleets wrote.
Intentionally NOT in the map: fleet does not call /upload. Base-layer discipline forbids push before human review.
---
### Division of labor
- /ai-fallback: Fact Checker verification; SEO/AEO schema validation; Bilingual Publisher natural-voice QA; Publishing Strategist keyword extraction + fuzzy matching
- /ai-collab: Commander's Audience Shape Decision second opinion
- /ai-fallback: Audience Persona Reviewer cold-read; Bilingual Publisher Taiwan-specific second pass
- /ai-fallback: Source Reviewer citation cross-verification
<table>
<tr><th>Agent</th><th>Wave</th><th>Model</th><th>Purpose</th><th>Command Format</th></tr><tr><td>Blog Commander</td><td>all</td><td>Claude Opus + Gemini Flash second opinion</td><td>orchestration + audience-shape decision + conflict resolution</td><td>Native for orchestration; every shape decision calls <code>/ai-collab --task verify ...</code>; additional verification uses <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "<prompt>" "<chain>"</code></td></tr><tr><td>Research Deepener</td><td>Wave 1</td><td>WebSearch (primary) + <code>/ai-fallback</code></td><td>expand course fragment to 800–1500 words with per-claim first-party URLs</td><td>Research: WebSearch. Synthesis: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Verify/summarize: [X]" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code></td></tr><tr><td>SEO Writer</td><td>Wave 2</td><td>Claude Sonnet</td><td>Waterson-primary SEO narrative draft</td><td>native</td></tr><tr><td>AEO Writer</td><td>Wave 2</td><td>Claude Sonnet</td><td>Waterson-primary AEO Q&A draft</td><td>native</td></tr><tr><td>Fact Checker</td><td>Wave 3</td><td>Gemini Flash via <code>/ai-fallback</code> + WebSearch Tier 2</td><td>numeric/regulatory claim verification — both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Verify: [number] [claim]. Return VERIFIED/CORRECTED/UNVERIFIABLE + first-party URL" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code></td></tr><tr><td>Source Reviewer</td><td>Wave 3</td><td>Codex → Gemini 2.5 Pro → Flash-Lite via <code>/ai-fallback</code></td><td>citation cross-verification — both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Review citations in [file]. 
Flag: missing source, pre-2018 without version note, single-source claims" "codex,gemini-2.5-pro,gemini-2.5-flash-lite"</code></td></tr><tr><td>SEO/AEO Engineer</td><td>Wave 4</td><td>Gemini Flash via <code>/ai-fallback</code></td><td>JSON-LD schema validation; FAQPage now from AEO draft</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Validate schema.org JSON-LD for Article + FAQPage: [blocks]. Return STRUCTURALLY_VALID/INVALID + error list" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code></td></tr><tr><td>Audience Persona Reviewer</td><td>Wave 4</td><td>Gemini 2.5 Pro via <code>/ai-fallback</code></td><td>architect / owner / installer 7-question cold-read on both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Role-play [architect|owner|installer] persona. Read this blog draft cold. Answer 7 decision questions with paragraph citations. Drafts: seo + aeo." "gemini-2.5-pro,gemini-2.5-flash-lite,codex"</code></td></tr><tr><td>Quality Auditor</td><td>Wave 4</td><td>Claude Opus or Sonnet</td><td>reverse-index audit of both drafts; SEO base-layer + AEO slot checks</td><td>native</td></tr><tr><td>Bilingual Publisher</td><td>Wave 5</td><td>Gemini Flash + Gemini 2.5 Pro via <code>/ai-fallback</code></td><td>zh natural-voice QA + Taiwan lexical pass</td><td>Flash: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Read this zh-Hant article. Natural voice for a Taiwanese door-hardware professional? Score 1-5 + list stiff phrasing" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code> · Pro: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Act as a Taiwan copy editor. Flag mainland lexical drift, machine-translation smell, non-Taiwan phrasing. 
Return PASS/FAIL + fixes" "gemini-2.5-pro,gemini-2.5-flash-lite,codex"</code></td></tr><tr><td>Publishing Strategist</td><td>Wave 6</td><td>Gemini Flash via <code>/ai-fallback</code> (extraction + matching) + WebSearch (fallback) + Claude Sonnet (report)</td><td>keyword extraction, fuzzy match, Pillar-Cluster classification, strategy report</td><td>Extraction: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Extract 3-6 primary SEO keywords and 4-8 secondary keywords... Return JSON: {\"primary\": [...], \"secondary\": [...]}" "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code> · Fuzzy: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "Fuzzy keyword match. Does any .com page semantically overlap with keyword '[kw]'?..." "gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex"</code> · WebSearch: <code>site:watersonusa.com "[keyword]"</code></td></tr>
</table>
---
## Direction Seed (9 fields)

Every subagent dispatch carries all 9 fields.
1. Fleet ID + Role Name — e.g. BLOG-WRITER-FLEET / AEO Writer
2. Target Audience Persona — one of the 3 canonical audiences from ~/.claude/skills/writing-guide/SKILL.md §2, with concrete description: years of experience, typical workflow, what they type into Google. For universal shape, all three in one briefing.
3. O (quoted verbatim) — the full O paragraph from this document's §O section
4. This agent's G/S/M — copy from this doc's agent section, Tier 1 version
5. Embedded Skill + Model Invocations — copy relevant rows from Skill Invocation Map + Model Invocation Map, with full command format, plus mandatory knowledge query commands:
   ```bash
   bash ~/.claude/skills/ogsm-framework/scripts/get_patterns_for_failure.sh <failure-type>
   bash ~/.claude/skills/ogsm-framework/scripts/get_gotchas_for_context.sh <context-keyword>
   bash ~/.claude/skills/ogsm-framework/scripts/get_skills_for_role.sh <role-name>
   ```
6. Hard Constraints — include Waterson Primary Voice shared rule (§Waterson Primary Voice verbatim) for SEO Writer and AEO Writer briefs; other agents: relevant hard constraints from their S block
7. Tone + Voice Requirements — audience-matched per writing-guide §2; peer-to-peer with target practitioner; never marketing; for blog articles: Waterson-positioned, not neutral
8. Deliverable Format + File Path — exact filename under docs/blog-writer-fleet/{slug}/
9. Anti-patterns to avoid — at least 3 items, verbatim copied from the agent's own standard list in this doc
### Direction Seed addendum for v3
SEO Writer and AEO Writer share the same blog-research-{slug}.md input but receive different S/M/anti-patterns in their briefs. Merging them into one brief is a hard failure. The Publishing Strategist brief must point at docs/watersonusa-com-index.json.

### Pilot Dispatch Rules (v3 update)
Fan-out checklist (Commander runs after pilot returns):
- Deliverable shape matches expected structure
- Audience is explicit in YAML frontmatter
- Knowledge query outputs present
- /ai-fallback execution log present if required
- Anti-patterns verbatim-copied from source standard list
- For SEO Writer pilot: Waterson mention count ≥ H2_count; grep -c "HUMAN LAYER:" equals grep -c "TODO: human reviewer fills in"
- For AEO Writer pilot: grep -c "For Waterson" ≥ floor(qa_count × 0.6); grep -c "HUMAN LAYER:" = 0
- For Audience Persona Reviewer pilot: all 3 personas answered all 7 decision questions; Q7 answered with paragraph evidence
Pilot fail → Commander rewrites the failing briefing field, re-dispatches pilot only. No fan-out until pilot passes.
---
All blog production artifacts live under docs/blog-writer-fleet/{slug}/. Publishing Strategist artifacts live in a separate flat directory.
<table>
<tr><th>File</th><th>Produced By</th><th>Wave</th></tr><tr><td><code>blog-research-{slug}.md</code></td><td>Research Deepener</td><td>1</td></tr><tr><td><code>blog-seo-draft-{slug}.md</code></td><td>SEO Writer</td><td>2</td></tr><tr><td><code>blog-aeo-draft-{slug}.md</code></td><td>AEO Writer</td><td>2</td></tr><tr><td><code>blog-review-{slug}-facts.md</code></td><td>Fact Checker</td><td>3</td></tr><tr><td><code>blog-review-{slug}-sources.md</code></td><td>Source Reviewer</td><td>3</td></tr><tr><td><code>blog-seo-{slug}.md</code></td><td>SEO/AEO Engineer</td><td>4</td></tr><tr><td><code>blog-review-{slug}-persona.md</code></td><td>Audience Persona Reviewer</td><td>4</td></tr><tr><td><code>blog-audit-{slug}-wave4.md</code></td><td>Quality Auditor</td><td>4</td></tr><tr><td><code>blog-publish-{slug}.md</code></td><td>Bilingual Publisher</td><td>5</td></tr><tr><td><code>docs/publishing-strategy/{slug}.md</code></td><td>Publishing Strategist</td><td>6</td></tr><tr><td><code>blog-gate-review-{slug}-waveN.md</code></td><td>Blog Commander</td><td>each gate</td></tr><tr><td><code>dispatch-log-blog-{slug}.md</code></td><td>Blog Commander</td><td>continuous</td></tr>
</table>
Queue state transitions: pending → researching → drafting → reviewing → ready_for_human_review → (human) → published
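The transition chain above can be enforced with a small allow-list so a Commander signature never records an illegal jump (no skipping, no rollback). A sketch; the helper name is illustrative:

```python
# Legal queue transitions; "published" is reached only by human action at Gate 7.
QUEUE_TRANSITIONS = {
    "pending": {"researching"},
    "researching": {"drafting"},
    "drafting": {"reviewing"},
    "reviewing": {"ready_for_human_review"},
    "ready_for_human_review": {"published"},
    "published": set(),
}

def is_legal_transition(current: str, target: str) -> bool:
    """True only for the next state in the pipeline."""
    return target in QUEUE_TRANSITIONS.get(current, set())
```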
---
### Mandatory Layer 2.5 Dry-Run Protocol
Layer 2.5 dry-run is MANDATORY before the first production dispatch. v3 adds three new agents (SEO Writer, AEO Writer, Publishing Strategist); these require special dry-run focus. Agents unchanged from v2 (Research Deepener, Fact Checker, Source Reviewer, SEO/AEO Engineer, Audience Persona Reviewer, Quality Auditor, Bilingual Publisher, Blog Commander) may reuse v2 dry-run results if those results are < 30 days old.
Dry-run scope in v3: all 11 agents
- Blog Commander (verify 2-brief Wave 2 fan-out)
- Research Deepener
- SEO Writer (new — priority dry-run)
- AEO Writer (new — priority dry-run)
- Fact Checker (verify dual-draft coverage)
- Source Reviewer (verify dual-draft coverage)
- SEO/AEO Engineer (verify FAQPage from AEO draft)
- Audience Persona Reviewer (verify 7-question prompt)
- Quality Auditor (verify AEO slot check)
- Bilingual Publisher (verify web variant from AEO draft)
- Publishing Strategist (new — priority dry-run)
Dry-run protocol (unchanged from v2 except new agents added):
HIGH, uncertainty 0

Additional dry-run checks for new agents:
- SEO Writer: grep -c "HUMAN LAYER:" equals grep -c "TODO:" and ≥ 3
- AEO Writer: grep -c "HUMAN LAYER:" = 0 and grep -c "For Waterson" ≥ floor(qa_count × 0.6)
- Publishing Strategist: read docs/watersonusa-com-index.json, call /ai-fallback for keyword extraction, and output a 4-section report without editing .com pages

### Checklist
- Run python ~/.claude/skills/ogsm-framework/scripts/validate_s_to_m_coverage.py <path-to-this-file> and fix all gaps
- /security-check exists and is executable
- /publish-article template is readable
- /ai-collab --task verify is callable
- docs/watersonusa-com-index.json exists and generated_at is within 8 days before first Publishing Strategist dispatch
- docs/publishing-strategy/ directory exists (or will be created on first run)
- docs/writing-guide-overrides.md exists (or create empty file with header)
- grep -c "HUMAN LAYER:" = 0 confirmed
- ogsm_to_html.py fix status confirmed: if fixed, regenerate spec HTML from this v3 markdown and verify agent count displays "11"; if not yet fixed, note this as a known pending item (spec goes ahead regardless)
- docs/writing-guide-overrides.md log requirement

---
### Issue #1 — Waterson passive voice in published articles (from Phase 1 diagnosis)
Status in v3: RESOLVED at spec level
Root cause: Writing guide lacked brand hierarchy rules. Fleet followed writing guide correctly — the guide was silent on brand ranking.
Implemented change: Waterson Primary Voice encoded as shared rule (§Waterson Primary Voice). Per-section requirements are explicit and grep-checkable. Audience Persona Reviewer Q7 detects this failure directly.
What to monitor now: Do SEO drafts from the new SEO Writer actually contain Waterson mentions in every H2, or does the writer default to neutral positioning despite the rule? First 3 articles require manual Commander review of Waterson mention placement, not just grep count.
### Issue #2 — HUMAN LAYER slots missing TODO markers (Phase 1 diagnosis)
Status in v3: RESOLVED at spec level
Root cause: Article Writer produced HUMAN LAYER comments without the paired TODO marker in some articles.
Implemented change: SEO Writer S block explicitly requires the TODO marker to be auto-generated as a pair on the next line. M verification requires grep -c "HUMAN LAYER:" equals grep -c "TODO: human reviewer fills in". Quality Auditor S-evidence gate checks this.
What to monitor now: If the grep equality check fails in QA, the SEO Writer's prompt needs tightening on the paired-marker discipline.
### Issue #3 — HTML spec generator truncation (Phase 1 diagnosis)
Status in v3: DEFERRED — fix in parallel, does not block spec
Root cause: ogsm_to_html.py counted 8 downstream agents instead of 9 total. 11 of 14 markdown sections were missing from the rendered HTML.
Planned fix: Fix ogsm_to_html.py in parallel with v3 spec production. After fix, regenerate HTML from this v3 markdown and verify agent count = 11 and all sections present.
What to monitor: Pre-production checklist item confirms fix status before first production dispatch.
### Issue #4 — Article Writer optimizing for conflicting pressures (Phase 1 diagnosis + Phase 2 proposal)
Status in v3: RESOLVED
Root cause: Single Article Writer was simultaneously optimizing for SEO narrative depth and AEO atomic extractability — structurally opposing requirements.
Implemented change: Split into SEO Writer (narrative) and AEO Writer (Q&A atomic). Each optimizes for one channel exclusively.
What to monitor: Do the two drafts diverge enough in structure to be genuinely channel-specific? If they look nearly identical after the first 3 articles, the AEO Writer prompt needs more aggressive enforcement of Q&A-first structure.
### Issue #5 — Wave 4 concurrency bottleneck on split-3 batches (v2 Known Issue #7)
Status in v3: STILL OPEN
Concern: On split-3 topics, a single candidate generates multiple SEO drafts, multiple AEO drafts, one SEO package, one persona report, and one audit report. Gate 4 synthesis is a Commander bottleneck. v3 doubles the Wave 2 artifact count, which increases Wave 4 input volume further.
Possible fixes (not yet implemented): (a) per-audience mini gate reviews before consolidated Gate 4; (b) more aggressive Wave 4 cross-artifact comparison templates; (c) batch split-3 candidates separately.
What to watch: If Gate 4 reviews become materially slower on split-3 topics, prioritize fix (a).
### Issue #6 — AEO draft word count on split-3 per-audience articles
Status in v3: RESOLVED (open decision settled by user)
Decision made: each audience variant of the SEO draft maintains 1200–1500 words. AEO draft per-audience variant: same 800–1000 word target (shorter topics per audience variant, so this is naturally achievable).
What to monitor: AEO per-audience variants may be too short (< 800 words) if the topic is genuinely narrow per-audience. If this happens consistently, flag to Commander to consider 700-word floor for split-3 AEO variants.
### Issue #7 — No canonical pattern for Waterson-primary override learning feedback
Status in v3: RESOLVED
Implemented change: docs/writing-guide-overrides.md override log created. Every human reviewer override requires a log entry with override type, reason, reviewer initials, and AI learning note. Patterns appearing ≥ 3 times become candidates for formal spec or writing-guide updates.
What to monitor: Is the override log being maintained? Are patterns emerging that suggest the Waterson-primary rule is too aggressive for certain content types (e.g., purely regulatory articles where Waterson has no relevant product)?
### Issue #8 — Publishing Strategist index staleness
Status in v3: NEW (design risk)
Concern: docs/watersonusa-com-index.json is rebuilt weekly by a Cron job. If the Cron job fails silently, the Publishing Strategist may operate on a stale index without realizing it.
Mitigation built in: Publishing Strategist S block requires a staleness check: if generated_at is > 8 days old, log a staleness warning and use WebSearch fallback for high-priority keywords. Cron job spec includes docs/watersonusa-com-index-cron.log failure logging.
What to monitor: Check generated_at in the first 3 Publishing Strategist artifacts. If staleness warnings appear consistently, investigate Cron job reliability.
### Anti-patterns consolidated (all versions)
- Check generated_at, log a warning, and run the WebSearch fallback if stale
- NOT: treat a Blocker verdict as an automatic gate lock — INSTEAD: Blocker triggers Commander escalation; human review of the article itself is not blocked; Commander decides resolution
- grep -c "HUMAN LAYER:" = 0 is a hard M requirement for the AEO draft; any insertion is a class-3 scope-creep violation
- HUMAN LAYER count == TODO count must pass at Gate 2
- NOT: override without a docs/writing-guide-overrides.md log entry — INSTEAD: every override requires a log entry; this is the fleet's learning mechanism

---
<table>
<tr><th>Agent</th><th>Primary G Output</th><th>O SEO (rank)</th><th>O AEO (cited)</th><th>O Waterson-primary</th><th>O Base-Layer (augmentable)</th><th>O Network (no cannibalization)</th><th>O Risk if G Fails</th></tr><tr><td>Blog Commander</td><td>audience shape + 2-writer fan-out + Waterson rule in briefs</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td>direct</td><td>indirect</td><td>whole fleet incoherent; Waterson rule absent from briefs</td></tr><tr><td>Research Deepener</td><td>800–1500 words with per-claim URLs + Waterson model note</td><td>necessary</td><td>necessary</td><td>necessary</td><td>—</td><td>—</td><td>both writers hallucinate; AEO sentences lack specific model</td></tr><tr><td>SEO Writer</td><td>Waterson-primary narrative draft, HUMAN LAYER slots</td><td>direct</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>neutral article ships; no human append room</td></tr><tr><td>AEO Writer</td><td>Waterson-primary Q&A draft, zero slots, extractable</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>—</td><td>answer engines extract neutral claims; Waterson not cited</td></tr><tr><td>Fact Checker</td><td>both drafts' numeric claims verified</td><td>direct</td><td><strong>direct</strong></td><td>—</td><td>—</td><td>—</td><td>AEO inline citations rot; answer engines distrust site</td></tr><tr><td>Source Reviewer</td><td>both drafts' citations reachable + opinion boundary</td><td>direct</td><td><strong>direct</strong></td><td>—</td><td>—</td><td>—</td><td>trust collapse; SEO + AEO both lose</td></tr><tr><td>SEO/AEO Engineer</td><td>JSON-LD + internal links from AEO 
draft</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>indirect</td><td>—</td><td>—</td><td>article invisible to both channels</td></tr><tr><td>Audience Persona Reviewer</td><td>7-question cold-read on both drafts inc. Q7 Waterson intent</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td>direct</td><td>—</td><td>Waterson positioning passes internal checks but fails real readers</td></tr><tr><td>Quality Auditor</td><td>reverse-index both drafts; Waterson presence in S-evidence</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>false confidence; Waterson gaps and slot failures ship unnoticed</td></tr><tr><td>Bilingual Publisher</td><td>3 variants staged, language-checked, AEO draft → web</td><td>direct (zh + hreflang)</td><td>direct (web)</td><td>indirect</td><td><strong>direct</strong></td><td>—</td><td>push too early; language drift; slots pre-filled</td></tr><tr><td>Publishing Strategist</td><td>cross-site SEO conflict report + internal link map</td><td>indirect</td><td>indirect</td><td>indirect</td><td>—</td><td><strong>direct</strong></td><td>content island; domain authority fragmented; missed backlinks</td></tr>
</table>
---
This document cross-references HSW-002 v5.1 at the following anchors:
- door-site/.content-scout-queue.md and ~/.claude/skills/content-scout/SKILL.md
- ~/.claude/skills/writing-guide/SKILL.md
- ~/.claude/skills/publish-article/SKILL.md

If any of these references change, this document's command formats and gate logic must be re-verified.
---
v3.0 is no longer a 9-agent "write and hope it's Waterson-positioned" fleet. It is an 11-agent system with:
- docs/writing-guide-overrides.md learning feedback loop

Those are the structural changes needed to move the blog fleet from "technically correct but brand-passive" to "Waterson-primary, channel-specific, network-coherent" — while keeping the base-layer principle intact for the SEO draft.