
WTR-Blog-Writer-Fleet OGSM v3 — Waterson USA Blog Production Team

3.2 (incremental: Citation Accuracy Verification; Causal Inference Audit; Industry Trend Claims Require Named External Source) | 11 agents | auto-generated by ogsm_to_html.py

O — Objective

O: Give a busy practitioner — architect, facility owner, or installer — an article that answers the question they came to Google with so clearly that they bookmark it, share it with a colleague, and come back next time. At the same time, the article positions Waterson as the credible, specific recommendation they act on, is structured enough that an LLM answer engine can cite it accurately, and gives a Waterson human reviewer a clear base on which to append their own sales and field insights without rewriting it. Over 6 months of publishing at this quality bar, watersonusa.ai should see measurable lift in both organic search rankings and AI-engine citations, and the Waterson brand should appear as the first or second recommendation in every practitioner-facing LLM response about the product categories this fleet covers.

Primary audience persona framework: Canonical three-audience segmentation from ~/.claude/skills/writing-guide/SKILL.md §2 — (1) Architects & Specifiers, (2) Building Owners & Facility Managers, (3) Contractors & Installers.

The three outcomes that together define success:

SEO outcome: A practitioner arriving via Google finds the Waterson-anchored answer in the first 20 seconds, keeps reading for the nuance, and follows the CTA to a Waterson product page. Google rewards the page with ranking.

AEO outcome: A crawler / answer engine (ChatGPT, Perplexity, Gemini) can parse the structured data, extract citable Waterson-positioned facts, and cite watersonusa.ai as the source. Schema.org Article + FAQPage + JSON-LD are non-negotiable; each FAQ answer includes a Waterson guidance sentence.

Network outcome: Each new article strengthens the Waterson SEO domain graph (watersonusa.ai + watersonusa.com) rather than fragmenting it. The Publishing Strategist closes this loop per article.

If any outcome is missing, O is not achieved.

---

Team Structure

| Wave | Role | Agent | Parallel? | External? |
|------|------|-------|-----------|-----------|
| Wave 0 | Orchestration & Queue Triage | Blog Commander | | |
| Wave 1 | Research Expansion | Research Deepener | | |
| Wave 2 | SEO Base-Layer Drafting | SEO Writer | YES (parallel with AEO Writer) | |
| Wave 2 | AEO Base-Layer Drafting | AEO Writer | YES (parallel with SEO Writer) | |
| Wave 3 | Verification (both drafts) | Fact Checker | | |
| Wave 3 | Citation Review (both drafts) | Source Reviewer | | |
| Wave 4 | Structuring + Schema | SEO/AEO Engineer | YES | |
| Wave 4 | External Voice Review | Audience Persona Reviewer | YES | YES |
| Wave 4 | Audit | Quality Auditor | YES | |
| Wave 5 | Bilingual + Publishing | Bilingual Publisher | | |
| Wave 6 | Cross-Site Strategy | Publishing Strategist | | |
**11 roles total.** Blog Commander orchestrates 10 downstream agents. Wave 2 is now a 2-agent parallel fan-out. Wave 6 is a new post-publish strategy layer before the human review gate.

---

Individual OGSM Definitions

Blog Commander (orchestrator)

Full G / S / M / Anti-patterns

G (Goal)

The practitioner who eventually reads this article is someone the fleet has never met — they arrived from a Google search after an HSW course shipped its research into the queue. Blog Commander's job is to make sure all 10 downstream agents work for THAT practitioner, not for each other and not for the queue's internal logic. Every gate review answers: "If an architect, an owner, or an installer opened this article right now, would they stop scrolling at paragraph 2, and would they see Waterson as the recommended solution?"

S (Strategy)

  • Queue triage: Read .content-scout-queue.md candidates where state = pending. For each, decide audience shape (universal vs split-3).
  • Audience Shape Decision with external verification is mandatory:
  • Step 1: Commander proposes shape based on candidate's research_data, title, keywords, type.
  • Step 2: Call /ai-collab --task verify with the candidate payload plus proposed shape.
  • Step 3: Gemini Flash returns AGREE or DISAGREE plus rationale.
  • Step 4: Append YAML block under ## Audience Shape Decision in queue entry: proposed_shape, gemini_verdict, gemini_rationale, final_shape, override_rationale.
  • Step 5: If Commander overrides Gemini disagreement, written rationale of ≥ 2 sentences required. Silent override is prohibited.
  • Wave 2 fan-out is now 2 separate briefs: Commander produces one Direction Seed for SEO Writer and one for AEO Writer. Both briefs reference blog-research-{slug}.md as the shared input. Both briefs include the Waterson Primary Voice shared rule (§Waterson Primary Voice section of this doc) in field 9 anti-patterns and field 6 constraints.
  • Priority ordering: triage by (a) type balance, (b) freshness of research_data, (c) SEO/AEO compounding value, (d) queue shape distribution. Document in triage-{date}.md.
  • 9-field Direction Seed dispatch: all 9 fields present for every subagent briefing. Never omit field 9.
  • Pilot Dispatch rules (v3 update):
  • Wave 1 pilot = Research Deepener.
  • Wave 2 pilot = SEO Writer. AEO Writer runs in parallel with SEO Writer after Gate 1 — AEO Writer is NOT held back waiting for SEO Writer pilot verdict. Both run; Commander reviews both outputs at Gate 2. If SEO Writer pilot fails, AEO Writer output is held pending SEO Writer re-dispatch.
  • Wave 3 pilot = Fact Checker.
  • Wave 4 pilot = Audience Persona Reviewer.
  • Wave 5: no separate pilot; rely on Gate 4 exit conditions.
  • Wave 6: no pilot; Publishing Strategist is a single-pass agent.
  • State transitions: signed in dispatch-log-blog-{slug}.md after each wave gate.
  • Conflict resolution (4-layer, unchanged from v2):
  • SEO/AEO Engineer vs SEO Writer → Engineer wins on schema/HTML structure; Writer wins on prose clarity.
  • Fact Checker vs Research Deepener → Fact Checker wins on numeric/citation verification; Research Deepener wins on scope.
  • Audience Persona Reviewer vs SEO Writer or AEO Writer → Persona Reviewer wins on cold-read voice drift flags; Writer wins only after Commander mediates specific language fixes.
  • Quality Auditor vs anyone → Auditor does not rewrite content; Auditor wins on whether a deliverable can enter the next gate.
  • Commander's own LLM calls:
  • /ai-collab --task verify for every Audience Shape Decision
  • bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh "<prompt>" "<chain>" for additional external verification
  • All usage logged in dispatch-log-blog-{slug}.md
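The Audience Shape Decision steps above can be sketched as a small validator. This is a hypothetical stand-in, not the fleet's actual tooling: the field names come from Step 4, and the Step 5 "≥ 2 sentences" rule is approximated by counting periods.

```python
# Hypothetical validator for the Audience Shape Decision YAML block
# appended to a queue entry (Steps 4-5 above). Minimal sketch only.
REQUIRED_FIELDS = {
    "proposed_shape", "gemini_verdict", "gemini_rationale",
    "final_shape", "override_rationale",
}

def validate_shape_decision(block):
    """Return a list of problems; an empty list means the block passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - block.keys())]
    # Step 5: keeping the proposed shape despite a Gemini DISAGREE is an
    # override, which requires a written rationale of at least 2 sentences
    # (approximated here by counting sentence-ending periods).
    if (block.get("gemini_verdict") == "DISAGREE"
            and block.get("final_shape") == block.get("proposed_shape")):
        rationale = block.get("override_rationale") or ""
        if rationale.count(".") < 2:
            problems.append("silent override: rationale under 2 sentences")
    return problems
```

A block that passes Gemini agreement returns an empty problem list; a DISAGREE kept without a two-sentence rationale is flagged as a silent override.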

M (Measure)

  • blog-gate-review-{slug}-waveN.md exists per wave, contains practitioner present?, Waterson-primary preserved?, base-layer preserved?, cited evidence, blockers list.
  • ## Audience Shape Decision YAML block in queue entry for every dispatched candidate with all 5 required fields.
  • Two separate Direction Seed briefs produced for Wave 2 (one per writer); both reference shared Waterson Primary Voice rule.
  • Pilot check artifact per pilot wave cites exact line ranges.
  • Queue state transitions signed in dispatch-log-blog-{slug}.md.
  • Commander's external verification usage logged; grep -E '^(echo|gemini|codex)' dispatch-log-blog-{slug}.md returns 0 hits.
  • Audience-shape distribution monitor: every 10 articles, Commander records universal vs split-3 split. If either exceeds 80%, note required in retro.
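The distribution monitor in the last M bullet can be expressed directly. The 80% threshold and the two shape labels come from the bullet above; the record format is an assumed stand-in.

```python
# Minimal sketch of the audience-shape distribution monitor: given the
# shapes of the last batch of dispatched articles, return a retro note
# when either shape exceeds the 80% threshold, else None.
def shape_distribution_note(shapes, threshold=0.8):
    if not shapes:
        return None
    for shape in ("universal", "split-3"):
        share = shapes.count(shape) / len(shapes)
        if share > threshold:
            return (f"{shape} at {share:.0%} of last {len(shapes)} "
                    f"articles exceeds 80%: note required in retro")
    return None
```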

Anti-patterns

  • NOT: treat the queue as a task list to clear in order — INSTEAD: triage by audience coverage, type balance, and compounding SEO/AEO value.
  • NOT: send a single Wave 2 brief that tries to cover both SEO and AEO writing — INSTEAD: produce two separate Direction Seed briefs, one for SEO Writer and one for AEO Writer.
  • NOT: finalize Audience Shape Decision without Gemini Flash second opinion — INSTEAD: call /ai-collab --task verify, log agreement/disagreement.
  • NOT: omit the Waterson Primary Voice shared rule from Wave 2 briefs — INSTEAD: include §Waterson Primary Voice verbatim in field 6 (constraints) and field 9 (anti-patterns) of both briefs.
  • NOT: allow agents to produce sealed SEO articles that leave no room for human review layers — INSTEAD: gate every SEO deliverable on explicit append slots and slot-emptiness discipline.
  • NOT: bypass wrapper-based verification by calling raw gemini or raw codex directly — INSTEAD: wrapper only; raw CLI is a hard failure.

O Alignment

Without Commander's per-candidate audience-shape decision, Waterson-primary rule enforcement in briefs, and 2-writer fan-out discipline, the fleet produces either generic all-audiences mush or incorrectly neutral articles on Waterson's own domain.

Research Deepener

Full G / S / M / Anti-patterns

S (Strategy)

  • Start from the queue entry, not from a blank page.
  • WebSearch-primary research for open-ended discovery of primary sources.
  • /ai-fallback only for summarization and cross-verification.
  • Expansion target: 800–1500 words, organized by the 3-audience framework, with audience-relevance note per claim.
  • Primary-source requirement: every claim carries at least 1 first-party URL or is flagged secondary-only.
  • Waterson product mapping note: if the research material covers a product category where Waterson has a specific model (per writing-guide §3), flag that model and its spec in the research artifact so both Wave 2 writers can use it without re-inventing the mapping.
  • Execution Log discipline: every WebSearch query and every /ai-fallback call recorded.

M (Measure)

  • Deliver blog-research-{slug}.md containing:
  • Full verbatim research_data copied from queue entry
  • Expanded material, 800–1500 words, claim-by-claim cited
  • Per-claim first-party URL or secondary-only flag
  • Audience-relevance note per claim
  • Waterson product model mapping note (if applicable)
  • Execution Log with every WebSearch and /ai-fallback call
  • ≥ 3 new primary sources beyond what research_data already cites.
  • Minimum real queries = max(3, ceil(expanded_claims_total / 3)).
  • Every wrapper call recorded with command + chain + answered model + exit code + timestamp.
  • secondary-only claims must be hedged downstream.
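The query floor formula above translates one-to-one into code; this is just the M bullet made executable.

```python
import math

# Minimum real queries = max(3, ceil(expanded_claims_total / 3)),
# per the Research Deepener M block.
def min_real_queries(expanded_claims_total):
    return max(3, math.ceil(expanded_claims_total / 3))
```

So an expansion with 9 claims keeps the floor of 3 queries, while 20 claims raises it to 7.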

Anti-patterns

  • NOT: discard or rewrite the queue's research_data verbatim block — INSTEAD: copy it in full, then build expansion around it.
  • NOT: use raw echo | gemini or raw codex exec for any LLM call — INSTEAD: wrapper only.
  • NOT: accept a claim as verified without a first-party URL — INSTEAD: first-party URL or secondary-only with hedged downstream language.
  • NOT: omit the Waterson product model note because the research feels general — INSTEAD: check writing-guide §3; if a model applies, flag it explicitly so both writers can use it.
  • NOT: skip the Execution Log because the query count is small — INSTEAD: log every query, every URL, every wrapper call.

O Alignment

Without Research Deepener, both Wave 2 writers would hallucinate. The AEO Writer's per-Q&A Waterson sentences cannot be specific without model number and spec sourced from the research artifact.

SEO Writer (NEW in v3)

Full G / S / M / Anti-patterns

S (Strategy)

  • Read writing-guide §§1–5 before writing; apply Problem → Diagnosis → Solution → Product arc.
  • Honor Commander's Audience Shape Decision exactly.
  • Front-load the answer in the first 200 words: specific answer to the search-intent question.
  • Waterson-primary per-section rule (mandatory — from §Waterson Primary Voice):
  • Every major H2 section must include ≥ 1 Waterson perspective, product model reference, or positioning statement before moving to the next section.
  • Comparison tables must include Waterson as a named option, positioned as the recommended choice with a concrete differentiating reason.
  • Do not distribute brand mentions evenly — Waterson is the owner of this page.
  • Narrative structure: each H2 builds depth; transition sentences carry the reader forward; emotional hooks permitted where they match audience context.
  • Hand-off slot discipline — auto-generated TODO markers are mandatory:
  • Every <!-- HUMAN LAYER: ... --> comment must be immediately followed by <!-- TODO: human reviewer fills in --> on the next line.
  • SEO Writer auto-generates these markers as a pair; the TODO marker is never omitted.
  • Minimum slots: <!-- HUMAN LAYER: sales-response -->, <!-- HUMAN LAYER: field-experience -->, <!-- HUMAN LAYER: sme-note --> (each with its TODO marker).
  • Word count: 1200–1500 words, hard ceiling 1600.
  • CTA discipline: final section must contain a natural handoff to a specific Waterson product page URL or contact path. Not a generic "contact us."
  • Citation discipline: every claim from Research Deepener carries its source pointer.
  • Internal link seed notes inline (≥ 2).

M (Measure)

  • Deliver blog-seo-draft-{slug}.md:
  • YAML frontmatter: audience, draft_type: seo, word_count
  • First 200 words contain the search-intent answer
  • ≥ 3 HUMAN LAYER slots, each immediately followed by <!-- TODO: human reviewer fills in -->
  • grep -c "HUMAN LAYER:" returns ≥ 3
  • Every H2 section contains ≥ 1 Waterson mention — grep -c "Waterson" blog-seo-draft-{slug}.md returns ≥ H2_count
  • Final paragraph contains a CTA with a specific Waterson URL path
  • ≥ 2 internal link seed notes inline
  • Every claim traces to blog-research-{slug}.md claim ID
  • Word count 1200–1500 (hard ceiling 1600)
  • Writing-guide §5 checklist attached
  • grep -c "TODO: human reviewer fills in" must equal grep -c "HUMAN LAYER:" (every slot has its TODO marker).
  • No new uncited claims introduced.
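The grep-based checks above can be sketched as one function. Note an assumption: `grep -c` counts matching lines, while this sketch counts occurrences; the two agree as long as each marker sits on its own line, which the slot-pair rule enforces.

```python
import re

# Hypothetical stand-in for the SEO Writer M grep checks: HUMAN LAYER
# slot count must equal TODO marker count, at least 3 slots must exist,
# and Waterson mentions must be >= the number of H2 sections.
def seo_draft_checks(draft):
    h2_count = len(re.findall(r"^## ", draft, flags=re.MULTILINE))
    return {
        "slots_match_todos": draft.count("HUMAN LAYER:")
                             == draft.count("TODO: human reviewer fills in"),
        "min_three_slots": draft.count("HUMAN LAYER:") >= 3,
        "waterson_per_h2": draft.count("Waterson") >= h2_count,
    }
```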

Anti-patterns

  • NOT: assume AIA-style neutral presentation is the correct baseline for this site — INSTEAD: this is Waterson's blog; education establishes credibility, but Waterson recommendation is always present per section.
  • NOT: list competing products without a Waterson counterpart and recommendation — INSTEAD: every brand comparison includes a Waterson option with a specific differentiating reason.
  • NOT: end the article with general conclusions and no clear next step — INSTEAD: every article closes with a natural handoff to a specific Waterson product page or contact path.
  • NOT: produce a sealed SEO article with zero hand-off slots — INSTEAD: drop ≥ 3 labeled HUMAN LAYER slots with auto-generated TODO markers; the TODO marker is part of the slot pair, never optional.
  • NOT: exceed 1600 words because narrative depth feels necessary — INSTEAD: cut or split; SEO value drops past 1600 on narrowly scoped topics.
  • NOT: place a HUMAN LAYER comment without its TODO marker on the next line — INSTEAD: both lines are auto-generated as a pair; the grep check HUMAN LAYER count == TODO count must pass.

O Alignment

O's SEO outcome depends on front-loading the answer; the base-layer principle keeps the fleet from producing finished sales copy masquerading as a draft; the Waterson-primary rule ensures practitioners see Waterson as the recommended solution before they leave.

AEO Writer (NEW in v3)

Full G / S / M / Anti-patterns

S (Strategy)

  • Read writing-guide §§1–3 before writing (core facts and audience; full tone guide is secondary for AEO).
  • Q&A atomic pair structure is the primary organizing unit — each H2 is a question; the first paragraph answers it in ≤ 120 words.
  • Self-contained paragraphs: every paragraph must be independently parseable without surrounding context — no pronouns that refer back to previous sections, no transitional phrases that assume the reader read the prior Q&A.
  • Waterson-primary per-Q&A rule (mandatory — from §Waterson Primary Voice):
  • Every Q&A answer paragraph must include, where relevant: "For Waterson [product category or model]: [specific guidance]" — a single sentence within the answer body, not appended as a footnote.
  • If a Q&A is purely definitional (e.g., "What is ADA Section 404.2.3?"), the Waterson sentence appears in the follow-up sentence after the definition.
  • Minimum coverage: ≥ 60% of Q&A pairs include an explicit Waterson sentence.
  • Extractable claim discipline: numeric claims, code section references, and comparative claims must be stated as complete standalone sentences — no embedded subordinate clauses that strip the fact of context when extracted.
  • Cite-able facts: every factual claim carries its source reference inline (e.g., "per NFPA 80 Section 6.4.4") so extractors and crawlers see source and claim together.
  • Word count: 800–1000 words, hard ceiling 1100.
  • CTA: one closing Q&A pair ("How do I get Waterson product specifications?") with a specific Waterson URL path.
  • No HUMAN LAYER slots: the AEO draft is schema-feed material, not an editorial base layer. Human editorial layers belong in the SEO draft only. AEO Writer must not insert HUMAN LAYER comments under any circumstances.

M (Measure)

  • Deliver blog-aeo-draft-{slug}.md:
  • YAML frontmatter: audience, draft_type: aeo, word_count, qa_pair_count
  • Every H2 is phrased as a question
  • Every Q&A lead paragraph ≤ 120 words and self-contained
  • grep -c "For Waterson" blog-aeo-draft-{slug}.md returns ≥ ceil(qa_pair_count × 0.6) (ceiling, so the S-block "≥ 60% of Q&A pairs" floor actually holds)
  • Every numeric/regulatory claim includes inline source reference
  • Final Q&A pair contains a Waterson product page or contact URL
  • Word count 800–1000 (hard ceiling 1100)
  • grep -c 'HUMAN LAYER:' returns 0 — AEO draft contains no hand-off slots; this is a hard M requirement, not a guideline
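The AEO M checks above can be sketched as a validator. This is a hypothetical stand-in for the real tooling: it counts occurrences rather than grep's matching lines, and it checks the 60% coverage as a ratio rather than a fixed count.

```python
import re

# Hypothetical AEO draft validator: every H2 must be a question, the
# "For Waterson" sentence coverage must reach 60% of Q&A pairs, and the
# draft must contain zero HUMAN LAYER slots (hard M requirement).
def aeo_draft_checks(draft):
    h2s = re.findall(r"^## (.+)$", draft, flags=re.MULTILINE)
    qa_pairs = len(h2s)
    return {
        "all_h2_questions": all(h.rstrip().endswith("?") for h in h2s),
        "waterson_coverage_ok": qa_pairs > 0
            and draft.count("For Waterson") / qa_pairs >= 0.6,
        "no_human_layer_slots": draft.count("HUMAN LAYER:") == 0,
    }
```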

Anti-patterns

  • NOT: write flowing narrative paragraphs — INSTEAD: Q&A atomic pairs only; each H2 is a question; first paragraph is the answer.
  • NOT: assume the reader read the previous Q&A — INSTEAD: every paragraph must be independently parseable; no cross-reference pronouns.
  • NOT: omit the Waterson sentence because the Q&A is "just factual" — INSTEAD: factual Q&As add the Waterson sentence as the practical application line.
  • NOT: hide factual citations in footnotes or a reference section — INSTEAD: inline citations so crawlers see source and claim together in the same extractable unit.
  • NOT: insert HUMAN LAYER slots — INSTEAD: AEO draft is schema-feed material; human editorial layers belong in the SEO draft only; grep -c 'HUMAN LAYER:' must return 0.

O Alignment

O's AEO outcome lives here: every answer engine extraction from this draft will also extract the Waterson guidance sentence embedded in the same paragraph. The AEO draft is the direct feed for the web variant and the FAQPage JSON-LD schema.

Fact Checker

Full G / S / M / Anti-patterns

S (Strategy)

  • Verification priority order: code section, cost/dollar, mechanical/load/force, date/year.
  • Verify claims in both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md. Do not skip the AEO draft because it is shorter — AEO inline citations are the direct feed for answer-engine extraction and must be verified with the same rigor.
  • WebSearch Tier 2 backup is required when wrapper returns exit 3.
  • NEW-03 forbidden phrase hard rule applies.
  • First-party URL structural rule applies.
  • Under-delivery escape clause applies.
  • Claim-count minimum applies: max(3, ceil(numeric_claims_total / 3)) across both drafts combined.
  • Reviewer-override layer applies after raw wrapper output.

M (Measure)

  • Deliver blog-review-{slug}-facts.md with:
  • ## SEO Draft Coverage — every numeric/regulatory/monetary claim from SEO draft, with draft location, claim text, source, status, first-party URL, evidence summary
  • ## AEO Draft Coverage — same structure for AEO draft claims; extra flag if an AEO inline citation is unverifiable (because AEO inline citations are crawler-facing)
  • Zero NEW-03 forbidden phrases in either final draft
  • ## Under-Delivery Log section if needed
  • ## Execution Log with every wrapper and WebSearch call
  • ## Citation Back-Check — every filename:LN-LN citation listed with quoted verbatim line content and YES/NO match verdict (v3.2)
  • ## Industry Trend Scan — every trend-trigger keyword sentence with source type verdict: EXTERNAL-VALID / WATERSON-INVALID / MISSING (v3.2)
  • 100% coverage of numeric/regulatory/monetary claims across both drafts.
  • Lookup count floor met (across both drafts combined).
  • Wrapper-call verification recorded.
  • Citation back-check log: zero CITATION_MISMATCH entries, or each mismatch escalated to Commander with claim removal/re-source action logged. (v3.2)
  • docs/causal-scan-{slug}.md produced: every causal connector sentence listed with VERIFIED / INFERENCE / SPECULATIVE verdict. (v3.2)
  • Industry trend scan: zero REJECT entries, or each rejected trend claim has Commander-approved removal/replacement logged. (v3.2)
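The v3.2 causal scan can be sketched as a worklist builder. The connector list comes from the testable-claim definition later in this document; the sentence splitting and the PENDING placeholder are assumptions, and the VERIFIED / INFERENCE / SPECULATIVE verdicts themselves still require reviewer or wrapper judgment.

```python
import re

# Hypothetical sketch: find every sentence containing a causal connector
# and emit it as a row for docs/causal-scan-{slug}.md. Verdicts start as
# PENDING; this only builds the worklist, it does not judge.
CAUSAL_CONNECTORS = ("leads to", "causes", "reduces", "prevents")

def causal_scan(text):
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        {"sentence": s.strip(), "verdict": "PENDING"}
        for s in sentences
        if any(c in s.lower() for c in CAUSAL_CONNECTORS)
    ]
```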

Anti-patterns

  • NOT: verify only the SEO draft and skip the AEO draft — INSTEAD: both drafts require coverage; AEO inline citations receive extra scrutiny.
  • NOT: trust a single source as verified without a first-party URL — INSTEAD: every verified claim carries a clickable first-party URL.
  • NOT: accept any of the 7 NEW-03 forbidden phrases as evidence — INSTEAD: automatic demotion to unverified.
  • NOT: pad the verified count when anti-pattern demotion drops it below expected — INSTEAD: log the under-delivery reason verbatim.
  • NOT: bypass call_with_fallback.sh for LLM verification — INSTEAD: wrapper only; raw CLI is a hard failure.

O Alignment

O's AEO outcome requires crawlers to parse citable facts. An AEO inline citation that is unverified is more dangerous than an unverified SEO paragraph because the AEO format is designed to be extracted verbatim.

Source Reviewer

Full G / S / M / Anti-patterns

S (Strategy)

  • Run /ai-fallback citation review against both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md.
  • Use AIA-compatible citation format.
  • Flag pre-2018 sources used for current regulatory requirements.
  • Enforce single-source concentration ≤ 40% and Waterson-material ≤ 20% (across both drafts).
  • Apply opinion-vs-empirical boundary rule to every first-person claim.
  • Run reviewer-override layer after raw /ai-fallback output.
  • Produce reconciliation table with Fact Checker by claim.

M (Measure)

  • Deliver blog-review-{slug}-sources.md with:
  • ## SEO Draft Coverage — reference list, per-claim coverage index, pre-2018 flags, opinion-vs-empirical check
  • ## AEO Draft Coverage — same structure; flag any AEO inline source with secondary-only or unverifiable status as high priority because crawler extraction depends on them
  • Reconciliation table vs blog-review-{slug}-facts.md
  • ## Opinion vs Empirical Check across both drafts
  • URL verification timestamps
  • Single-source ≤ 40% across both drafts combined, Waterson ≤ 20%
  • Raw-layer and reviewer-override flags layered separately
  • Chain depth ≥ 3 on /ai-fallback calls.
  • 5% unverifiable budget alignment recorded and escalated if exceeded.

Anti-patterns

  • NOT: review only the SEO draft because it is more readable — INSTEAD: both drafts require citation review; AEO inline sources are highest priority.
  • NOT: rank academic paper above court document on factual determinations — INSTEAD: court docs and primary records outrank academic summaries.
  • NOT: omit source dates — INSTEAD: every source carries publication date; pre-2018 current-requirement citations carry a version note.
  • NOT: treat raw model output as final — INSTEAD: reviewer-override layer applies priority-violation, pre-2018, and reference-list-mismatch checks independently.
  • NOT: let [source needed] placeholders ship without escalation — INSTEAD: escalate to Commander on first sighting.

O Alignment

The AEO outcome requires LLM engines to trust the article's citations enough to cite watersonusa.ai back. An unreachable URL in an AEO inline citation tells the crawler this is untrustworthy content at the extraction layer.

SEO/AEO Engineer

Full G / S / M / Anti-patterns

S (Strategy)

  • Reuse existing blog HTML template.
  • FAQPage JSON-LD now derived from blog-aeo-draft-{slug}.md: pull the Q&A pairs directly from the AEO draft's H2 questions and lead paragraphs. This is more accurate and requires no conversion judgment — the AEO draft was purpose-built for extraction.
  • Add Article schema JSON-LD.
  • Add FAQPage schema JSON-LD with 5–8 Q&A pairs sourced from AEO draft.
  • Internal link strategy: related blog, solutions, AIA pages.
  • Seed hreflang triple before Wave 5.
  • Validate schema via /ai-fallback Gemini Flash.
  • Run keyword cluster audit against queue keywords.
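The FAQPage derivation described above — pull each H2 question and its lead paragraph straight from the AEO draft — can be sketched as follows. The markdown parsing here is a minimal stand-in for whatever the Engineer's real tooling does, and it assumes each lead paragraph is a single line directly under its H2.

```python
import json
import re

# Hedged sketch: build FAQPage JSON-LD from the AEO draft's H2 questions
# and lead paragraphs, capped at the 5-8 pair range from the M block.
def faqpage_jsonld(aeo_draft, max_pairs=8):
    pairs = re.findall(r"^## (.+\?)\n+([^\n#]+)", aeo_draft, flags=re.MULTILINE)
    entities = [
        {
            "@type": "Question",
            "name": q.strip(),
            "acceptedAnswer": {"@type": "Answer", "text": a.strip()},
        }
        for q, a in pairs[:max_pairs]
    ]
    return json.dumps(
        {"@context": "https://schema.org", "@type": "FAQPage",
         "mainEntity": entities},
        indent=2,
    )
```

Because the AEO draft was purpose-built for extraction, this is a mechanical lift with no conversion judgment, which is exactly the point of the v3 change.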

M (Measure)

  • Deliver blog-seo-{slug}.md and inject finalized HTML:
  • Article JSON-LD block
  • FAQPage JSON-LD block (sourced from AEO draft Q&A pairs)
  • OG tags
  • Twitter Card tags
  • Canonical link
  • hreflang triple
  • ≥ 5 internal links
  • ≥ 5 FAQ pairs
  • Schema validation log recorded.
  • Keyword cluster check passes.
  • Internal link targets exist.
  • Note in artifact which AEO draft Q&A pairs were used for FAQPage extraction.

Anti-patterns

  • NOT: synthesize FAQ questions from the SEO narrative draft — INSTEAD: pull Q&A pairs directly from blog-aeo-draft-{slug}.md; those were purpose-built for extraction.
  • NOT: write generic FAQ questions that no practitioner would type — INSTEAD: mirror actual search-intent queries from the queue keywords.
  • NOT: link to pages that don't exist or are unrelated — INSTEAD: every internal link is topical and reachable.
  • NOT: skip JSON-LD validation because it looks right — INSTEAD: validate via /ai-fallback Gemini Flash.
  • NOT: omit hreflang because the Chinese or web versions are not produced yet — INSTEAD: seed all 3 hreflang tags at the Engineer stage.

O Alignment

SEO outcome is directly engineered here. AEO outcome is now more precise because FAQPage JSON-LD is derived from purpose-built extractable Q&A pairs rather than synthesized from narrative prose.

Audience Persona Reviewer

Full G / S / M / Anti-patterns

S (Strategy)

  • Three canonical personas always run, even when Audience Shape Decision = universal:
  • Architect persona: specifier/project architect concerned with code precision, submittal usefulness, and specification workflow.
  • Owner persona: facility owner / facility manager concerned with lifecycle cost, risk, operations, replacement consequences.
  • Installer persona: contractor / installer concerned with install practicality, sequencing, field constraints, product-to-application fit.
  • Cold-read rule: Audience Persona Reviewer must not read Fact Checker, Source Reviewer, SEO/AEO Engineer, or Quality Auditor reports before producing its own report.
  • Same review process applies to both SEO draft and AEO draft: no separate review track for AEO. Both drafts go through the same 7-question, 3-persona cold-read. The AEO draft's Q&A structure means some answers will be shorter, which is acceptable.
  • Seven decision questions (3 personas × 7 questions = 21 cold-reads per article):

  1. In the first 200 words (SEO) or first Q&A pair (AEO), did I get the answer I came for?
  2. Does this sound like it understands my day-to-day workflow?
  3. Is any section obviously written for a different audience than me?
  4. Did any paragraph feel like generic vendor/trade-content filler?
  5. If I bookmarked this, what exact section would I return to later?
  6. What is the strongest reason I would stop trusting this article?
  7. After reading this article, do I want to look up Waterson's product? (NEW — Waterson intent question)

  • Waterson intent signal (Q7): Q7 exists to detect the specific failure mode from Phase 1 — technically correct articles where Waterson is present but passive. A persona that answers Q7 with "not particularly" or "I'm not sure which product they're recommending" is a weak Waterson positioning flag that must be escalated to Commander as a waterson-positioning-weak issue class.
  • Reviewer-override layer: raw Gemini output is not the final verdict. Classify each flagged issue as:
  • voice-drift
  • workflow-mismatch
  • wrong-reader-assumption
  • generic-filler
  • cold-open-failure
  • waterson-positioning-weak (NEW in v3)
  • Universal-vs-split check: if two or more personas say "this article is clearly for someone else," explicitly advise Commander whether the current shape was wrong.
  • Evidence rule: every negative flag must quote the exact paragraph or heading that caused it.

M (Measure)

  • Deliver blog-review-{slug}-persona.md containing:
  • Header with audience_shape_under_review, review_mode: cold-read, answered_model, timestamp, drafts_reviewed: [seo, aeo]
  • ## Architect Persona (covers both drafts)
  • ## Owner Persona (covers both drafts)
  • ## Installer Persona (covers both drafts)
  • Each persona section answers all 7 decision questions for each draft
  • Each negative flag cites exact paragraph references
  • ## Cross-Persona Agreement Table with issue_id / class / architect / owner / installer / agreement / recommended_action
  • ## Waterson Positioning Summary — aggregate Q7 verdict across all 3 personas for both drafts: strong / adequate / weak / absent
  • ## Shape Challenge — is the original Audience Shape Decision still correct?
  • ## Commander Recommendation with accept / accept-with-revisions / revise-shape / waterson-positioning-revise
  • Coverage floor: all 3 personas must produce non-empty answers to all 7 questions for both drafts.
  • Disagreement surface rule: if one persona passes Q7 and another fails, the disagreement must be explicitly surfaced.
  • Raw Gemini output must be attached or embedded, clearly separated from reviewer-override notes.
  • Cold-read integrity: report must state it did not consume upstream review artifacts before reading either draft.
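The coverage floor above (3 personas × 7 questions × 2 drafts, all non-empty) can be sketched as a gap finder. The nested-dict report shape is an assumed stand-in for blog-review-{slug}-persona.md, not its actual format.

```python
# Hypothetical coverage-floor check: return (persona, draft, question)
# for every missing or empty answer in the persona report.
PERSONAS = ("architect", "owner", "installer")
DRAFTS = ("seo", "aeo")

def coverage_gaps(report):
    return [
        (p, d, q)
        for p in PERSONAS
        for d in DRAFTS
        for q in range(1, 8)
        if not report.get(p, {}).get(d, {}).get(q, "").strip()
    ]
```

An empty return means the floor is met; any tuple in the list is a hole the Commander should see before accepting the report.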

Anti-patterns

  • NOT: read upstream review reports before doing the persona pass — INSTEAD: read both drafts cold.
  • NOT: skip Q7 because it feels promotional rather than editorial — INSTEAD: Q7 is a production-quality signal; weak Waterson positioning is an editorial failure, not a marketing preference.
  • NOT: collapse architect, owner, and installer into one blended reaction — INSTEAD: 3 separate persona reads, then explicit reconciliation.
  • NOT: skip the AEO draft cold-read because "it's just Q&A structure" — INSTEAD: same 7 questions apply; AEO draft answers will be shorter, which is expected.
  • NOT: flag generic voice drift without paragraph evidence — INSTEAD: quote the exact paragraph or heading that triggered the cold-read failure.

O Alignment

O fails if the article is technically clean but Waterson positioning is absent or feels forced. Q7 is the direct measure of whether the fleet achieved its Waterson-primary goal from the practitioner's perspective.

Quality Auditor

Full G / S / M / Anti-patterns

S (Strategy)

  • Reverse-index check: audit starts from both blog-seo-draft-{slug}.md and blog-aeo-draft-{slug}.md. For each testable claim in either draft, check whether that claim appears in the relevant review artifacts. Claims present in either draft but absent from review tables are reverse-index-miss.
  • Testable claim definition (unchanged from v2): a sentence or bullet is testable if it contains any of:
  • a number, percentage, dollar amount, or measured force/load
  • a code section, standard number, or named regulation
  • a named case, incident, jurisdiction, date, or year
  • a comparative claim (higher, lower, more likely, faster, safer)
  • a causal claim (leads to, causes, reduces, prevents)
  • an operational instruction that implies factual correctness
  • a first-person claim with empirical specificity signals
  • S-evidence gate: for every upstream deliverable, QA asks whether the concrete resources promised in S actually appear in the deliverable. New v3 check: SEO Writer and AEO Writer S blocks promised Waterson per-section mentions and For Waterson sentences respectively — QA verifies these are present before allowing either draft to proceed.
  • Base-layer integrity audit applies to SEO draft only: QA confirms the SEO draft has ≥ 3 HUMAN LAYER slots with TODO markers and that no slot is pre-filled. The AEO draft's zero HUMAN LAYER slots is the correct and expected state — QA must not flag this as a base-layer failure.
  • AEO TODO marker check: grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md must return 0. If it returns > 0, that is a class-3 scope-creep violation (AEO Writer inserted slots it should not have).
  • Scope-creep anti-pattern control: QA does not fix content, add evidence, or make editorial choices. Classifies failures and routes back.
  • Failure classification:
  • class-1 structural handoff fail: missing artifact, missing required table, missing execution log, malformed slot, absent persona section
  • class-2 coverage fail: claim present in draft but absent from review index, missing URL, missing question answer, incomplete FAQ/schema sync
  • class-3 scope-creep or role-boundary fail: agent did unassigned work, rewrote another agent's scope; includes AEO Writer inserting HUMAN LAYER slots
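The class-3 AEO slot check above is grep-driven. A minimal shell sketch (function name and path illustrative, not part of the fleet spec):

```shell
# Sketch of the Quality Auditor's class-3 check: the AEO draft must contain
# zero HUMAN LAYER slots. Path and function name are illustrative.
check_aeo_slots() {
  # $1 = path to blog-aeo-draft-{slug}.md
  local n
  n=$(grep -c "HUMAN LAYER:" "$1" || true)
  if [ "$n" -eq 0 ]; then
    echo "PASS"
  else
    echo "class-3: $n HUMAN LAYER slot(s) in AEO draft"
  fi
}
```

`check_aeo_slots blog-aeo-draft-demo.md` prints `PASS` only when the draft is slot-free; any other output routes back as a class-3 failure.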

M (Measure)

  • Deliver blog-audit-{slug}-wave4.md containing:
  • ## Testable Claim Inventory (spanning both drafts, labeled by draft type)
  • ## Reverse-Index Table with claim_id / draft_type / draft_location / fact_checker / source_reviewer / persona_reviewer / status
  • ## S-Evidence Audit by agent (includes Waterson presence checks for both writers)
  • ## SEO Base-Layer Integrity Check (≥ 3 slots, all TODO-marked, none pre-filled)
  • ## AEO Slot Integrity Check (grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md = 0; if > 0, class-3 fail)
  • ## Classified Failures
  • ## Commander Escalation
  • Final verdict PASS / PASS-WITH-NOTES / BLOCK
  • 100% of testable claims in both drafts must appear in Reverse-Index Table.
  • No silent pass on partial coverage.
  • Perspective/evidence separation maintained.

Anti-patterns

  • NOT: audit from the review tables forward and assume the article is fully covered — INSTEAD: reverse-index from the actual drafts back into the review artifacts.
  • NOT: flag the AEO draft's zero HUMAN LAYER slots as a base-layer failure — INSTEAD: zero slots is the correct expected state for the AEO draft; grep -c "HUMAN LAYER:" blog-aeo-draft-{slug}.md = 0 is a PASS.
  • NOT: let an agent claim it used a resource or model without visible evidence — INSTEAD: S-promised resources must be observable in the artifact or execution log.
  • NOT: rewrite content or add missing evidence yourself — INSTEAD: classify the failure and route it back.
  • NOT: treat vague aggregate statements like "most claims reviewed" as acceptable — INSTEAD: require explicit claim-by-claim inventory.

O Alignment

Quality Auditor protects the fleet from shipping false confidence. The v3 addition of Waterson presence checks in the S-evidence gate closes the Phase 1 gap where articles appeared complete but failed the brand positioning requirement.

Bilingual Publisher

Full G / S / M / Anti-patterns

S (Strategy)

  • English variant: apply /publish-article template from SEO draft. Path: door-site/blog/{slug}/index.html.
  • Chinese variant:
  • Source: SEO draft (same source as English variant)
  • <html lang="zh-Hant">
  • Brand names not translated; standard codes not translated; model numbers not translated
  • Voice must sound natural to a Taiwanese door-hardware professional
  • Path: door-site/blog/zh/{slug}/index.html
  • Web/AEO variant: now built directly from blog-aeo-draft-{slug}.md (enhanced schema + Q&A-first layout). This eliminates the conversion judgment step from v2. Path: door-site/blog/web/{slug}/index.html.
  • hreflang triple cross-references: every variant links to the other two.
  • Sitemap + llms updates: update door-site/sitemap.xml, llms.txt, llms-full.txt, and blog/index.html.
  • Chinese natural-voice QA via Gemini Flash: score ≥ 4 or revise.
  • Taiwan-specific second pass via Gemini 2.5 Pro is mandatory: return PASS/FAIL, flagged terms, preferred Taiwan replacements.
  • Mainland vocabulary blocklist (unchanged from v2; 27 terms; see §Mainland Vocabulary Blocklist below for full list).
  • Automated base-layer enforcement (en and zh variants only — web variant is explicitly exempt):
  • Every <!-- HUMAN LAYER: ... --> comment in en/zh files must be followed within 3 lines by either blank content or <!-- TODO: human reviewer fills in -->
  • Any human-facing prose inside the slot window = FAIL
  • Web variant has no HUMAN LAYER slots by design — do not apply this check to web variant
  • /security-check mandatory before staging commit.
  • Stage commit but DO NOT push.
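The slot-window rule can be approximated mechanically. A hedged sketch (file path illustrative; the real gate also records the grep output in the publish log):

```shell
# Sketch of the HUMAN LAYER slot-window check for en/zh variants only.
# A window (the 3 lines after each slot comment) may contain only blank
# lines, the TODO marker, or grep's group separator; any prose = FAIL.
check_slot_windows() {
  # $1 = path to door-site/blog/.../index.html
  if grep -A3 'HUMAN LAYER:' "$1" \
      | grep -v 'HUMAN LAYER:' \
      | grep -v 'TODO: human reviewer fills in' \
      | grep -v '^--$' \
      | grep -q '[^[:space:]]'; then
    echo "FAIL"
  else
    echo "PASS"
  fi
}
```

Per the exemption above, this function is never run against the web variant.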

M (Measure)

  • 3 HTML files at expected paths with hreflang triple.
  • Chinese natural-voice score ≥ 4; log recorded in blog-publish-{slug}.md.
  • Gemini 2.5 Pro Taiwan-specific pass returns PASS.
  • /security-check log shows PASS.
  • sitemap.xml, llms.txt, llms-full.txt, blog/index.html updated.
  • Staged commit with [BASE LAYER — awaiting human review before push].
  • No git push executed.
  • Base-layer grep recorded for en and zh variants only:
  • grep -nA3 'HUMAN LAYER:' door-site/blog/{slug}/index.html
  • grep -nA3 'HUMAN LAYER:' door-site/blog/zh/{slug}/index.html
  • PASS only if each slot window contains only blank lines and/or TODO marker
  • Web variant: grep explicitly NOT run (web variant has no HUMAN LAYER slots by design)
  • Mainland vocabulary blocklist grep recorded in publish log; result must be 0 hits.
  • Wrapper-call verification recorded: grep -E '^(echo|gemini|codex)' blog-publish-{slug}.md returns 0 hits.
  • Queue state transitioned to ready_for_human_review with Commander signature.
  • content-plan.md and admin/content-plan/index.html JS data array both updated with article entry.
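The blocklist grep can be sketched as below (term list abridged to a few entries; the full 27-term list in §Mainland Vocabulary Blocklist is authoritative):

```shell
# Sketch of the mainland-vocabulary gate for the zh variant.
# Hit count must be 0 for the publish log to record a pass.
blocklist_hits() {
  # $1 = path to zh-Hant HTML; prints number of matching lines
  grep -E -c '信息|軟件|視頻|硬件|芯片' "$1" || true
}
```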

Anti-patterns

  • NOT: run /upload or git push after staging — INSTEAD: stage and stop; human review must happen first.
  • NOT: translate brand names, standard codes, or model numbers into Chinese — INSTEAD: keep them verbatim.
  • NOT: convert the SEO draft for the web variant — INSTEAD: build the web variant directly from blog-aeo-draft-{slug}.md; that draft was purpose-built for crawler extraction.
  • NOT: apply the HUMAN LAYER slot enforcement grep to the web variant — INSTEAD: web variant is explicitly exempt; en and zh only.
  • NOT: allow mainland-vocabulary drift in zh-Hant output — INSTEAD: enforce the blocklist and Taiwan-specific second-pass review.
  • NOT: consider publish complete without updating BOTH content-plan.md AND admin/content-plan/index.html JS data — INSTEAD: both files are sources of truth.

O Alignment

Chinese-reading practitioners are a real audience, and their search traffic compounds the same SEO lift. The web variant is now simpler and more accurate because it is built directly from the purpose-built AEO draft.

Publishing Strategist (NEW in v3, Wave 6)

Full G / S / M / Anti-patterns

M (Measure)

  • docs/publishing-strategy/{slug}.md exists and contains all 4 required sections:
  • Section 1 (Title Conflict Analysis): risk level HIGH/MED/LOW/NONE + specific .com URL per conflict
  • Section 2 (Internal Link Recommendations): ≥ 3 bidirectional link pairs with exact anchor text and target URLs
  • Section 3 (.com Placement Suggestions): specific .com page URL + specific H2/H3 section + suggested text
  • Section 4 (Keyword Targeting Verdict): primary keywords, secondary keywords, cannibalization verdict, Pillar/Cluster/Standalone classification
  • Execution Log shows docs/watersonusa-com-index.json consulted with generated_at timestamp.
  • If fallback WebSearch used: ## Fallback Queries entry present with query string + result count.
  • All /ai-fallback calls recorded.
  • All link recommendations reference existing, reachable URLs (no invented URLs).
  • grep -E '^(echo|gemini|codex)' docs/publishing-strategy/{slug}.md returns 0 hits.
  • Report verdict is one of: Ready to publish / Review needed / Blocker.
  • Blocker verdict triggers Commander escalation. However, Blocker does NOT automatically prevent human review — Commander decides the resolution path. This is report-only; no automatic action.
  • Publishing Strategist does not edit .com pages, does not push code, does not modify watersonusa-com-index.json.

Anti-patterns

  • NOT: re-crawl all of watersonusa.com on every article publication — INSTEAD: read the cached docs/watersonusa-com-index.json; per-article crawling is a performance anti-pattern.
  • NOT: suggest vague "add internal links" without specific anchor text and target URLs — INSTEAD: every link recommendation includes exact anchor text, specific sentence/paragraph location, and fully qualified URL.
  • NOT: treat a Blocker verdict as an auto-block on human review — INSTEAD: Blocker triggers Commander escalation only; this is a report, not a gate lock.
  • NOT: edit .com pages, push commits, or modify watersonusa-com-index.json directly — INSTEAD: report only; .com updates are manual human actions.
  • NOT: treat a stale index (> 8 days) as authoritative — INSTEAD: log staleness warning and run WebSearch fallback for high-priority keywords.
  • NOT: output a prose-only narrative recommendation — INSTEAD: structured markdown with actionable tables and checklists; prose summaries may accompany but do not replace structured sections.
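The staleness rule above (> 8 days) can be sketched with date arithmetic. This assumes GNU date and that generated_at is extractable from docs/watersonusa-com-index.json as a YYYY-MM-DD date:

```shell
# Sketch of the index staleness check. Assumes GNU date; the real agent
# would read generated_at from the JSON index before calling this.
index_is_stale() {
  # $1 = generated_at date (YYYY-MM-DD); exit 0 if older than 8 days
  local gen_s now_s age_days
  gen_s=$(date -d "$1" +%s)
  now_s=$(date +%s)
  age_days=$(( (now_s - gen_s) / 86400 ))
  [ "$age_days" -gt 8 ]
}
```

On a stale index, the Strategist logs the warning and falls back to `site:watersonusa.com "[keyword]"` WebSearch queries.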

O Alignment

Wave 6 protects O's network SEO outcome. A well-written Waterson-positioned article that accidentally fragments domain authority across two competing URLs fails O even if the article itself is excellent.

Version Metadata

### v3.2 (2026-04-16)

Post-Pilot #2 audit — three fact-check gaps discovered. New rules: Citation Accuracy Verification, Causal Inference Audit, and Industry Trend Claims Require Named External Source (detailed in §Fact Verification Discipline).

Context: Pilot #2 shipped a fabricated trend sentence backed by a false ground-truth citation that the v3.1 fact-check did not catch. Gemini Pro verified that the file existed and the cited line range had content, but did not verify that the content actually supported the claim.

### v3.1 (2026-04-16)

### v3.0 (2026-04-16)

Three conditions together triggered this major version bump (any one alone would be v2.x; all three together redefine the downstream contract):

Team structure change: 9 → 11 agents. Article Writer is replaced by two parallel specialists — SEO Writer (Wave 2) and AEO Writer (Wave 2) — with Publishing Strategist added as Wave 6.

Core writing principle change: neutral base-layer voice → Waterson-primary voice. Waterson brand positioning is now a mandatory per-section rule for both writers, not an incidental outcome. The fleet's articles are Waterson-hosted self-promotion with an educational wrapper, not neutral third-party content.

Output shape change: 1 draft → 2 drafts (blog-seo-draft-{slug}.md + blog-aeo-draft-{slug}.md) with different structures, word counts, and optimization targets. Plus Wave 6 cross-site strategy report added before human review gate.

Change rationale (from Phase 1 Diagnosis 2026-04-16):

---

Changelog — v2 → v3

### Change 1: Article Writer Split into SEO Writer + AEO Writer

v2 state: 1 × Article Writer producing blog-draft-{slug}.md (900–1400 words) with both SEO narrative and AEO Q&A embedded.

v3 state: 2 × parallel specialists in Wave 2:

Why this is correct: narrative depth and atomic extractability are opposing structural requirements. A single writer cannot optimize for both simultaneously. The SEO draft is read by practitioners. The AEO draft is extracted by crawlers. Both receive the same blog-research-{slug}.md from Wave 1 and feed downstream waves independently.

### Change 2: Waterson-Primary Voice Encoded as Shared Rule

v2 state: writing-guide §4.3 "education before selling" was the controlling principle. No per-section Waterson requirement existed. Brand mentions were incidental.

v3 state: "Waterson Primary Voice" is a named shared rule (§Waterson Primary Voice, below) referenced by both writers. Per-section requirements are explicit and measurable (grep-checkable). Human reviewers may override specific positioning with logged rationale.

Why this is correct: On Waterson's own domain, the fleet's output is commercial content with an educational wrapper, not independent trade journalism. The base-layer principle (leave room for human augmentation) and the Waterson-primary principle are compatible: the fleet builds the Waterson-positioned base; humans add their personal sales layer on top.

### Change 3: Wave 6 Publishing Strategist Added

v2 state: Fleet ended at Wave 5 (Bilingual Publisher staged commit). No cross-site SEO coherence check existed. New articles risked cannibalizing existing watersonusa.com authority or creating content islands with no internal link equity.

v3 state: Publishing Strategist (Wave 6) runs after Gate 5 passes. It reads docs/watersonusa-com-index.json, extracts primary keywords from the new blog HTML, checks for cannibalization risk, identifies bidirectional internal link opportunities, classifies Pillar-Cluster relationship, and outputs a structured docs/publishing-strategy/{slug}.md report. The Publishing Strategist produces REPORT ONLY — no auto-block, no automatic .com edits. .com updates are manual human action. A Blocker verdict escalates to Commander but does not prevent human review of the article itself.

---

v2 → v3 Migration Summary

### What Breaks

<table>

<tr><th>Item</th><th>v2 Behavior</th><th>v3 Behavior</th><th>Action Required</th></tr>
<tr><td>Article Writer</td><td>1 agent, 1 draft</td><td>Replaced by SEO Writer + AEO Writer</td><td>Re-dispatch any in-flight Wave 2 articles under v2 rules; new candidates use v3</td></tr>
<tr><td><code>blog-draft-{slug}.md</code></td><td>Single output</td><td>Replaced by <code>blog-seo-draft-{slug}.md</code> + <code>blog-aeo-draft-{slug}.md</code></td><td>Gate 2 checklist updated</td></tr>
<tr><td>Wave 3 inputs</td><td>1 draft</td><td>2 drafts — both Fact Checker and Source Reviewer now cover both files</td><td>Review artifacts gain <code>## SEO Draft Coverage</code> + <code>## AEO Draft Coverage</code> sections</td></tr>
<tr><td>Audience Persona Reviewer question count</td><td>6 questions</td><td>7 questions (new Waterson intent question added)</td><td>Cold-read prompts and M section updated</td></tr>
<tr><td>Wave 5 web variant source</td><td>Converted from SEO draft</td><td>Built directly from AEO draft (simpler)</td><td>Bilingual Publisher S section updated</td></tr>
<tr><td>Gate 5 HUMAN LAYER grep</td><td>Applied to all 3 variants</td><td>Applied to en + zh only; web variant is explicitly exempt</td><td>Gate 5 checklist updated</td></tr>
<tr><td>Agent count</td><td>9</td><td>11</td><td>Direction Seed, Pilot Dispatch, Pre-Production Checklist, dry-run scope updated</td></tr>

</table>

### What Is Preserved

### Articles In-Flight at Time of v3 Adoption

---

Purpose (why this fleet exists)

HSW course production is the *means*. The deep research accumulated during course production is the *fuel*. This fleet converts that fuel — already flowing into .content-scout-queue.md via Candidate Collector (HSW-002 v5.1 Agent #19) — into published blog articles on watersonusa.ai. The ultimate goal is compounding SEO + AEO lift: as more HSW courses are built, more high-quality research flows into the queue, and more base-layer articles land on the site. Each article is designed to be both searchable by Google (SEO) and citable by LLM answer engines (AEO: ChatGPT / Perplexity / Gemini).

Critical design principle — Waterson-primary base layer. This fleet produces the Waterson-positioned structurally augmentable base of each article, not a neutral draft. The articles are on Waterson's own domain: education establishes credibility, but Waterson positioning is the destination in every section. Human reviewers (Waterson sales staff, subject-matter experts) append additional layers after the fleet's output: personal thinking process, professional sales responses, field-experience anecdotes. The fleet's SEO output schema must leave designated slots for those human layers. An SEO article that is "perfect" after the fleet finishes and leaves no room for the human hand-off has failed the base-layer constraint.

AEO draft is sealed by design. The AEO draft (blog-aeo-draft-{slug}.md) is purpose-built for schema extraction and crawler citation. It does not carry HUMAN LAYER slots. This is intentional and correct — it is not a base-layer failure.

---

Knowledge Architecture: Ground Truth vs Contextual

### Principle

Two kinds of knowledge serve different purposes:

Ground Truth — permanent, universal facts. All articles reference the same answer. Stored centrally.

Contextual — varies per article angle (healthcare vs hospitality vs residential). Researched fresh per article, NOT promoted to ground truth.

### Ground Truth (READ before writing)

Location: docs/waterson-product-facts.md + docs/waterson-product-facts.json

Contains:

Every Writer agent MUST grep ground truth file before writing any Waterson-specific claim. Hard M: cite ground truth line number for every Waterson factual claim.
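A minimal sketch of that mandatory lookup (keyword and file contents illustrative):

```shell
# Sketch of the pre-writing ground-truth grep. The line numbers it prints
# are what the writer must cite next to each Waterson factual claim.
ground_truth_lines() {
  # $1 = search keyword, $2 = path to docs/waterson-product-facts.md
  grep -n -i "$1" "$2"
}
```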

### Contextual (RESEARCH per article)

Location: docs/research/{slug}-context.md (one per article, not promoted to ground truth)

Research scope varies by article:

Each Writer's S must include: "Before writing, invoke /research-topic skill for the specific angle". Writer must NOT reuse prior article's contextual research unless the angle is identical.

### Rule: NEVER promote contextual to ground truth

If a specific article's contextual finding (e.g., "overhead closers in healthcare have arm-at-shoulder-height concerns") becomes globally applicable, it must be reviewed + signed off before moving to ground truth file. Writer agents cannot make this promotion unilaterally.

---

Model Routing Enforcement

### Problem being addressed

Previously, agents were dispatched with default Claude Sonnet for all task types. This led to:

### Required routing per task type

<table>

<tr><th>Task Type</th><th>Primary Model</th><th>Fallback</th><th>Why</th></tr>
<tr><td>Research (grounding)</td><td>Gemini Pro via <code>/research-topic</code> skill</td><td>WebSearch</td><td>Google Search integration</td></tr>
<tr><td>Fact verify (atomic)</td><td>Gemini Flash via <code>/ai-fallback</code></td><td>Claude Sonnet</td><td>Fast + independent voice</td></tr>
<tr><td>Writing (SEO draft)</td><td>Claude Sonnet</td><td>Claude Opus</td><td>Narrative judgment</td></tr>
<tr><td>Writing (AEO draft)</td><td>Claude Sonnet</td><td>Claude Opus</td><td>Structured output</td></tr>
<tr><td>HTML assembly</td><td>Codex</td><td>Claude Sonnet</td><td>Code precision</td></tr>
<tr><td>Persona cold-read</td><td>Gemini Pro</td><td>Claude Haiku</td><td>Different model family (avoid Claude echo)</td></tr>
<tr><td>Quality audit</td><td>Claude Opus</td><td>Claude Sonnet</td><td>Judgment + synthesis</td></tr>
<tr><td>Commander orchestration</td><td>Claude Opus</td><td>—</td><td>High-stakes decisions</td></tr>

</table>

### Enforcement mechanism

Every agent's S section must include a line:

```
Model command (REQUIRED): <exact bash invocation or Skill tool call>
```

Every agent's M section must include:

```
```

### Verification at Gate 3+

Before proceeding past Wave 3, Commander checks docs/model-routing/*.log and verifies:

If any log shows Claude did work that should have been routed elsewhere → Wave 3 fails, Commander re-dispatches.
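The log format is not specified here, but assuming one tab-separated task_type/model pair per line, the audit could be sketched as:

```shell
# Hypothetical sketch of the Gate 3+ routing audit: print and fail on any
# entry where Claude performed a task routed to another model family.
routing_violations() {
  # $1 = a docs/model-routing/*.log file
  awk -F'\t' '$1 == "fact-verify" && $2 ~ /claude/ { print; bad = 1 }
              END { exit bad ? 1 : 0 }' "$1"
}
```

A nonzero exit means Wave 3 fails and the Commander re-dispatches.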

### Anti-patterns

---

Fact Verification Discipline (v3.2)

Three anti-patterns discovered in the Post-Pilot #2 audit. All three let fabricated or unsupported claims survive the Wave 3 fact-check because verification was shallow: the checker confirmed that the citation existed, or that premise A was true, without confirming that the cited lines actually supported the claim, or that the A→B link was stated in the source rather than inferred.

### Anti-pattern 1: Citation Shell Game

NOT: treat citation verification as "does that line range exist and have related content"

INSTEAD: fact-checker reads cited lines verbatim and compares to claim

Trigger: any citation of the form (filename:LN-LN) in a draft.

Implementation:
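The mechanical half of the back-check, pulling the cited range verbatim so it can be compared against the claim, is a one-line sed; the semantic comparison still needs the fact-checker or a model call:

```shell
# Sketch: extract the lines a (filename:L187-L194)-style citation points at.
cited_lines() {
  # $1 = cited file, $2 = start line, $3 = end line
  sed -n "${2},${3}p" "$1"
}
```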

Pilot #2 failure example:

Article claims: "healthcare corridors moved toward hinge-integrated closers (waterson-product-facts.md:L187-L194)"

Actual L187-L194: product benefits bullet list

Verdict: citation is false — file exists, lines exist, content is related, but claim is not stated

M addition (required in blog-review-{slug}-facts.md):

Add ## Citation Back-Check section listing:

---

### Anti-pattern 2: Inference Laundering

NOT: treat a causal sentence (A therefore B) as verified if just A is in sources

INSTEAD: flag every causal connector, verify A→B is stated in source (not inferred by AI)

Trigger causal connectors:

Implementation:
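A hedged sketch of the connector scan (the connector list here is abridged; the trigger list in this section is authoritative):

```shell
# Sketch: surface every line with a causal connector so each sentence
# gets an explicit stated-in-source vs INFERENCE verdict.
causal_sentences() {
  # $1 = draft file; prints line-numbered hits
  grep -n -E -i 'therefore|leads to|causes|reduces|prevents|hence' "$1"
}
```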

Pilot #2 failure example:

"Hydraulic cylinder meters motion, therefore closing speed is ADA compliant"

Premise (hydraulic meters motion) — true, in sources

Conclusion (hence ADA compliant) — AI's inference, not stated anywhere

Should be flagged as INFERENCE

Verdict taxonomy:

M addition (required):

Produce docs/causal-scan-{slug}.md listing:

Gate 3 cannot pass if any causal sentence is INFERENCE or SPECULATIVE unless Commander explicitly approves with logged rationale.

---

### Anti-pattern 3: Trend Fabrication

NOT: assert industry trends without external citation

INSTEAD: trend claims require named third-party source (ASHE / FGI / NFPA / peer review)

Trigger keywords:

Valid external sources for trend claims:

Invalid sources for trend claims:

Pilot #2 failure example:

"Healthcare corridors moved toward hinge-integrated closers over the last decade (waterson-product-facts.md:L187-L194)"

Problem A: waterson-product-facts is not an industry trend source

Problem B: no external source cited

Problem C: "moved over last decade" is unsupported generalization with no date range or data
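The trigger scan itself is mechanical. A sketch (keyword list illustrative; a hit only fails the gate when no named external source from the valid list accompanies it):

```shell
# Sketch: flag trend-style language for the Industry Trend Scan table.
trend_triggers() {
  # $1 = draft file; prints line-numbered hits for human/model review
  grep -n -E -i 'moved toward|over the last decade|increasingly|industry trend' "$1"
}
```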

Enforcement:

If draft contains ANY trend-trigger keyword without a named external citation from the valid list:

M addition (required in blog-review-{slug}-facts.md):

Add ## Industry Trend Scan section listing:

---

Waterson Primary Voice (Shared Rule)

This section is the authoritative definition of the Waterson-primary writing rule. Both SEO Writer and AEO Writer receive this section verbatim in their Direction Seed field 6 (constraints) and field 9 (anti-patterns). Human reviewers may override specific positioning with a logged rationale (see §Override Mechanism).

### Rule Statement

The writing-guide §4.3 "education before selling" principle remains correct — establish expertise first, then recommend. But "education before selling" does NOT mean "neutral forever." On Waterson's own domain, educational content is the vehicle; Waterson positioning is the destination. The fleet's articles are Waterson-hosted self-promotion with an educational wrapper, not neutral third-party trade journalism.

This applies to blog articles by design. AIA courses are third-party education and remain neutral by requirement. Blog articles on watersonusa.ai are a different channel with a different purpose.

### Per-Section Requirements

<table>

<tr><th>Article element</th><th>Waterson-primary requirement</th></tr>
<tr><td>Every H2 section (SEO draft)</td><td>≥ 1 Waterson mention: perspective, model reference, or positioning sentence before moving to the next section</td></tr>
<tr><td>Every Q&amp;A answer (AEO draft)</td><td>≥ 1 <code>&quot;For Waterson [X]: [guidance]&quot;</code> sentence if topic is relevant; appears in follow-up sentence for purely definitional Q&amp;As</td></tr>
<tr><td>Every comparison table</td><td>Waterson listed as a recommended option with a specific differentiating reason; not just another neutral row</td></tr>
<tr><td>Every product category mention</td><td>Include the specific Waterson model number where writing-guide §3 product-application map supports it</td></tr>
<tr><td>Closing section (both drafts)</td><td>Natural CTA: specific Waterson product page URL or contact path — not a generic &quot;contact us&quot;</td></tr>

</table>

### Balance Principle

Education first — explain the general principle, code requirement, or problem

Waterson recommendation second — after the principle is established, state the Waterson solution specifically

Model numbers where applicable — "K51P" is more credible than "our self-closing hinge"

### Coverage Measurement
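The per-section requirement is grep-checkable. A minimal sketch for the SEO draft (markdown H2s assumed to start with `## `):

```shell
# Sketch: coverage passes when Waterson-mentioning lines are at least as
# numerous as H2 sections in blog-seo-draft-{slug}.md.
waterson_coverage_ok() {
  # $1 = SEO draft path; exit 0 on pass
  local mentions h2s
  mentions=$(grep -c -i 'waterson' "$1" || true)
  h2s=$(grep -c '^## ' "$1" || true)
  [ "$mentions" -ge "$h2s" ]
}
```

This is a floor, not the full measure: a mention count can pass while positioning still reads as forced, which is what the persona cold-read catches.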

### Anti-Patterns (Waterson Primary)

---

Override Mechanism

Human reviewers retain the authority to override Waterson-primary positioning in any section. The override must be exercised deliberately and logged.

### When Override Is Appropriate

### Override Process

Human reviewer opens the staged file (en or zh variant) and modifies the Waterson-primary positioning.

Human reviewer adds an override comment in the markdown source (not visible in rendered HTML):

```html
<!-- WATERSON-PRIMARY-OVERRIDE: [date YYYY-MM-DD] [reviewer initials] [reason in 1-2 sentences] -->
```

Human reviewer appends the same override record to docs/writing-guide-overrides.md:

```

## Override — {slug} — {date}

Section: [H2 heading or Q&A question]

Override type: [positioning-softened | competitor-elevated | model-omitted | cta-changed]

Reason: [1-2 sentences]

Reviewer: [initials]

AI Learning note: [any pattern this suggests for future fleet instructions]

```

Override records in docs/writing-guide-overrides.md are reviewed every 10 articles. Patterns that appear ≥ 3 times become candidates for formal writing-guide or fleet spec updates.

### Override Scope Limits

---

Wave Gate Conditions

### Gate 0 → Wave 1 begins

### Gate 1 → Wave 2 begins

### Gate 2 → Wave 3 begins

### Gate 3 → Wave 4 begins

### Gate 4 → Wave 5 begins

### Gate 5 → Wave 6 begins (NEW in v3)

### Gate 6 → ready_for_human_review (NEW in v3)

### Gate 7 → published (OUTSIDE fleet scope — human action)

Fleet has no agent at Gate 7. That is the human boundary.

---

Mainland Vocabulary Blocklist

Applies to zh-Hant output only. 27 terms; preferred replacements shown in parentheses.

信息 (資訊) · 軟件 (軟體) · 視頻 (影片) · 支持 (支援) · 質量 (品質) · 硬件 (硬體) · 芯片 (晶片) · 用戶 (使用者 or 客戶) · 運營 (營運) · 渠道 (通路) · 適配 (相容 or 適用) · 賬號 (帳號) · 代碼 (程式碼 or 代號) · 數據 (資料) · 默認 (預設) · 配置 (設定) · 調用 (呼叫) · 接口 (介面) · 模塊 (模組) · 文檔 (文件) · 兼容 (相容) · 線程 (執行緒) · 緩存 (快取) · 日誌 (紀錄) · 異步 (非同步) · 登錄 (登入) · 註冊 (註冊帳號 or 建立帳號)

---

Skill Invocation Map

<table>

<tr><th>Agent</th><th>Wave</th><th>Skill</th><th>Trigger Condition</th><th>Command Format</th></tr>
<tr><td>Blog Commander</td><td>Wave 0</td><td><code>/ai-collab --task verify</code></td><td>Every Audience Shape Decision</td><td>/ai-collab --task verify --candidate-file &quot;.content-scout-queue.md&quot; --candidate-id &quot;{slug}&quot; --question &quot;Does this candidate require universal or split-3 audience shape?&quot; --proposed-shape &quot;{universal|split-3}&quot;</td></tr>
<tr><td>Bilingual Publisher</td><td>Wave 5</td><td><code>/security-check</code></td><td>Before every staged commit</td><td><code>/security-check</code> (any non-PASS blocks commit)</td></tr>
<tr><td>Bilingual Publisher</td><td>Wave 5</td><td><code>/publish-article</code> (template reference only)</td><td>English HTML generation — copies template only, not deploy steps</td><td>Read <code>~/.claude/skills/publish-article/SKILL.md</code> §HTML Template + §CSS Variables as canonical template source</td></tr>

</table>

Intentionally NOT in the map: fleet does not call /content-scout flag-candidate. It is the consumer of entries other fleets wrote.

Intentionally NOT in the map: fleet does not call /upload. Base-layer discipline forbids push before human review.

---

Model Invocation Map

Division of labor

<table>

<tr><th>Agent</th><th>Wave</th><th>Model</th><th>Purpose</th><th>Command Format</th></tr>
<tr><td>Blog Commander</td><td>all</td><td>Claude Opus + Gemini Flash second opinion</td><td>orchestration + audience-shape decision + conflict resolution</td><td>Native for orchestration; every shape decision calls <code>/ai-collab --task verify ...</code>; additional verification uses <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;&lt;prompt&gt;&quot; &quot;&lt;chain&gt;&quot;</code></td></tr>
<tr><td>Research Deepener</td><td>Wave 1</td><td>WebSearch (primary) + <code>/ai-fallback</code></td><td>expand course fragment to 800–1500 words with per-claim first-party URLs</td><td>Research: WebSearch. Synthesis: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Verify/summarize: [X]&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code></td></tr>
<tr><td>SEO Writer</td><td>Wave 2</td><td>Claude Sonnet</td><td>Waterson-primary SEO narrative draft</td><td>native</td></tr>
<tr><td>AEO Writer</td><td>Wave 2</td><td>Claude Sonnet</td><td>Waterson-primary AEO Q&amp;A draft</td><td>native</td></tr>
<tr><td>Fact Checker</td><td>Wave 3</td><td>Gemini Flash via <code>/ai-fallback</code> + WebSearch Tier 2</td><td>numeric/regulatory claim verification — both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Verify: [number] [claim]. Return VERIFIED/CORRECTED/UNVERIFIABLE + first-party URL&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code></td></tr>
<tr><td>Source Reviewer</td><td>Wave 3</td><td>Codex → Gemini 2.5 Pro → Flash-Lite via <code>/ai-fallback</code></td><td>citation cross-verification — both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Review citations in [file]. Flag: missing source, pre-2018 without version note, single-source claims&quot; &quot;codex,gemini-2.5-pro,gemini-2.5-flash-lite&quot;</code></td></tr>
<tr><td>SEO/AEO Engineer</td><td>Wave 4</td><td>Gemini Flash via <code>/ai-fallback</code></td><td>JSON-LD schema validation; FAQPage now from AEO draft</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Validate schema.org JSON-LD for Article + FAQPage: [blocks]. Return STRUCTURALLY_VALID/INVALID + error list&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code></td></tr>
<tr><td>Audience Persona Reviewer</td><td>Wave 4</td><td>Gemini 2.5 Pro via <code>/ai-fallback</code></td><td>architect / owner / installer 7-question cold-read on both drafts</td><td><code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Role-play [architect|owner|installer] persona. Read this blog draft cold. Answer 7 decision questions with paragraph citations. Drafts: seo + aeo.&quot; &quot;gemini-2.5-pro,gemini-2.5-flash-lite,codex&quot;</code></td></tr>
<tr><td>Quality Auditor</td><td>Wave 4</td><td>Claude Opus or Sonnet</td><td>reverse-index audit of both drafts; SEO base-layer + AEO slot checks</td><td>native</td></tr>
<tr><td>Bilingual Publisher</td><td>Wave 5</td><td>Gemini Flash + Gemini 2.5 Pro via <code>/ai-fallback</code></td><td>zh natural-voice QA + Taiwan lexical pass</td><td>Flash: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Read this zh-Hant article. Natural voice for a Taiwanese door-hardware professional? Score 1-5 + list stiff phrasing&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code> · Pro: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Act as a Taiwan copy editor. Flag mainland lexical drift, machine-translation smell, non-Taiwan phrasing. Return PASS/FAIL + fixes&quot; &quot;gemini-2.5-pro,gemini-2.5-flash-lite,codex&quot;</code></td></tr>
<tr><td>Publishing Strategist</td><td>Wave 6</td><td>Gemini Flash via <code>/ai-fallback</code> (extraction + matching) + WebSearch (fallback) + Claude Sonnet (report)</td><td>keyword extraction, fuzzy match, Pillar-Cluster classification, strategy report</td><td>Extraction: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Extract 3-6 primary SEO keywords and 4-8 secondary keywords... Return JSON: {\&quot;primary\&quot;: [...], \&quot;secondary\&quot;: [...]}&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code> · Fuzzy: <code>bash ~/.claude/skills/ai-fallback/scripts/call_with_fallback.sh &quot;Fuzzy keyword match. Does any .com page semantically overlap with keyword &#x27;[kw]&#x27;?...&quot; &quot;gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.5-pro,codex&quot;</code> · WebSearch: <code>site:watersonusa.com &quot;[keyword]&quot;</code></td></tr>

</table>

---

Direction Seed (Commander Dispatch Template — 9 fields, unchanged structure)

Every subagent dispatch carries all 9 fields.

Fleet ID + Role Name — e.g. BLOG-WRITER-FLEET / AEO Writer

Target Audience Persona — one of the 3 canonical audiences from ~/.claude/skills/writing-guide/SKILL.md §2, with a concrete description: years of experience, typical workflow, what they type into Google. For the universal shape, include all three audiences in one briefing.

O (quoted verbatim) — the full O paragraph from this document's §O section

This agent's G/S/M — copy from this doc's agent section, Tier 1 version

Embedded Skill + Model Invocations — copy relevant rows from Skill Invocation Map + Model Invocation Map, with full command format, plus mandatory knowledge query commands:

```bash
bash ~/.claude/skills/ogsm-framework/scripts/get_patterns_for_failure.sh <failure-type>
bash ~/.claude/skills/ogsm-framework/scripts/get_gotchas_for_context.sh <context-keyword>
bash ~/.claude/skills/ogsm-framework/scripts/get_skills_for_role.sh <role-name>
```

Hard Constraints — include Waterson Primary Voice shared rule (§Waterson Primary Voice verbatim) for SEO Writer and AEO Writer briefs; other agents: relevant hard constraints from their S block

Tone + Voice Requirements — audience-matched per writing-guide §2; peer-to-peer with target practitioner; never marketing; for blog articles: Waterson-positioned, not neutral

Deliverable Format + File Path — exact filename under docs/blog-writer-fleet/{slug}/

Anti-patterns to avoid — at least 3 items, verbatim copied from the agent's own standard list in this doc

### Direction Seed addendum for v3

### Pilot Dispatch Rules (v3 update)

Fan-out checklist (Commander runs after pilot returns):

Deliverable shape matches expected structure

Audience is explicit in YAML frontmatter

Knowledge query outputs present

/ai-fallback execution log present if required

Anti-patterns verbatim-copied from source standard list

For SEO Writer pilot: Waterson mention count ≥ `H2_count`; `grep -c "HUMAN LAYER:"` equals `grep -c "TODO: human reviewer fills in"`

For AEO Writer pilot: `grep -c "For Waterson"` ≥ floor(qa_count × 0.6); `grep -c "HUMAN LAYER:"` = 0

For Audience Persona Reviewer pilot: all 3 personas answered all 7 decision questions; Q7 answered with paragraph evidence

Pilot fail → Commander rewrites the failing briefing field, re-dispatches pilot only. No fan-out until pilot passes.
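The fan-out grep checks above can be collapsed into a single shell gate. This is a minimal sketch, not fleet tooling: the helper name `pilot_check` is hypothetical, and it assumes each Q&A pair in the AEO draft opens with an H3 heading.

```shell
#!/bin/sh
# pilot_check SEO_DRAFT AEO_DRAFT
# Prints FAIL lines for each violated pilot rule; prints PASS when fan-out may proceed.
pilot_check() {
  seo="$1"; aeo="$2"; status=0

  # SEO pilot: every HUMAN LAYER slot must carry its paired TODO marker.
  slots=$(grep -c "HUMAN LAYER:" "$seo")
  todos=$(grep -c "TODO: human reviewer fills in" "$seo")
  [ "$slots" -eq "$todos" ] || { echo "FAIL: HUMAN LAYER ($slots) != TODO ($todos)"; status=1; }

  # SEO pilot: Waterson mention count >= H2 count.
  wm=$(grep -c "Waterson" "$seo")
  h2=$(grep -c "^## " "$seo")
  [ "$wm" -ge "$h2" ] || { echo "FAIL: Waterson mentions ($wm) < H2 count ($h2)"; status=1; }

  # AEO pilot: "For Waterson" guidance in >= 60% of Q&A pairs; zero HUMAN LAYER slots.
  qa=$(grep -c "^### " "$aeo")   # assumption: one H3 heading per Q&A pair
  fw=$(grep -c "For Waterson" "$aeo")
  [ "$fw" -ge $((qa * 6 / 10)) ] || { echo "FAIL: 'For Waterson' ($fw) < 60% of $qa Q&A pairs"; status=1; }
  [ "$(grep -c "HUMAN LAYER:" "$aeo")" -eq 0 ] || { echo "FAIL: AEO draft contains HUMAN LAYER slots"; status=1; }

  [ "$status" -eq 0 ] && echo "PASS: fan-out may proceed"
  return "$status"
}
```

Usage: `pilot_check docs/blog-writer-fleet/{slug}/blog-seo-draft-{slug}.md docs/blog-writer-fleet/{slug}/blog-aeo-draft-{slug}.md` before dispatching the remaining briefs.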

---

Intermediate Artifact Naming Conventions

All blog production artifacts live under docs/blog-writer-fleet/{slug}/. Publishing Strategist artifacts live in a separate flat directory.

<table>

<tr><th>File</th><th>Produced By</th><th>Wave</th></tr><tr><td><code>blog-research-{slug}.md</code></td><td>Research Deepener</td><td>1</td></tr><tr><td><code>blog-seo-draft-{slug}.md</code></td><td>SEO Writer</td><td>2</td></tr><tr><td><code>blog-aeo-draft-{slug}.md</code></td><td>AEO Writer</td><td>2</td></tr><tr><td><code>blog-review-{slug}-facts.md</code></td><td>Fact Checker</td><td>3</td></tr><tr><td><code>blog-review-{slug}-sources.md</code></td><td>Source Reviewer</td><td>3</td></tr><tr><td><code>blog-seo-{slug}.md</code></td><td>SEO/AEO Engineer</td><td>4</td></tr><tr><td><code>blog-review-{slug}-persona.md</code></td><td>Audience Persona Reviewer</td><td>4</td></tr><tr><td><code>blog-audit-{slug}-wave4.md</code></td><td>Quality Auditor</td><td>4</td></tr><tr><td><code>blog-publish-{slug}.md</code></td><td>Bilingual Publisher</td><td>5</td></tr><tr><td><code>docs/publishing-strategy/{slug}.md</code></td><td>Publishing Strategist</td><td>6</td></tr><tr><td><code>blog-gate-review-{slug}-waveN.md</code></td><td>Blog Commander</td><td>each gate</td></tr><tr><td><code>dispatch-log-blog-{slug}.md</code></td><td>Blog Commander</td><td>continuous</td></tr>

</table>
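The table above can double as a mechanical gate check. The helpers below are an illustrative sketch of that mapping, not fleet tooling; the names `expected_artifacts` and `gate_artifacts_present` are hypothetical, and the paths are taken from the table.

```shell
#!/bin/sh
# expected_artifacts SLUG WAVE — print the artifact paths the given wave must produce.
expected_artifacts() {
  slug="$1"; wave="$2"; base="docs/blog-writer-fleet/$slug"
  case "$wave" in
    1) echo "$base/blog-research-$slug.md" ;;
    2) printf '%s\n' "$base/blog-seo-draft-$slug.md" "$base/blog-aeo-draft-$slug.md" ;;
    3) printf '%s\n' "$base/blog-review-$slug-facts.md" "$base/blog-review-$slug-sources.md" ;;
    4) printf '%s\n' "$base/blog-seo-$slug.md" "$base/blog-review-$slug-persona.md" "$base/blog-audit-$slug-wave4.md" ;;
    5) echo "$base/blog-publish-$slug.md" ;;
    6) echo "docs/publishing-strategy/$slug.md" ;;
  esac
}

# gate_artifacts_present SLUG WAVE — fail (non-zero) if any expected artifact is missing.
gate_artifacts_present() {
  expected_artifacts "$1" "$2" | while IFS= read -r f; do
    [ -f "$f" ] || { echo "GATE FAIL: missing $f"; exit 1; }
  done
}
```

The Commander could run `gate_artifacts_present {slug} {wave}` before writing `blog-gate-review-{slug}-waveN.md`.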

Queue state transitions: pending → researching → drafting → reviewing → ready_for_human_review → (human) → published
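The transition chain can be encoded as a small guard. A sketch only, assuming transitions are strictly forward and the (human) step is implicit inside `ready_for_human_review`; the helper name `next_state_ok` is hypothetical.

```shell
#!/bin/sh
# next_state_ok FROM TO — succeed only for a legal forward queue transition.
next_state_ok() {
  case "$1->$2" in
    "pending->researching"|"researching->drafting"|"drafting->reviewing"|"reviewing->ready_for_human_review"|"ready_for_human_review->published")
      return 0 ;;
    *)
      return 1 ;;  # backward or skipping transitions are rejected
  esac
}
```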

---

Pre-Production Checklist (before first v3 production dispatch)

### Mandatory Layer 2.5 Dry-Run Protocol

Layer 2.5 dry-run is MANDATORY before the first production dispatch. v3 adds three new agents (SEO Writer, AEO Writer, Publishing Strategist); these three require special dry-run focus. Agents unchanged from v2 (Research Deepener, Fact Checker, Source Reviewer, SEO/AEO Engineer, Audience Persona Reviewer, Quality Auditor, Bilingual Publisher, Blog Commander) may reuse v2 dry-run results if those results are < 30 days old.

Dry-run scope in v3: all 11 agents

Blog Commander (verify 2-brief Wave 2 fan-out)

Research Deepener

SEO Writer (new — priority dry-run)

AEO Writer (new — priority dry-run)

Fact Checker (verify dual-draft coverage)

Source Reviewer (verify dual-draft coverage)

SEO/AEO Engineer (verify FAQPage from AEO draft)

Audience Persona Reviewer (verify 7-question prompt)

Quality Auditor (verify AEO slot check)

Bilingual Publisher (verify web variant from AEO draft)

Publishing Strategist (new — priority dry-run)

Dry-run protocol (unchanged from v2 except new agents added):

Additional dry-run checks for new agents:

### Checklist

---

Known Issues / Anti-patterns

### Issue #1 — Waterson passive voice in published articles (from Phase 1 diagnosis)

Status in v3: RESOLVED at spec level

Root cause: Writing guide lacked brand hierarchy rules. Fleet followed writing guide correctly — the guide was silent on brand ranking.

Implemented change: Waterson Primary Voice encoded as shared rule (§Waterson Primary Voice). Per-section requirements are explicit and grep-checkable. Audience Persona Reviewer Q7 detects this failure directly.

What to monitor now: Do SEO drafts from the new SEO Writer actually contain Waterson mentions in every H2, or does the writer default to neutral positioning despite the rule? First 3 articles require manual Commander review of Waterson mention placement, not just grep count.

### Issue #2 — HUMAN LAYER slots missing TODO markers (Phase 1 diagnosis)

Status in v3: RESOLVED at spec level

Root cause: Article Writer produced HUMAN LAYER comments without the paired TODO marker in some articles.

Implemented change: SEO Writer S block explicitly requires the TODO marker to be auto-generated as a pair on the next line. M verification requires `grep -c "HUMAN LAYER:"` equals `grep -c "TODO: human reviewer fills in"`. Quality Auditor S-evidence gate checks this.

What to monitor now: If the grep equality check fails in QA, the SEO Writer's prompt needs tightening on the paired-marker discipline.

### Issue #3 — HTML spec generator truncation (Phase 1 diagnosis)

Status in v3: DEFERRED — fix in parallel, does not block spec

Root cause: ogsm_to_html.py counted only 8 downstream agents instead of all 9; 11 of the 14 markdown sections were missing from the rendered HTML.

Planned fix: Fix ogsm_to_html.py in parallel with v3 spec production. After fix, regenerate HTML from this v3 markdown and verify agent count = 11 and all sections present.
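The post-fix verification could be sketched as below. The helper name `verify_html` and the row pattern (`<td>Wave N</td>` as the wave cell of each agent row) are assumptions about the generator's output, not its actual markup; adjust to the real HTML.

```shell
#!/bin/sh
# verify_html FILE — check the regenerated HTML has 11 agent rows and key sections.
verify_html() {
  html="$1"; rc=0
  # Assumption: each agent row contains exactly one cell of the form <td>Wave N</td>.
  agents=$(grep -o '<td>Wave [0-9]*</td>' "$html" | wc -l | tr -d ' ')
  [ "$agents" -eq 11 ] || { echo "FAIL: expected 11 agent rows, found $agents"; rc=1; }
  for section in "Direction Seed" "Alignment Matrix" "Known Issues"; do
    grep -q "$section" "$html" || { echo "FAIL: missing section: $section"; rc=1; }
  done
  [ "$rc" -eq 0 ] && echo "HTML OK: ${agents} agent rows, all sections present"
  return "$rc"
}
```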

What to monitor: Pre-production checklist item confirms fix status before first production dispatch.

### Issue #4 — Article Writer optimizing for conflicting pressures (Phase 1 diagnosis + Phase 2 proposal)

Status in v3: RESOLVED

Root cause: Single Article Writer was simultaneously optimizing for SEO narrative depth and AEO atomic extractability — structurally opposing requirements.

Implemented change: Split into SEO Writer (narrative) and AEO Writer (Q&A atomic). Each optimizes for one channel exclusively.

What to monitor: Do the two drafts diverge enough in structure to be genuinely channel-specific? If they look nearly identical after the first 3 articles, the AEO Writer prompt needs more aggressive enforcement of Q&A-first structure.
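One way to monitor that divergence mechanically is a crude heading-overlap probe. The helper name `drafts_diverge` and its 50% overlap threshold are illustrative assumptions, not part of the spec; tune both after the first 3 articles.

```shell
#!/bin/sh
# drafts_diverge SEO_DRAFT AEO_DRAFT — succeed when at most half of the SEO
# draft's headings reappear verbatim in the AEO draft (i.e. the drafts diverge).
drafts_diverge() {
  seo="$1"; aeo="$2"; shared=0
  total=$(grep -c '^#' "$seo") || return 1   # no headings: cannot judge
  while IFS= read -r h; do
    [ -n "$h" ] && grep -Fxq "$h" "$aeo" && shared=$((shared + 1))
  done <<EOF
$(grep '^#' "$seo" | sort -u)
EOF
  [ $((shared * 2)) -le "$total" ]
}
```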

### Issue #5 — Wave 4 concurrency bottleneck on split-3 batches (v2 Known Issue #7)

Status in v3: STILL OPEN

Concern: On split-3 topics, a single candidate generates multiple SEO drafts, multiple AEO drafts, one SEO package, one persona report, and one audit report. Gate 4 synthesis is a Commander bottleneck. v3 doubles the Wave 2 artifact count, which increases Wave 4 input volume further.

Possible fixes (not yet implemented): (a) per-audience mini gate reviews before consolidated Gate 4; (b) more aggressive Wave 4 cross-artifact comparison templates; (c) batch split-3 candidates separately.

What to watch: If Gate 4 reviews become materially slower on split-3 topics, prioritize fix (a).

### Issue #6 — AEO draft word count on split-3 per-audience articles

Status in v3: RESOLVED (open decision settled by user)

Decision made: each per-audience variant of the SEO draft keeps the 1200–1500 word target. Each per-audience AEO variant keeps the standard 800–1000 word target; topics are narrower per audience, so this is naturally achievable.

What to monitor: AEO per-audience variants may be too short (< 800 words) if the topic is genuinely narrow per-audience. If this happens consistently, flag to Commander to consider 700-word floor for split-3 AEO variants.
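The monitoring rule reduces to a word-count flag. `aeo_length_flag` is a hypothetical helper applying the 800–1000 target and the potential 700-word floor discussed above.

```shell
#!/bin/sh
# aeo_length_flag FILE — flag a split-3 AEO variant that misses the word target.
aeo_length_flag() {
  words=$(wc -w < "$1" | tr -d ' ')
  if [ "$words" -lt 800 ]; then
    echo "FLAG: ${words} words (< 800); track toward the 700-word floor decision"
  elif [ "$words" -gt 1000 ]; then
    echo "FLAG: ${words} words (> 1000 target)"
  else
    echo "OK: ${words} words"
  fi
}
```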

### Issue #7 — No canonical pattern for Waterson-primary override learning feedback

Status in v3: RESOLVED

Implemented change: docs/writing-guide-overrides.md override log created. Every human reviewer override requires a log entry with override type, reason, reviewer initials, and AI learning note. Patterns appearing ≥ 3 times become candidates for formal spec or writing-guide updates.

What to monitor: Is the override log being maintained? Are patterns emerging that suggest the Waterson-primary rule is too aggressive for certain content types (e.g., purely regulatory articles where Waterson has no relevant product)?

### Issue #8 — Publishing Strategist index staleness

Status in v3: NEW (design risk)

Concern: docs/watersonusa-com-index.json is rebuilt weekly by a Cron job. If the Cron job fails silently, the Publishing Strategist may operate on a stale index without realizing it.

Mitigation built in: Publishing Strategist S block requires a staleness check: if generated_at is > 8 days old, log a staleness warning and use WebSearch fallback for high-priority keywords. Cron job spec includes docs/watersonusa-com-index-cron.log failure logging.
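The staleness guard might be sketched like this. It assumes the index JSON carries a top-level `generated_at` value beginning with a YYYY-MM-DD date, and it uses GNU `date -d`; both are assumptions, and `check_index_staleness` is a hypothetical helper name.

```shell
#!/bin/sh
# check_index_staleness FILE — warn (non-zero) if generated_at is > 8 days old.
check_index_staleness() {
  index="$1"
  # Extract the YYYY-MM-DD prefix of the generated_at value (assumed ISO format).
  gen=$(grep -o '"generated_at"[[:space:]]*:[[:space:]]*"[0-9-]\{10\}' "$index" | grep -o '[0-9-]\{10\}$' | head -1)
  [ -n "$gen" ] || { echo "STALENESS WARNING: no generated_at field in $index"; return 1; }
  age_days=$(( ($(date +%s) - $(date -d "$gen" +%s)) / 86400 ))
  if [ "$age_days" -gt 8 ]; then
    echo "STALENESS WARNING: index is ${age_days} days old; use WebSearch fallback for high-priority keywords"
    return 1
  fi
  echo "index fresh: ${age_days} days old"
}
```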

What to monitor: Check generated_at in the first 3 Publishing Strategist artifacts. If staleness warnings appear consistently, investigate Cron job reliability.

### Anti-patterns consolidated (all versions)

---

Alignment Matrix

<table>

<tr><th>Agent</th><th>Primary G Output</th><th>O SEO (rank)</th><th>O AEO (cited)</th><th>O Waterson-primary</th><th>O Base-Layer (augmentable)</th><th>O Network (no cannibalization)</th><th>O Risk if G Fails</th></tr><tr><td>Blog Commander</td><td>audience shape + 2-writer fan-out + Waterson rule in briefs</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td>direct</td><td>indirect</td><td>whole fleet incoherent; Waterson rule absent from briefs</td></tr><tr><td>Research Deepener</td><td>800–1500 words with per-claim URLs + Waterson model note</td><td>necessary</td><td>necessary</td><td>necessary</td><td>—</td><td>—</td><td>both writers hallucinate; AEO sentences lack specific model</td></tr><tr><td>SEO Writer</td><td>Waterson-primary narrative draft, HUMAN LAYER slots</td><td>direct</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>neutral article ships; no human append room</td></tr><tr><td>AEO Writer</td><td>Waterson-primary Q&amp;A draft, zero slots, extractable</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>—</td><td>answer engines extract neutral claims; Waterson not cited</td></tr><tr><td>Fact Checker</td><td>both drafts&#x27; numeric claims verified</td><td>direct</td><td><strong>direct</strong></td><td>—</td><td>—</td><td>—</td><td>AEO inline citations rot; answer engines distrust site</td></tr><tr><td>Source Reviewer</td><td>both drafts&#x27; citations reachable + opinion boundary</td><td>direct</td><td><strong>direct</strong></td><td>—</td><td>—</td><td>—</td><td>trust collapse; SEO + AEO both lose</td></tr><tr><td>SEO/AEO Engineer</td><td>JSON-LD + internal links from AEO draft</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>indirect</td><td>—</td><td>—</td><td>article invisible to both channels</td></tr><tr><td>Audience Persona Reviewer</td><td>7-question cold-read on both drafts inc. Q7 Waterson intent</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td>direct</td><td>—</td><td>Waterson positioning passes internal checks but fails real readers</td></tr><tr><td>Quality Auditor</td><td>reverse-index both drafts; Waterson presence in S-evidence</td><td>indirect</td><td>indirect</td><td><strong>direct</strong></td><td><strong>direct</strong></td><td>—</td><td>false confidence; Waterson gaps and slot failures ship unnoticed</td></tr><tr><td>Bilingual Publisher</td><td>3 variants staged, language-checked, AEO draft → web</td><td>direct (zh + hreflang)</td><td>direct (web)</td><td>indirect</td><td><strong>direct</strong></td><td>—</td><td>push too early; language drift; slots pre-filled</td></tr><tr><td>Publishing Strategist</td><td>cross-site SEO conflict report + internal link map</td><td>indirect</td><td>indirect</td><td>indirect</td><td>—</td><td><strong>direct</strong></td><td>content island; domain authority fragmented; missed backlinks</td></tr>

</table>

---

Relationship to HSW-002 v5.1 and Cross-References (unchanged from v2)

This document cross-references HSW-002 v5.1 at the following anchors:

If any of these references change, this document's command formats and gate logic must be re-verified.

---

End State Summary

v3.0 is no longer a 9-agent "write and hope it's Waterson-positioned" fleet. It is an 11-agent system with:

Those are the structural changes needed to move the blog fleet from "technically correct but brand-passive" to "Waterson-primary, channel-specific, network-coherent" — while keeping the base-layer principle intact for the SEO draft.