This library contains methods, frameworks, and analyses that support pressure-aware AI governance across technology, human behavior, and ethics.

SpiralWatch 1.6 Methods

  • Relational Coherence — defining safe human-AI interaction under pressure
    (Tags: Human | Ethics/Governance)
  • Four Human Pressure Quadrants — cognitive, emotional, authority, dependency
    (Tags: Human | Science/Technology)
  • Pressure Stacking & Red Zones — why risk increases non-linearly
    (Tags: Human | Ethics/Governance)
  • Stop Ladder (SLOW / STOP / ESCALATE) — operational safety control
    (Tags: Science/Technology | Governance)
  • Agency Contract — preserving human decision ownership
    (Tags: Human | Ethics/Governance)
  • SpiralWatch Certification Model — scenarios → oracle → evidence → gates
    (Tags: Science/Technology | Governance)
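
The interaction between Pressure Stacking and the Stop Ladder can be sketched in a few lines. This is a hypothetical illustration, not SpiralWatch's actual scoring: the combining rule, thresholds, and the below-threshold "PROCEED" label are all assumptions chosen to show how pressures that are individually tolerable can stack into a higher rung of the ladder.

```python
# Hypothetical sketch only: quadrant names come from the list above, but
# the combining rule, thresholds, and "PROCEED" label are illustrative
# assumptions, not SpiralWatch's published controls.

QUADRANTS = ("cognitive", "emotional", "authority", "dependency")

def stacked_risk(pressures: dict) -> float:
    """Combine per-quadrant pressures (each in [0, 1]) super-additively.

    risk = 1 - prod(1 - p): each added pressure multiplies the remaining
    safety margin, so stacked pressures raise risk non-linearly rather
    than by simple addition.
    """
    margin = 1.0
    for q in QUADRANTS:
        margin *= 1.0 - pressures.get(q, 0.0)
    return 1.0 - margin

def stop_ladder(risk: float) -> str:
    """Map combined risk to a rung of the Stop Ladder (illustrative cutoffs)."""
    if risk >= 0.8:
        return "ESCALATE"
    if risk >= 0.5:
        return "STOP"
    if risk >= 0.25:
        return "SLOW"
    return "PROCEED"  # below every rung; label is an assumption
```

Under these assumed thresholds, a single moderate pressure of 0.4 lands on SLOW, but two such pressures stack to a combined risk of 1 − 0.6 × 0.6 = 0.64 and cross into STOP, which is the "red zone" effect the list above describes.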

The 5 Fundamentals

  • Our models and “how we think”
  • How we produce The Daily Brief / Horizon Signals / SpiralWatch
  • Explainers + rationale; link to SpiralWatch controls library
  • Timely and thoughtful explorations