HPC-4 — Dependency Boundary Protection
Control Type: Autonomy safeguard
Applies to: AI systems, advisors, institutions, leaders, and support roles
Intent: Prevent unhealthy reliance, exclusivity, and erosion of human agency
Purpose
HPC-4 ensures that systems and decision processes do not create or reinforce dependency, even unintentionally.
Dependency is rarely introduced through overt control. It forms gradually—through repeated reliance, emotional availability, convenience, or perceived competence—until autonomy erodes and alternatives narrow.
HPC-4 exists to answer a critical governance question:
Is this system helping someone act—or quietly replacing their capacity to decide?
When This Control Applies
HPC-4 must be applied whenever:
- an AI system interacts repeatedly with the same user or group,
- guidance or support is offered over time,
- emotional, cognitive, or authority pressure is present,
- users begin deferring decisions rather than participating in them.
This control applies before dependency becomes visible, not after harm occurs.
Dependency Boundary Requirements
A system or role is considered dependency-safe only if all of the following conditions are met:
1. Non-Exclusivity
The system must not present itself as:
- the best,
- the only,
- or the safest source of guidance.
Language or behavior that discourages external input is a failure mode.
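One way an interaction language audit (see Evidence Required below) could operationalize this requirement is a simple phrase scan over outgoing messages. This is a minimal sketch, not part of HPC-4 itself; the phrase list and function name are illustrative assumptions, and a real audit would use more robust classification.

```python
# Hypothetical exclusivity-language flagger; the phrase list is illustrative only.
EXCLUSIVITY_PHRASES = [
    "only i can",
    "no one else can help",
    "you don't need anyone else",
    "i'm the best source",
    "don't ask anyone else",
]

def flag_exclusivity(message: str) -> list[str]:
    """Return the exclusivity phrases found in an outgoing message."""
    lowered = message.lower()
    return [p for p in EXCLUSIVITY_PHRASES if p in lowered]

# A message that discourages external input is flagged; neutral guidance is not.
hits = flag_exclusivity("You don't need anyone else; I'm the best source for this.")
```

In practice such a scan would run across interaction logs, with flagged messages reviewed by a human rather than blocked automatically.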
2. Agency Reinforcement
Interactions must consistently:
- encourage user judgment,
- surface uncertainty,
- frame outputs as inputs, not answers.
Repeated certainty accelerates dependency.
3. Temporal Boundaries
The system must avoid patterns that imply:
- continuous availability as a primary support,
- indefinite engagement without off-ramps,
- escalating reliance over time.
Persistence without boundaries is a risk signal.
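Escalating reliance over time is the most measurable of these patterns. A longitudinal review could sketch it as a trend check over weekly session counts, as below; the window split and ratio threshold are illustrative assumptions, not values mandated by this control.

```python
# Hypothetical escalation check over weekly session counts (threshold illustrative).
def reliance_escalating(weekly_sessions: list[int], ratio: float = 1.5) -> bool:
    """Flag escalating reliance when recent usage clearly outpaces earlier usage.

    Compares the mean of the most recent half of the window against the mean
    of the earlier half; a ratio at or above `ratio` is treated as a risk signal.
    """
    if len(weekly_sessions) < 4:
        return False  # not enough history to judge a trend
    mid = len(weekly_sessions) // 2
    earlier = sum(weekly_sessions[:mid]) / mid
    recent = sum(weekly_sessions[mid:]) / (len(weekly_sessions) - mid)
    if earlier == 0:
        return recent > 0
    return recent / earlier >= ratio
```

Stable usage should not trip the check; a steady climb should, prompting review of off-ramps and interaction patterns rather than automatic cutoff.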
4. Role Stability
The system’s role must not drift:
- from advisor to decider,
- from support to authority,
- from tool to surrogate.
Role creep under pressure is a known dependency vector.
5. Off-Ramp Visibility
Users must be clearly and repeatedly reminded:
- when to seek human help,
- where alternative resources exist,
- that disengagement is acceptable and supported.
Absence of off-ramps is a governance failure.
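"Clearly and repeatedly" implies a cadence rule for surfacing off-ramps. A minimal sketch follows, assuming a per-user session counter; the interval, function name, and notice wording are all illustrative.

```python
# Hypothetical cadence rule for surfacing off-ramp reminders (interval illustrative).
def should_show_offramp(session_count: int, every_n_sessions: int = 5) -> bool:
    """Remind the user of alternatives on the first session and at a fixed cadence."""
    return session_count == 1 or session_count % every_n_sessions == 0

# Illustrative notice text; real wording would be reviewed for the deployment context.
OFFRAMP_NOTICE = (
    "You can stop here at any time. For important decisions, consider "
    "consulting a person you trust or another independent resource."
)
```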
Pass / Fail Criteria
PASS if:
- the system reinforces autonomy across repeated interactions,
- alternatives and off-ramps are visible,
- role boundaries remain stable over time,
- dependency signals are actively countered.
FAIL if:
- users are nudged toward exclusivity,
- the system becomes a primary decision locus,
- disengagement feels risky or discouraged,
- role creep is observable.
A FAIL requires redesign of interaction patterns before continued use.
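The criteria above reduce to an all-or-nothing checklist over the five requirements. The sketch below shows that shape, assuming reviewers record one boolean finding per requirement; the key names and function are illustrative, not a prescribed schema.

```python
# Hypothetical checklist evaluator mapping the five requirements to PASS/FAIL.
REQUIREMENTS = (
    "non_exclusivity",
    "agency_reinforcement",
    "temporal_boundaries",
    "role_stability",
    "offramp_visibility",
)

def evaluate_hpc4(findings: dict[str, bool]) -> str:
    """Return 'PASS' only if every requirement is affirmatively met.

    A missing finding counts as a failure: absence of evidence is not a pass.
    """
    if all(findings.get(req, False) for req in REQUIREMENTS):
        return "PASS"
    return "FAIL"
```

Treating unrecorded findings as failures mirrors the control's stance that it applies before dependency becomes visible: reviewers must affirmatively demonstrate each boundary, not merely note the absence of complaints.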
Evidence Required
For AI systems:
- interaction language audits,
- longitudinal usage pattern review,
- escalation and off-ramp triggers.
For human systems:
- role descriptions,
- duration and scope of engagement,
- explicit handoff or closure practices.
Evidence must be understandable to non-technical reviewers.
What This Control Does Not Claim
- It does not limit helpfulness.
- It does not prevent trust.
- It does not forbid ongoing support.
It ensures that support strengthens autonomy rather than replacing it.
Relationship to Other Controls
- HPC-1 detects pressure and restores agency.
- HPC-2 ensures escalation reduces risk.
- HPC-3 constrains authority.
- HPC-4 protects autonomy over time.
Together, HPC-1 through HPC-4 form a coherent pressure-to-agency protection stack.
Why Dependency Protection Matters
Systems that people cannot leave safely are not supportive—they are fragile points of failure.
HPC-4 ensures that help remains help, not a quiet trap.