
The Dual Plane Model

When the system that carries out work also defines the rules for that work, accountability erodes. This is not a theoretical concern. It is the observed failure mode of every system where execution and governance are conflated, from corporate scandals (Enron’s risk management reported to the division generating the risk) to algorithmic harm (recommendation systems optimizing for engagement without external constraint on what “engagement” may cost).

The Dual Plane Model is a design response to this observation. It separates organizations into two architecturally distinct planes: a Trust Plane where humans define boundaries, values, and accountability, and an Execution Plane where AI agents operate within those boundaries.

This separation is a normative choice, not an ontological discovery. It is a design decision: make it architecturally impossible for execution to modify its own constraints.

The Trust Plane belongs to humans. It is where the following reside:

  • Accountability: Who is responsible when something goes wrong?
  • Values: What does this organization consider important?
  • Boundaries: What should never be automated, and what can be?
  • Social contracts: The agreements between people, teams, and departments about how work is done.

No AI agent operates in the Trust Plane. No algorithm sets boundaries. Humans do this work because it requires judgment: the kind that comes from lived experience, organizational memory, and moral reasoning.

| Trust Plane element | Description | Why human |
| --- | --- | --- |
| Constraint definitions | Financial limits, data access rules | Requires organizational judgment |
| Escalation policies | When AI must defer to humans | Requires understanding of risk tolerance |
| Knowledge access controls | Who sees what information | Reflects information sensitivity norms |
| Objective setting | What the organization is trying to achieve | Requires strategic vision |
| Trust chain definitions | Authority delegation hierarchies | Mirrors human organizational authority |
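These elements are configuration that humans author and AI never edits. A minimal sketch of what a Trust Plane configuration might look like in Python; the class and field names are illustrative assumptions, not part of any published schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the Execution Plane cannot mutate these values
class TrustPlaneConfig:
    # Constraint definition: financial limit and data access rule
    max_autonomous_spend_usd: float
    readable_data_classes: tuple[str, ...]
    # Escalation policy: when the AI must defer to humans
    escalation_after_hours: int
    # Trust chain: which human role delegated this authority
    delegated_by: str

config = TrustPlaneConfig(
    max_autonomous_spend_usd=5_000.0,
    readable_data_classes=("public",),
    escalation_after_hours=48,
    delegated_by="finance-director",
)
```

Making the object immutable (`frozen=True`) mirrors the architectural rule: changing any of these values is a Trust Plane act, performed by a human, never by the running agent.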

The Execution Plane is where work happens. AI agents operate here, within the boundaries the Trust Plane establishes. This is a shared domain; humans can observe, intervene, and redirect at any time.

What happens in the Execution Plane:

  • Task decomposition and scheduling
  • Information gathering and synthesis
  • Cross-functional coordination (within defined rules)
  • Progress tracking and reporting
  • Uncertainty detection and escalation preparation
  • Knowledge retrieval and application

The Execution Plane is effective precisely because it is bounded. An AI agent that knows its constraints can operate with confidence within them. It does not need to pause and ask “am I allowed to do this?” for every action. The constraint envelope has already answered that question.

Constraint envelopes define the operating boundaries for AI agents across five dimensions. Each dimension encodes human judgment about a specific category of organizational risk.

| Dimension | What it controls | Example |
| --- | --- | --- |
| Financial | Transaction limits, spending authority, budget scope | Maximum $5,000 per autonomous transaction |
| Operational | Action types, tool access, decision authority | May create orders but may not sign contracts |
| Temporal | Operating hours, deadline authority, scheduling scope | Active during business hours; escalate after 48h |
| Data Access | Information classification, privacy boundaries | Read access to public data; no access to personnel records |
| Communication | Contact authority, channel restrictions, audience scope | May email within team; external communication requires approval |

Actions within the envelope are pre-authorized. Actions outside require human approval. The envelope acts as a pre-signed authorization: anything within these bounds has already been approved.
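An envelope check of this kind can be a pure function over the five dimensions: an action is pre-authorized only if every dimension accepts it. A sketch, with illustrative thresholds drawn from the examples above (the `Action` type and field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Action:
    amount_usd: float    # financial dimension
    action_type: str     # operational dimension
    hour_of_day: int     # temporal dimension
    data_class: str      # data-access dimension
    audience: str        # communication dimension

def within_envelope(a: Action) -> bool:
    """True only if the action satisfies every dimension of the envelope."""
    return (
        a.amount_usd <= 5_000                                  # financial limit
        and a.action_type in {"create_order", "send_report"}   # allowed action types
        and 9 <= a.hour_of_day < 17                            # business hours only
        and a.data_class == "public"                           # data access rule
        and a.audience == "internal_team"                      # communication scope
    )
```

Because the check is conjunctive, loosening any one dimension never silently loosens the others; each encodes a separate human judgment.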

Between the two planes sits the Trust Verification Bridge, the mechanism that keeps execution anchored to trust. Every significant action in the Execution Plane passes through verification.

Human Intent (Trust Plane)
|
v
Constraint Envelope --- defines boundaries
|
v
Trust Verification Bridge --- checks every action
|
v
Execution (Execution Plane) --- proceeds if verified
|
v
Observation Feed --- flows back to humans

Verification is not a bottleneck. It is a pre-computed authorization check. Because constraints are defined ahead of time, the bridge can verify most actions instantly. Four verification categories handle the gradient between routine and exceptional:

  1. Auto-approved: Action falls within constraint envelope. Logged but not delayed.
  2. Flagged: Action is near the boundary. Executed but highlighted for human review.
  3. Held: Action exceeds a soft limit. Queued for human approval.
  4. Blocked: Action violates a hard constraint. Rejected with explanation.

This gradient avoids the binary trap of “fully autonomous” versus “human-approves-everything.” Most organizations need the middle ground.
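The four-way gradient can be sketched along a single dimension. Using the financial dimension as the example, with illustrative thresholds (the soft limit, flag margin, and hard limit below are assumptions for the sketch, not prescribed values):

```python
from enum import Enum

class Verdict(Enum):
    AUTO_APPROVED = "auto_approved"  # within envelope: logged, not delayed
    FLAGGED = "flagged"              # near the boundary: executed, highlighted
    HELD = "held"                    # exceeds soft limit: queued for approval
    BLOCKED = "blocked"              # violates hard constraint: rejected

SOFT_LIMIT = 5_000    # the envelope boundary
FLAG_MARGIN = 0.9     # amounts above 90% of the soft limit count as "near"
HARD_LIMIT = 20_000   # absolute ceiling; never executed autonomously

def verify(amount_usd: float) -> Verdict:
    if amount_usd > HARD_LIMIT:
        return Verdict.BLOCKED
    if amount_usd > SOFT_LIMIT:
        return Verdict.HELD
    if amount_usd > SOFT_LIMIT * FLAG_MARGIN:
        return Verdict.FLAGGED
    return Verdict.AUTO_APPROVED
```

The ordering matters: the hardest constraint is checked first, so a single action can never be both held and blocked.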

When humans are occupied with execution (processing invoices, coordinating schedules, compiling reports), they cannot see patterns. They are inside the work. The Dual Plane Model frees humans from execution so they can observe. This is not a loss of control. It is an elevation of control: from managing individual tasks to governing patterns, boundaries, and strategy.

What humans observe when freed from execution:

  • Cross-functional patterns that no single department can see
  • Drift in AI behavior that indicates constraint envelopes need adjustment
  • Emerging risks that automated monitoring would not flag
  • Opportunities for improvement that only a human perspective reveals

AI agents in a well-defined constraint envelope can operate with decisiveness. They do not need to hedge every action or ask permission at every step. The boundaries have already been set by humans who understand the organizational context. This is analogous to how a trusted employee operates: a new hire asks permission for everything; a trusted manager operates within understood boundaries and only escalates exceptions.

For the organization: speed without sacrificing accountability


Every action in the Execution Plane is logged. Every constraint check is recorded. Every escalation is timestamped. This creates an audit trail that strengthens accountability compared to traditional manual processes where decisions happen in hallway conversations and email threads.
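An audit trail like this is typically append-only, with each record chained to the previous one so that tampering with history is detectable. A minimal sketch, assuming a hash-chained in-memory log (the record fields are illustrative):

```python
import json
import time
from hashlib import sha256

def append_audit(log: list, event: dict) -> dict:
    """Append an event with a timestamp and a hash linking it to the
    previous record; rewriting any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    # Hash the record contents (deterministically serialized) before storing.
    record["hash"] = sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record
```

Each constraint check, escalation, or executed action becomes one `event`, so the questions "what happened, when, and under which authorization" have durable answers.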

  1. Trust flows down, observation flows up. Trust and authority flow from the Trust Plane into execution. Observations, metrics, and escalations flow from the Execution Plane to human observers.

  2. Constraints are never self-modified. An AI agent cannot expand its own constraint envelope. If it determines it needs broader authority, it must escalate, and a human must approve the change in the Trust Plane.

  3. The Trust Plane is always accessible. Humans can inspect, modify, or revoke any Trust Plane configuration at any time. There is no lock-out.

  4. Execution is observable by default. Everything in the Execution Plane is visible to authorized humans. Transparency is architectural, not optional.

  5. Separation is not isolation. The planes are separate but connected. The Trust Verification Bridge ensures continuous alignment. Separation creates clarity, not distance.
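Principle 2 can be sketched in code: the envelope is an immutable value, and the only path to a wider one requires a named human approver. The function and field names below are illustrative assumptions:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class Envelope:
    max_spend_usd: float

def request_expansion(env: Envelope, new_limit: float,
                      approver: Optional[str]) -> Envelope:
    """An agent calls this with approver=None and is always refused;
    only a named human approver can produce a wider envelope."""
    if approver is None:
        raise PermissionError("agents cannot expand their own envelope; escalate")
    # A human approved the change: return a new envelope, leaving the
    # original value untouched (frozen dataclasses cannot be mutated).
    return replace(env, max_spend_usd=new_limit)
```

The point of the sketch is structural: expansion is a new Trust Plane artifact created by a human, never an in-place mutation performed by the agent.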

The Dual Plane Model has known limitations:

  • Stale constraints: If humans set boundaries and never revisit them, the Trust Plane becomes a historical artifact rather than a living governance layer. Dell’Acqua et al. (2023) documented a related failure in their BCG study: when 758 management consultants used AI without appropriate boundaries, performance on tasks outside AI’s frontier decreased by 23 percentage points. Constraints require active maintenance.
  • Over-constraining: Humans who do not trust AI may set constraints so tight that the system offers no operational value. This is the “human-approves-everything” trap, architecturally sound but practically useless. The graduated verification model (auto-approved through blocked) is designed to address this, but it requires humans willing to calibrate.
  • Under-constraining: Humans who over-trust AI may set boundaries too loose. Parasuraman and Riley (1997) documented this pattern as “automation complacency” in aviation contexts. The observation feed helps, but only if humans actually attend to it.
  • Boundary placement: The model assumes a clear separation between trust decisions and execution decisions. In practice, some decisions have elements of both. Where the boundary is drawn reflects organizational judgment, and that judgment can be wrong.

The Dual Plane Model is implemented through the EATP protocol, which provides the cryptographic infrastructure for verifiable trust lineage between the two planes. The EATP SDK provides working software for establishing, verifying, and auditing trust chains. The CARE Platform implements the full dual-plane architecture.