How We Work

The specifications the Terrene Foundation publishes (CARE for governance, EATP for trust verification, CO for methodology) are not theoretical documents. They govern the Foundation’s own operations. This is self-hosting: the organization is the first implementation of the architecture it proposes.

The CARE framework defines the Dual Plane Model: Trust Plane (human authority) separated from Execution Plane (AI operations). At the Terrene Foundation:

  • Trust Plane: The constitution defines the Foundation’s values, boundaries, and accountability. The founder defines constraint envelopes for AI agent operations. Governance decisions, strategic direction, and boundary-setting are human activities.
  • Execution Plane: AI agents perform knowledge base operations (research, drafting, cross-referencing, quality assurance, consistency checking) within constitutional constraints.

The agents do not set their own constraints. They do not modify governance rules. They do not make decisions about what the Foundation should do. They execute within boundaries that the constitution and the founder define.

The EATP protocol provides cryptographic trust lineage. At the Foundation, this means:

  • Every agent capability traces to a human authorization
  • Constraint envelopes specify what each agent may and may not do
  • Trust state escalates monotonically (auto-approved, flagged, held, blocked); it never downgrades silently
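The EATP specification itself is not reproduced here, but the monotonic-escalation rule above can be sketched as a small state machine. This is a hypothetical illustration, assuming the four states form an ordered severity scale; the class and method names are invented for the example:

```python
from enum import IntEnum


class TrustState(IntEnum):
    """Ordered severity scale; higher values are more restrictive."""
    AUTO_APPROVED = 0
    FLAGGED = 1
    HELD = 2
    BLOCKED = 3


class TrustRecord:
    """Tracks one agent's trust state; upward moves only, all recorded."""

    def __init__(self) -> None:
        self.state = TrustState.AUTO_APPROVED
        self.history = [TrustState.AUTO_APPROVED]

    def escalate(self, new_state: TrustState) -> None:
        """Accept escalations; reject any silent downgrade."""
        if new_state < self.state:
            raise ValueError(
                f"cannot downgrade {self.state.name} -> {new_state.name} "
                "without an explicit, audited human decision"
            )
        self.state = new_state
        self.history.append(new_state)


record = TrustRecord()
record.escalate(TrustState.FLAGGED)  # allowed: upward
record.escalate(TrustState.HELD)     # allowed: upward
# record.escalate(TrustState.FLAGGED) would raise ValueError
```

The point of the sketch is the asymmetry: escalation is a one-line state change, while downgrading is structurally impossible without an explicit exception path that leaves an audit trail.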

Cognitive Orchestration encodes institutional knowledge in five layers:

  1. Intent (Layer 1): Specialized agents with domain-specific knowledge (a standards expert, a security reviewer, a constitution expert, a writing partner)
  2. Context (Layer 2): An institutional knowledge base that provides relevant context based on what the agent is currently doing
  3. Guardrails (Layer 3): Enforcement hooks that block violations before they reach the output; deterministic code that pattern-matches against structural rules
  4. Instructions (Layer 4): Structured workflows with approval gates; certain transitions require human review
  5. Learning (Layer 5): Observation logs and pattern analysis that evolve the knowledge base over time

This five-layer architecture persists across sessions. When a new session begins, the institutional knowledge is present. The Foundation does not start from scratch each time.
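A Layer 3 guardrail, as described above, is ordinary deterministic code rather than another AI model. The sketch below is hypothetical: the rule patterns and function name are invented for illustration, not taken from the Foundation's actual rule set. It shows the shape of a pre-output hook that pattern-matches a draft against structural rules and reports violations before anything is emitted:

```python
import re

# Hypothetical structural rules; the real rule set is not published here.
FORBIDDEN_PATTERNS = [
    (re.compile(r"(?i)\bguarantee[sd]?\b"), "unverifiable guarantee language"),
    (re.compile(r"(?i)\bAPI[_ ]?KEY\s*=\s*\S+"), "credential leak"),
]


def guardrail_check(draft: str) -> list[str]:
    """Deterministic pre-output hook: returns violations; empty list = pass."""
    violations = []
    for pattern, reason in FORBIDDEN_PATTERNS:
        if pattern.search(draft):
            violations.append(reason)
    return violations


assert guardrail_check("The system reduces review time.") == []
assert guardrail_check("We guarantee compliance.") == [
    "unverifiable guarantee language"
]
```

Because the check is a pure function over the draft text, the same input always produces the same verdict, which is what makes the guardrail enforceable rather than advisory.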

The Foundation’s AI agent team operates under constitutional constraints using multiple CO domain applications: COC (codegen) for software, COR (research) for academic work, COG (governance) for institutional operations. Each application uses the same five-layer architecture with domain-specific agents. The full list of CO domain applications is on the CO specification page.

Agents are specialized by role:

Analysis and planning:

  • Deep analyst: failure analysis, complexity assessment
  • Requirements analyst: requirements breakdown
  • Framework advisor: implementation approach selection

Standards:

  • CARE expert, EATP expert, CO expert: specification knowledge
  • Constitution expert: 77-clause constitutional knowledge
  • Governance layer expert: governance architecture

Review and quality:

  • Intermediate reviewer: code review after changes
  • Security reviewer: security audit before commits
  • Gold standards validator: compliance checking

Research:

  • Literature researcher, writing partner: research support
  • Claims verifier: factual verification
  • Argument critic: argument quality assessment

Management:

  • Todo manager: task tracking
  • Git release specialist: version control and releases

Each agent operates within defined constraints. A research agent can read and synthesize but cannot publish. A security reviewer can flag issues but cannot make governance decisions. A writing partner can draft content but cannot modify constitutional documents.
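The per-agent constraints described above can be modeled as a simple allow-list that a human authors and an agent cannot modify. This is a minimal sketch under that assumption; the class, field, and capability names are illustrative, not drawn from the CARE or EATP specifications:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: the agent cannot mutate its own envelope
class ConstraintEnvelope:
    """Hypothetical envelope: the capabilities a human grants one agent."""
    agent: str
    allowed: frozenset

    def permits(self, action: str) -> bool:
        return action in self.allowed


# A research agent may read and synthesize, but "publish" is simply absent.
research_agent = ConstraintEnvelope(
    agent="literature-researcher",
    allowed=frozenset({"read", "synthesize"}),
)

assert research_agent.permits("read")
assert not research_agent.permits("publish")  # outside the envelope
```

The design choice worth noting is that denial is the default: an action not explicitly granted is refused, so forgetting to list a capability fails closed rather than open.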

This website was produced using the Constrained Organization methodology:

  1. Trust Plane decisions (human): What the website should contain, what voice to use, what claims are permitted, what the constitutional constraints require
  2. Execution Plane work (AI agents): Research source materials, draft content, cross-reference claims against artifacts, check consistency across pages, validate against constitutional provisions
  3. Verification bridge: Every claim on the website links to a verifiable artifact. The constitution was red-teamed by a deep-analyst agent, reviewed by a security-reviewer agent, and validated by a constitution-expert agent. The human reviewed and approved each stage.

The agents did not decide what the Foundation should say. They helped the founder say it accurately, consistently, and with every claim traceable to evidence.

The Foundation’s self-hosting demonstrates three things:

  1. The architecture works: The Dual Plane Model, constraint envelopes, and five-layer CO architecture can govern real operations, not just theoretical scenarios.

  2. Constraints are enforceable: Naming conventions, constitutional provisions, security rules, and content standards are enforced by infrastructure (hooks, validation, guardrails), not by hoping the AI remembers.

  3. Knowledge compounds: Each session builds on previous work. Institutional knowledge (what terminology is correct, what the constitution requires, what claims have been verified) accumulates and persists.

Self-hosting at a single organization, particularly during Phase 1 with one person, is a limited test. The Constrained Organization model needs validation at scale, across multiple organizations, and under real institutional pressure. The Foundation’s self-hosting proves feasibility, not generalizability.