Philosophy

Every major AI governance framework asks the same question: how should organizations constrain their AI? None ask the prior question: what must the organization itself be?

This is not a gap in emphasis. It is a structural omission. Jobin, Ienca, and Vayena (2019) analyzed 84 AI ethics guidelines and found convergence on principles (transparency, fairness, accountability) but persistent divergence on implementation. Hagendorff (2020) identified a “deep gap” between abstract principles and actual practice. The gap persists because the frameworks address the wrong entity. They constrain the AI while leaving the constraining institution unconstrained.

The Terrene Foundation is built on the thesis that you cannot have credible AI governance from an institution that does not govern itself credibly. This is not a preference. It is a structural requirement, derived from extending Fama and Jensen’s (1983) analysis of decision processes to organizations that delegate to autonomous systems.

Three interconnected ideas form the intellectual foundation.

Existing organizational forms (the traditional enterprise, the AI-assisted enterprise, the DAO) are structurally insufficient for governing autonomous systems. The Constrained Organization thesis holds that a new form is needed: an enterprise in which AI agents perform operational execution within cryptographically verifiable boundaries defined by human authority.

The Terrene Foundation is the first implementation of its own concept. It operates under a 77-clause constitution (UEN 202611556G) with 11 entrenched provisions, publishes open specifications under CC BY 4.0 (SPDX: CC-BY-4.0), and builds working software under Apache 2.0 (SPDX: Apache-2.0).

Read more about the Constrained Organization

When the system that carries out work also defines the rules for that work, accountability erodes. The Dual Plane Model prevents this by separating trust from execution architecturally. The Trust Plane (accountability, values, boundaries) belongs permanently to humans. The Execution Plane (task completion, coordination, information processing) is shared with AI operating within human-defined constraints. This separation is not a policy recommendation. It is an infrastructure design, enforced by architecture.

Five constraint dimensions (Financial, Operational, Temporal, Data Access, Communication) define the envelope within which AI agents operate autonomously.
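The envelope idea can be sketched as a data structure in which an action is permitted only when all five dimensions are satisfied at once. This is a minimal illustrative sketch, not the CO specification: the class name, field names, limits, and the `permits` check are all hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConstraintEnvelope:
    """Hypothetical sketch of the five constraint dimensions.

    All names and limits here are illustrative assumptions, not part of
    any published Terrene Foundation specification.
    """
    max_spend_usd: float            # Financial
    allowed_actions: frozenset      # Operational
    max_runtime_s: float            # Temporal
    readable_datasets: frozenset    # Data Access
    allowed_channels: frozenset     # Communication

    def permits(self, action, spend_usd, runtime_s, dataset, channel) -> bool:
        # An action is inside the envelope only if EVERY dimension holds;
        # violating any single dimension denies autonomous execution.
        return (
            spend_usd <= self.max_spend_usd
            and action in self.allowed_actions
            and runtime_s <= self.max_runtime_s
            and dataset in self.readable_datasets
            and channel in self.allowed_channels
        )

# Example envelope for a narrowly scoped agent (values are illustrative).
envelope = ConstraintEnvelope(
    max_spend_usd=500.0,
    allowed_actions=frozenset({"draft_report", "schedule_meeting"}),
    max_runtime_s=3600.0,
    readable_datasets=frozenset({"public_docs"}),
    allowed_channels=frozenset({"internal_email"}),
)

print(envelope.permits("draft_report", 120.0, 600.0, "public_docs", "internal_email"))
print(envelope.permits("wire_transfer", 120.0, 600.0, "public_docs", "internal_email"))
```

The conjunction of all five checks is the point of the design: autonomy is the interior of the envelope, and any action outside any one dimension falls back to human authority.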

Read more about the Dual Plane Model

What happens when AI handles everything that can be automated? The expected answer is that humans become unnecessary. The observed answer is the opposite: what becomes visible are the contributions that were always present but never separately measurable, namely ethical judgment, relationship capital, contextual wisdom, creative synthesis, emotional intelligence, and cultural navigation.

The Mirror Thesis is ultimately an argument about human dignity: you are more than what you produce.

Read more about the Mirror Thesis

These three concepts are connected by a formal argument extending Fama and Jensen’s (1983) separation of decision management from decision control to organizations that delegate to autonomous systems. The argument produces seven propositions leading to an impossibility result: credible AI governance requires organizational governance. The institutional steward must itself operate as a constrained organization.

The full treatment is in the Constrained Organization thesis (Hong, 2026e), currently in preparation for academic publication. Summaries of the CARE, EATP, and CO specifications are available in the Standards section.