Governance-First Architecture

The AI Plumber Framework

A governance-first approach to regulated agentic AI. By Koen Van Lysebetten.

The Core Thesis

In regulated AI, the moat is not the model — it's the governance layer.

Most AI deployments start with model selection and end with a governance retrofit. In regulated environments — banking, healthcare, insurance, public sector — this sequence creates compliance debt that no organization can afford under the EU AI Act, SAMA, GDPR Article 9, or sectoral frameworks.

The AI Plumber framework reverses this: governance becomes the first architectural layer, not an afterthought. Every agent action is attributable and logged, every policy envelope is defined before deployment, and every kill switch is tested before it's needed.

The Deployment Sequences

Traditional (Broken)

  1. Select foundation model
  2. Build application layer
  3. Run pilot with limited scope
  4. Scale to production
  5. Retrofit governance when regulator asks

By step 5: compliance debt, no audit trail, and a system that cannot prove its own decisions.

AI Plumber (Fixed)

  1. Define governance requirements + risk classification
  2. Build control plane: logging, attribution, rollback, kill switches
  3. Implement constrained agent identities
  4. Deploy agents within policy envelope
  5. Scale with continuous telemetry + human gates

Result: auditable, reversible, attributable from day one.

Four Foundational Patterns

01

Constrained Agent Identities

Problem: Agents that inherit human privileges create unlimited blast radius and regulatory liability.

Solution: Each agent operates under a narrowly scoped service account with explicit resource and action boundaries.

  • No agent inherits human user privileges
  • Service accounts scoped to minimum required permissions
  • Cryptographic verification at every service boundary
  • Read-only access as default; write access requires explicit justification

Regulatory alignment: PDPL/SAMA, GDPR, EU AI Act Article 9
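The pattern above can be sketched in a few lines of Python. This is a minimal, illustrative model, not a prescribed API: the names (`AgentIdentity`, `write_grants`, the example resources and ticket reference) are assumptions for the sketch. In production this role would be played by your IAM or service-mesh policy layer.

```python
# Sketch of a constrained agent identity: read-only by default,
# write access only where an explicit justification is recorded.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    allowed_resources: frozenset          # resources the agent may touch at all
    write_grants: dict = field(default_factory=dict)  # resource -> justification

    def can_read(self, resource: str) -> bool:
        return resource in self.allowed_resources

    def can_write(self, resource: str) -> bool:
        # Writing requires both resource scope AND an explicit justification.
        return resource in self.allowed_resources and resource in self.write_grants


reporting_agent = AgentIdentity(
    agent_id="svc-reporting-01",
    allowed_resources=frozenset({"claims_db", "report_store"}),
    write_grants={"report_store": "publishes weekly summaries (ticket GOV-112)"},
)

assert reporting_agent.can_read("claims_db")
assert not reporting_agent.can_write("claims_db")    # read-only by default
assert reporting_agent.can_write("report_store")     # explicitly justified
assert not reporting_agent.can_read("customer_pii")  # outside scope entirely
```

Note the design choice: the identity is immutable (`frozen=True`), so widening an agent's scope forces a new, reviewable identity rather than an in-place privilege creep.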

02

Attributable Actions

Problem: AI decisions without reasoning trails are black boxes that fail audit requirements.

Solution: Every agent decision is logged with full input context, reasoning trace, and output action.

  • Timestamp and agent ID
  • Input context (sanitized for PII)
  • Reasoning trace or model output
  • Action taken + confidence score
  • Decision rationale

This creates a fully reversible decision trail: if an agent publishes incorrect content, you can trace back to the exact input, review the reasoning, and reverse the action.

Regulatory alignment: EU AI Act Article 12, GDPR Article 22
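A decision record carrying the fields listed above might look like this. The function name and field names are illustrative assumptions; the point is only that every field in the list is captured in one atomic, serializable record.

```python
# Sketch of an attributable decision record. In practice, append the
# serialized record to a write-once audit store, not just return it.
import json
import time
import uuid


def log_decision(agent_id, input_context, reasoning, action, confidence, rationale):
    record = {
        "record_id": str(uuid.uuid4()),     # unique handle for later reversal
        "timestamp": time.time(),
        "agent_id": agent_id,
        "input_context": input_context,     # assumed already PII-sanitized
        "reasoning_trace": reasoning,
        "action": action,
        "confidence": confidence,
        "rationale": rationale,
    }
    return json.dumps(record)


entry = json.loads(log_decision(
    agent_id="svc-reporting-01",
    input_context={"claim_id": "C-1042", "amount": 1800},
    reasoning="amount below auto-approve ceiling of 2000",
    action="approve",
    confidence=0.93,
    rationale="policy P-7: low-value claims auto-approved",
))
```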

03

Human-in-the-Loop Gates

Problem: Fully autonomous agents in high-stakes scenarios create unacceptable regulatory and operational risk.

Solution: High-stakes actions require explicit human approval before execution. Human oversight is architecturally enforced — the workflow mechanically pauses and awaits a human authorization token.

High-stakes actions requiring human gates:

  • Financial commitments
  • Legal document publishing
  • Policy changes affecting user data
  • Schema modifications in production
  • Customer-facing communications in regulated industries

Regulatory alignment: EU AI Act Article 14, SAMA, MiFID II, GDPR Article 5
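"Architecturally enforced" means the code path to execution simply does not exist without a valid authorization token. A minimal sketch, assuming an HMAC-signed token per reviewed action; the key handling and names here are illustrative (a real deployment would use per-approver keys in a KMS or HSM):

```python
# Sketch of a mechanically enforced human gate: the high-stakes action
# cannot run unless a human approver has signed that specific action.
import hashlib
import hmac

SECRET = b"demo-approval-key"  # illustrative only; use per-approver keys in a KMS


def issue_token(action_id: str, approver: str) -> str:
    # The human approver signs exactly the action they reviewed.
    msg = f"{action_id}:{approver}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def execute_high_stakes(action_id, approver, token, do_action):
    expected = issue_token(action_id, approver)
    if not hmac.compare_digest(expected, token):
        raise PermissionError(f"action {action_id} blocked: no valid human approval")
    return do_action()


token = issue_token("publish-legal-doc-77", approver="j.doe")
result = execute_high_stakes("publish-legal-doc-77", "j.doe", token,
                             lambda: "published")
```

Because the token binds approver to action ID, an approval for one document cannot be replayed to publish a different one.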

04

Kill Threshold Monitoring

Problem: Agents that operate without real-time safety monitoring can spiral into costly or dangerous behavior before humans notice.

Solution: Continuous telemetry tracks agent behavior against predefined safety thresholds. Threshold violations trigger automatic suspension and human escalation.

  • Velocity: Actions per minute/hour exceeding baseline
  • Cost: API spend above budget ceiling
  • Error rate: Failed actions above tolerance
  • Confidence decay: Scores trending below acceptable range
  • Policy violations: Attempts to access restricted resources

Cascade: Threshold breach → Agent suspended → Human escalation → Incident log → Manual review + restart authorization.

Regulatory alignment: EU AI Act Article 61, RIZIV, SAMA
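The threshold list and suspension cascade above can be sketched as a single evaluation step. The threshold values and field names are placeholders, not recommendations; the real work is choosing ceilings that match your baseline telemetry.

```python
# Sketch of kill-threshold monitoring: telemetry is checked against
# predefined ceilings; any breach suspends the agent and escalates.

THRESHOLDS = {
    "actions_per_min": 120,   # velocity ceiling
    "spend_eur": 50.0,        # budget ceiling per window
    "error_rate": 0.05,       # failed-action tolerance
    "min_confidence": 0.70,   # confidence floor
}


def check_thresholds(telemetry: dict) -> list:
    breaches = []
    if telemetry["actions_per_min"] > THRESHOLDS["actions_per_min"]:
        breaches.append("velocity")
    if telemetry["spend_eur"] > THRESHOLDS["spend_eur"]:
        breaches.append("cost")
    if telemetry["error_rate"] > THRESHOLDS["error_rate"]:
        breaches.append("error_rate")
    if telemetry["confidence"] < THRESHOLDS["min_confidence"]:
        breaches.append("confidence_decay")
    return breaches


def evaluate(agent_state: dict, telemetry: dict) -> dict:
    breaches = check_thresholds(telemetry)
    if breaches:
        # Cascade: suspend -> escalate -> log; restart needs manual authorization.
        agent_state.update(status="suspended", escalated=True, breaches=breaches)
    return agent_state


state = evaluate({"status": "running"},
                 {"actions_per_min": 300, "spend_eur": 12.0,
                  "error_rate": 0.01, "confidence": 0.91})
```

Here a velocity spike alone is enough to suspend the agent; the other metrics being healthy does not override a single breached ceiling.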

Three-Phase Deployment Model

Governance gates scale with automation scope. Each phase unlocks the next only when the prior governance layer is operational and audited.

Phase 1: Read-Only Intelligence

€50K ARR

Objective: Prove value with zero operational risk.

Agent scope: Read-only access, analysis and reporting only, no write actions.

Governance gate: Complete risk register, EU AI Act high-risk classification, GDPR Art.9 data classification map, basic logging and attribution.

Phase 2: Controlled Autonomy

€500K ARR

Objective: Enable agent write actions with full human oversight and rollback.

Agent scope: Write actions to internal systems, API integrations, content generation, constrained agent identities deployed.

Governance gate: Policy envelope defining allowed actions, kill thresholds with automated suspension, human approval gates for high-stakes actions, full rollback capability tested in staging, continuous telemetry dashboard.

Phase 3: Orchestrated Intelligence

€5M+ ARR

Objective: Multi-agent orchestration with enterprise-scale governance.

Agent scope: Multi-agent workflows with handoffs, cross-client policy layer, agent confidence network, full orchestration scope.

Governance gate: Multi-client policy isolation, agent-to-agent attribution chains, distributed kill switch coordination, real-time compliance monitoring, board-level governance reporting.
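The "each phase unlocks the next" rule is itself mechanizable. A sketch, with an illustrative checklist (the item names abbreviate the gates above and are assumptions, not canonical identifiers):

```python
# Sketch of phase gating: a phase unlocks only when every governance item
# in all prior phases' gates is operational and audited.

GATES = {
    1: {"risk_register", "ai_act_classification", "gdpr_data_map", "basic_logging"},
    2: {"policy_envelope", "kill_thresholds", "human_gates",
        "rollback_tested", "telemetry_dashboard"},
}


def may_enter_phase(target: int, completed: set) -> bool:
    # Every gate below the target phase must be fully satisfied.
    return all(GATES[p] <= completed for p in GATES if p < target)


done = {"risk_register", "ai_act_classification", "gdpr_data_map", "basic_logging"}
may_enter_phase(2, done)   # True: the Phase 1 gate is complete
may_enter_phase(3, done)   # False: the Phase 2 gate is not yet satisfied
```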

Agentic AI vs. Traditional Automation

  Dimension        Traditional                       Agentic AI
  State space      Finite, enumerable                Unbounded, contextual
  Failure modes    Fully specified                   Emergent
  Governance       Change management                 Live policy envelope
  Audit            IT change log                     Decision + reasoning trace
  Regulatory fit   Product safety / IT change mgmt   EU AI Act, SAMA, RIZIV

Critical rule: Use agentic AI only when you can log, attribute, and reverse every contextual judgment in an audit-ready format.

See It in Practice

Governance-first is not a constraint on velocity. It is the only architecture that survives a regulator, a board, and a production incident simultaneously.

Book Architecture Review →