SAITS.Online — AI Infrastructure Brief

How We Build AI Infrastructure

AI becomes powerful the moment it can connect to systems. It becomes dangerous the moment it can do that without control. This is how SAITS inserts the harness first.

By Gerard Krom — Founder, SAITS.Online
10 min read
LOCAL · ROUTE DEBUG · FAIL-CLOSED · TRUTH LAYER
[Flow diagram: User Prompt → SAITS Editor → Local Brain → Control Layer → Truth Layer → Logged Response]
Why this matters

Control happens before execution

The request does not move directly from prompt to model to action. It passes through routing, policy, trust, and storage truth first.

Policy

No hidden internet or cloud path.

Storage

Project, soul, and private truth stay controlled.

Audit

Usage, provenance, and route state remain visible.

Local-first
Runtime posture

Inference can stay inside the user environment instead of defaulting outward.

Fail-closed
Control model

If the route, policy, or context is unclear, the system stops instead of guessing.

Truth-based
Knowledge model

AI only works with controlled storage and verified project context.

Auditable
Operational outcome

Requests, routes, usage, provenance, and outputs remain traceable.
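The fail-closed posture above comes down to a single guard: unless route, policy, and context all resolve cleanly, the request is refused rather than guessed at. The sketch below is illustrative only; the names (`Request`, `decide`) are hypothetical, not SAITS code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    route: Optional[str]       # e.g. "local"; None means the route is unresolved
    policy_ok: Optional[bool]  # None means the policy check did not complete
    context: Optional[str]     # verified project context, or None

def decide(req: Request) -> str:
    # Fail-closed: any unclear input stops the request instead of guessing.
    if req.route is None or req.policy_ok is not True or req.context is None:
        return "STOP"
    return "PROCEED"
```

A fully resolved request proceeds; a request with any field unresolved stops, which is the inverse of the common fail-open default.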

The new reality

AI is not the problem. Uncontrolled connection is.

AI is now being connected to data, APIs, systems, and automation almost everywhere. The mistake is not capability. The mistake is connecting that capability directly to execution before trust and policy are in place.

AI is being connected too early

Many teams connect models directly to data, APIs, automation, and execution before they insert policy, trust, or observability.

The real risk is not AI in isolation

The dangerous state is AI that can talk to the outside world, write, trigger, or act without a control layer between intent and execution.

Security cannot be bolted on later

Once AI becomes a workflow dependency, missing controls become system risk, not just model risk.

Architecture

Three layers, one controlled chain.

We keep the model, the control plane, and the truth layer clearly separated. That is what makes the whole system understandable, governable, and auditable.

01
AI runtime

Local Brain

The model can run locally, for example through Ollama, without hidden outbound dependencies or silent cloud fallback.

  • Can operate offline
  • Keeps inference close to the user
  • Supports controlled project-side execution
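As a sketch of what "no silent cloud fallback" can mean in practice: Ollama serves a local HTTP API (by default on port 11434), so the inference endpoint can be pinned to the loopback address and anything else refused before a request is ever built. Only the default Ollama endpoint is taken from its documentation; the guard itself is a hypothetical illustration.

```python
from urllib.parse import urlparse

# Ollama's default local generation endpoint.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_local_request(model: str, prompt: str) -> dict:
    # Refuse any endpoint that is not loopback: no hidden outbound path.
    host = urlparse(OLLAMA_URL).hostname
    if host not in ("127.0.0.1", "localhost"):
        raise RuntimeError("non-local inference endpoint blocked")
    return {"url": OLLAMA_URL,
            "json": {"model": model, "prompt": prompt, "stream": False}}
```

The point is not the HTTP call but where the check sits: the runtime cannot reach outward unless the endpoint passes this gate first.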
02
The harness

Control Layer

This is the decision layer between prompt and model. It governs routing, trust, policy, fallback, and execution boundaries.

  • Fail-closed by default
  • Route control before execution
  • No invisible path to internet or cloud
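"No invisible path" can be enforced with an explicit route table: a route is allowed only if it is named, and the default answer is no. A minimal sketch, with hypothetical names:

```python
# Explicit allow-list: anything not named here is denied by default.
ALLOWED_ROUTES = {
    "local":    {"internet": False},
    "internet": {"internet": True},  # must be explicitly granted per request
}

def route_permitted(route: str, internet_requested: bool) -> bool:
    entry = ALLOWED_ROUTES.get(route)
    if entry is None:
        return False  # unknown route: fail closed
    if internet_requested and not entry["internet"]:
        return False  # route exists but may not reach outward
    return True
```

An unlisted route such as an ad-hoc cloud fallback is simply not reachable, which is what makes the path visible: every permitted route is written down.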
03
Controlled context

Storage and Truth

Workspace data, private storage, and truth layers define what the system is allowed to treat as valid context.

  • Project truth
  • Soul truth
  • Private encrypted user context
Flow

Every request moves through a visible path.

A user request does not jump straight to a model. The route is chosen, the harness checks the path, truth constrains the context, and the final response is logged.

01

User prompt enters the editor

A request starts as user intent, not as execution. The editor becomes the controlled entry point.

02

Route selection happens first

The system decides whether the work stays local, whether internet is allowed, and whether the route is permitted at all.

03

The local brain handles inference

The model runs inside the chosen runtime instead of making silent unmanaged calls outward.

04

The harness checks the path

Policy, trust, and routing logic validate what can be read, what can be used, and what is allowed to happen next.

05

Truth constrains the response

Only controlled project knowledge and private storage are treated as valid working context.

06

Everything is stored and traceable

Responses, usage, provenance, and route information are retained so the system can be reviewed later.
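The six steps above can be sketched as one visible chain, where each stage either passes the request on or stops it, and the final record is appended to an audit log. Everything here is a toy stand-in (the model call, the policy, the names) meant only to show the shape of the chain.

```python
def handle(prompt: str, audit_log: list) -> str:
    route = "local"                        # 02: route selection before anything runs
    if route != "local":
        return "STOP: route not permitted"
    # 03: inference in the chosen runtime (stubbed local model)
    answer = f"[local-model answer to: {prompt}]"
    if "forbidden" in prompt:              # 04: harness validates the path (toy policy)
        answer = "STOP: policy"
    context_ok = True                      # 05: only controlled truth is valid context
    if not context_ok:
        answer = "STOP: no verified context"
    # 06: everything is stored and traceable
    audit_log.append({"prompt": prompt, "route": route, "response": answer})
    return answer
```

Note that the log entry is written whether the request succeeded or was stopped; traceability covers refusals too.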

Security by design

We do not build AI first and secure it later.

The system is built around control from the start. That means no hidden path to the outside world, no silent fallback, and no execution without bounded trust.

  • Fail-closed when route, context, or policy is unclear
  • No hidden routes to cloud, APIs, or internet
  • Private storage remains encrypted and user-bound
  • Execution is controlled, not implied by a prompt
  • Route debug and provenance are part of normal operation
  • Observability is built in before scale begins
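"Encrypted and user-bound" can be grounded in per-user key derivation: the key that protects stored context is derived from both the user identity and the user's secret, so a different user cannot produce the same key. A minimal sketch using the standard library's PBKDF2; the parameters and names are illustrative, and a real system would put a vetted encryption layer on top of the derived key.

```python
import hashlib

def derive_user_key(user_id: str, passphrase: str, salt: bytes) -> bytes:
    # Bind the key to both identity and secret: a different user or
    # passphrase yields a different key, so stored context is user-bound.
    material = f"{user_id}:{passphrase}".encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 200_000)
```

Because the derivation is deterministic, the same user with the same passphrase and salt always recovers the same key, while no key is ever stored at rest.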
Why SAITS

The goal is not more AI. The goal is better control over AI.

What makes SAITS different is not only that AI can run locally. It is that the whole operating model is designed to stay visible, bounded, and owned by the user.

Local-first AI

SAITS starts from the assumption that AI should remain close to the environment it serves.

Truth over hallucination

Controlled truth layers reduce the gap between generated output and actual project reality.

Ticket-first workflows

Work can be traced, reviewed, and completed inside a real operating chain instead of scattered chat history.

No hidden dependency model

The system does not quietly switch to unmanaged cloud behavior behind the operator’s back.

Future direction

The future belongs to AI that is controlled before it is trusted.

We believe the next generation of AI systems will not be defined by raw capability alone. They will be defined by how well they keep data close, actions bounded, and operational truth visible.

  • Data should stay with the user whenever possible.
  • AI should be bounded before it is trusted.
  • Centralized AI without control becomes a fragile dependency.
  • The next generation of AI systems will be defined by control, not only by capability.