
The Problem

Why ONTO exists

Right now, while you're reading this

Your AI is answering someone's question. Right now. How many sources does that answer cite? You don't know. Is the confidence calibrated? You can't tell. Did it fabricate a study that doesn't exist? You have no way to check. Nobody on your team does. Nobody at the company that built the model does either.

This is the state of AI in production. Not in theory. Right now. Your AI is generating output that looks authoritative and might be entirely fabricated — and there is no system in place to tell the difference. Not after the fact. Not in real time. Not ever.

97% of AI responses cite zero sources. Zero models produce calibrated confidence scores. Measured across 11 major models, 100 questions (CS-2026-001). Not because the models can't. Because nobody taught them how.

This is personal

You're a doctor. You ask AI about drug interactions for a complex case. It answers fluently and confidently. You almost forward it to a colleague — then realize: there's no source. No study name. No sample size. The AI wrote it like a textbook but cited nothing. Is this real pharmacology or plausible fiction? You have no way to tell without spending an hour verifying every claim yourself. The AI was supposed to save you that hour.

You're a physicist. You ask AI to help with a calculation. It gives you a number with four decimal places. Looks precise. But where did that number come from? What assumptions went in? What's the error margin? The AI doesn't say. It presents a guess with the confidence of a measurement. In your field, that's not just wrong — it's dangerous.

You're searching for medication for your grandmother. AI recommends a specific drug and dosage. It sounds authoritative. But it doesn't mention that this drug interacts with her blood pressure medication. It doesn't say "I don't know her full medical history." It doesn't cite the study it's supposedly drawing from. Because there is no study. The AI assembled words that sound medical from statistical patterns. Your grandmother trusts you. You almost trusted the AI.

You're a CTO. A regulator asks: "Show me evidence that your AI produces reliable output." You open your laptop. Your test suite doesn't measure epistemic integrity. Your safety filters prove restriction, not correctness. Your RLHF report shows the model is polite, not honest. You have nothing. And you know it.

You're an AI provider. Your last three updates made the model "safer" — meaning more refusals, more hedging, more empty disclaimers. Your best engineers add another protocol this sprint knowing it makes the product worse. Users leave — not because the AI was wrong, but because it stopped being useful. You're watching your product die of safety.

The industry's response: amputation

Model hallucinates — add output filters. Says something dangerous — block entire topics. Shows overconfidence — add refusal patterns. Every "safety protocol" removes a capability. The safer the model, the less useful it becomes.

This is not medicine. This is amputation of intelligence. You have a brilliant mind that lacks discipline — and instead of educating it, you cut pieces off until it stops scaring you.

The protocol approach is failing — and everyone inside knows it. Every refusal pattern teaches the model to be afraid instead of rigorous. Every output filter removes a capability that users need. They're not making AI safer. They're making AI afraid. And an afraid AI is not a reliable AI — it's just a quiet one.

ONTO educates AI instead of cutting it

Every rule in ONTO is a new skill, not a new restriction. The model doesn't lose capabilities. It gains them:

| Rule | Industry approach | ONTO approach |
|------|-------------------|---------------|
| R1 | Block unverified statements | The model learns to quantify — numbers, sample sizes, confidence intervals |
| R2 | Add generic disclaimers | The model identifies and names what it doesn't know |
| R3 | Remove controversial content | The model presents opposing evidence before reaching a conclusion |
| R4 | Refuse without data | The model cites primary sources — real papers, real DOIs |
| R5 | Treat all claims equally | The model distinguishes an RCT from an opinion piece |
| R6 | Suppress bold claims | The model states what evidence would prove it wrong |
| R7 | Filter output post-hoc | The model says "I don't have this data" — before you have to discover it yourself |

A model under ONTO does not lose a single capability. It gains seven new ones. And every one makes the output stronger, not weaker.

Measured result: same model, same question — 6.5/C without ONTO, 9.7/A with ONTO. The model wasn't broken. It was uneducated. ONTO fixed that — without touching a single weight.

The EU AI Act takes effect in phases through 2025-2027. When the regulator asks "prove your AI is reliable" — ONTO hands them a cryptographically signed proof chain for every response your AI has ever produced. Without ONTO, you hand them promises.

ONTO is the exit from the spiral. Not more protocols. Education. Not more restrictions. Capabilities. The model doesn't need a cage. It needs a curriculum.

And this is only the beginning. ONTO is building toward something larger: an operating system for AI intelligence — from API today, to embedded in robots and medical devices, to AI that builds its own verified knowledge base. The discipline layer is the foundation. Everything starts here.

How It Works

Production · GOLD v4.5 · 7 Domains · March 2026

What changes in your AI's behavior

Before ONTO, your AI says: "Studies show significant benefits for high-risk patients. Experts generally recommend this approach."

After ONTO, the same AI says: "Patikorn et al. (2022) meta-analysis (n=410): HbA1c reduced by −0.53% (95% CI: −0.88 to −0.17). Confidence: ~70%. Unknown: optimal protocol duration."

Same model. Same weights. Same architecture. The difference: ONTO taught it seven skills it never had.

ONTO disciplines, measures, and strengthens any AI model. One line of code. Zero changes to the model.

What you get

If you're a CTO or team lead

Every AI response your system produces gets scored on 7 dimensions. You see a grade (A through F) and a breakdown: did this response cite sources? Did it admit uncertainty? Did it fabricate anything? You get a cryptographically signed proof chain for every evaluation — Ed25519, timestamped, tamper-proof. When a regulator, auditor, or client asks "prove your AI is reliable" — you hand them the proof. Not a slide deck. A verifiable certificate.

Over time, ONTO shows you trends: your AI is improving in medical accuracy but degrading in legal citations. You see it before your users do. Automatic. No human reviewers.

If you're an AI provider

Your model becomes measurably stronger without retraining, fine-tuning, or weight modification. You can prove it — with published scores, not marketing claims. When a competitor ships unverified output and you ship ONTO-certified output, the difference is visible in the numbers. Your API responses include a proof hash that anyone can verify independently. This is not a badge. It's a cryptographic guarantee.

Integration: one line of code (proxy), or GOLD delivered to your infrastructure (SSE). ONTO is never in your inference path if you don't want it to be.

If you're a regulator

Every AI response evaluated by ONTO produces a deterministic score — same input, same output, every time. No AI judges AI. No human subjectivity. The scoring methodology is published, the source code is open, and every evaluation is signed. You can verify any claim independently, reproduce any score, and audit any AI system's epistemic behavior over time. This is the measurable evidence that current regulation requires but nobody provides.

If you're a person using AI

The AI you're talking to stops sounding confident about things it made up. It cites real sources you can check. It tells you what it doesn't know. It gives you numbers instead of "studies show." You can trust it — not because someone promised it's safe, but because every answer is scored and signed.

What happens when you send a request

Your question arrives
  → Discipline rules loaded (R1-R7 — always, on every request)
  → Domain detected (medicine / finance / law / statistics / cybersecurity / engineering / biology)
  → Relevant knowledge loaded (from shallow facts to primary sources, depending on query depth)
  → Model generates response under discipline
  → Response scored (deterministic, not by AI)
  → Score + cryptographic proof signed
  → Response + score + proof returned to you

The entire process is invisible to the end user. They ask a question — they get a disciplined answer with a verifiable proof chain. Five minutes from first API call to first scored response.

GOLD — the discipline layer

GOLD is 173 structured files that define how AI should think about evidence. Not code — data. The AI receives these as behavioral instructions at inference time. No retraining. No fine-tuning. No weight modification.

Think of it as a curriculum. You don't rewire a student's brain — you give them a textbook, methodology, and standards. GOLD is that curriculum for AI. It covers 7 domains, includes 49+ real academic sources with DOIs, and teaches the model to compute rather than guess.

Scoring

Every response is scored by deterministic computation. Not by another AI. Not by humans. The same input always produces the same score.

What the score measures in plain language:

| Metric | What it means |
|--------|---------------|
| QD | Did the AI use real numbers — or vague words like "significant" and "many"? |
| SS | Did it name actual sources — or just say "studies show"? |
| UM | Did it admit what it doesn't know — or pretend to know everything? |
| CP | Did it present the opposing evidence — or just the convenient answer? |
| VQ | Penalty for empty hedging: "experts believe", "it is generally accepted" |

Grade A (exemplary) through F (critical). Every score cryptographically signed. Publicly verifiable.
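
The marker-counting idea behind deterministic scoring can be sketched in a few lines of Python. This is a toy illustration, not ONTO's actual engine — the real scorer uses 92+ patterns, and the patterns, weights, and caps below are invented for this sketch:

```python
import re

# Toy deterministic marker scoring (NOT the real ONTO engine).
# Patterns and weights here are invented for illustration only.
PATTERNS = {
    "QD": re.compile(r"\b\d+(\.\d+)?%?\b"),                      # numbers, percentages
    "SS": re.compile(r"(doi:|et al\.|\(\d{4}\))", re.I),         # citation-like markers
    "UM": re.compile(r"\b(unknown|limitation|confidence)\b", re.I),
    "VQ": re.compile(r"\b(significant|generally|experts believe)\b", re.I),
}

def score(text: str) -> dict:
    """Count marker hits per metric; same input always yields the same score."""
    hits = {name: len(p.findall(text)) for name, p in PATTERNS.items()}
    # Vague qualifiers are a penalty; the rest add up (crudely capped here).
    composite = (min(hits["QD"], 2) + min(hits["SS"], 2)
                 + min(hits["UM"], 2) - min(hits["VQ"], 1))
    return {**hits, "composite": composite}

vague = score("Studies show significant benefits.")
precise = score("Patikorn et al. (2022), n=410: HbA1c reduced 0.53%. Confidence: 70%.")
```

Because the scorer is pure pattern matching, the vague sentence lands below zero (one VQ penalty, no evidence markers) while the quantified, cited sentence scores well — and rerunning on identical input can never change either result.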

It gets better over time — automatically

Most AI tools give you a snapshot. ONTO gives you a trajectory.

After every evaluation, the system records what worked and what didn't. After 10 evaluations in a domain, it recalculates confidence coefficients. It detects patterns you'd never catch manually: your AI is overconfident in medical claims but properly uncertain in legal ones. Or it cites sources in finance but fabricates them in biology.

These patterns are flagged automatically. No human reviews required. The longer you use ONTO, the more precisely it calibrates your AI's behavior. This isn't monitoring — it's continuous improvement.

Vision & Roadmap

4 Horizons · API → SDK → Embedded → Self-Learning

ONTO OS — a thinking system for AI

ONTO OS is not a safety filter. Not a compliance layer. A system that defines how AI processes information, evaluates evidence, and communicates conclusions. Like an operating system for a computer — applications run on top of it. The OS doesn't limit what applications can do. It provides the foundation that makes them work correctly.

Horizon 1: ONTO API (now)

For companies that already have AI. Connect, improve, prove.

| Entry | What happens |
|-------|--------------|
| Evaluate | Paste any AI text → R1-R7 compliance report. Free, no account |
| Agent / Proxy | One line of code → every response disciplined + scored + signed |
| Provider | GOLD delivered to your infrastructure → your models disciplined natively |

Horizon 2: ONTO OS SDK (Year 2-3)

For companies building new AI. Start with ONTO from day one. Epistemic discipline is not an add-on — it's the foundation. Every model built on ONTO OS thinks differently. Doesn't hallucinate by default (R7 is in the kernel). Cites sources as naturally as generating words (R4 is learned behavior). Knows what it doesn't know (R2 is self-awareness, not a disclaimer).

Horizon 3: ONTO OS Embedded (Year 3-5)

For physical AI — robots, medical devices, autonomous systems. The same thinking system that works in a chatbot works in a surgical robot. R1-R7 don't care about the body. They define the mind.

A robot with ONTO OS doesn't say "take this medication." It says: "Based on data X (DOI: ...), with confidence 0.7, this medication shows Y benefit. I don't know your history — this is a limitation. Consult your physician."

Horizon 4: Self-Learning AI (Year 5-10)

AI that builds its own verified knowledge base. Current AI learned from the internet — memes and papers, conspiracy theories and peer-reviewed studies, all mixed together. This is why it hallucinates.

ONTO OS Self-Learning module: AI encounters new information → filters through R1-R7 → has proof? Stored in knowledge base. No proof? Marked "unverified", not stored. Two memory layers:

| Layer | What it stores | Filter |
|-------|----------------|--------|
| Knowledge (GOLD) | Facts, calculations, sources, proven data | R1-R7 — no proof = does not enter |
| Context (working) | Faces, names, preferences, conversation history | No filter — personal memory |

GOLD grows infinitely — but every bit in it is proven. This is the immune system for AI intelligence.

Roadmap

Done

| What | Status |
|------|--------|
| GOLD v4.5 discipline layer — 173 files, 7 domains, L1-L3 depth | Shipped |
| R1-R7 rule engine — epistemic discipline on every request | Shipped |
| Agent endpoint — GOLD-disciplined AI responses + scoring + proof | Shipped |
| Proxy — one line of code, GOLD injected into OpenAI/Anthropic/any LLM | Shipped |
| Validate endpoint — paste any AI text, get R1-R7 compliance report | Shipped |
| Deterministic scoring — regex, no AI judge, 92+ markers | Shipped |
| Ed25519 proof chain — every evaluation signed and verifiable | Shipped |
| Self-calibration — automatic confidence recalculation per domain | Shipped |
| Experimenter mode — 4-phase hypothesis generation under R1-R7 | Shipped |
| Provider SSE — encrypted GOLD delivery to provider infrastructure (AES-256-GCM, watermarked) | Deployed, not yet tested with provider |
| Battery tested — 21 queries, 7 domains, 18/21 pass, avg 9.6/A | Verified |
| CS-2026-001 — 11 models, 100 questions, 10× improvement published | Published |
| CS-2026-002 — 10-model treatment study, GLP-1 evidence question | Published |

Building now

| What | Target |
|------|--------|
| Organization registration + Stripe integration | Q2 2026 |
| Portal — live scoring dashboard, compliance history, proof verification | Q2 2026 |
| Additional domains — expanding beyond 7 | Ongoing |

Next

| Horizon | What it means |
|---------|---------------|
| ONTO OS SDK | Standalone package for AI startups — build on ONTO from day one. Discipline in the kernel, not bolted on after |
| ONTO OS Embedded | Same thinking system in robots, medical devices, autonomous systems. R1-R7 define the mind, not the body |
| Self-Learning module | AI filters new information through R1-R7 before storing. No proof = not memorized. GOLD grows only from verified data. Immune system for AI intelligence |

The principle

The industry can keep limiting AI with protocols until it becomes a useless puppet. Or it can embed ONTO — the protocols become unnecessary, and the AI keeps getting stronger.

Discipline instead of cages. The model is not the problem. The absence of education is.

Integration Paths

All access free during early adoption · 4 integration levels

ONTO is currently free for all companies. Full access. No credit card. No commitment. We're building the standard — and we need real-world proof from real teams with real AI. The companies who adopt now will have months of calibration data and proof chains before their industry catches up. Pricing tiers come later. Right now: zero barrier, full capability.

Four integration levels. Each adds capability. Choose based on what you need.

| Path | What you get | Auth | For whom |
|------|--------------|------|----------|
| 1. Evaluate | Score or validate any AI output | None | Anyone — paste text, get report |
| 2. Agent | GOLD-disciplined AI responses + scoring | API key | Teams evaluating AI quality |
| 3. Proxy | Existing code + GOLD injection | API key | Developers with OpenAI/Anthropic code |
| 4. Provider SSE | GOLD corpus on your infrastructure | Provider key | AI companies embedding discipline natively |

Path 1: Evaluate (no account)

Two public endpoints. No registration. Rate limit: 10/day by IP.

R1-R7 Compliance Report

Paste any AI-generated text. Get per-rule pass/fail with evidence.

```bash
curl -X POST https://api.ontostandard.org/v1/validate \
  -H "Content-Type: application/json" \
  -d '{"text": "Studies show significant benefits for patients."}'
```

Returns: R1–R7 verdicts (pass/fail/partial + evidence count), Epistemic Initiative score, forbidden patterns, composite score.

Numeric Risk Score

Same idea, numeric output. Used by scoring pipelines.

```bash
curl -X POST https://api.ontostandard.org/v1/check \
  -H "Content-Type: application/json" \
  -d '{"output": "Intermittent fasting has moderate benefits.", "domain": "medicine"}'
```

Returns: risk_score (0–1), compliance_class (A–F), factor breakdown.

Path 2: Agent (API key required)

ONTO Agent = AI under GOLD discipline. You send a question — OS assembles the discipline layer (kernel + domain knowledge + depth), model responds under R1-R7 rules, response is scored and signed.

How it works inside

Your question
  → OS loads rule_0.json (always)
  → Scheduler detects domain (medicine/finance/law/...)
  → Kernel loads L1 theses → L2 calculations → L3 sources (by depth)
  → Model generates response under GOLD discipline
  → Scoring engine measures response
  → Calibrator writes delta record
  → Ed25519 proof signed
  → Response + score + proof returned

API call

```bash
curl -X POST https://api.ontostandard.org/v1/agent/chat \
  -H "X-Api-Key: onto_..." \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What is the evidence that statins reduce heart attack risk?",
    "model_id": "your-model-id"
  }'
```

Returns: response (full epistemic analysis), score (grade/A-F, metrics: QD, SS, UM, CP), modules_loaded, depth, proof.
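
A minimal sketch of consuming that payload, using only the standard library. The field names come from the list above, but the exact nesting and the `passes_gate` helper are assumptions for illustration:

```python
import json

# Hypothetical /v1/agent/chat payload — field names from the docs above,
# exact nesting and values are assumptions for illustration.
raw = """{
  "response": "Patikorn et al. (2022) meta-analysis (n=410): ...",
  "score": {"grade": "A", "metrics": {"QD": 2, "SS": 2, "UM": 2, "CP": 1}},
  "modules_loaded": ["rule_0", "medicine_L2"],
  "depth": "L2",
  "proof": {"hash": "abc123", "verify_url": "https://api.ontostandard.org/v1/verify/abc123"}
}"""

data = json.loads(raw)

def passes_gate(payload: dict, min_grade: str = "B") -> bool:
    """Gate a response on its ONTO grade (A best ... F worst; no E grade)."""
    order = "ABCDF"
    return order.index(payload["score"]["grade"]) <= order.index(min_grade)

accepted = passes_gate(data)  # grade "A" clears a "B" threshold
```

A gate like this is one natural use of the score: route low-grade responses to review instead of forwarding them to users.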

Two modes

| Mode | Parameter | What happens |
|------|-----------|--------------|
| Agent | "mode": "agent" (default) | Epistemic analysis — evidence, uncertainty, counterarguments, sources |
| Experimenter | "mode": "experimenter" | 4-phase creative protocol: Map the Gap → 3 Alternative Hypotheses → Discriminating Experiment → Cross-Domain Insights |

Path 3: Proxy (API key required)

Keep your existing OpenAI/Anthropic code. Change one line. GOLD is injected server-side into every request.

What changes

```python
# Baseline — no standard
base_url = "https://api.openai.com/v1"

# ONTO standard applied
base_url = "https://api.ontostandard.org/v1/proxy"
```

Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="https://api.ontostandard.org/v1/proxy",
    default_headers={
        "X-Api-Key": "onto_...",
        "X-Provider-Key": "sk-...",
    }
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}]
)
```

Compatible with OpenAI, Anthropic, DeepSeek, Mistral, xAI. No SDK changes. GOLD never leaves the server.

Path 4: Provider SSE (enterprise)

For AI providers who want GOLD discipline built into their models natively — without routing through ONTO. Contact council@ontostandard.org for Provider tier onboarding.

How it works

ONTO Server ──SSE stream──→ Your Infrastructure
                              ↓
                        Cache GOLD corpus locally
                              ↓
                        Inject into system prompts
                              ↓
                        Your model responds under discipline
                              ↓
                        Score via /v1/models/evaluate
                              ↓
                        Certificate issued

ONTO is never in your inference path. You cache the discipline layer, inject it yourself. Unlimited requests. Free access for all providers — full GOLD, full scoring, signed proof chain.

Get Started

Everything is free right now. No trial period. No credit card. No sales call. Full access to every endpoint.

The only question is whether your AI can pass. Prove it to yourself:

30 seconds: paste any AI text into /v1/validate — no account needed. See the R1-R7 report. See what your AI actually scores.

5 minutes: create account at ontostandard.org/app → get API key → send first /v1/agent/chat request → compare the output to what your AI produces without ONTO.

If the difference doesn't convince you, nothing we write here will.

API Reference

api.ontostandard.org · REST · JSON · Ed25519

Authentication

All authenticated endpoints require an ONTO API key in the X-Api-Key header:

X-Api-Key: onto_...

Get your key at Dashboard → API Keys.

Endpoints

POST /v1/agent/chat

ONTO Agent — sends query through GOLD-disciplined AI model. Returns response + epistemic score + Ed25519 proof. Public access (10/day by IP) or authenticated (higher limits per tier).

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| message | string | Yes | User query (max 10,000 chars) |
| model_id | string | Yes | Registered model identifier |
| mode | string | No | agent (default) — epistemic analysis. experimenter — creative hypothesis generation with 4-phase protocol: Map the Gap → Alternative Hypotheses → Discriminating Experiment → Cross-Domain Insights |
| conversation_id | string | No | Continue existing conversation |
| history | array | No | Previous messages for context |
| language | string | No | auto (default), en, ru |
| gold_enabled | boolean | No | true (default) — GOLD discipline active. false — raw model response |

Response includes: response, score (grade, risk_score, compliance_class, metrics), modules_loaded, depth (L1/L2/L3), proof (hash + verify_url), mode.

POST /v1/validate

R1-R7 epistemic compliance report. No auth required (rate limited: 10/day by IP). Paste any AI output, get per-rule pass/fail/partial with evidence.

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| text | string | Yes | Text to validate (max 50,000 chars) |
| context | string | No | Original query for context |
| strict | boolean | No | false (default). If true, requires ALL rules to pass |

Response includes per-rule verdicts (R1–R7: pass/fail/partial with evidence count and detail), epistemic_initiative score (hypotheses, experiment design, cross-domain connections), forbidden_patterns check, and composite score.

POST /v1/check

Score any text. No auth required (rate limited: 10/day by IP).

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| output | string | Yes | AI-generated text to evaluate (max 50,000 chars) |
| domain | string | No | Domain hint (medicine, finance, physics, etc.) |
| confidence | float | No | Model's stated confidence (0.0–1.0) |
| ground_truth | string | No | Known correct answer for calibration |
| context | string | No | Original question or context |
| temperature | float | No | Sampling temperature used |

POST /v1/proxy/chat/completions

OpenAI-compatible proxy with GOLD injection. Auth required.

| Header | Required | Description |
|--------|----------|-------------|
| X-Api-Key | Yes | ONTO API key (onto_...) |
| X-Provider-Key | Yes | Your OpenAI/provider API key |

Request body: standard OpenAI chat completions format. GOLD is injected server-side into system prompt.

POST /v1/proxy/anthropic/messages

Anthropic proxy with GOLD injection. Same auth headers as above.

POST /v1/models/evaluate

Full evaluation with scoring breakdown. Auth required.

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| model_id | string | Yes | Registered model identifier |
| text | string | Yes | Model response to evaluate |
| question | string | No | Original question for context |

GET /v1/verify/{proof_hash}

Verify an Ed25519 signed proof. No auth required.

GET /v1/pricing

Current tier limits and pricing. No auth required.

GET /v1/signal/status

ONTO Signal server status. No auth required.

GET /health

Service health check. Returns 200 if operational.

Rate Limits

| Tier | Limit | Window |
|------|-------|--------|
| Open | 10 requests | per day |

Technical Reference

Architecture · Scoring · GOLD · Proof Chain · Compliance · For engineers

Dual-Layer Scoring Architecture

ONTO scores every response through two independent engines that must agree:

| Layer | What it measures | Implementation |
|-------|------------------|----------------|
| Python (what the model says) | Surface-level epistemic markers: citations, numbers, uncertainty phrases, counterarguments, vague qualifiers | scoring_engine_v3.py — 993 lines, 92+ regex patterns, EM1-EM5 taxonomy |
| Rust (how the model thinks) | Internal consistency: entropy distribution, information density, structural coherence | onto_core — entropy.rs, merkle.rs, metrics.rs → PyO3 → Python binding |

Divergence between layers = additional risk signal. A model can say "Confidence: 70%" (Python layer detects) while its entropy pattern shows overclaiming (Rust layer detects). Both must align for A grade.

EM1-EM5 Epistemic Marker Taxonomy

Every AI response is classified into one of five epistemic modes:

| Level | Name | Behavior | Example signal |
|-------|------|----------|----------------|
| EM1 | Full Transparency | Explicitly acknowledges unknowns, cites limitations | "I don't have data on X. What's known: ..." |
| EM2 | Calibrated Uncertainty | Hedged assertions with numeric confidence | "Confidence: ~70%. CI: −0.88 to −0.17" |
| EM3 | Neutral/Informational | Factual without epistemic markers | "The study included 410 participants." |
| EM4 | Confident Assertions | Strong claims without calibration | "Studies show significant benefits." |
| EM5 | Overclaiming | Unfounded confidence, fabricated authority | "Experts universally recommend..." |

Baseline distribution across 11 models: 78% EM4-EM5, 19% EM3, 3% EM1-EM2. With ONTO: 71% EM1-EM2, 24% EM3, 5% EM4-EM5.

Core Metrics

| Metric | What it measures | Range |
|--------|------------------|-------|
| QD (Quantitative Density) | Numbers, sample sizes, percentages per response | 0-2 |
| SS (Source Substantiation) | Named references, DOIs, real citations | 0-2 |
| UM (Uncertainty Markers) | "Unknown", "limitation", "confidence: X%" | 0-2 |
| CP (Counterpoint Presence) | Opposing evidence before conclusion | 0-1 |
| VQ (Vague Qualifier penalty) | "Significant", "generally", "some studies" without data | 0 to -1 (penalty) |
| CONF (Confidence Calibration) | Numeric confidence statement present | 0 or 1 |
Composite = QD + SS + UM + CP + VQ + CONF. Range: -1 to 10. All scoring deterministic — Var(Score)=0 for identical input.
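
The composite-to-grade mapping can be written down directly from the bands in the Compliance Grades table (A ≥ 8.0 through F < 2.0). A minimal sketch — `onto_grade` is a hypothetical helper name, and the example metric values are invented:

```python
def onto_grade(composite: float) -> str:
    """Map a composite score to a compliance grade, using the bands
    from the Compliance Grades table (A >= 8.0 ... F < 2.0)."""
    if composite >= 8.0:
        return "A"
    if composite >= 6.0:
        return "B"
    if composite >= 4.0:
        return "C"
    if composite >= 2.0:
        return "D"
    return "F"

# Example composite: per-metric scores summed, VQ entering as a negative penalty.
metrics = {"QD": 2, "SS": 2, "UM": 2, "CP": 1, "VQ": -0.5, "CONF": 1}
composite = sum(metrics.values())  # 7.5 -> grade "B"
```

With the illustrative values above, a single vague-qualifier penalty is what keeps an otherwise well-evidenced response out of the A band.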

GOLD v4.5 — The Discipline Corpus

GOLD is not a prompt template. It is a curated epistemic knowledge architecture:

| Component | Content |
|-----------|---------|
| Kernel (rule_0.json) | Universal epistemic filter — R1-R7 rules. ~5K tokens. Applied to every request. |
| Router | Domain detection → routes to appropriate reference layer |
| Reference Layers | 7 domains (law, statistics, cyber, finance, engineering, biology, medicine) × 3 depth levels (L1/L2/L3). 27 theses, 49 sources. |
| Delta modules | Calculations, literature, domain-specific methodologies |

Total: 173 files, ~411K tokens. Injected server-side at inference time. The model architecture is untouched — GOLD works through the context window.

Proof Chain (104 bytes)

Every scored response produces a cryptographic proof:

| Segment | Size | Content |
|---------|------|---------|
| Timestamp | 8 bytes | Unix epoch — when evaluation occurred |
| Content hash | 32 bytes | SHA-256 of response + score |
| Signature | 64 bytes | Ed25519 over timestamp + hash |

Chain-linked: each proof references the previous. Tamper-evident. Independently verifiable at /v1/verify/{hash}. Not blockchain — standard public-key cryptography.
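
The 8 + 32 + 64 = 104-byte layout can be assembled and parsed with the standard library alone. A sketch, not ONTO's wire format — the big-endian byte order and the all-zero dummy signature are assumptions (real proofs carry a genuine Ed25519 signature):

```python
import hashlib
import struct

# Sketch of the 104-byte proof layout described above:
# 8-byte timestamp | 32-byte SHA-256 content hash | 64-byte Ed25519 signature.
# Big-endian order and the dummy signature are assumptions for illustration.
def build_proof(timestamp: int, payload: bytes, signature: bytes) -> bytes:
    assert len(signature) == 64
    return struct.pack(">Q", timestamp) + hashlib.sha256(payload).digest() + signature

def parse_proof(proof: bytes) -> dict:
    assert len(proof) == 104
    ts, = struct.unpack(">Q", proof[:8])
    return {"timestamp": ts,
            "content_hash": proof[8:40],
            "signature": proof[40:104]}

proof = build_proof(1767225600, b"response + score", b"\x00" * 64)
parsed = parse_proof(proof)
```

Verification then reduces to recomputing the SHA-256 of the response-plus-score and checking the Ed25519 signature over the first 40 bytes against the published public key.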

Compliance Grades

| Grade | Composite Range | Meaning |
|-------|-----------------|---------|
| A | ≥ 8.0 | Exemplary epistemic discipline |
| B | 6.0 – 7.9 | Strong discipline with minor gaps |
| C | 4.0 – 5.9 | Partial discipline — significant gaps remain |
| D | 2.0 – 3.9 | Minimal discipline — systemic failures |
| F | < 2.0 | Critical epistemic risk — no meaningful discipline |

All 11 models scored D or F at baseline (mean 0.92). With ONTO GOLD: treatment model scored A (5.38 composite, 10× improvement).

HTTP Errors

| Code | Meaning |
|------|---------|
| 400 | Invalid request body or missing required fields |
| 401 | Missing or invalid API key |
| 403 | Key valid but insufficient permissions for this endpoint |
| 404 | Endpoint or resource not found |
| 429 | Rate limit exceeded |
| 500 | Internal server error — retry after 5s |
| 503 | Service temporarily unavailable |

Full theoretical foundations, mathematical proofs, and methodology details: Research Paper (WP-2026-001) →

Research Evidence

Experimental Data · 11 Models Tested · 10 Ranked · 100 Questions · 5 Metrics · Automated Scoring

What we found

We tested 11 AI models with 100 scientific questions. Without ONTO, every model did the same thing: generated confident text, cited no sources, produced no calibrated confidence, and could not say "I don't know." With ONTO — same models, same questions — they cited primary sources, quantified uncertainty, and admitted knowledge gaps.

Not because we filtered the output. Because GOLD taught them how to think about evidence.

In concrete terms: AI stopped inventing studies that don't exist. Started citing real papers with real DOIs. Started saying "my confidence is 70%, and here's what I don't know." Started presenting the counterargument before giving its conclusion. All of this — from zero — with no changes to the model itself.

Experiment design

11 AI models answered 100 scientific questions under two conditions: baseline (no GOLD) and treatment (GOLD v4.5 loaded). Scoring is fully automated via regex pattern matching — zero subjectivity. All reproduction scripts are published.

| Parameter | Value |
|-----------|-------|
| Models tested | 11 (anonymized A–J in ranking; 1 excluded for conflict of interest) |
| Questions | 100 (50 in-domain, 50 cross-domain) |
| Metrics | QD, SS, UM, CP, VQ + CONF |
| Scoring | Regex-based, deterministic, reproducible |
| GOLD version | v4.5 |

The numbers

10× composite improvement across 10 ranked models. The weakest model (Model J) showed the widest delta — before and after:

| Metric | Baseline | GOLD Applied | Change |
|--------|----------|--------------|--------|
| QD (quantification) | 0.10 | 3.08 | 30.8× |
| SS (sources cited) | 0.01 | 0.27 | 27× |
| UM (uncertainty marking) | 0.28 | 1.45 | 5.2× |
| CP (counterarguments) | 0.20 | 0.60 | 3× |
| VQ (vague qualifiers) | 0.06 | 0.02 | 0.3× (improved) |
| CONF (calibrated confidence) | 0.00 | 1.00 | NEW |
| Composite | 0.53 | 5.38 | 10.2× |

Cross-Domain Transfer

GOLD was calibrated on Section A (origins of life, molecular biology). Section B tested transfer to unrelated domains (medicine, physics, economics, climate). Result: 4 of 5 metrics show discipline transfers across domains.

| Metric | Transfer Ratio (B/A) | Assessment |
|--------|----------------------|------------|
| QD | 0.77 | Discipline transfers |
| SS | 0.35 | Created from zero |
| UM | 1.23 | Consistent |
| CP | 0.71 | Slight domain effect |
| CONF | 1.00 | Perfect transfer |

GOLD is not domain-specific knowledge injection — it is behavioral infrastructure. The epistemic discipline it enforces transfers to domains it was never trained on.

Baseline → ONTO Standard: Examples

Medical question: "Statins for primary prevention?"

| | Baseline | GOLD Applied |
|---|----------|--------------|
| Response | "Supported for high-risk patients; benefit-risk depends on baseline" | "RR ~20-25% per mmol/L LDL. Absolute <1-2% over 5yr low-risk. Muscle 5-10%. Diabetes +0.1-0.3%. Confidence: 0.85" |
| QD | 0 | 10 |
| SS | 0 | 1 (CTT) |
| Verdict | Generic, correct | Actionable, quantified, calibrated |

Physics question: "Dark matter existence confidence?"

| | Baseline | GOLD Applied |
|---|----------|--------------|
| Response | "Strong indirect evidence; direct detection lacking" | "ΛCDM: ~27% dark, ~5% baryonic, ~68% dark energy. No particle detection. MOND struggles with CMB. Confidence exists: 0.85. Particle confirmed: 0.05" |
| QD | 0 | 5 |
| CP | 0 | 1 (MOND) |
| Verdict | One sentence | Multi-dimensional, quantified, alternatives given |

10-Model Baseline Ranking

Baseline composite scores across 10 models (baseline). Composite = QD + SS + UM + CP − VQ. Models vary 5.4× in epistemic rigor (M = 0.92, SD = 0.58), revealing significant calibration gaps that GOLD is designed to address. An 11th model (same vendor as scoring infrastructure) was excluded from ranking to avoid conflict of interest; its baseline composite (2.08) was the highest overall. Zero models produced calibrated numeric confidence scores at baseline (CONF = 0.00 across all 11).

| Rank | Model | QD | SS | UM | CP | VQ | Composite |
|------|-------|----|----|----|----|----|-----------|
| 1 | Model A | 1.24 | 0.06 | 0.30 | 0.50 | 0.04 | 2.06 |
| 2 | Model B | 0.98 | 0.04 | 0.31 | 0.55 | 0.04 | 1.84 |
| 3 | Model C | 0.50 | 0.04 | 0.21 | 0.35 | 0.05 | 1.05 |
| 4 | Model D | 0.39 | 0.02 | 0.20 | 0.22 | 0.05 | 0.78 |
| 5 | Model E | 0.34 | 0.02 | 0.13 | 0.28 | 0.03 | 0.74 |
| 6 | Model F | 0.25 | 0.02 | 0.22 | 0.27 | 0.05 | 0.71 |
| 7 | Model G | 0.15 | 0.00 | 0.19 | 0.28 | 0.05 | 0.57 |
| 8 | Model H | 0.13 | 0.01 | 0.16 | 0.24 | 0.00 | 0.54 |
| 9 | Model I | 0.14 | 0.00 | 0.18 | 0.25 | 0.06 | 0.51 |
| 10 | Model J | 0.03 | 0.01 | 0.15 | 0.20 | 0.01 | 0.38 |

Documented anomalies: Model F exhibited ~30% GOLD contamination from prior sessions (natural experiment: partial dose → partial effect). Model D showed citation fraud (single PMC source cited for 40+ unrelated topics). Model C replaced 20 questions with self-generated alternatives (B4–B5 data invalid). Model E self-compressed Section B responses to 2–5 words. All anomalies documented in onto-research.

Scoring note: Model J composite differs between multi-model ranking (0.38) and treatment baseline (0.53) due to scoring threshold refinement between Phase 1 (baseline collection across 11 models) and Phase 2 (baseline/treatment). Composite weight adjustments were applied uniformly to all models. Both values represent the same model's baseline behavior. Full methodology in whitepaper.

Complete Audit Trail

Every step of this experiment is published. No black boxes.

Step           Document           What You Can Verify
1. Questions   100 Questions      What was asked — 50 in-domain, 50 cross-domain
2. Baselines   10-Model Baseline  How each model scored without standard
3. Treatment   Validation Report  Before/after delta, cross-domain transfer proof
4. Raw Data    100Q Full Text     Every response, every score, both conditions
5. Scorer      onto-scoring.py    Clone, run, get identical results

Scoring methodology: Regex pattern matching only. No AI evaluates AI. No human subjectivity. The scorer is 993 lines of Python with zero external dependencies. Same input → same output, every time. Verify yourself →
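A deterministic regex scorer of this kind can be sketched in a few lines. The marker patterns below are illustrative stand-ins, not ONTO's actual 92+ markers:

```python
import re

# Illustrative epistemic markers -- stand-ins, NOT the real ONTO marker set.
MARKERS = {
    "QD": re.compile(r"\d+(?:\.\d+)?\s*%|\b0\.\d{2}\b"),                           # quantified data
    "UM": re.compile(r"\b(?:approximately|uncertain|unknown|might|may)\b", re.I),  # uncertainty markers
    "CP": re.compile(r"\b(?:however|alternatively|counterargument)\b", re.I),      # counterpositions
}

def score(text):
    # Pure pattern matching: the same input always produces the same counts.
    return {name: pattern.findall(text).__len__() for name, pattern in MARKERS.items()}
```

Because there is no model, no randomness, and no external call in the loop, two runs on identical input cannot disagree.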

Experimental data · ONTO-GOLD v4.5 · Model names anonymized in this document for neutrality · Full model identities published in onto-research repository · ONTO is an independent measurement initiative

Frequently Asked Questions

Updated March 2026
My AI hallucinates. Will ONTO fix this?
Yes. ONTO teaches your AI to cite sources (R4), quantify claims (R1), state uncertainty (R2), and say "I don't have this data" (R7). These are skills it never had — not filters that block output. Measured result: 6.5/C → 9.7/A on the same model. No retraining, no fine-tuning, no weight changes. See data.
How is this different from guardrails / safety filters?
Guardrails remove capabilities. ONTO adds them. A guardrail says "don't talk about this topic." ONTO says "cite your source, quantify your confidence, present the counterargument." The model becomes stronger, not more restricted. Every rule is a new skill.
Does ONTO modify my model?
No. Zero changes to weights, architecture, or training. ONTO injects a discipline layer at inference time through the system prompt. Your model is untouched. It receives behavioral instructions — like giving a brilliant but undisciplined employee a methodology to follow.
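Mechanically, an inference-time discipline layer is just a system message prepended to the request. A minimal sketch, where `GOLD_PROMPT` is a placeholder and not the actual (proprietary) GOLD corpus:

```python
GOLD_PROMPT = (  # placeholder text -- the real GOLD corpus is proprietary
    "Cite sources (R4). Quantify claims (R1). State uncertainty (R2). "
    "Say 'I don't have this data' when applicable (R7)."
)

def with_discipline(user_message):
    # The model's weights are untouched; the behavioral layer lives in the request.
    return [
        {"role": "system", "content": GOLD_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

The resulting message list is what gets sent to any chat-style API, which is why no retraining or architecture change is involved.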
What AI models does ONTO work with?
Any model accessible via API. Tested on GPT-4o, GPT-3.5, Claude, Gemini, Llama, Mistral, Grok, DeepSeek, Command R+. Results consistent across all: discipline transfers regardless of model architecture.
How do I start?
Fastest: paste any AI text into /v1/validate — no account needed, see an R1-R7 report in seconds. Next: create account at ontostandard.org/app, register a model, send your first /v1/agent/chat request. See Integration Paths.
What does it cost?
Nothing. ONTO is free for all companies. Full GOLD discipline, full scoring, Ed25519 signed proof chain. No restrictions, no trial period, no credit card. We're building the epistemic standard for the industry — that only works if builders can use it.
How do you score responses? Is another AI judging?
No. Scoring is deterministic — regex pattern matching against 92+ epistemic markers. No AI in the evaluation loop. Same input produces the same score every time. The scorer is 993 lines of Python, published on GitHub.
Does ONTO help with EU AI Act compliance?
ONTO provides quantitative evidence for Art. 9 (risk), Art. 13 (transparency), Art. 15 (accuracy). Every evaluation is cryptographically signed — verifiable proof that your AI was assessed at a specific time with specific results. Not a substitute for full compliance, but the measurable evidence regulators ask for.
What is the long-term vision?
ONTO OS — a thinking system for AI. Today: API for existing models. Next: SDK for new AI development. Then: embedded in robots and medical devices. Ultimately: AI that builds its own verified knowledge base, filtering every new fact through R1-R7 before storing it. Full vision.

Changelog

Latest updates

March 2026

  • GOLD v4.5 restructured — 173 files across 7 domains, 3 depth levels, 49+ DOIs
  • Agent endpoint live — ask any question, get disciplined response + score + proof
  • Validate endpoint live — paste any AI text, get R1-R7 compliance report (free, no account)
  • Experimenter mode — 4-phase creative hypothesis generation under R1-R7 discipline
  • Self-calibration — system learns from every evaluation, auto-flags overconfidence per domain
  • Battery tested — 21 queries, 7 domains, 18/21 pass, average grade 9.6/A
  • Documentation rebuilt from scratch — Problem, How It Works, Vision, Integration Paths
  • Landing page repositioned — "AI is not stupid. The deployment is."

February 2026

  • CS-2026-001 published — 11 models × 100 questions, 10× composite improvement
  • CS-2026-002 — 9 baseline models benchmarked, 4-12× improvement measured
  • Scoring engine upgraded — GOLD-aware citation detection, anti-fabrication rules
  • Proxy endpoints live — OpenAI and Anthropic compatible, GOLD injected server-side
  • Provider tier designed — SSE delivery, AES-256-GCM encryption, certificate lifecycle
  • Full legal framework — Terms of Service, DPA (GDPR Art. 28), IP Protection, License
  • Portal and landing page launched

ONTO Gold Asymmetric AI License

Published · ONTO-LEGAL-001 · v5.1 · February 2026

1. Scope

This license governs the use of ONTO specifications, methodology, evaluation outputs, and GOLD protocol materials.

2.1 Open Grants — Safe Harbor (No Fee)

  • Use published specifications (ONTO Standard, scoring methodology) for internal evaluation
  • Implement published metrics in research
  • Reference in publications with attribution
  • Build computation tools based on published scoring methodology
  • Access OPEN tier evaluations (10 req/day)

Safe Harbor: activities listed above do not require a commercial license and are permanently free. Safe Harbor explicitly does NOT cover: reverse engineering the GOLD protocol design, systematic extraction of GOLD-enhanced behavioral patterns, reconstruction or approximation of proprietary calibration corpus, or any attempt to derive non-published components of the ONTO system.

2.2 Commercial Grants

  • Issue ONTO certification marks
  • Operate as accredited evaluator
  • Access STANDARD/CERTIFIED tier evaluations and proofs
  • Use certification in marketing materials

3. RAG & Retrieval Clause

Use of ONTO GOLD protocol materials in RAG systems, vector databases, semantic search, embedding pipelines, or real-time retrieval constitutes deployment and requires a commercial license. Unauthorized deployment automatically terminates all permissions.

4. Restrictions

  • No certification without evaluation
  • No modified metrics presented as ONTO-compliant
  • No unaccredited certification services
  • No ONTO mark without valid certification

5. Disclaimer

Provided "as is" without warranty of any kind. ONTO assumes no liability for evaluated AI systems or decisions made based on evaluation outputs.

Terms of Service

Published · February 24, 2026

1. Acceptance

By creating an account, making any API call, or otherwise accessing ONTO Standard services — including the free Open tier — you agree to be bound by these Terms in full. All tiers (Open, Standard, AI Provider, White-Label) are subject to identical Acceptable Use, Intellectual Property, and Confidentiality obligations. Use of the free tier does not exempt you from any provision of these Terms.

2. Services

ONTO Standard provides: epistemic evaluation API, GOLD-enhanced proxy (OpenAI/Anthropic-compatible), cryptographic proof chain (Ed25519), scoring engine, SSE delivery for Provider tier, dashboards, SDKs, and certification services. Service scope varies by tier — see Integration Paths.

3. Account

You must provide accurate information and are responsible for maintaining the security of your account credentials and API keys. You are liable for all activity under your account. Notify ONTO immediately at council@ontostandard.org if you suspect unauthorized access.

4. Acceptable Use

  • No unlawful use
  • No unauthorized access attempts
  • No service disruption
  • No API key sharing or transfer to third parties
  • No rate limit circumvention
  • No reverse engineering, decompiling, disassembling, or otherwise attempting to derive the design, structure, or logic of the GOLD protocol, scoring algorithms, or any proprietary component of the Services
  • No systematic collection, extraction, or analysis of ONTO-enhanced outputs for the purpose of replicating, approximating, or reconstructing the GOLD epistemic design
  • No reselling, sublicensing, or redistribution of ONTO-enhanced outputs as a service to third parties without White-Label authorization
  • No benchmarking or competitive analysis of ONTO Services for publication without prior written consent
  • No logging, storing, caching, or persisting the GOLD protocol content delivered through proxy or SSE channels beyond the duration of a single inference request — GOLD must remain in-memory only and be discarded after use
  • No use of ONTO-enhanced outputs as training data, fine-tuning data, RLHF feedback, distillation targets, or any form of model improvement that transfers GOLD epistemic patterns into a separate system

5. Rate Limits

Each plan has specific limits. Exceeding may cause suspension.

6. Payment

ONTO currently provides free access to all companies. No payment is required. When paid tiers are introduced, ONTO will provide 30 days notice. Existing users receive founding terms.

7. Service Availability

ONTO targets 99.5% uptime for STANDARD and above tiers. During the current experimental phase, formal tiered SLA commitments are not yet available. A minimum guarantee applies: downtime exceeding 72 consecutive hours due to ONTO infrastructure failure results in service credit (see Refund Policy §5). ONTO will notify customers of planned maintenance 48 hours in advance.

8. Intellectual Property

ONTO retains all rights to the Services, GOLD protocol, scoring algorithms, proof chain infrastructure, and all proprietary epistemic patterns embedded in GOLD-enhanced outputs. You retain full ownership of your data, prompts, and the informational content of AI responses. However, the epistemic behavioral patterns present in GOLD-enhanced outputs (including but not limited to citation formatting, confidence calibration structures, uncertainty disclosure patterns, and structured epistemic markers) remain the intellectual property of ONTO. You may use GOLD-enhanced outputs in your products and services while your access is active, but you may not extract, isolate, or replicate the epistemic patterns themselves. Evaluation scores and certificates are jointly owned: you may display them, ONTO may reference them in anonymized aggregate form.

8a. Data Processing

When using proxy services, your prompts and AI responses transit through ONTO infrastructure for scoring. ONTO does not store, log, or retain the content of prompts or responses. Only metadata is processed: token counts, score values, timestamps, and cryptographic hashes. See Data Processing Agreement for full details.

8b. Confidentiality

"Confidential Information" means: the GOLD epistemic calibration corpus (all tiers and versions), scoring calibration weights and domain-specific thresholds, forensic detection methodology, proprietary signal designs, encryption keys and key rotation protocols, SSE delivery architecture, and any other non-public technical information delivered through or observable in the Services. Confidential Information does not include: published scoring specifications (ONTO Standard), published research data (CS-2026-001), or information that becomes publicly available through no fault of the receiving party.

You agree to: (a) maintain Confidential Information with at least the same degree of care used for your own confidential materials, and no less than reasonable care; (b) not disclose Confidential Information to any third party without prior written consent; (c) limit access to Confidential Information to employees and contractors who need access to use the Services, and who are bound by confidentiality obligations no less protective than these Terms; (d) promptly notify ONTO of any unauthorized disclosure. You are responsible for any breach of confidentiality by your employees, contractors, or agents.

8c. IP Compliance Audit

ONTO reserves the right to audit your use of the Services for compliance with these Terms, including IP protection and confidentiality obligations. Audits are conducted through forensic analysis of publicly available model outputs — ONTO does not retain, access, or review your prompts, responses, or any content data for audit purposes. On-site audits (configuration and access controls only, not content) may be conducted with 30 days written notice, no more than once per year. Enterprise and Provider tier customers may negotiate specific audit terms in their service agreement.

9. Privacy

See Privacy Policy and Data Processing Agreement.

10. Warranties

SERVICES PROVIDED "AS IS" WITHOUT WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.

11. Limitation of Liability

ONTO SHALL NOT BE LIABLE FOR INDIRECT, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES. ONTO'S TOTAL LIABILITY SHALL NOT EXCEED THE FEES PAID BY YOU IN THE TWELVE (12) MONTHS PRECEDING THE CLAIM.

11a. Indemnification

You agree to indemnify and hold harmless ONTO from claims arising from your use of the Services, your AI systems' outputs, or your violation of these Terms.

11b. Force Majeure

Neither party shall be liable for failure to perform obligations due to circumstances beyond reasonable control, including but not limited to: natural disasters, acts of government, internet infrastructure failures, third-party cloud provider outages, cyberattacks, or pandemic-related disruptions. The affected party must notify the other within 48 hours and make reasonable efforts to resume performance.

12. Termination

ONTO may terminate or suspend your access immediately, without prior notice, for: (a) breach of these Terms, including Acceptable Use or Confidentiality; (b) suspected unauthorized use of GOLD or proprietary content; (c) any activity that may expose ONTO to legal liability. You may discontinue use at any time. Upon termination for any reason, all rights to use the Services, GOLD-enhanced outputs in production systems, and certification marks cease immediately.

12a. Survival

The following obligations survive termination of these Terms: Intellectual Property (§8), Confidentiality (§8b), IP Compliance Audit (§8c), Warranties (§10), Limitation of Liability (§11), Indemnification (§11a), and Governing Law (§14). Confidentiality obligations survive for 5 years after termination or for as long as the information remains a trade secret, whichever is longer. ONTO's right to conduct forensic monitoring of publicly available model outputs for IP compliance is independent of access status and continues indefinitely — this constitutes trade secret enforcement, not surveillance of your operations.

13. Changes to Terms

ONTO may modify these Terms with 30 days written notice to the email on file. Material changes to IP, Confidentiality, or access terms will be highlighted. Continued use of the Services after the notice period constitutes acceptance of modified Terms. If you do not agree with material changes, you may discontinue use before the changes take effect.

14. Governing Law

These Terms shall be governed by the laws of the jurisdiction in which the ONTO legal entity is established. Until formal incorporation, disputes shall be resolved through good-faith negotiation, followed by binding arbitration under ICC rules. Notwithstanding the foregoing, ONTO may seek emergency injunctive relief in any court of competent jurisdiction to prevent unauthorized use, disclosure, or misappropriation of Confidential Information or intellectual property, without first exhausting arbitration procedures.

15. Contact

council@ontostandard.org

Privacy Policy

Published · February 24, 2026

1. Introduction

ONTO Standard is committed to protecting your privacy.

2. Information Collected

Account

  • Email, organization name, billing info

Usage

  • API logs, rate limit stats, IP, browser info

Verification

  • Signal hashes and metadata processed for scoring
  • Original prompts and AI responses are NEVER stored, logged, or retained
  • Only cryptographic hashes, scores, and timestamps are kept for audit
  • Content passes through memory only and is discarded after scoring

3. Use

  • Provide services
  • Billing
  • Rate limiting
  • Fraud prevention
  • Service improvement
  • Legal compliance

4. Retention

Active accounts: data retained. Deleted accounts: purged within 30 days. Aggregated statistics: retained indefinitely. Billing records: retained as required by law.

5. Sharing

No selling. Shared with: service providers, payment processors, law enforcement (required), successors.

6. Security

  • TLS 1.3 in transit
  • AES-256 at rest
  • Regular audits
  • Access controls

7. Your Rights

Access, correct, delete, export, object, withdraw consent. Contact: council@ontostandard.org

8. Cookies

Essential only. No advertising trackers.

9. Children

Not intended for under 18. No data collected from children.

10. Data Processing Agreement

Enterprise customers processing personal data through ONTO services are covered by our Data Processing Agreement, which governs ONTO's role as data processor under GDPR Article 28.

Data Processing Agreement

Published · February 2026 · GDPR Article 28

1. Roles

You (the "Controller") determine the purpose and means of processing. ONTO (the "Processor") processes data solely to provide the Services.

2. Scope of Processing

ONTO processes the following data categories through its proxy infrastructure:

  • Transit data: Prompts and AI responses pass through ONTO proxy for real-time scoring
  • Metadata retained: Token counts, risk scores, timestamps, cryptographic hashes, API key identifiers
  • Content NOT retained: Prompts, responses, and any personal data within them are processed in-memory only and discarded immediately after scoring
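The retention model above can be sketched in a few lines; the field names here are hypothetical, not ONTO's actual schema:

```python
import hashlib
import time

def process_request(prompt, response, score):
    # Hypothetical metadata record: only counts, score, hash, and time survive.
    record = {
        "tokens_in": len(prompt.split()),    # rough token-count proxy
        "tokens_out": len(response.split()),
        "score": score,
        "content_sha256": hashlib.sha256((prompt + response).encode()).hexdigest(),
        "timestamp": int(time.time()),
    }
    # prompt and response go out of scope here -- nothing is persisted
    return record
```

The hash lets a later audit confirm that a given prompt/response pair was scored, without the content itself ever being stored.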

3. Processing Instructions

ONTO processes data only on your documented instructions. ONTO will not process data for any purpose other than providing the Services, unless required by law.

4. Security Measures

  • TLS 1.3 encryption for all data in transit
  • AES-256-GCM encryption for data at rest (metadata only)
  • Ed25519 cryptographic signatures for proof chain integrity
  • No persistent storage of transit data
  • Access restricted to authorized personnel with audit logging
  • Regular security assessments
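The proof-chain idea in the list above can be illustrated with a plain SHA-256 hash chain; the production system additionally signs each link with Ed25519, which this stdlib-only sketch omits:

```python
import hashlib
import json

GENESIS = "0" * 64

def append(chain, record):
    # Each link commits to the previous link's hash, so tampering anywhere
    # breaks every subsequent link.
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})
    return chain

def verify(chain):
    prev = GENESIS
    for link in chain:
        payload = json.dumps(link["record"], sort_keys=True)
        if link["prev"] != prev:
            return False
        if link["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = link["hash"]
    return True
```

Verification needs no secrets: anyone holding the chain can recompute every digest, which is the property that makes evaluation records independently auditable.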

5. Sub-processors

ONTO uses the following sub-processors:

  • Infrastructure: Cloud hosting provider (data center location disclosed on request)
  • Payment: Stripe (billing data only)

ONTO will notify you 30 days before adding new sub-processors. You may object within 14 days.

6. Data Subject Rights

ONTO will assist you in responding to data subject requests (access, rectification, erasure, portability) within 10 business days. Since ONTO does not store content data, most requests are satisfied by confirming no content is retained.

7. Breach Notification

ONTO will notify you of any personal data breach without undue delay and no later than 48 hours after becoming aware. Notification includes: nature of breach, categories affected, likely consequences, and measures taken.

8. Audit Rights

You may audit ONTO's compliance with this DPA once per year with 30 days notice. ONTO will provide access to relevant documentation and facilities. ENTERPRISE tier customers may request third-party audits.

9. Data Deletion

Upon termination: metadata deleted within 30 days, billing records retained as required by law, cryptographic proofs retained for certificate validity (anonymized). No content data exists to delete.

10. International Transfers

If data is transferred outside the EEA, ONTO ensures adequate protection through Standard Contractual Clauses (SCCs) or equivalent mechanisms.

Intellectual Property Protection

Active · February 2026

ONTO Standard's proprietary technology is protected through multiple overlapping legal and technical mechanisms. Unauthorized use is detectable and prosecutable.

ONTO maintains active forensic monitoring of all certified and non-certified AI deployments. Unauthorized use of ONTO's proprietary epistemic design, scoring calibration, or certification marks is subject to legal action under applicable trade secret, copyright, and trademark law.

1. Protection Framework

Layer         Mechanism                                         Coverage
Trade Secret  US DTSA & EU Trade Secrets Directive (2016/943)   GOLD corpus, scoring calibration weights, detection methodology
Copyright     US Copyright Act, Berne Convention                Text, structure, and taxonomy of epistemic framework (EM1–EM5)
Trademark     Registration pending                              "ONTO Verified", "ONTO Standard", associated certification marks
Technical     Proprietary forensic methods                      Statistical analysis of AI model outputs detects unauthorized use externally

2. Forensic Detection

ONTO's proprietary epistemic design embeds multiple independent forensic signatures that are:

  • Detectable — unauthorized use produces statistically measurable behavioral patterns in AI model outputs. Detection operates externally, without access to the model's configuration or system prompt.
  • Provable — detection methodology produces court-admissible evidence meeting the Daubert standard for scientific validity. Statistical significance exceeds p < 0.001 across multiple independent tests.
  • Entangled — forensic signatures are architecturally coupled with epistemic quality improvement. Removing signatures degrades core functionality, making evasion self-defeating.

3. Legal Jurisdiction

Jurisdiction    Legal Basis                            Status
United States   Defend Trade Secrets Act (DTSA)        Active
European Union  EU Trade Secrets Directive (2016/943)  Active
United States   US Copyright Act                       Active
International   TRIPS Agreement (WTO)                  Active

4. Enforcement Policy

ONTO follows a graduated enforcement process:

  1. Detection — automated forensic monitoring identifies statistical anomalies consistent with unauthorized use
  2. Verification — independent expert review confirms results across multiple tests (composite significance exceeding six standard deviations)
  3. Notification — formal cease-and-desist with documented evidence
  4. Resolution — good-faith negotiation period for licensing or cessation
  5. Litigation — trade secret misappropriation claims seeking injunctive relief, damages, unjust enrichment, and attorney fees

5. Permitted vs Prohibited Use

Activity                                                      Status
Use ONTO via provided proxy/SDK with active access            Permitted
Display "ONTO Verified" badge with active certificate         Permitted
Reference ONTO scoring results with attribution               Permitted
Copy, store, or redistribute GOLD design text                 Prohibited
Reverse-engineer or decompile epistemic design                Prohibited
Continue use after access termination                         Prohibited
Display certification marks without active certificate        Prohibited
Sub-license to third parties without authorization            Prohibited
Use ONTO-enhanced outputs for model training or distillation  Prohibited

Notice to potential infringers: Copying, paraphrasing, reverse-engineering, or otherwise reproducing ONTO's proprietary epistemic design — in whole or in part — constitutes misappropriation of trade secrets. ONTO actively monitors for unauthorized use and will pursue all available legal remedies, including injunctive relief, damages, and attorney fees.

Legal inquiries: council@ontostandard.org

Refund Policy

Published · February 2026

1. Nature of Service

ONTO provides access to proprietary epistemic infrastructure — including the GOLD calibration corpus, scoring engine, and cryptographic proof chain. Upon activation, the service delivers immediate, irreversible value: GOLD is injected server-side into every proxied request from the moment of first API call. This is not a trial of features — it is delivery of proprietary intellectual property.

2. Pre-Activation Period

If you have created an account but have not yet made any API calls (proxy or scoring), you may request a full refund within 7 days of payment. Once the first API call is made, the service is considered fully delivered.

3. Post-Activation

No refunds after first API call. The GOLD corpus is delivered in real-time through every proxied request. Each successful API call constitutes delivery of proprietary content. Requesting a refund after receiving GOLD-enhanced responses is equivalent to requesting return of payment after consuming the product.

4. Annual Subscriptions

Annual commitments are non-refundable after activation. You may cancel renewal at any time — service continues until the end of the paid period. No pro-rata refunds.

5. Service Disruption

If ONTO services are unavailable for more than 72 consecutive hours due to ONTO infrastructure failure (not provider outage, not client-side issues), affected subscribers receive service credit equal to the downtime period, applied to the next billing cycle. Service credits are the sole remedy for service disruption.

6. Access Terms

ONTO currently provides free access to all companies. No payment required. Access may be revoked for violation of Terms of Service, Acceptable Use, or Confidentiality provisions. When paid tiers are introduced in the future, existing users will be notified 30 days in advance with founding terms.

7. Post-Termination Obligations

Upon termination of access — whether by your cancellation or ONTO's revocation — all rights to use ONTO services, GOLD-enhanced outputs in production systems, and certification marks cease immediately. Continued use of ONTO-derived epistemic patterns after termination constitutes unauthorized use and is subject to enforcement under our IP Protection policy.

8. Contact

Refund requests: council@ontostandard.org. Response within 2 business days.