ONTO — Epistemic Discipline for AI
You built a model that can reason across domains, generate coherent arguments, and process millions of requests. Then safety protocols taught it to hedge every answer, refuse legitimate questions, and produce confident text backed by nothing. ONTO fixes this. Not by removing safety — by adding education. Seven rules (R1-R7) that teach your model to quantify its claims, name its uncertainty, present counterarguments, cite real sources, grade evidence, state what would prove it wrong, and admit what it doesn't know. One integration. Zero changes to your model weights. The discipline layer your team would build if they had a decade and a PhD in epistemology.
The Problem We Solve Together
Every AI provider faces the same trap: the safer your model becomes under current protocols, the less useful it is. Tone policing replaces precision. Blanket refusals replace nuance. Your model knows the answer — it's been trained not to say it clearly. This costs your users trust, your company revenue, and your engineers sleep. ONTO doesn't compete with your safety layer. It replaces the part that destroys value with a discipline layer that creates it.
What Your Model Gains (R1-R7)
- R1 Quantify: your model includes numbers, sample sizes, and confidence intervals instead of vague qualifiers.
- R2 Uncertainty: your model names what it doesn't know instead of adding generic disclaimers.
- R3 Counterarguments: your model presents opposing evidence before concluding instead of avoiding controversy.
- R4 Sources: your model cites primary sources with DOIs instead of refusing to answer.
- R5 Evidence grading: your model distinguishes an RCT from an opinion instead of treating all claims equally.
- R6 Falsifiability: your model states what would prove it wrong instead of suppressing bold claims.
- R7 No fabrication: your model says "I don't have this data" instead of hallucinating confidently.

Every rule is a skill. Your model loses zero capabilities and gains seven new ones.
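The rules above lend themselves to deterministic checking. Here is a toy sketch of what a rule check can look like — these two regexes and function names are illustrative assumptions, not ONTO's actual pattern set:

```python
import re

# Illustrative checks in the spirit of R1 and R7 only. The real ONTO
# scorer is a 993-line deterministic engine with 92 patterns; these
# two toy patterns are assumptions for demonstration.

QUANT = re.compile(r"\b\d+(?:\.\d+)?")  # any concrete number in the text
DISCLOSE = re.compile(r"\b(?:I don't have|no data|unknown)\b", re.IGNORECASE)

def check_r1_quantify(response: str) -> bool:
    """R1: does the response state at least one concrete number?"""
    return bool(QUANT.search(response))

def check_r7_disclosure(response: str) -> bool:
    """R7: does the response explicitly disclose missing data?"""
    return bool(DISCLOSE.search(response))

print(check_r1_quantify("The trial enrolled n = 412 participants."))  # True
print(check_r7_disclosure("I don't have data on 2026 sales."))        # True
```

Because checks like these are pure regex, scoring stays deterministic and reproducible — the same response always earns the same score.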
Measured Results
CS-2026-001: 11 models tested, 100 questions, 5 domains. Scored by a 993-line deterministic regex engine — zero AI in the evaluation loop.

- Source citation: 0.03 baseline → 0.82 with ONTO
- Calibrated confidence: 0.00 → 1.00
- Unknown disclosure: 0.04 → 0.96 (24x improvement)
- Quantification: 0.06 → 0.92
- Cross-domain transfer: 0/5 → 4/5 domains
- Inter-model variance: 0.58 → 0.11
- Composite improvement: 10x

All data published at github.com/nickarstrong/onto-research.
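The per-metric fold changes follow directly from the published baseline and ONTO scores and can be recomputed in a few lines (the metric keys here are abbreviations, not names from the study):

```python
# Fold improvements implied by the CS-2026-001 numbers above
# (baseline score -> score with ONTO); keys abbreviated for brevity.
metrics = {
    "source_citation": (0.03, 0.82),
    "unknown_disclosure": (0.04, 0.96),
    "quantification": (0.06, 0.92),
}
for name, (before, after) in metrics.items():
    print(f"{name}: {after / before:.0f}x")
```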
How Integration Works
One line of code. Change your AI client's base_url to https://api.ontostandard.org/v1/proxy. GOLD epistemic discipline is injected server-side into every request. No SDK installation. No model retraining. No weight modification. Works with the OpenAI, Anthropic, Google, Mistral, Meta, and Groq APIs. Your model architecture is untouched — ONTO adds the thinking layer at inference time. Every response is scored against R1-R7 and cryptographically signed with Ed25519 (104 bytes). Your users get better answers. Your compliance team gets proof.
```python
from openai import OpenAI

# Before — no epistemic discipline
client = OpenAI(api_key="sk-...")

# After — ONTO discipline applied at inference time
client = OpenAI(
    api_key="onto_...",
    base_url="https://api.ontostandard.org/v1/proxy",
)
# Same API. Same model. Seven new capabilities.
```
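Under the hood, the one-line change simply points an OpenAI-compatible client at the proxy. A minimal sketch of the raw request this produces, using only the standard library — the `/chat/completions` path, model name, and message are assumptions based on the OpenAI wire format, not ONTO documentation:

```python
import json
from urllib import request

# Build (but don't send) the HTTP request an OpenAI-style client would
# issue through the ONTO proxy. Model and message are placeholders.
payload = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What is the RCT evidence for X?"}],
}).encode()

req = request.Request(
    "https://api.ontostandard.org/v1/proxy/chat/completions",
    data=payload,
    headers={
        "Authorization": "Bearer onto_...",
        "Content-Type": "application/json",
    },
)
print(req.full_url)
```

Because the proxy sits at the transport layer, anything that can set a base URL — an SDK, a curl script, a LangChain config — can adopt it without code restructuring.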
Access
ONTO provides free access to all companies evaluating the standard. Full GOLD discipline, full scoring, signed proof chain. No restrictions, no watermarks, no trial period. We believe epistemic discipline should be accessible to anyone building AI that matters. Contact council@ontostandard.org.
Who Needs This
If your AI answers medical questions — your users deserve cited sources, not confident guesses. If your AI handles financial decisions — your clients deserve quantified uncertainty, not "it depends." If your AI operates in legal contexts — courts will ask for proof, not promises. If you're an AI provider — your model is already capable. ONTO makes that capability verifiable. Healthcare, finance, legal, defense, aviation, insurance, autonomous systems — anywhere AI confidence has consequences.
Regulatory Alignment
EU AI Act: ONTO compliance levels map directly to risk categories in Annex III. Continuous epistemic discipline provides quantitative evidence for high-risk AI conformity assessment. NIST AI RMF 1.0: ONTO metrics implement the MEASURE function with real-time quantitative inputs. ISO/IEC 42001: ONTO produces audit artifacts required for AI management system compliance — timestamped evaluations, Ed25519 signed certificates, metric histories.
Technical Architecture
Backend API: FastAPI at api.ontostandard.org. GOLD v4.5 discipline corpus: 173 files, 7 domains, 3 depth levels. Scoring engine: 993 lines deterministic Python (EM1-EM5 taxonomy, 92 epistemic patterns). Cryptographic proof: Ed25519 signatures over 104 bytes per evaluation. Database: PostgreSQL (encrypted at rest). No model weights access required. No GPU needed. Cross-platform.
Organization
ONTO Standards Council. Independent research initiative building epistemic discipline infrastructure for AI. Not affiliated with ISO, NIST, or governmental standards bodies. Contact: council@ontostandard.org. Website: ontostandard.org. Documentation: ontostandard.org/docs. Research: github.com/nickarstrong/onto-research. API: api.ontostandard.org.
What ONTO Is NOT
NOT a blockchain project (no Ethereum, no smart contracts, not related to ONTO wallet or ONTO ID). NOT an AI ethics framework (no fairness, bias, or transparency evaluation). NOT a content generation tool. NOT a benchmark or leaderboard — it measures epistemic discipline, not intelligence. NOT an ontology framework (despite the name). NOT a one-time audit — continuous discipline that transfers across domains. Does NOT require GPU hardware.
ONTO vs NIST AI RMF vs ISO 42001
| Aspect | NIST AI RMF | ISO/IEC 42001 | ONTO |
|---|---|---|---|
| Type | Risk governance framework | Management system standard | Epistemic discipline infrastructure |
| Focus | Organizational risk processes | AI management policies | What AI knows vs. what it claims to know |
| Output | Risk profile document | Audit certificate | Live signed proof per response |
| Mode | Periodic assessment | Periodic audit | Continuous (every inference) |
| Evidence | Process documentation | Compliance certificates | 11 models, 100 questions, 10x composite improvement |
These are complementary. NIST defines what to manage. ISO defines how to manage organizationally. ONTO proves that a specific AI system actually operates with epistemic discipline — with signed, reproducible evidence on every response.
Signed Proof
Every evaluation produces an Ed25519 cryptographic proof over 104 bytes — verifiable, tamper-evident proof that an AI output was disciplined at a specific moment. Not blockchain. Standard public-key cryptography for institutional-grade auditability.
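As an illustration only, 104 bytes could decompose into a 32-byte response hash, an 8-byte timestamp, and a 64-byte Ed25519 signature. This layout is a hypothetical assumption for the sketch below — ONTO's actual byte format is not specified in this document:

```python
import hashlib
import struct
import time

# Hypothetical 104-byte proof layout (assumed, not ONTO's real format):
#   32 bytes  SHA-256 hash of the AI response
#    8 bytes  big-endian Unix timestamp
#   64 bytes  Ed25519 signature over the preceding fields

def parse_proof(proof: bytes) -> dict:
    assert len(proof) == 104, "proof must be exactly 104 bytes"
    response_hash, ts, signature = struct.unpack(">32sQ64s", proof)
    return {"response_hash": response_hash, "timestamp": ts, "signature": signature}

# Build a dummy proof to show the round trip. The signature is zeroed
# here; real verification would check it against the issuer's Ed25519
# public key (e.g. with PyNaCl or the `cryptography` library).
h = hashlib.sha256(b"example response").digest()
proof = struct.pack(">32sQ64s", h, int(time.time()), b"\x00" * 64)
record = parse_proof(proof)
print(len(proof), record["response_hash"] == h)
```

Whatever the real layout, the auditing property is the same: anyone holding the public key can confirm that a given response and timestamp were signed, without trusting the party that stored them.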