GOLD is a curated scientific knowledge base — optimized for AI inference, delivered via SSE. Your models gain structured epistemic discipline: citations, confidence quantification, uncertainty disclosure. One integration. Every model in your fleet.
Full GOLD access. Your model, your questions, our discipline layer. No restrictions, no trial period. You see the scores. You compare before and after. No commitment, no risk.
If the numbers don’t speak — you owe nothing. The data is yours either way.
"Studies show significant benefits for high-risk patients. Experts generally recommend this approach as part of a comprehensive treatment plan."
Zero sources. Zero numbers. Zero uncertainty. Reads like authority — backed by nothing.
Patikorn et al. (2022) meta-analysis (n=410): HbA1c reduced by −0.53% (95% CI: −0.88 to −0.17).
Confidence: ~70%. Unknown: optimal protocol duration. Long-term data limited to 3 studies.
Counterargument: caloric restriction without time restriction produced comparable reduction (−0.48%).
GET /v1/gold/stream with your API key.

`system_prompt = gold["discipline_layer"] + your_prompt`

GOLD is text in your system prompt. No additional API call per request. No routing through third-party servers. No added latency. ONTO delivers once; you distribute to every model. We are never in your inference path.
Zero (0) requests to ONTO for every client response your models serve.
Proxy tiers (Open, Standard) route each request through ONTO — limited by daily rate. SSE eliminates this entirely. Provider connects once via GET /v1/gold/stream, receives the full GOLD discipline layer (~4K tokens), caches locally, and injects into system prompts independently. ONTO is removed from the request path.
Your client sends 1 request or 100 million — ONTO sees zero of them. That is SSE(0).
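The SSE(0) pattern above can be sketched in a few lines. The event names (`gold_corpus`, `heartbeat`) and the prompt-injection rule come from ONTO's own stream description; the parser and helper functions here are an illustrative sketch, not an official client, and operate on a captured payload rather than a live connection.

```python
def parse_sse_events(raw: str):
    """Parse a raw SSE payload into (event, data) pairs."""
    events = []
    for block in raw.strip().split("\n\n"):
        event, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data_lines)))
    return events

def build_system_prompt(gold_corpus: str, your_prompt: str) -> str:
    # GOLD is plain text prepended to your own system prompt;
    # no ONTO call happens at request time.
    return gold_corpus + "\n\n" + your_prompt

# Example payload: one gold_corpus event followed by a heartbeat.
raw = (
    "event: gold_corpus\n"
    "data: [GOLD discipline layer ...]\n"
    "\n"
    "event: heartbeat\n"
    "data: {}\n"
)
events = parse_sse_events(raw)
gold = next(data for event, data in events if event == "gold_corpus")
prompt = build_system_prompt(gold, "You are a clinical assistant.")
```

Once `gold` is cached locally, every production request is served from your own infrastructure; that is what keeps ONTO out of the request path.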
GOLD is delivered with multi-layer forensic protection:
Digital watermark: Invisible per-client markers. Unique per session. Survives paraphrasing. If GOLD leaks — we trace it to the source.
104-byte proof chain: Every scoring event signed with Ed25519. 8 bytes timestamp + 32 bytes SHA-256 hash + 64 bytes signature. Tamper-evident, chain-linked, independently verifiable.
Phase 3 (planned): AES-256-GCM encrypted SSE with onto-gold SDK. Key rotation every 24h. Memory-only decryption — GOLD never written to disk.
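The 104-byte record layout (8-byte timestamp + 32-byte SHA-256 + 64-byte Ed25519 signature) can be sketched as follows. The byte layout and chain-linking are taken from the description above; the signer here is a stand-in (a real deployment would use an Ed25519 key, e.g. via the `cryptography` package, whose signatures are also exactly 64 bytes), and the field names are illustrative.

```python
import hashlib
import struct
import time

TS_LEN, HASH_LEN, SIG_LEN = 8, 32, 64  # 8 + 32 + 64 = 104 bytes

def build_proof(payload: bytes, prev_record: bytes, sign) -> bytes:
    """Pack one chain-linked proof record.

    The hash covers the scoring payload plus the previous record,
    which is what makes the chain tamper-evident: altering any
    record invalidates every hash after it.
    """
    ts = struct.pack(">Q", int(time.time()))                 # 8-byte timestamp
    digest = hashlib.sha256(payload + prev_record).digest()  # 32-byte hash
    signature = sign(ts + digest)                            # 64-byte signature
    record = ts + digest + signature
    assert len(record) == TS_LEN + HASH_LEN + SIG_LEN
    return record

# Stand-in signer: NOT a real Ed25519 signature, just a 64-byte digest
# used so the sketch runs without a key library.
fake_sign = lambda msg: hashlib.sha512(msg).digest()

genesis = b"\x00" * 104
r1 = build_proof(b"score-event-1", genesis, fake_sign)
r2 = build_proof(b"score-event-2", r1, fake_sign)
```

An independent verifier can recompute each record's hash from the payload and the previous record, then check the signature against ONTO's public key.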
GOLD is not a prompt template. It is a curated scientific knowledge architecture — structured epistemic data from peer-reviewed research, clinical guidelines, regulatory frameworks, and domain-specific methodologies. Optimized for LLM consumption: compressed, cross-referenced, and designed to activate citation, quantification, and uncertainty disclosure behaviors across any model architecture.
The result: your model doesn’t guess. It references. It doesn’t assert. It quantifies. It doesn’t hide gaps. It discloses them.
Patikorn et al. (2022) meta-analysis (n=410): HbA1c −0.53%. Confidence: ~70%. Long-term data limited.
Model cites source, quantifies effect size, discloses data gaps
Basel III tier-1 capital ratio minimum: 6%. Unknown: jurisdiction-specific surcharges vary 1–3.5%. Note: post-2023 reforms may alter thresholds.
Model quantifies regulation, flags jurisdictional uncertainty
Daubert v. Merrell Dow (1993) established 4-factor test. Confidence: high (Supreme Court precedent). But: state courts vary in adoption.
Model cites precedent, notes jurisdictional variance
Direct event-stream integration. Zero per-request overhead. Zero traffic amplification.
Single SSE connection delivers GOLD corpus once — you cache locally and distribute internally. No persistent polling. No duplicated proxy traffic. No egress amplification. No compute overhead per request. No burst risk.
For infrastructure teams: SSE(0) means your monitoring shows zero ONTO calls during production traffic. Predictable, linear, auditable.
One integration covers every model in your fleet. GPT, Claude, Gemini, Llama, Mistral, your fine-tunes, your custom models. No per-model licensing. No per-model training. Deploy a new model — GOLD works on it within the hour.
Pre-deployment calibration: Score any model through ONTO before production launch. Know the exact epistemic quality before your customers see it. Not “launch and hope” — measure, verify, then ship.
Not a single model among 11 tested provides numeric confidence levels. Zero models cite verifiable sources consistently. Mean epistemic quality: 0.53 out of 10. This is what ships to your customers today.
27× more citations = lower liability risk, defensible outputs in regulated environments. Confidence quantification from zero = your customers know when to trust and when to verify. 5× uncertainty disclosure = fewer silent failures, fewer support tickets, fewer “your AI told me wrong” escalations.
We publish anomalies, not just results. CS-2026-001 documents: Perplexity citation fraud (single PMC source recycled across 40+ topics), Grok partial GOLD contamination (~30%), Alice protocol violation (replaced test questions). Claude excluded from ranking (conflict of interest — same vendor as judge). All data, all anomalies, all scripts: GitHub.
| Parameter | RLHF / Fine-tuning | ONTO GOLD |
|---|---|---|
| Cost | $500K – $2M per model | Free access (full GOLD) |
| Deploy time | 3–6 months | 1 hour |
| Model updates | Re-train per base model | Auto-updates, all models |
| Scope | One model | Entire fleet |
| Verification | Manual QA | Deterministic, Ed25519 signed |
| Result | Incremental, degrades | 10× measured, maintained |
In CS-2026-001, the weakest model with GOLD scored 5.38/A. The strongest without scored 0.94/D.
Lower compute = less electricity = less cooling = lower token cost = flexible pricing for your customers.
Low-cost providers compete on token price. That race ends at margin zero. ONTO-certified providers compete on verified epistemic quality — where price competition is irrelevant.
Regulated industries don't buy cheapest. They buy provable. Research institutions select providers whose quality is independently measured and cryptographically signed.
We're building the epistemic standard for the industry. That only works if the best providers use it. Right now, we need partners who test GOLD on real production traffic and share what works. You get full discipline for your entire fleet. We get validation from teams building the future of AI.
This is not a loss leader. This is how standards are born — through adoption, not invoices.
Talk to us about your fleet: council@ontostandard.org
| Capability | Proxy path | Provider SSE path |
|---|---|---|
| GOLD delivery | Per-request injection via API | Direct SSE stream (cache locally) |
| Models | Your models via our proxy | Any model, your infrastructure |
| Request overhead | 1 ONTO call per request | SSE(0) — zero ONTO calls |
| Certification | Per-account scoring | Per-model public verification + Ed25519 |
| Architecture | ONTO in inference path | ONTO never in inference path |
| For whom | Companies using AI | Companies building AI products |
Both paths are free during the partnership period. Full specification →
| Zone | With ONTO | Without |
|---|---|---|
| Sales cycle | Certificate shortens compliance review | Months proving safety to each prospect |
| Insurance / Liability | Proof of epistemic control = insurable risk | High premiums or self-insurance for AI liability |
| Pricing power | Premium for verified output — clients pay for trust | Price competition on $/token — race to bottom |
| EU AI Act | Built-in conformity evidence (Art. 9, 13, 15, 43) | Years of preparation and documentation |
| Model collapse | GOLD = stable epistemic anchor, immune to synthetic data degradation | Quality degrades over time on synthetic training data |
Your model, your questions, our discipline layer. You see the data before any commitment. If the numbers don't speak — you owe nothing.
Stream: `gold_corpus` event on connect; heartbeat every 30s; `gold_update` on change; full corpus resent on reconnect.
Cache: valid 1 hour after disconnect, STALE after; active sessions keep the current version.
Scoring: 5% sampling, 5-minute batches; 92+ markers (EM1–EM5); deterministic: same input, same score.
Certification: SSE + evaluation = CERTIFIED; public page at /verify/model/{id}, Ed25519 signed.
Encryption: Phase 3 adds AES-256-GCM, key rotation, memory-only decryption; current transport is TLS 1.3 + NDA.
Integration: prompt = gold + yours; SHA-256 check; config from dashboard; no SDK.
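The SHA-256 check before injection can be sketched like this. The check itself and the "prompt = gold + yours" rule come from the spec above; the function name and the source of the expected checksum (here computed locally, in practice published via the dashboard) are assumptions of this sketch.

```python
import hashlib

def verify_and_inject(gold_corpus: str, expected_sha256: str, your_prompt: str) -> str:
    """Verify the cached GOLD corpus against its expected SHA-256
    before injecting it into the system prompt."""
    actual = hashlib.sha256(gold_corpus.encode("utf-8")).hexdigest()
    if actual != expected_sha256:
        # Stale or tampered cache: refuse to inject, refresh via the stream.
        raise ValueError("GOLD corpus failed integrity check")
    return gold_corpus + "\n\n" + your_prompt

corpus = "[GOLD discipline layer ...]"
checksum = hashlib.sha256(corpus.encode("utf-8")).hexdigest()
prompt = verify_and_inject(corpus, checksum, "You are a research assistant.")
```

Failing closed on a checksum mismatch keeps a corrupted cache from silently degrading every response built on it.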
| Method | Path | Auth | Purpose |
|---|---|---|---|
| GET | /v1/gold/stream | Key | SSE — GOLD corpus |
| POST | /v1/models | Key | Register model |
| POST | /v1/models/evaluate | Key | Score + proof |
| POST | /v1/models/evaluate/batch | Key | Batch eval |
| GET | /v1/verify/{hash} | — | Public verification |
Each scored model receives a public verification page: /verify/model/{id}. Ed25519 signed. Independently reproducible. Your sales team uses this in enterprise deals: “Our models are independently certified for epistemic quality by ONTO Standards Council.”
For regulated industries (MedTech, FinTech, GovTech, Defense) — this is the difference between “trust us” and “verify it yourself.”
Free access. No trial period. No commitment. We believe the numbers will speak for themselves.