1. Connect to the ONTO SSE stream to receive the GOLD discipline layer
2. Cache GOLD locally and inject it into your system prompt
3. Your models serve requests directly; ONTO is not in the request path
4. SSE heartbeat every 30 s; GOLD updates are pushed automatically
```python
# Connect to ONTO SSE and receive the GOLD discipline layer
import json
import threading

import requests

ONTO_KEY = "YOUR_ONTO_API_KEY"
GOLD = ""  # updated in the background by the SSE listener

def sse_listener():
    global GOLD
    r = requests.get(
        "https://api.ontostandard.org/v1/gold/stream",
        headers={"x-api-key": ONTO_KEY},
        stream=True,
    )
    for line in r.iter_lines():
        if not line:
            continue  # blank keep-alive line
        decoded = line.decode()
        if not decoded.startswith("data:"):
            continue  # skip heartbeat comments and non-data SSE fields
        data = json.loads(decoded.removeprefix("data:").lstrip())
        if data["type"] in ("gold_corpus", "gold_update"):
            GOLD = data["discipline_layer"]
            print(f"GOLD received: {len(GOLD)} chars")

threading.Thread(target=sse_listener, daemon=True).start()
```
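Note that per the SSE wire format, the stream also carries blank keep-alive lines and comment lines starting with `:` (how heartbeats are typically sent), which must not reach `json.loads`. A minimal sketch of that framing logic in isolation (`parse_sse_line` is an illustrative helper name, not part of any ONTO SDK):

```python
import json

def parse_sse_line(raw: bytes):
    """Parse one SSE line; return the decoded JSON payload of a
    data: frame, or None for blanks, comments, and other fields."""
    line = raw.decode("utf-8").strip()
    if not line or line.startswith(":"):  # blank keep-alive or heartbeat comment
        return None
    if not line.startswith("data:"):      # e.g. "event:" or "id:" fields
        return None
    return json.loads(line[len("data:"):].lstrip())

# Heartbeats and comments are skipped; data frames are decoded:
assert parse_sse_line(b": heartbeat") is None
assert parse_sse_line(b"") is None
payload = parse_sse_line(b'data: {"type": "gold_update", "discipline_layer": "..."}')
assert payload["type"] == "gold_update"
```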
```python
# Inject GOLD into every request using your existing code
from openai import OpenAI

client = OpenAI()  # your own API key, your own infra

def chat(user_message):
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": GOLD},  # from the SSE listener
            {"role": "user", "content": user_message},
        ],
    )
```
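Because GOLD arrives asynchronously, early requests may fire before the SSE listener has populated it, which would inject an empty system prompt. One way to guard against that (the helper name `build_messages` is illustrative, not part of the ONTO API):

```python
def build_messages(gold: str, user_message: str) -> list:
    """Prepend the GOLD system prompt only once it has been received."""
    messages = []
    if gold:  # skip the system turn until the SSE listener fills GOLD
        messages.append({"role": "system", "content": gold})
    messages.append({"role": "user", "content": user_message})
    return messages
```

The `chat()` call then becomes `client.chat.completions.create(model="gpt-4o", messages=build_messages(GOLD, user_message))`.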
ONTO injects the GOLD discipline layer into every request server-side. Your provider key is never stored.
```python
# Replace your OpenAI client with:
client = OpenAI(
    api_key="YOUR_ONTO_API_KEY",
    base_url="https://api.ontostandard.org/v1/proxy",
    default_headers={
        "X-Provider-Key": "sk-...your_openai_key"
    },
)
```
```shell
curl -X POST https://api.ontostandard.org/v1/proxy/chat/completions \
  -H "x-api-key: YOUR_ONTO_API_KEY" \
  -H "X-Provider-Key: sk-...your_openai_key" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Your question"}]}'
```
## API & SDK

Connect your AI to the ONTO GOLD epistemic discipline layer.
```python
from openai import OpenAI

# Before (direct to OpenAI):
client = OpenAI(api_key="sk-...")

# After (through the ONTO proxy; GOLD injected server-side):
client = OpenAI(
    api_key="onto_sk_...",
    base_url="https://api.ontostandard.org/v1/proxy",
    default_headers={"X-Provider-Key": "sk-...your_openai_key"},
)

# Everything else stays the same
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Your question"}],
)
```
```python
# Same proxy endpoint; add the X-Provider-Base-URL header for other providers:
client = OpenAI(
    api_key="onto_sk_...",
    base_url="https://api.ontostandard.org/v1/proxy",
    default_headers={
        "X-Provider-Key": "sk-...your_provider_key",
        "X-Provider-Base-URL": "https://api.deepseek.com/v1",
    },
)
```
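The two proxy variants above differ only in their headers: the provider key is always required, and the provider base URL only for non-OpenAI backends. A small helper can build them for any OpenAI-compatible backend (`proxy_headers` is an illustrative name, not part of an official SDK):

```python
def proxy_headers(provider_key: str, provider_base_url: str = None) -> dict:
    """Build the default_headers dict the ONTO proxy expects."""
    headers = {"X-Provider-Key": provider_key}
    if provider_base_url:  # only needed when not routing to OpenAI
        headers["X-Provider-Base-URL"] = provider_base_url
    return headers

# OpenAI (default backend): one header
openai_headers = proxy_headers("sk-...your_openai_key")
# DeepSeek or any other OpenAI-compatible API: two headers
deepseek_headers = proxy_headers("sk-...your_provider_key",
                                 "https://api.deepseek.com/v1")
```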
```shell
curl -X POST https://api.ontostandard.org/v1/proxy/chat/completions \
  -H "x-api-key: onto_sk_..." \
  -H "X-Provider-Key: sk-...your_openai_key" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Your question"}]}'
```
| Tier | Proxy Rate | GOLD Level | Proof Chain |
|---|---|---|---|
| Open | 10/day | GOLD Core | Ed25519 signed |
| Standard | 1,000/day | GOLD Extended | Ed25519 signed |
| AI Provider | Unlimited | GOLD Full | Ed25519 + audit trail |
## Access Tiers

**Open**
- 10 proxy requests/day
- GOLD epistemic discipline layer
- Scoring API

**Standard**
- 1,000 proxy requests/day
- GOLD epistemic discipline layer
- Scoring API

**AI Provider**
- Direct event-stream integration
- No proxy replication layer
- No traffic amplification
- GOLD epistemic discipline layer
- Scoring API
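Since the Open tier allows only 10 proxy requests per day, a client can fail fast locally instead of burning a request on a quota error. A minimal sketch of such a counter (the `DailyQuota` class is illustrative; only the per-tier limits come from the table above):

```python
import datetime

class DailyQuota:
    """Track requests against a per-day limit, resetting at UTC midnight."""
    def __init__(self, limit: int):
        self.limit = limit
        self.day = None
        self.used = 0

    def try_acquire(self) -> bool:
        today = datetime.datetime.now(datetime.timezone.utc).date()
        if today != self.day:   # new UTC day: reset the counter
            self.day, self.used = today, 0
        if self.used >= self.limit:
            return False        # quota exhausted; skip the proxy call
        self.used += 1
        return True

quota = DailyQuota(limit=10)    # Open tier: 10 proxy requests/day
```

Call `quota.try_acquire()` before each proxy request and fall back (or queue) when it returns `False`.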
Invoice payment requires a registered organization.