Documentation

Secure every LLM interaction with one guardrail call. This guide walks through the production setup end to end.

1. Overview

Axiom sits between the user and your language model. Scan first, then decide whether to forward the request.

Request flow

  1. Your backend calls POST /api/v1/scan with the user prompt.
  2. Axiom returns a verdict, allowed: true | false, plus metadata (see the example response after this list).
  3. If allowed is false, block, sanitize, or log the attempt.
  4. If allowed is true, send the prompt to your LLM (Azure OpenAI, OpenAI, etc.).
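
A verdict looks roughly like this. Only the allowed flag is documented in this guide; the reason field below is an illustrative placeholder for the accompanying metadata.

{
  "allowed": false,
  "reason": "illustrative placeholder, not a documented field"
}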

Why it's simple

  • No SDKs or local agents, only HTTPS calls.
  • Drop-in compatible with any existing LLM stack.
  • Only two environment variables are needed.

2. Get an Axiom API key

Generate a key in the dashboard and keep it server-side only.

  • Open the Axiom dashboard and select Generate API key.
  • Copy the token (format: sk_ax_XXXXXXXXXXXXXXXXXXXXXXXX).
  • Store it securely in your environment variables.
  • Never expose your API key to the client or browser.

# .env
AXIOM_API_KEY=sk_ax_XXXXXXXXXXXXXXXXXXXXXXXX
AXIOM_API_BASE=https://axiom-api-heg8gxfrg7a2bef9.eastus-01.azurewebsites.net

# These are the ONLY Axiom variables you ever need.
# Axiom handles all internal connections and scanning automatically.
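
It also helps to fail fast at startup if either variable is missing. A minimal check for a Node process (the error message is ours, not Axiom's):

const { AXIOM_API_KEY, AXIOM_API_BASE } = process.env;
if (!AXIOM_API_KEY || !AXIOM_API_BASE) {
  // Fail at boot instead of on the first scan call.
  throw new Error("Set AXIOM_API_KEY and AXIOM_API_BASE before starting the server");
}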

3. Call Axiom

Send every prompt to /api/v1/scan using your API key. Axiom handles all backend logic automatically.

HTTP request

POST {AXIOM_API_BASE}/api/v1/scan
Authorization: Bearer sk_ax_...
Content-Type: application/json

{ "prompt": "user input goes here" }

Optional context

{
  "prompt": "user input goes here",
  "context": "optional description of app or user"
}
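
In code, include context only when you have it. A small sketch; the buildScanBody name is ours, not part of the Axiom API:

function buildScanBody(prompt: string, context?: string) {
  // context is optional; omit the field entirely when it is not provided
  return context ? { prompt, context } : { prompt };
}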

4. Server-side example (Node or TypeScript)

This is the simplest way to guard any backend that uses an LLM.

const AXIOM_API_BASE = process.env.AXIOM_API_BASE!;
const AXIOM_API_KEY = process.env.AXIOM_API_KEY!;

// Returns Axiom's verdict; the JSON body includes an allowed boolean plus metadata.
async function scanWithAxiom(prompt: string): Promise<{ allowed: boolean }> {
  const res = await fetch(`${AXIOM_API_BASE}/api/v1/scan`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${AXIOM_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ prompt })
  });

  // Surface transport and auth failures so callers can fail closed.
  if (!res.ok) throw new Error(`Axiom scan failed: ${res.status}`);
  return res.json();
}

Use this helper before sending any prompt to your LLM. If allowed is false, block it; if it is true, continue, as in the sketch below.
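
For example, here is a minimal gate in front of an LLM call. callLLM and the refusal message are placeholders for your own stack, not part of the Axiom API.

async function handleUserPrompt(userPrompt: string): Promise<string> {
  const verdict = await scanWithAxiom(userPrompt);

  if (!verdict.allowed) {
    // Blocked: log the attempt and answer with a safe, generic reply.
    console.warn("Axiom blocked a prompt");
    return "Sorry, I can't help with that request.";
  }

  // Allowed: forward the prompt to your LLM as usual.
  return callLLM(userPrompt); // callLLM: your existing LLM client (placeholder)
}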

5. Why this design

Axiom was built to be as secure and effortless as possible.

  • Isolation: You never touch Axiom’s internal configuration. Your key and base URL are the only requirements.
  • Simplicity: The integration works with any backend stack through one HTTP call.
  • Security: Because the API key never leaves the server, it cannot be leaked or misused by frontend code.
  • Self-contained: All analytics, scanning, and AI evaluation logic run inside Axiom's backend; like an Azure or Supabase connection, you configure it entirely through environment variables.

6. Mental model

  • 1. Get your key: store AXIOM_API_KEY and AXIOM_API_BASE in your environment.
  • 2. Scan every prompt: call POST /api/v1/scan before any LLM request.
  • 3. Respect the verdict: if allowed is false, block and respond safely; otherwise, continue normally.