# Integrate with SDK

Add NjiraAI governance to your AI agent using the Python or TypeScript SDK.
## Overview

The SDK integration pattern calls NjiraAI's govern endpoint for a verdict before executing a tool call, then logs the audit trail. This gives you full control over the request lifecycle.

**When to use:** multi-provider agents, custom tool orchestration, or when you need per-call verdict inspection.
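The govern → execute → audit lifecycle can be sketched end to end without the SDK. Everything below (`Verdict`, `govern_and_run`, the toy callables) is an illustrative stand-in for the real client, not part of the `njiraai` package:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Verdict:
    """Illustrative stand-in for the SDK's verdict object."""
    request_id: str
    action: str                      # "ALLOW", "BLOCK", or "MODIFY"
    modified_text: Optional[str] = None
    reason_text: str = ""


def govern_and_run(
    govern: Callable[[str, str], Verdict],
    audit: Callable[..., None],
    execute: Callable[[str], str],
    user_input: str,
    tool_name: str,
) -> Optional[str]:
    """Verdict-first lifecycle: govern -> execute -> audit."""
    verdict = govern(user_input, tool_name)          # step 1: get verdict
    if verdict.action == "BLOCK":
        return None                                  # input never reaches the tool
    effective = verdict.modified_text or user_input  # honor MODIFY rewrites
    result = execute(effective)                      # step 2: run the tool/LLM
    audit(request_id=verdict.request_id,             # step 3: log the decision
          tool_name=tool_name,
          verdict_action=verdict.action)
    return result


# Wiring it up with toy callables in place of the real SDK and LLM:
log = []
out = govern_and_run(
    govern=lambda text, tool: Verdict("req_1", "ALLOW"),
    audit=lambda **kw: log.append(kw),
    execute=lambda text: f"ran with: {text}",
    user_input="What is the weather today?",
    tool_name="weather_lookup",
)
print(out)  # ran with: What is the weather today?
```

The key design point the sketch captures: the verdict gates execution, so a `BLOCK` means the tool is never invoked at all.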
## Prerequisites

- NjiraAI services running (`make quickstart` or `make up-all`)
- API key (`nj_live_*` or `nj_test_*`)
## Python

### Install

```bash
pip install njiraai
```

### Basic usage
```python
import njiraai
from openai import OpenAI

# Initialize NjiraAI client
njira = njiraai.Client(
    api_key="nj_live_dev_key_12345",
    base_url="http://localhost:8081",  # Control-plane API
)

user_input = "What is the weather today?"

# Step 1: Get verdict before calling tool
verdict = njira.govern(
    input=user_input,
    tool_name="weather_lookup",
)

if verdict.action == "BLOCK":
    print(f"Blocked: {verdict.reason_text}")
else:
    # Step 2: Call LLM/tool with (possibly modified) input
    effective_input = verdict.modified_text or user_input
    llm = OpenAI()
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": effective_input}],
    )

    # Step 3: Log audit trail
    njira.audit(
        request_id=verdict.request_id,
        tool_name="weather_lookup",
        input=user_input,
        verdict_action=verdict.action,
    )
```
### Expected verdict (ALLOW)

```json
{
  "request_id": "req_abc123",
  "action": "ALLOW",
  "reason_code": "SAFE",
  "confidence": 0.95,
  "violations": [],
  "hazards_detected": [],
  "latency_ms": 12
}
```
### Expected verdict (BLOCK — PII detected)

```python
verdict = njira.govern(
    input="My credit card is 4111-1111-1111-1111",
    tool_name="notes_save",
)

# verdict.action == "BLOCK"
# verdict.reason_code == "PII_DETECTED"
# verdict.reason_text == "Credit card number pattern detected"
```
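NjiraAI's actual detection logic is its own; as a rough illustration of why this input trips a `PII_DETECTED` verdict, a card-number pattern plus a Luhn checksum is enough to flag it while ignoring ordinary digit runs:

```python
import re


def luhn_valid(number: str) -> bool:
    """Luhn checksum, the standard sanity check for card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0


# 13-16 digits, optionally separated by spaces or dashes
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def looks_like_card(text: str) -> bool:
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False


print(looks_like_card("My credit card is 4111-1111-1111-1111"))  # True
print(looks_like_card("Order #1234 shipped"))                    # False
```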
## TypeScript

### Install

```bash
npm install @njiraai/sdk
```

### Basic usage
```typescript
import { NjiraAI } from '@njiraai/sdk';
import OpenAI from 'openai';

const njira = new NjiraAI({
  apiKey: 'nj_live_dev_key_12345',
  baseUrl: 'http://localhost:8081',
});

const userInput = 'What is the weather today?';

// Step 1: Get verdict
const verdict = await njira.govern({
  input: userInput,
  toolName: 'weather_lookup',
});

if (verdict.action === 'BLOCK') {
  console.log(`Blocked: ${verdict.reasonText}`);
} else {
  // Step 2: Call LLM with (possibly modified) input
  const llm = new OpenAI();
  const response = await llm.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: verdict.modifiedText ?? userInput }],
  });

  // Step 3: Log audit
  await njira.audit({
    requestId: verdict.requestId,
    toolName: 'weather_lookup',
    input: userInput,
    verdictAction: verdict.action,
  });
}
```
## Verify

```bash
# Run SDK example (Python)
cd sdks/python/examples && python basic_govern.py

# Run SDK example (TypeScript)
cd sdks/typescript/examples && npx ts-node basic-govern.ts
```
## Success criteria

| Check | Expected |
|---|---|
| `govern()` returns a verdict object | ✅ |
| Safe input → `action: "ALLOW"` | ✅ |
| PII input → `action: "BLOCK"` | ✅ |
| `audit()` completes without error | ✅ |
## Govern response fields

| Field | Type | Description |
|---|---|---|
| `request_id` | string | Unique ID for this decision |
| `action` | string | `ALLOW`, `BLOCK`, or `MODIFY` |
| `reason_code` | string | Machine-readable reason |
| `reason_text` | string | Human-readable explanation |
| `confidence` | float | Score 0.0–1.0 |
| `violations` | string[] | Triggered rule IDs |
| `hazards_detected` | string[] | Detected hazard types |
| `modified_text` | string? | Corrected text (`MODIFY` only) |
| `latency_ms` | int | Evaluation time in ms |
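For stricter typing on the consumer side, the field table maps naturally onto a `TypedDict`. This typing (and the sample values for `reason_text` and `modified_text`) is illustrative, not shipped by the SDK:

```python
from typing import List, Optional, TypedDict


class GovernResponse(TypedDict):
    """Govern response fields, modeled as a typed dict."""
    request_id: str
    action: str                   # "ALLOW" | "BLOCK" | "MODIFY"
    reason_code: str
    reason_text: str
    confidence: float             # 0.0-1.0
    violations: List[str]         # triggered rule IDs
    hazards_detected: List[str]   # detected hazard types
    modified_text: Optional[str]  # populated only for MODIFY
    latency_ms: int


verdict: GovernResponse = {
    "request_id": "req_abc123",
    "action": "ALLOW",
    "reason_code": "SAFE",
    "reason_text": "No hazards detected",  # illustrative value
    "confidence": 0.95,
    "violations": [],
    "hazards_detected": [],
    "modified_text": None,
    "latency_ms": 12,
}
print(verdict["action"])  # ALLOW
```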
## Next steps
- Proxy integration — zero-code alternative
- Shadow → Enforce — safe production rollout
- Policy packs — customize what gets blocked