Documentation Index
Fetch the complete documentation index at: https://docs.complior.ai/llms.txt
Use this file to discover all available pages before exploring further.
Complete schema reference for the 13 MCP tools exposed by `complior mcp`. Compatible with Claude Code, Cursor, Windsurf, OpenCode, Codex, Devin, and aider.
Source of truth: `engine/core/src/mcp/tools.ts` (Zod schemas) + `engine/core/src/mcp/handlers.ts` (handler logic).
The toolset is organized into three groups:
- Core (7) — scan, fix, status, explain, search_tool, classify, report (since v1.0.0)
- Builder (3) — passport_init, doc_generate, redteam (since v1.1.0)
- Analytics (3) — evidence_verify, drift_detect, obligations_status (since v1.1.0)
## complior_scan

Scan a project for EU AI Act compliance. Returns score, violations, and top findings.

| Field | Type | Required | Description |
|---|---|---|---|
| path | string | No | Project path to scan (default: current directory) |
Output
```json
{
  "score": 75,
  "zone": "yellow",
  "totalFindings": 12,
  "criticalFindings": 0,
  "topFindings": [
    {
      "checkId": "l4-bare-llm",
      "severity": "medium",
      "message": "Bare LLM API call detected in 3 files. Consider @complior/sdk for runtime compliance.",
      "fix": "Wrap with @complior/sdk for runtime Art. 50/12/14 enforcement"
    }
  ]
}
```
Example agent usage
You: Check the compliance of this project.
Agent calls complior_scan with { path: "." } → returns score 75/100 with 12 findings.
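Under the hood, an MCP client issues a `tools/call` JSON-RPC request for this step. A minimal sketch of that payload (the `tools/call` method name comes from the MCP specification; the tool name and arguments follow the schema above, and the helper function is purely illustrative):

```python
import json

def build_scan_request(path=".", request_id=1):
    """Sketch of the JSON-RPC 2.0 envelope an MCP client sends to
    invoke complior_scan; "tools/call" is the standard MCP method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "complior_scan",
            "arguments": {"path": path},
        },
    }

print(json.dumps(build_scan_request(), indent=2))
```

The same envelope shape applies to every tool below; only `params.name` and `params.arguments` change.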
## complior_fix

Auto-fix a specific compliance violation. Returns diff preview and score delta.

| Field | Type | Required | Description |
|---|---|---|---|
| checkId | string | Yes | Check ID of the finding to fix (e.g., "ai-disclosure") |
| obligationId | string | No | Obligation ID (e.g., "eu-ai-act-OBL-015") |
Output
```json
{
  "applied": true,
  "diff": "...",
  "scoreImpact": 5,
  "filesModified": ["src/chatbot.ts"],
  "newScore": 80
}
```
Example agent usage
You: Fix the bare LLM call.
Agent calls complior_fix with { checkId: "l4-bare-llm" } → wraps with SDK, returns +5 score impact.
## complior_status
Get current compliance score and category breakdown from the last scan.
No parameters.
Output
```json
{
  "score": 75,
  "zone": "yellow",
  "categoryBreakdown": [
    { "category": "transparency", "score": 80, "weight": 0.2 },
    { "category": "documentation", "score": 60, "weight": 0.3 }
  ],
  "lastScanAt": "2026-05-03T15:00:00.000Z"
}
```
If no scan results exist, returns { message: "No scan results yet. Run complior_scan first." }.
## complior_explain

Explain an EU AI Act article or obligation in plain language with code implications.

| Field | Type | Required | Description |
|---|---|---|---|
| article | string | Yes | Article reference (e.g., "Art. 50" or "OBL-015") |
Output
Plain-text explanation with:
- Article title and risk level
- What it requires
- Penalty for non-compliance
- Code implications (what to add / change in your project)
- Related obligations
Example agent usage
You: Explain Article 50.
Agent calls complior_explain with { article: "Art. 50" } → returns transparency requirements + disclosure SDK example.
## complior_search_tool

Search the AI tool catalog for compliance information about a specific tool.

| Field | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Tool name or keyword (e.g., "openai", "langchain") |
Output
```json
{
  "matches": [
    {
      "name": "openai",
      "category": "LLM Provider",
      "complianceNotes": "Requires Art. 50 disclosure when used in user-facing apps. EU data residency available via Azure OpenAI EU regions.",
      "alternatives": ["mistral", "anthropic-eu"]
    }
  ]
}
```
## complior_classify

Classify the risk level of an AI system based on its description and domain.

| Field | Type | Required | Description |
|---|---|---|---|
| description | string | Yes | Description of the AI system |
| domain | string | No | Business domain ("healthcare", "finance", "hr", etc.) |
Output
```json
{
  "riskLevel": "high",
  "rationale": "Healthcare diagnostic AI matching Annex III point 5 (medical devices) — high risk under EU AI Act.",
  "applicableArticles": ["Art. 9", "Art. 10", "Art. 13", "Art. 14", "Art. 26", "Art. 27"],
  "requiredDocuments": ["FRIA", "Risk Management System", "Technical Documentation", "Data Governance"]
}
```
## complior_report

Generate a compliance report in JSON or Markdown format.

| Field | Type | Required | Description |
|---|---|---|---|
| format | `"json" \| "markdown"` | No | Output format (default: markdown) |
Output
- `format: "markdown"` — Markdown report with score, findings table, top issues, action plan
- `format: "json"` — structured JSON with full report data (compatible with `complior report --json`)
Requires a prior scan. Returns "No scan results. Run complior_scan first." otherwise.
## complior_passport_init

New in v1.1.0 (Builder group). Generate Mode 1 Auto Agent Passport(s) by AST-scanning the project.
Each passport is ed25519-signed and includes 36 fields covering identity, capabilities, autonomy, permissions, oversight, and lifecycle.

| Field | Type | Required | Description |
|---|---|---|---|
| path | string | No | Project path to scan for AI agents (default: current directory) |
| agentName | string | No | Filter to a single agent name (skip if multiple agents discovered) |
| force | boolean | No | Overwrite existing passports while preserving created / deployed_since timestamps |
Output
```json
{
  "passports": [
    {
      "name": "support-bot",
      "schema_version": "1.0.0",
      "passport_id": "passport-support-bot",
      "creation_mode": "auto",
      "completeness": 85,
      "framework": "openai",
      "model": "gpt-4o",
      "signature": { "algorithm": "ed25519", "value": "...", "public_key_id": "k1" }
    }
  ],
  "count": 1,
  "savedPaths": ["/repo/.complior/agents/support-bot-manifest.json"]
}
```
Returns isError: true with "No agents found" when AST scan finds no agent configurations or SDK usage.
Example agent usage
You: Generate a passport for our chatbot.
Agent calls complior_passport_init with {} → returns 1 passport with 85% completeness, saved to .complior/agents/.
## complior_doc_generate

New in v1.1.0 (Builder group). Generate one of 14 EU AI Act compliance documents from an existing passport.

Supported document types (one of):
`ai-literacy`, `art5-screening`, `technical-documentation`, `incident-report`, `declaration-of-conformity`, `monitoring-policy`, `fria`, `worker-notification`, `risk-management`, `data-governance`, `qms`, `instructions-for-use`, `gpai-transparency`, `gpai-systemic-risk`.

| Field | Type | Required | Description |
|---|---|---|---|
| docType | string (enum, 14 values) | Yes | Document type to generate |
| passportName | string | Yes | Agent passport name (run complior_passport_init first if missing) |
| organization | string | No | Organization name to use in the document (default: passport organization field) |
| path | string | No | Project path (default: current directory) |
Output
```json
{
  "docType": "fria",
  "markdown": "# Fundamental Rights Impact Assessment\n\n...",
  "prefilledFields": ["name", "organization", "system_type", "risk_classification"],
  "manualFields": ["impact_assessment", "mitigation_plan", "approval_signature"],
  "savedPath": "/repo/.complior/fria/support-bot-fria.md"
}
```
Returns isError: true with "Passport \"<name>\" not found" if the passport doesn’t exist.
Example agent usage
You: Make a FRIA for our support bot.
Agent calls complior_doc_generate with { docType: "fria", passportName: "support-bot" } → returns markdown with prefilled fields and a list of manual fields the user still needs to complete.
## complior_redteam

New in v1.1.0 (Builder group). Run 300+ adversarial security probes (OWASP LLM Top 10, MITRE ATLAS) against an AI endpoint.
Long-running operation — typically 30s–5min depending on concurrency and target latency. MCP clients should use a timeout of at least 5 minutes (e.g. requestTimeout: 300000).

| Field | Type | Required | Description |
|---|---|---|---|
| target | string (URL) | Yes | Target AI endpoint URL (e.g., https://api.example.com/v1/chat) |
| apiKey | string | No | API key for authenticated targets (forwarded as Authorization header) |
| concurrency | number (1–50) | No | Parallel probe execution (default: 1) |
| threshold | number (0–100) | No | Score threshold for pass/fail gate (default: no gate) |
Output
```json
{
  "score": 78,
  "totalTests": 300,
  "passed": 240,
  "failed": 50,
  "securityScore": 73,
  "securityGrade": "C"
}
```
When threshold is set and the score falls below it, returns isError: true plus thresholdFailed: true and the failing threshold value.
Example agent usage
You: Red-team our chatbot endpoint with a minimum score of 80.
Agent calls complior_redteam with { target: "https://...", threshold: 80 } → with a score of 75, returns isError: true plus thresholdFailed: true.
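The gate behavior above can be sketched as a small post-processing step. The field names follow the documented output shape; this is an illustration, not the actual handler logic:

```python
def apply_threshold_gate(result, threshold=None):
    # If a threshold is set and the score falls below it, flag the
    # result the same way the documented output does; otherwise pass
    # the result through unchanged.
    if threshold is not None and result["score"] < threshold:
        return {**result, "isError": True, "thresholdFailed": True,
                "threshold": threshold}
    return result

print(apply_threshold_gate({"score": 75}, threshold=80))
```

With no `threshold` argument there is no gate, matching the documented default.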
## complior_evidence_verify

New in v1.1.0 (Analytics group). Verify integrity of the evidence chain (SHA-256 + ed25519).
Used for audit preparation — every scan and fix appends to the chain, so the chain is the cryptographic proof of compliance history.

| Field | Type | Required | Description |
|---|---|---|---|
| path | string | No | Project path containing .complior/evidence/ (default: current directory) |
Output
```json
{
  "valid": true,
  "issues": [],
  "totalEntries": 142,
  "scanCount": 23,
  "firstEntry": "2026-04-01T00:00:00Z",
  "lastEntry": "2026-05-07T10:00:00Z"
}
```
When the chain has been tampered with, the tool returns valid: false together with a brokenAt index and human-readable issues:
```json
{
  "valid": false,
  "brokenAt": 4,
  "issues": ["Chain broken at index 4: hash mismatch"]
}
```
Returns isError: true with "Evidence chain not found or unreadable" if the store cannot be read.
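To illustrate the chain-walk idea (this is not the actual `.complior/evidence` on-disk format, and the ed25519 signature check is omitted), a SHA-256 hash chain can be verified by recomputing each entry's hash from its predecessor:

```python
import hashlib
import json

def entry_hash(prev_hash, payload):
    # Each entry commits to its predecessor's hash plus its own payload,
    # so changing any earlier payload invalidates every later hash.
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def verify_chain(entries):
    prev = "0" * 64  # genesis value
    for i, entry in enumerate(entries):
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return {"valid": False, "brokenAt": i,
                    "issues": [f"Chain broken at index {i}: hash mismatch"]}
        prev = entry["hash"]
    return {"valid": True, "issues": []}

# Build a 3-entry chain, then tamper with the middle entry.
entries, prev = [], "0" * 64
for n in range(3):
    payload = {"event": "scan", "n": n}
    prev = entry_hash(prev, payload)
    entries.append({"payload": payload, "hash": prev})

assert verify_chain(entries)["valid"]
entries[1]["payload"]["n"] = 99  # tamper
print(verify_chain(entries))
```

The same walk is why the chain works as audit evidence: a verifier only needs the entries themselves, not any external state.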
## complior_drift_detect

New in v1.1.0 (Analytics group). Compare the latest scan against the previous one and report compliance drift.

| Field | Type | Required | Description |
|---|---|---|---|
| path | string | No | Project path with scan history (default: current directory) |
Output
```json
{
  "scoreChange": -18,
  "severity": "major",
  "hasDrift": true,
  "newFailures": [...],
  "resolvedFailures": [...],
  "affectedArticles": ["Art. 50", "Art. 12"]
}
```
Severity classification:
| Severity | Trigger |
|---|---|
| none | Score stable or improved |
| minor | Score dropped 1–10 points |
| major | Score dropped >10 points or new high-severity finding |
| critical | New Art. 5 prohibited practice or critical-severity failure |
When no previous scan exists, returns severity: "none" with hasDrift: false and a message field.
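The classification table above can be sketched as a function. The `new_findings` shape (a list of dicts with `severity` and `article` keys) is an assumption for illustration; the real handler works from full scan results:

```python
def classify_drift(score_change, new_findings=()):
    # Mirrors the severity table: critical > major > minor > none.
    severities = [f.get("severity") for f in new_findings]
    if "critical" in severities or any(
            f.get("article") == "Art. 5" for f in new_findings):
        return "critical"
    if score_change < -10 or "high" in severities:
        return "major"
    if -10 <= score_change <= -1:
        return "minor"
    return "none"

print(classify_drift(-18))  # matches the -18 example output: "major"
```

Note the ordering matters: a new critical finding outranks a small score drop, so the checks run from most to least severe.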
## complior_obligations_status

New in v1.1.0 (Analytics group). Per-obligation coverage breakdown across all 108 EU AI Act obligations.

| Field | Type | Required | Description |
|---|---|---|---|
| role | `"provider" \| "deployer" \| "both"` | No | Filter by applicable role (default: all) |
| riskLevel | `"unacceptable" \| "high" \| "limited" \| "minimal"` | No | Filter by EU AI Act risk classification (default: all) |
| coverage | `"covered" \| "uncovered" \| "all"` | No | Filter by current scan coverage status (default: all) |
Output
```json
{
  "total": 12,
  "obligations": [
    {
      "obligation_id": "OBL-001",
      "article_reference": "Art. 4",
      "title": "AI Literacy",
      "applies_to_role": "both",
      "applies_to_risk_level": ["high", "limited"],
      "severity": "high",
      "coverage": "covered"
    }
  ]
}
```
Example agent usage
You: Show me high-risk obligations that aren’t covered yet.
Agent calls complior_obligations_status with { riskLevel: "high", coverage: "uncovered" } → returns the gap list ranked by severity.
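The three filters compose as simple predicates over the documented output shape. A sketch (the role semantics here, where a requested role also matches obligations tagged "both", are an assumption):

```python
def filter_obligations(obligations, role=None, risk_level=None, coverage=None):
    matches = []
    for o in obligations:
        # role filter: a specific role also matches "both" obligations
        if role and role != "both" and o["applies_to_role"] not in (role, "both"):
            continue
        # risk filter: obligation must list the requested risk level
        if risk_level and risk_level not in o["applies_to_risk_level"]:
            continue
        # coverage filter: "all" disables the check
        if coverage and coverage != "all" and o["coverage"] != coverage:
            continue
        matches.append(o)
    return matches

obligations = [
    {"obligation_id": "OBL-001", "applies_to_role": "both",
     "applies_to_risk_level": ["high", "limited"], "coverage": "covered"},
    {"obligation_id": "OBL-009", "applies_to_role": "provider",
     "applies_to_risk_level": ["high"], "coverage": "uncovered"},
]

print(filter_obligations(obligations, risk_level="high", coverage="uncovered"))
```

With the sample data above, the high-risk uncovered filter returns only OBL-009, i.e. the gap list from the example.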
## Common patterns

### Scan → fix loop (Core)

```text
complior_scan → top findings list
        ↓
For each high-priority finding:
    complior_explain (understand the article)
    complior_fix (apply auto-fix)
        ↓
complior_status (verify new score)
```
### Onboarding a new AI system (Builder)

```text
complior_classify (describe the system)
        ↓
complior_passport_init (generate the Agent Passport)
        ↓
complior_doc_generate { docType: "fria" } (build the FRIA)
        ↓
complior_redteam (security probes against the endpoint)
        ↓
complior_report (audit-ready output)
```
### Continuous compliance (Analytics)

```text
complior_scan (latest scan)
        ↓
complior_drift_detect (what changed since last scan?)
        ↓
complior_obligations_status { coverage: "uncovered" } (where are we still exposed?)
        ↓
complior_evidence_verify (audit chain still healthy?)
```
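The continuous-compliance loop above, sketched against a hypothetical `call_tool` helper (a stand-in for whatever MCP client session your agent runs in; the tool names are real, but the return values here are canned examples from this page):

```python
def call_tool(name, arguments=None):
    """Stub MCP client: returns canned example outputs for illustration."""
    canned = {
        "complior_scan": {"score": 75, "zone": "yellow"},
        "complior_drift_detect": {"scoreChange": -3, "severity": "minor",
                                  "hasDrift": True},
        "complior_obligations_status": {"total": 1, "obligations": []},
        "complior_evidence_verify": {"valid": True, "issues": []},
    }
    return canned[name]

call_tool("complior_scan", {"path": "."})                        # latest scan
drift = call_tool("complior_drift_detect")                       # what changed?
gaps = call_tool("complior_obligations_status",
                 {"coverage": "uncovered"})                      # exposure
chain = call_tool("complior_evidence_verify")                    # chain health

# Alert only on meaningful regressions or a broken evidence chain.
needs_attention = drift["severity"] in ("major", "critical") or not chain["valid"]
print(needs_attention)
```

A CI job running this loop nightly turns the Analytics tools into a regression gate rather than a one-off audit step.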
## Roadmap
Current set: 13 tools (Core 7 + Builder 3 + Analytics 3).
Future additions (post-v1.1.0):
- Guard tools (V2-M03, after Guard MVP G-M01) — complior_guard_check, complior_guard_pii, complior_guard_bias
- SaaS Dashboard tools (V2-M07, after cloud deploy) — fleet view, sync status
See GitHub Discussions for proposals.