Mode comparison

|              | Mode 1: Auto      | Mode 2: Runtime         |
|--------------|-------------------|-------------------------|
| Input        | Source code (AST) | MCP tool calls + Eval   |
| Command      | `complior init`   | `complior proxy` / `eval` |
| Auto-fill    | 65–70%            | 40–70%                  |
| Confidence   | ~0.42             | 0.55                    |
| Verification | Code-verified     | Behavior-observed       |
| Use case     | Own code          | Black-box vendor agent  |
| User         | Developer         | DevOps / Platform Eng   |
| Status       | Production        | Planned                 |

Mode 1: Auto (from code)

```shell
complior init          # auto-discovers agents during project init
complior agent init    # optional: manual re-discovery / --force regenerate
```
Scans your codebase via AST analysis. Detects frameworks (57+), models, permissions, human gates, autonomy level, and kill-switch. The risk class is computed from autonomy level × project domain.

Best for: your own AI agents, where source code is available.
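A minimal sketch of what AST-based framework detection can look like. The framework list and scan logic here are illustrative assumptions, not complior's actual implementation:

```python
# Hypothetical sketch of Mode 1-style framework detection via Python's ast
# module. KNOWN_FRAMEWORKS is an illustrative subset, not complior's real list.
import ast

KNOWN_FRAMEWORKS = {"langchain", "crewai", "autogen", "openai"}

def detect_frameworks(source: str) -> set[str]:
    """Return known agent frameworks imported by the given source code."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # "langchain.agents" -> "langchain"
            if root in KNOWN_FRAMEWORKS:
                found.add(root)
    return found

code = "from langchain.agents import AgentExecutor\nimport openai\n"
print(sorted(detect_frameworks(code)))  # ['langchain', 'openai']
```

Because the scan works on the syntax tree rather than regexes, it also catches dotted and aliased imports without false matches inside strings or comments.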

Mode 2: Runtime (from observation)

Two data sources:
  1. MCP Proxy — intercepts MCP tool calls from black-box agents (Cursor, Windsurf, vendor agents). Records tools_used, data_access, timing, success/error rates.
  2. Eval — `complior eval --target <url>` tests the system with 680 probes and records behavioral data.
Best for: Vendor AI agents you deploy but don’t have source code for.
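The proxy's recording layer can be sketched as a wrapper around each tool call that logs tools_used, timing, and success/error outcomes. The record schema and wrapper API below are assumptions for illustration, not complior's actual design:

```python
# Hypothetical sketch of an MCP-proxy recording layer: wrap a tool-call
# function and tally tools_used, timings, and success/error counts.
import time
from collections import Counter

class CallRecorder:
    def __init__(self):
        self.tools_used = Counter()
        self.outcomes = Counter()   # "success" / "error" tallies
        self.timings = []           # (tool_name, seconds) pairs

    def wrap(self, tool_name, fn):
        """Return a proxied version of fn that records each invocation."""
        def proxied(*args, **kwargs):
            self.tools_used[tool_name] += 1
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                self.outcomes["success"] += 1
                return result
            except Exception:
                self.outcomes["error"] += 1
                raise
            finally:
                self.timings.append((tool_name, time.monotonic() - start))
        return proxied

recorder = CallRecorder()
read_file = recorder.wrap("read_file", lambda path: f"<contents of {path}>")
read_file("README.md")
print(recorder.tools_used["read_file"], recorder.outcomes["success"])  # 1 1
```

Because the agent only ever sees the proxied callable, this works for black-box agents: no source access is needed, only the ability to sit between the agent and its tools.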
Mode 2 is planned: the MCP Proxy infrastructure is about 60% built, and Eval integration is in development.