# AgentLens

Agent observability that traces decisions, not just API calls.

See why your AI agents chose what they chose.


---

## The Problem

Existing observability tools show you _what_ LLM calls were made. AgentLens shows you _why_ your agent made each decision along the way -- which tool it picked, what alternatives it rejected, and the reasoning behind every choice.

## Getting Started

1. **Register** at [agentlens.vectry.tech/register](https://agentlens.vectry.tech/register) with your email and password.
2. **Log in** to the dashboard at [agentlens.vectry.tech](https://agentlens.vectry.tech).
3. **Create an API key** in **Settings > API Keys**.
4. **Install the SDK** and start tracing.

> Self-hosting? You do not need to register with the hosted service. See [Self-Hosting](#self-hosting) below.

## Quick Start

```bash
pip install vectry-agentlens
```

```python
import agentlens

# Use the API key you created in Settings > API Keys
agentlens.init(api_key="your-key", endpoint="https://agentlens.vectry.tech")

with agentlens.trace("my-agent-task", tags=["production"]):
    # Your agent logic here...
    agentlens.log_decision(
        type="TOOL_SELECTION",
        chosen={"name": "search_web", "confidence": 0.92},
        alternatives=[{"name": "search_docs", "reason_rejected": "query too broad"}],
        reasoning="User query requires real-time data not in local docs"
    )

agentlens.shutdown()
```

Open `https://agentlens.vectry.tech/dashboard` to see your traces (login required).

## Features

- **Decision Tracing** -- Log every decision point with reasoning, alternatives, and confidence scores
- **OpenCode Plugin** -- Trace your coding agent sessions with `opencode-agentlens`
- **TypeScript SDK** -- First-class TypeScript support with `agentlens-sdk`
- **OpenAI Integration** -- Auto-instrument OpenAI calls with one line: `wrap_openai(client)`
- **LangChain Integration** -- Drop-in callback handler for LangChain agents
- **Nested Traces** -- Multi-agent workflows with parent-child span relationships
- **Real-time Dashboard** -- SSE-powered live trace streaming with filtering and search
- **Decision Tree Viz** -- Interactive React Flow visualization of agent decision paths
- **Analytics** -- Token usage, cost tracking, duration timelines per trace
- **Self-Hostable** -- Docker Compose deployment, bring your own Postgres + Redis

## OpenCode Plugin

Trace your [OpenCode](https://opencode.ai) coding agent sessions automatically.

```bash
npm install -g opencode-agentlens
```

Add to your `opencode.json`:

```json
{
  "plugin": ["opencode-agentlens"]
}
```

Set environment variables (use the API key from your dashboard at **Settings > API Keys**):

```bash
export AGENTLENS_API_KEY="your-key"
export AGENTLENS_ENDPOINT="https://agentlens.vectry.tech"
```

Every coding session automatically captures tool calls, LLM interactions, file edits, and permission flows.
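If you export the same `AGENTLENS_*` variables for your own Python agents, you can avoid hard-coding credentials. A minimal sketch: this is plain `os.environ` usage passed to the `init()` call from the Quick Start; whether the SDK also reads these variables on its own is not documented here, so the example passes them explicitly.

```python
import os

import agentlens

# Reuse the AGENTLENS_* variables exported above rather than hard-coding the key.
# Passing them explicitly is ordinary os.environ usage, not a documented
# auto-detection feature of the SDK.
agentlens.init(
    api_key=os.environ["AGENTLENS_API_KEY"],
    endpoint=os.environ.get("AGENTLENS_ENDPOINT", "https://agentlens.vectry.tech"),
)
```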
## TypeScript SDK

```bash
npm install agentlens-sdk
```

```typescript
import { init, TraceBuilder, SpanType, SpanStatus } from "agentlens-sdk";

// Use the API key from Settings > API Keys in your dashboard
init({ apiKey: "your-key", endpoint: "https://agentlens.vectry.tech" });

const trace = new TraceBuilder("my-agent-task", {
  tags: ["production"],
});

trace.addSpan({
  name: "tool-call",
  type: SpanType.TOOL_CALL,
  status: SpanStatus.COMPLETED,
});

trace.end();
```

## Architecture

```
SDK (Python/TS)              API (Next.js)              Dashboard (React)

agentlens.trace()   ------>  POST /api/traces  ------>  Real-time SSE stream
TraceBuilder.end()           Prisma + Postgres          Decision tree viz
OpenCode plugin              Redis pub/sub              Analytics & filters
```

## Integrations

### OpenAI

```python
import openai
from agentlens.integrations.openai import wrap_openai

client = openai.OpenAI()
wrap_openai(client)  # Auto-traces all completions

with agentlens.trace("openai-task"):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
```

### LangChain

```python
from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler()
agent.run("Do something", callbacks=[handler])
```

### Custom Agents

```python
with agentlens.trace("planner"):
    agentlens.log_decision(
        type="ROUTING",
        chosen={"name": "research_agent"},
        alternatives=[{"name": "writer_agent"}],
        reasoning="Task requires data gathering first"
    )

    with agentlens.trace("researcher"):
        # Nested trace creates child span automatically
        agentlens.log_decision(
            type="TOOL_SELECTION",
            chosen={"name": "web_search"},
            alternatives=[{"name": "database_query"}],
            reasoning="Need real-time information"
        )
```

## Decision Types

| Type | Use Case |
|------|----------|
| `TOOL_SELECTION` | Agent chose which tool/function to call |
| `ROUTING` | Agent decided which sub-agent or path to take |
| `PLANNING` | Agent formulated a multi-step plan |
| `RETRY` | Agent decided to retry a failed operation |
| `ESCALATION` | Agent escalated to human or higher-level agent |
| `MEMORY_RETRIEVAL` | Agent chose what context to retrieve |
| `CUSTOM` | Any other decision type |

## Pricing

AgentLens cloud ([agentlens.vectry.tech](https://agentlens.vectry.tech)) offers three billing tiers. One trace equals one session for billing purposes.

| Plan | Price | Sessions | Details |
|------|-------|----------|---------|
| **Free** | $0 | 20 sessions/day | No credit card required |
| **Starter** | $5/month | 1,000 sessions/month | For individual developers |
| **Pro** | $20/month | 100,000 sessions/month | For teams and production workloads |

Manage your subscription in **Settings > Billing** in the dashboard. Self-hosted instances are not subject to these limits.

## Self-Hosting

Self-hosted AgentLens instances do not require registration with the hosted SaaS service. You manage your own API keys and have no session limits.

```bash
git clone https://gitea.repi.fun/repi/agentlens.git
cd agentlens
docker compose up -d
```

The dashboard will be available at `http://localhost:4200`.
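To send traces to your own deployment, pass its URL as the `endpoint` in the same `init()` call shown in the Quick Start. A minimal sketch, with two assumptions: that the API is served by the same Next.js app as the dashboard (as the Architecture and Project Structure sections suggest), so a default deployment is reachable at `http://localhost:4200`, and that the API key is one you created in your own instance's dashboard.

```python
import agentlens

# Point the Python SDK at a self-hosted instance instead of the hosted service.
# Assumption: dashboard and API share the same origin, http://localhost:4200
# for a default docker compose deployment.
agentlens.init(api_key="your-key", endpoint="http://localhost:4200")
```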
### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `postgresql://agentlens:agentlens@postgres:5432/agentlens` | PostgreSQL connection string |
| `REDIS_URL` | `redis://redis:6379` | Redis connection string |
| `NODE_ENV` | `production` | Node environment |

## Project Structure

```
agentlens/
  apps/web/                  # Next.js 15 dashboard + API
  packages/database/         # Prisma schema + client
  packages/sdk-python/       # Python SDK (PyPI: vectry-agentlens)
  packages/sdk-ts/           # TypeScript SDK (npm: agentlens-sdk)
  packages/opencode-plugin/  # OpenCode plugin (npm: opencode-agentlens)
  examples/                  # Example agent scripts
  docker-compose.yml         # Production deployment
```

## SDK Reference

- [Python SDK documentation](packages/sdk-python/README.md)
- [TypeScript SDK documentation](packages/sdk-ts/README.md)
- [OpenCode plugin documentation](packages/opencode-plugin/README.md)

## Examples

See the [examples directory](examples/) for runnable agent scripts:

- `basic_agent.py` -- Minimal AgentLens usage with decision logging
- `openai_agent.py` -- OpenAI wrapper auto-instrumentation
- `multi_agent.py` -- Nested multi-agent workflows
- `customer_support_agent.py` -- Realistic support bot with routing and escalation

## Contributing

AgentLens is open source under the MIT license. Contributions welcome.

```bash
# Development setup
npm install
npx turbo dev                # Start web app in dev mode

cd packages/sdk-python
pip install -e ".[dev]"      # Install SDK in dev mode
pytest                       # Run SDK tests
```

## License

MIT