fix: upsert traces to handle duplicate IDs from intermediate flushes

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-Claude)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Author: Vectry
Date: 2026-02-10 11:41:49 +00:00
Parent: ff5bf05a47
Commit: bdd6362c1a

19 changed files with 175 additions and 35 deletions

launch/linkedin.md Normal file

@@ -0,0 +1,26 @@
# AgentLens Launch -- LinkedIn Post
---
**Open-sourcing AgentLens: observability for AI agents that traces decisions, not just API calls**
If you're building AI agents, you've probably hit this: your agent does something unexpected, you open your observability dashboard, and all you see is a list of LLM API calls with latencies and token counts. Helpful for cost tracking. Not helpful for understanding why the agent chose that path.
I spent the last two weeks building AgentLens to address this. It's an open-source observability tool that traces agent decisions -- tool selection, routing, planning, retries, escalation, memory retrieval -- and captures the reasoning and alternatives at each decision point.
The idea is simple: if you can see what your agent considered and why it chose what it chose, debugging and improving agent behavior gets a lot more tractable.
What's included:
- Python SDK with OpenAI auto-instrumentation (`pip install vectry-agentlens`; see the setup sketch below)
- Next.js dashboard for exploring decision flows and timelines
- Self-hostable via Docker Compose (PostgreSQL + Redis)
- MIT licensed
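For a feel of the integration, setup is a pip install and a few lines (this mirrors the SDK quick start; the API key and endpoint are placeholders):

```python
# pip install vectry-agentlens
import agentlens
from agentlens.integrations.openai import wrap_openai
from openai import OpenAI

agentlens.init(api_key="your-key", endpoint="http://localhost:4200")
wrap_openai(OpenAI())  # auto-instruments the client's LLM calls
```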
This is v0.1.0. It works, but it's early. The decision taxonomy is still evolving and there are rough edges. I'm sharing it now because I'd rather get feedback from people actually building agents than polish it in isolation.
Live demo: https://agentlens.vectry.tech
Repository: https://gitea.repi.fun/repi/agentlens
PyPI: https://pypi.org/project/vectry-agentlens/
If you're working with autonomous agents, I'd genuinely like to hear: what does useful observability look like for your use case? What decision types matter most to you?

launch/show-hn.md Normal file

@@ -0,0 +1,39 @@
# Show HN: AgentLens -- Open-source observability for AI agents that traces decisions, not just API calls
**Repo:** https://gitea.repi.fun/repi/agentlens
**Live demo:** https://agentlens.vectry.tech
**PyPI:** https://pypi.org/project/vectry-agentlens/
---
I've been building AI agents for a while and kept running into the same problem: when an agent does something unexpected, the existing observability tools (LangSmith, Helicone, etc.) show me the LLM API calls that happened, but not *why* the agent chose a particular path.
Knowing that GPT-4 was called with some number of tokens and returned in 1.2s doesn't help me understand why my agent picked tool A over tool B, or why it decided to escalate instead of retry.
So I built AgentLens. It traces agent *decisions* -- tool selection, routing, planning, retries, escalation, memory retrieval -- and captures the reasoning, alternatives considered, and confidence scores at each decision point.
Quick setup:
```bash
pip install vectry-agentlens
```
```python
import agentlens
from agentlens.integrations.openai import wrap_openai
from openai import OpenAI

agentlens.init(api_key="your-key", endpoint="http://localhost:4200")

client = OpenAI()
wrap_openai(client)  # auto-instruments every call made through this client
```
The `wrap_openai` call auto-instruments your OpenAI client. From there you can log decisions with `agentlens.log_decision()` specifying the type (TOOL_SELECTION, ROUTING, PLANNING, RETRY, ESCALATION, MEMORY_RETRIEVAL, or CUSTOM), what was chosen, what the alternatives were, and why.
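For illustration, a logged decision might look like the sketch below. The function name and the recorded fields (type, choice, alternatives, reasoning, confidence) are as described above; the exact keyword names are my assumption, not a confirmed signature.

```python
# Sketch only: parameter names are illustrative, not the SDK's confirmed API.
agentlens.log_decision(
    decision_type="TOOL_SELECTION",
    chosen="web_search",
    alternatives=["calculator", "sql_query"],
    reasoning="Question references events after the model's training cutoff.",
    confidence=0.82,
)
```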
The dashboard is a Next.js app that shows decision flows, timelines, and lets you drill into individual agent runs. You can filter by decision type, search by outcome, and see where agents are spending their "thinking" time.
Stack: Python SDK + Next.js 15 dashboard + PostgreSQL + Redis. Self-hostable via Docker Compose. MIT licensed.
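To self-host, something like this should work, assuming the Docker Compose file lives at the repo root (unverified sketch):

```bash
# Assumes docker-compose.yml sits at the repo root.
git clone https://gitea.repi.fun/repi/agentlens
cd agentlens
docker compose up -d   # dashboard + PostgreSQL + Redis
```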
Honest caveats: this is v0.1.0. I built it solo in about two weeks. The decision model works but the taxonomy is still evolving. There are rough edges. The SDK currently supports Python only. I haven't done serious load testing yet.
Would love feedback on the decision model and what decision types you'd want to see. If you're building agents and have opinions on what "observability" should actually mean for autonomous systems, I'd really like to hear it.

launch/twitter.md Normal file

@@ -0,0 +1,67 @@
# AgentLens Launch -- Twitter/X Thread
---
**Tweet 1 (Hook)**
Current agent observability tools tell you WHAT API calls your agent made.
They don't tell you WHY it picked tool A over tool B, or why it retried instead of escalating.
That's the gap I kept hitting. So I built something to fix it.
---
**Tweet 2 (What it does)**
AgentLens traces agent decisions, not just LLM calls.
It captures tool selection, routing, planning, retries, and escalation -- with the reasoning, alternatives considered, and confidence at each step.
Open source. MIT licensed. Built solo in 2 weeks.
#AI #OpenSource #Agents
---
**Tweet 3 (Code)**
One install and a few lines to get started:
```bash
pip install vectry-agentlens
```
```python
import agentlens
from agentlens.integrations.openai import wrap_openai
from openai import OpenAI

agentlens.init(api_key="key", endpoint="http://localhost:4200")
wrap_openai(OpenAI())
```
Auto-instruments your OpenAI client. Then trace decisions as they happen.
---
**Tweet 4 (Features)**
What you get:
- Live Next.js dashboard with decision flows
- OpenAI auto-instrumentation via wrap_openai()
- 7 decision types: routing, planning, tool selection, retry, escalation, memory retrieval, custom
- Self-host with Docker Compose
- Python SDK on PyPI
#DevTools #LLM
---
**Tweet 5 (CTA)**
AgentLens is v0.1.0 -- early but functional. Rough edges exist.
Try the live demo: https://agentlens.vectry.tech
Repo: https://gitea.repi.fun/repi/agentlens
Install: pip install vectry-agentlens
Feedback welcome, especially on the decision model.
#OpenSource #AI #Agents