Honest comparison. The categories don't overlap as much as the names suggest.
**Local session history.** Tools that read the files Claude Code, Codex CLI, and Cursor already write to disk: search, browse, and track cost. Crixin, search-sessions, and claude-history all live here.
**LLM observability.** Platforms that instrument your own code calling the Claude / OpenAI APIs: centralized traces, evals, and prompt management for teams. LangFuse, LangSmith, Helicone, and Braintrust all live here.
If you're a solo dev who wants to search past Claude Code sessions, the second list is overkill. If you're building a customer-facing AI app and need traces/evals across users, the first list won't cover it.
| Feature | Crixin | LangFuse | LangSmith | search-sessions | claude-history |
|---|---|---|---|---|---|
| Auto-ingests Claude Code (.jsonl) | ✓ | — | — | ✓ | ✓ |
| Auto-ingests Codex CLI | ✓ | — | — | — | — |
| Auto-ingests Cursor | ✓ | — | — | — | — |
| Local-only / no account | ✓ | self-host only | — | ✓ | ✓ |
| Cost / token estimates | ✓ (real BPE) | ✓ | ✓ | — | — |
| Browser dashboard | ✓ | ✓ | ✓ | CLI only | TUI only |
| Built-in MCP server | ✓ | separate package | — | community | — |
| Annual Wrapped report | ✓ | — | — | — | — |
| Time to first dashboard | ~30s (one npx) | ~30 min (Docker + Postgres) | cloud signup | ~10s | ~10s |
| Solo-dev price | $5/mo or free | free OSS / $59/mo team | $39/user/mo | free | free |
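The "Auto-ingests Claude Code (.jsonl)" and cost rows hinge on one detail: Claude Code writes each session as a JSONL transcript (conventionally under `~/.claude/projects/`), so a local tool can total spend without instrumenting anything. A minimal sketch, assuming assistant records carry a `message.usage` object with `input_tokens`/`output_tokens` fields (field names are an assumption about the transcript schema) and using placeholder per-million-token prices:

```python
import json
from pathlib import Path

# Placeholder USD prices per 1M tokens; real pricing varies by model.
PRICES = {"input": 3.00, "output": 15.00}

def session_cost(jsonl_path: Path) -> float:
    """Sum the token usage recorded in one session transcript.

    Assumes each assistant line carries a `message.usage` object with
    `input_tokens` / `output_tokens` fields; lines without usage
    (user turns, tool results) are skipped.
    """
    total = 0.0
    for line in jsonl_path.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        usage = record.get("message", {}).get("usage")
        if not usage:
            continue
        total += usage.get("input_tokens", 0) / 1e6 * PRICES["input"]
        total += usage.get("output_tokens", 0) / 1e6 * PRICES["output"]
    return total

def scan_sessions(root: Path) -> float:
    """Walk a sessions directory (e.g. ~/.claude/projects) and total every transcript."""
    return sum(session_cost(p) for p in root.glob("**/*.jsonl"))
```

Note this only covers tokens the transcript already recorded; the "real BPE" claim in the table refers to re-tokenizing message text with the model's actual encoder, which needs a tokenizer library on top of this.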
**Pick Crixin if:** you use Claude Code, Codex, or Cursor daily, want one dashboard across all three sources without instrumenting anything, and want $5/mo rather than $39/seat.
**Pick LangFuse or LangSmith if:** you're shipping an app with embedded LLM calls, your team needs trace visibility across users, eval pipelines, and prompt versioning, and you can absorb the hosting and setup overhead.
**Pick search-sessions or claude-history if:** you only want a fast CLI/TUI for Claude Code search and don't need cross-tool ingest, cost analysis, or an MCP layer. Both are excellent at that scope.
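On the "Built-in MCP server" row: wiring any local MCP server into Claude Code is a one-file config. A sketch of a project-level `.mcp.json` entry (the server name and the `npx` arguments here are illustrative assumptions, not a documented invocation):

```json
{
  "mcpServers": {
    "session-history": {
      "command": "npx",
      "args": ["crixin", "mcp"]
    }
  }
}
```

With an entry like this, Claude Code can query your own session history mid-conversation instead of you switching to a dashboard.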