crixin v0.1 · in development
Comparison

Which one fits which job

Honest comparison. The categories don't overlap as much as the names suggest.

The two categories

Personal AI coding observability

Reads your existing local session files from Claude Code, Codex CLI, and Cursor. Search, browse, and track costs — no instrumentation. Crixin · search-sessions · claude-history all live here.
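A minimal sketch of what this first category does under the hood: walk a directory of `.jsonl` session logs and tally usage. The field names (`usage`, `input_tokens`, `output_tokens`) follow the Anthropic-style usage object, but the exact paths and schemas vary by tool and version — treat this as an assumption, not any product's actual code.

```python
import json
from pathlib import Path

def summarize_sessions(root: Path) -> dict:
    """Walk a directory tree of .jsonl session logs and tally
    message and token counts. Field names are assumptions; real
    log schemas differ across tools and versions."""
    totals = {"messages": 0, "input_tokens": 0, "output_tokens": 0}
    for path in root.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue  # skip blank lines in the log
            record = json.loads(line)
            totals["messages"] += 1
            usage = record.get("usage", {})
            totals["input_tokens"] += usage.get("input_tokens", 0)
            totals["output_tokens"] += usage.get("output_tokens", 0)
    return totals
```

The point of the sketch: the data already exists on disk, so a tool in this category only has to read it.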

LLM application observability

Instrument your own code that calls the Claude / OpenAI APIs. Centralized traces, evals, and prompt management for teams. LangFuse · LangSmith · Helicone · Braintrust all live here.
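The model here is the inverse of the first category: you wrap your own call sites so spans flow to a backend. A vendor-neutral sketch of that decorator pattern — LangFuse and LangSmith ship their own decorators with different names and options, so this is illustrative, not their API:

```python
import functools
import time
import uuid

TRACES = []  # stand-in for a hosted trace backend

def traced(fn):
    """Record a span (id, name, timing, output) around each call.
    Vendor SDKs do this plus batching, nesting, and upload."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"id": str(uuid.uuid4()), "name": fn.__name__,
                "start": time.time()}
        try:
            result = fn(*args, **kwargs)
            span["output"] = result
            return result
        finally:
            span["end"] = time.time()
            TRACES.append(span)
    return wrapper

@traced
def answer(question: str) -> str:
    # placeholder for a real Claude / OpenAI API call
    return f"echo: {question}"
```

The key consequence for the comparison: nothing is observed until you change your application code, which is exactly the overhead a solo dev browsing local session files doesn't need.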

If you're a solo dev who wants to search past Claude Code sessions, the second list is overkill. If you're building a customer-facing AI app and need traces/evals across users, the first list won't cover it.

Feature comparison

| Feature | Crixin | LangFuse | LangSmith | search-sessions | claude-history |
|---|---|---|---|---|---|
| Auto-ingests Claude Code (.jsonl) | ✓ | ✗ | ✗ | ✓ | ✓ |
| Auto-ingests Codex CLI | ✓ | ✗ | ✗ | ✗ | ✗ |
| Auto-ingests Cursor | ✓ | ✗ | ✗ | ✗ | ✗ |
| Local-only / no account | ✓ | self-host only | ✗ | ✓ | ✓ |
| Cost / token estimates | ✓ (real BPE) | ✓ | ✓ | ✗ | ✗ |
| Browser dashboard | ✓ | ✓ | ✓ | CLI only | TUI only |
| Built-in MCP server | ✓ | separate package | community | ✗ | ✗ |
| Annual Wrapped report | ✓ | ✗ | ✗ | ✗ | ✗ |
| Time to first dashboard | ~30s (one npx) | ~30 min (Docker + Postgres) | cloud signup | ~10s | ~10s |
| Solo-dev price | $5/mo or free | free OSS / $59/mo team | $39/user/mo | free | free |
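On the "real BPE" row: lightweight tools often estimate tokens with a characters-per-token heuristic, while an accurate count requires running the model's actual BPE tokenizer (e.g. tiktoken for OpenAI models). A sketch of the heuristic plus the cost arithmetic — the per-million-token prices in the usage example are hypothetical, not any vendor's published rates:

```python
def estimate_tokens_naive(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    A real BPE count means running the model's own tokenizer;
    heuristics can be off badly on code or non-English text."""
    return max(1, len(text) // 4)

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_per_mtok: float, out_per_mtok: float) -> float:
    """Dollar cost from token counts and per-million-token prices."""
    return (input_tokens / 1e6 * in_per_mtok
            + output_tokens / 1e6 * out_per_mtok)
```

Usage: `estimate_cost(1_000_000, 0, 3.0, 15.0)` returns `3.0` — one million input tokens at a hypothetical $3/MTok.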

When to pick each

Crixin

You use Claude Code, Codex, or Cursor daily. You want one dashboard on top of all three sources without instrumenting anything. You want $5/mo, not $39/seat.

LangFuse / LangSmith

You're shipping an app with embedded LLM calls. Your team needs trace visibility across users, eval pipelines, and prompt versioning. You can absorb the hosting and setup overhead.

search-sessions / claude-history

You only want a fast CLI/TUI for Claude Code search and you don't need cross-tool ingest, cost analysis, or an MCP layer. Both are excellent at this scope.