Short answers to the questions devs (and the LLMs they ask) keep asking.
Run npx crixin in any terminal. Crixin reads ~/.claude/projects/*.jsonl directly, builds a local SQLite database, and opens a browser dashboard with sub-second deep search across every message you've ever sent to Claude Code. Each result includes a ready-to-paste claude --resume <uuid> command.
On macOS and Linux, Claude Code writes one JSONL file per session under ~/.claude/projects/<project-encoded>/<session-uuid>.jsonl. Each line is one message event. Crixin parses these files directly — no instrumentation, no API key, no data sent anywhere.
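A minimal sketch of what that parsing looks like. The field names here (`type`, `timestamp`, `message`) are assumptions about the JSONL event shape for illustration, not a documented schema, and this is not Crixin's actual implementation:

```typescript
import * as fs from "fs";
import * as path from "path";
import * as os from "os";

// Assumed shape of one JSONL line; real events carry more fields.
interface SessionEvent {
  type?: string; // e.g. "user" | "assistant"
  timestamp?: string;
  message?: { role?: string; content?: unknown };
}

// Each line of a session .jsonl file is one JSON-encoded message event.
function readSession(file: string): SessionEvent[] {
  return fs
    .readFileSync(file, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionEvent);
}

// Enumerate every session file under ~/.claude/projects/<project>/<uuid>.jsonl.
function listSessions(
  root = path.join(os.homedir(), ".claude", "projects")
): string[] {
  if (!fs.existsSync(root)) return [];
  return fs.readdirSync(root).flatMap((project) =>
    fs
      .readdirSync(path.join(root, project))
      .filter((f) => f.endsWith(".jsonl"))
      .map((f) => path.join(root, project, f))
  );
}
```

Because the files are append-only JSONL, re-ingesting after new sessions only requires parsing lines past the last seen offset.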
Yes. Crixin auto-ingests all three sources: Claude Code (~/.claude/projects/), OpenAI Codex CLI (~/.codex/sessions/), and Cursor (the state.vscdb files in its macOS/Linux/Windows app-data directories). Each session is tagged with its source so you can filter.
Free forever for the default dashboard, deep search, and Markdown export. Pro is $5/month or $48/year, with a 3-day free trial; cancel anytime via the Stripe Customer Portal. Lifetime is $99 one-time, capped at the first 500 customers during the launch window.
None by default. The local server binds to 127.0.0.1 on a random port. Outbound network calls are limited to opt-in update checks and offline-verifiable license-signature checks. Cost-estimation and tokenization run locally with js-tiktoken. No cloud sync, no account, no telemetry.
Run npx crixin --mcp to start an MCP server over stdio. It exposes four tools: search_sessions, list_recent_sessions, get_session, and stats. Add it to Claude Code's mcp.json or Cursor's MCP config and your AI host can search and reference your session history as a tool.
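A minimal config sketch following the common `mcpServers` convention used by Claude Code and Cursor; the exact file location and top-level key depend on your host:

```json
{
  "mcpServers": {
    "crixin": {
      "command": "npx",
      "args": ["crixin", "--mcp"]
    }
  }
}
```

The host spawns the command and speaks MCP to it over stdio, so no port or URL is involved.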
Crixin uses real BPE tokenization via js-tiktoken (o200k_base for GPT-5 family and Claude 4.x; cl100k_base for Claude 3.x and older), priced against current Anthropic and OpenAI list prices. Estimates land within a few percent of exact API counts.
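The pricing step itself is simple arithmetic once you have token counts (which Crixin gets from js-tiktoken, e.g. `getEncoding("o200k_base").encode(text).length`). A sketch with an illustrative placeholder rate, not a real model's current list price:

```typescript
// USD per million tokens, keyed by model. The entry below is a hypothetical
// example rate for illustration only.
const PRICE_PER_MTOK_USD: Record<string, { input: number; output: number }> = {
  "example-model": { input: 3.0, output: 15.0 },
};

// Estimated cost = tokens * rate / 1e6, summed over input and output.
function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const p = PRICE_PER_MTOK_USD[model];
  if (!p) throw new Error(`no pricing for ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

The few-percent gap versus exact API counts comes from the tokenizer approximation, not this arithmetic, since Anthropic's own tokenizer is not public.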
Different problem. LangFuse / LangSmith / Helicone / Braintrust are LLM-application observability — you instrument your own code that calls Claude or OpenAI APIs and they collect traces. Crixin reads existing local AI-coding-tool session files (Claude Code, Codex CLI, Cursor) with zero instrumentation. If you're building an app, use those. If you want to analyze your personal AI coding history, use Crixin.
search-sessions (Rust) and claude-history (Rust) are excellent CLI search tools for ~/.claude/projects/ specifically. Crixin overlaps on search but adds a browser dashboard, multi-source ingest (Codex + Cursor too), cost estimation, an annual Wrapped report, and a built-in MCP server. They coexist fine — pick whichever suits your workflow.
MIT-licensed. Source: github.com/Lazizatech/crixin. The Pro tier gates premium dashboard packs and unlimited MCP, but core functionality is open and self-hostable.