LongTracer

RAG Hallucination Detection and Multi-Project Tracing

RAG systems still hallucinate: your LLM confidently states facts that are not in your documents. LongTracer catches this before it reaches your users by verifying every claim against your source documents with a two-stage semantic textual similarity (STS) and natural language inference (NLI) pipeline. No LLM dependency. No vector store required. Just strings in, trust score out.

$ pip install longtracer
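To make the two-stage idea concrete, here is a minimal self-contained sketch. Token overlap stands in for the STS model and a naive containment check stands in for the NLI model; the function names (`sts_score`, `nli_supported`, `verify_claim`) and the threshold value are illustrative assumptions, not LongTracer's actual API or implementation.

```python
# Illustrative sketch of a two-stage claim-verification pipeline.
# Stage 1 (STS stand-in): filter source sentences by relevance.
# Stage 2 (NLI stand-in): check whether a relevant sentence supports the claim.

def sts_score(claim: str, sentence: str) -> float:
    """Stage 1 stand-in: Jaccard overlap between token sets."""
    a, b = set(claim.lower().split()), set(sentence.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def nli_supported(claim: str, sentence: str) -> bool:
    """Stage 2 stand-in: every claim token appears in the sentence."""
    return set(claim.lower().split()) <= set(sentence.lower().split())

def verify_claim(claim: str, sources: list[str], sts_threshold: float = 0.3) -> bool:
    # Stage 1: keep only source sentences relevant to the claim.
    candidates = [s for s in sources if sts_score(claim, s) >= sts_threshold]
    # Stage 2: check whether any relevant sentence entails the claim.
    return any(nli_supported(claim, s) for s in candidates)

sources = ["the eiffel tower is in paris", "it was completed in 1889"]
print(verify_claim("the eiffel tower is in paris", sources))   # True
print(verify_claim("the eiffel tower is in london", sources))  # False
```

In the real pipeline, the cheap relevance stage prunes the source text so the more expensive entailment stage only runs on a handful of candidate sentences per claim.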

Catch hallucinations before they reach users

Claim-level verification with full trace. Works with any RAG framework.

Claim-Level Verification

Verifies every individual claim in the LLM response against source documents. Pinpoints exactly which statements are hallucinated.

Trust Score (0.0–1.0)

Returns a trust score representing the proportion of supported claims. Threshold-based filtering for production quality gates.
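The score itself is simple to reason about: the fraction of claims that were verified as supported. This sketch shows the arithmetic and a threshold gate; the names `trust_score` and `passes_gate` are illustrative, not LongTracer's API.

```python
# Trust score as the proportion of supported claims, plus a
# threshold-based quality gate (names here are hypothetical).

def trust_score(claim_results: list[bool]) -> float:
    """Supported claims divided by total claims; 0.0 if no claims."""
    return sum(claim_results) / len(claim_results) if claim_results else 0.0

def passes_gate(claim_results: list[bool], threshold: float = 0.8) -> bool:
    """Production gate: only release answers at or above the threshold."""
    return trust_score(claim_results) >= threshold

results = [True, True, True, False]   # 3 of 4 claims supported
print(trust_score(results))           # 0.75
print(passes_gate(results))           # False: below the 0.8 gate
```

A gate like this is typically wired in right before the answer is returned: below-threshold responses can be blocked, flagged, or routed to a fallback.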

Parallel Pipeline

STS relevance scoring runs alongside LLM generation. Minimal latency impact on your RAG system.
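The overlap can be sketched with standard-library threads: relevance scoring of the retrieved context runs while the (slow) generation call is in flight, so the extra wall-clock cost is hidden. `slow_generate` and `score_relevance` are stand-ins with simulated latencies, not LongTracer functions.

```python
# Sketch: run STS relevance scoring concurrently with LLM generation
# so verification adds little wall-clock latency.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_generate(prompt: str) -> str:
    time.sleep(0.2)                  # simulated LLM latency
    return "answer about " + prompt

def score_relevance(question: str, docs: list[str]) -> list[float]:
    time.sleep(0.1)                  # simulated STS model latency
    return [1.0 if question in d else 0.0 for d in docs]

docs = ["paris facts", "london facts"]
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    gen = pool.submit(slow_generate, "paris")
    sts = pool.submit(score_relevance, "paris", docs)
    answer, scores = gen.result(), sts.result()
elapsed = time.perf_counter() - start
print(scores)             # [1.0, 0.0]
print(elapsed < 0.3)      # overlapped: ~0.2 s, not 0.3 s sequential
```

Only the final claim-level NLI check has to wait for the generated answer, which is why the latency overhead stays small.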

No LLM Dependency

Works with any RAG framework. Just strings in, verification out. No API keys, no LLM costs for verification.

Multi-Project Tracing

Track verification results across multiple projects and time periods with pluggable storage backends.
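One way a pluggable backend can be shaped is as a small interface that any store implements. The `TraceBackend` protocol and `InMemoryBackend` below are illustrative assumptions about what such an interface could look like, not LongTracer's actual storage API.

```python
# Sketch of a pluggable storage backend for multi-project tracing.
from typing import Protocol

class TraceBackend(Protocol):
    """Hypothetical interface a storage backend would satisfy."""
    def record(self, project: str, trace: dict) -> None: ...
    def history(self, project: str) -> list[dict]: ...

class InMemoryBackend:
    """Simplest backend: traces kept in a dict keyed by project."""
    def __init__(self) -> None:
        self._traces: dict[str, list[dict]] = {}

    def record(self, project: str, trace: dict) -> None:
        self._traces.setdefault(project, []).append(trace)

    def history(self, project: str) -> list[dict]:
        return self._traces.get(project, [])

backend = InMemoryBackend()
backend.record("chatbot", {"trust_score": 0.9, "ts": "2024-01-01"})
backend.record("search", {"trust_score": 0.5, "ts": "2024-01-02"})
print(len(backend.history("chatbot")))   # 1
```

Because callers only depend on the protocol, a SQLite or cloud-storage backend can be swapped in without touching verification code.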

LangChain and LlamaIndex Ready

Native integration helpers for LangChain and LlamaIndex. Drop into your existing pipeline in minutes.

Common questions

Need a RAG system with built-in hallucination detection?

We build production RAG systems with LongTracer integrated for quality monitoring. Schedule a free consultation.