Find failures fast with agent tracing
See exactly what your agent is doing step by step. Pinpoint the issues hurting latency, cost, and response quality.
- Native tracing for popular agent frameworks and OpenTelemetry
- SDKs for Python, TypeScript, Go, and Java
- Message threading for multi-turn chat interactions

Cut through the noise in production
Get a real-time view of how your agents are performing. Spot issues early, understand impact, and start triaging. LangSmith monitoring lets you score quality with online evals on the characteristics that matter the most.
- Cost tracking
- Online LLM-as-judge and code evals
- Tool and agent trajectory monitoring
- Webhook and PagerDuty alerts

Discover usage patterns and issues automatically
Automatically analyze and cluster your traces to detect usage patterns, common agent behaviors, and failure modes.
- Unsupervised topic clustering
- Templates for error analysis
- Executive summary with key findings

Resources for LangSmith Observability
FAQs for LangSmith Observability
Teams need an LLM observability platform to understand how their AI applications behave in production. LLM observability platforms provide visibility into RAG pipelines and AI agent decisions, track model performance metrics like cost and latency, and help debug complex failures and hallucinations by showing the complete execution trace from end to end.
Custom dashboards track token usage, latency (P50, P99), error rates, cost breakdowns, and feedback scores. Configure alerts via webhooks or PagerDuty when metrics cross thresholds.
LangSmith works with any LLM framework. Trace applications built with OpenAI SDK, Anthropic SDK, Vercel AI SDK, LlamaIndex, or custom implementations, not just LangChain. OpenTelemetry support connects to existing pipelines. Learn more.
Yes. If your team has observability infrastructure on OpenTelemetry, LangSmith integrates with your existing pipelines. Send LangSmith trace data to your tools or ingest OTel data into LangSmith. See the docs.
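As a rough sketch of what pointing an existing OTel pipeline at LangSmith involves: an OTLP exporter is configured with the LangSmith endpoint and an API key header. The exact endpoint URL and header names below are illustrative assumptions; confirm them against the LangSmith OpenTelemetry docs before use:

```shell
# Illustrative OTLP exporter configuration (verify values in the LangSmith docs).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.smith.langchain.com/otel"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-langsmith-api-key>"
```

With these set, a standard OTel SDK or collector can ship trace data to LangSmith without code changes.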
Yes. Observability and Evaluation work well together but don't require each other. Start with tracing and monitoring, then add evals when ready. For all plan types, you'll get access to both and only pay for what you use.
Yes. LangSmith offers managed cloud, bring-your-own-cloud (BYOC), and self-hosted options for teams with data residency requirements. Contact us about the right option for your security needs. For more information, check out our documentation.
LangSmith cloud stores data in secure infrastructure. When using LangSmith hosted at smith.langchain.com, data is stored in GCP us-central1. If you’re on the Enterprise plan, we can deliver LangSmith to run on your Kubernetes cluster in AWS, GCP, or Azure so that data never leaves your environment. For more information, check out our documentation. For teams with compliance requirements, self-hosted and BYOC options let you control where your data lives.
No. The LangSmith SDK uses an async callback handler that sends traces to a distributed collector. Your application performance is never impacted. If LangSmith experiences an incident, your agent keeps running normally.
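The non-blocking design described here can be shown with a stdlib-only sketch. The `Tracer` class below is a toy, not the LangSmith SDK: trace events go onto an in-memory queue and a daemon thread ships them, so the traced code path returns immediately even when export is slow:

```python
import queue
import threading
import time

class Tracer:
    """Toy async tracer: enqueue events, export them in the background."""

    def __init__(self):
        self._events = queue.Queue()
        self.exported = []
        worker = threading.Thread(target=self._export_loop, daemon=True)
        worker.start()

    def _export_loop(self):
        while True:
            event = self._events.get()
            time.sleep(0.05)  # simulate a slow network export
            self.exported.append(event)
            self._events.task_done()

    def trace(self, name):
        # Enqueueing is O(1); the caller never waits on the network.
        self._events.put({"name": name, "ts": time.time()})

tracer = Tracer()

start = time.perf_counter()
for i in range(10):
    tracer.trace(f"llm_call_{i}")  # returns immediately
elapsed = time.perf_counter() - start
print(f"10 traces enqueued in {elapsed * 1000:.2f} ms")

tracer._events.join()  # wait for the background export (demo only)
print(f"{len(tracer.exported)} traces exported")
```

Enqueueing all ten events takes microseconds even though each export "costs" 50 ms, which is the property that keeps tracing off the application's critical path.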
We will not train on your data, and you own all rights to your data. See LangSmith Terms of Service for more information.
LangSmith has a free tier for development and small-scale production. Paid plans scale with trace volume. See our pricing page for details, or contact us for enterprise pricing.
Ready to get visibility into your agents?
LangSmith Observability is framework agnostic and works no matter how you build your agent.
