Get Started with Tracing
This section describes how to send traces to the Arthur GenAI Engine.
Ingest Your First Trace
Use OpenInference with OpenTelemetry to send traces to Arthur.
Install Dependencies
pip install openinference-instrumentation-langchain langchain-openai langchain opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp python-dotenv
Configure Environment Variables
Create a .env file:
# Arthur Configuration
ARTHUR_BASE_URL=http://localhost:3030
ARTHUR_API_KEY=your_api_key_here
ARTHUR_TASK_ID=your_task_id_here # ensure task is marked as agentic: is_agentic=True
# LLM Configuration (if using LangChain)
OPENAI_API_KEY=your_openai_api_key_here
Basic Setup
The simplest way to start tracing is to set up OpenTelemetry with OpenInference instrumentation:
import os
from dotenv import load_dotenv
from opentelemetry import trace as trace_api
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from openinference.instrumentation.langchain import LangChainInstrumentor
load_dotenv()
def setup_tracing():
    """Set up OpenInference tracing with Arthur endpoint."""
    arthur_base_url = os.getenv("ARTHUR_BASE_URL")
    arthur_api_key = os.getenv("ARTHUR_API_KEY")
    arthur_task_id = os.getenv("ARTHUR_TASK_ID")

    # Create tracer provider with Arthur task metadata
    tracer_provider = trace_sdk.TracerProvider(
        resource=Resource.create({
            "arthur.task": arthur_task_id,
            "service.name": "my-tracing-app",
        })
    )
    trace_api.set_tracer_provider(tracer_provider)

    # Configure OTLP exporter and add span processor
    tracer_provider.add_span_processor(
        SimpleSpanProcessor(
            OTLPSpanExporter(
                endpoint=f"{arthur_base_url}/v1/traces",
                headers={"Authorization": f"Bearer {arthur_api_key}"},
            )
        )
    )

    # Instrument LangChain (if using LangChain)
    LangChainInstrumentor().instrument()

# Call setup before using your LLM/Agent
setup_tracing()
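A note on the processor: SimpleSpanProcessor exports each span synchronously, which is convenient for local development but adds a network call per span. If throughput matters, the OpenTelemetry SDK's BatchSpanProcessor is a drop-in alternative that queues spans and exports them in the background; a minimal sketch of the swap inside setup_tracing:

from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Inside setup_tracing(): batch spans instead of one export call per span
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=f"{arthur_base_url}/v1/traces",
            headers={"Authorization": f"Bearer {arthur_api_key}"},
        )
    )
)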
Example: Tracing a LangChain Agent
from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

# Set up tracing (call this once at startup)
setup_tracing()

# A minimal example tool - the OpenAI API rejects an empty tools list
@tool
def say_hello(name: str) -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"

# The prompt must include an agent_scratchpad placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

# Create your agent - all interactions are automatically traced
model = ChatOpenAI(model="gpt-4o", temperature=0)
agent_executor = AgentExecutor(
    agent=create_openai_tools_agent(model, [say_hello], prompt),
    tools=[say_hello],
    verbose=True,
)

# Run your agent - traces are automatically sent to Arthur
result = agent_executor.invoke({"input": "Hello!"})
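If your script exits right after the last call, it is good practice to shut the tracer provider down so any buffered spans are exported first; a small sketch using the provider configured in setup_tracing:

from opentelemetry import trace as trace_api

# Flush any pending spans and release exporter resources before exit
trace_api.get_tracer_provider().shutdown()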
Example: Manual Span Creation
You can also create spans manually for custom instrumentation:
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def my_function():
    with tracer.start_as_current_span("my_function") as span:
        span.set_attribute("input", "some input")
        result = "Hello, world!"
        span.set_attribute("output", result)
        return result

my_function()
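Manually created spans carry no OpenInference span kind by default. If you want a custom span to render as a particular kind (a tool call, for example), you can set the OpenInference semantic-convention attributes yourself. A sketch, assuming the openinference-semantic-conventions package (pulled in by the instrumentation package above); lookup_weather and its return value are hypothetical placeholders:

from opentelemetry import trace
from openinference.semconv.trace import OpenInferenceSpanKindValues, SpanAttributes

tracer = trace.get_tracer(__name__)

def lookup_weather(city: str) -> str:
    with tracer.start_as_current_span("lookup_weather") as span:
        # Tag the span as a TOOL so it renders as a tool call in the trace view
        span.set_attribute(
            SpanAttributes.OPENINFERENCE_SPAN_KIND,
            OpenInferenceSpanKindValues.TOOL.value,
        )
        span.set_attribute(SpanAttributes.INPUT_VALUE, city)
        result = f"Sunny in {city}"  # placeholder result for illustration
        span.set_attribute(SpanAttributes.OUTPUT_VALUE, result)
        return result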
Key Concepts
OpenInference Instrumentation
- LangChainInstrumentor: Automatically instruments LangChain components to create spans (see the sketch after this list)
- Resource: Embeds metadata (like Arthur task ID) into all traces automatically
- OTLPSpanExporter: Sends spans to Arthur via OTLP over HTTP
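The instrumentor can also be bound to an explicit provider rather than the global one set by trace_api.set_tracer_provider; a one-line sketch, assuming instrument() accepts the tracer_provider keyword as OpenInference instrumentors generally do:

# Bind the instrumentor to an explicit provider rather than the global one
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)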
Arthur Integration
- Task ID: Links all traces to a specific Arthur task (set via the arthur.task resource attribute)
- API Key: Authenticates your application with Arthur
View Your Traces
After running your application, traces will appear in your Arthur dashboard:
- Navigate to http://localhost:3030/ and select your task
- View traces with:
  - LLM calls and responses
  - Tool invocations
  - Agent execution steps
  - Timing information
  - Input/output data
Next Steps
- Add more tools to your agent and see how they appear in traces
- Add session and user metadata via convenient helpers like using_session and using_user (see the sketch after this list)
- Scale up to more complex agent architectures
- Analyze traces in the Arthur dashboard for performance insights
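For instance, using_session and using_user are context managers from the base openinference-instrumentation package; a minimal sketch reusing agent_executor from earlier, with placeholder session and user IDs:

from openinference.instrumentation import using_session, using_user

# All spans created inside this context carry the session and user IDs
with using_session(session_id="session-123"), using_user(user_id="user-456"):
    agent_executor.invoke({"input": "Hello again!"})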