Backend instrumentation helpers for OpenTelemetry tracing. These helpers follow the OpenTelemetry Gen-AI semantic conventions: https://opentelemetry.io/docs/specs/semconv/gen-ai/

Functions

FUNC get_model_id_str

get_model_id_str(backend: Any) -> str
Extract the model_id string from a backend instance.
Args:
  • backend: Backend instance
Returns:
  • String representation of the model_id

FUNC get_system_name

get_system_name(backend: Any) -> str
Get the Gen-AI system name from the backend.
Args:
  • backend: Backend instance
Returns:
  • System name (e.g., 'openai', 'ollama', 'huggingface')
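
One plausible implementation derives the system name from the backend's class name; the class-name matching rule here is an assumption, while `_other` is the semconv's documented fallback value for unrecognized systems:

```python
from typing import Any


def get_system_name(backend: Any) -> str:
    """Map a backend class name to a Gen-AI system name.

    E.g. ``OllamaBackend`` -> ``ollama``. The substring-matching rule is
    an assumption; ``_other`` is the semconv fallback for unknown systems.
    """
    name = type(backend).__name__.lower()
    for system in ("openai", "ollama", "huggingface", "watsonx"):
        if system in name:
            return system
    return "_other"


class OllamaBackend:
    """Stand-in backend class for illustration."""


print(get_system_name(OllamaBackend()))  # -> ollama
print(get_system_name(object()))         # -> _other
```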

FUNC get_context_size

get_context_size(ctx: Any) -> int
Get the size of a context.
Args:
  • ctx: Context object
Returns:
  • Number of items in the context, or 0 if it cannot be determined
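
The "0 if it cannot be determined" contract suggests a `len()` call guarded against non-sized objects; a minimal sketch, not the library's actual code:

```python
from typing import Any


def get_context_size(ctx: Any) -> int:
    """Return ``len(ctx)`` when the context is sized, otherwise 0."""
    try:
        return len(ctx)
    except TypeError:
        # Context has no __len__ (or is None); report zero items.
        return 0


print(get_context_size([{"role": "user", "content": "hi"}]))  # -> 1
print(get_context_size(object()))                             # -> 0
```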

FUNC instrument_generate_from_context

instrument_generate_from_context(backend: Any, action: Any, ctx: Any, format: Any = None, tool_calls: bool = False)
Create a backend trace span for generate_from_context. Follows Gen-AI semantic conventions for chat operations.
Args:
  • backend: Backend instance
  • action: Action component
  • ctx: Context
  • format: Response format (BaseModel subclass or None)
  • tool_calls: Whether tool calling is enabled
Returns:
  • Context manager for the trace span
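
A usage sketch of the context-manager shape, with a stand-in span class instead of a real OpenTelemetry tracer; the attribute names (`gen_ai.operation.name`, `gen_ai.system`, `gen_ai.request.model`) come from the Gen-AI semantic conventions, while the wiring and example values are assumptions:

```python
from contextlib import contextmanager
from typing import Any, Iterator


class _Span:
    """Stand-in for an OpenTelemetry span, for illustration only."""

    def __init__(self) -> None:
        self.attributes: dict[str, Any] = {}

    def set_attribute(self, key: str, value: Any) -> None:
        self.attributes[key] = value


@contextmanager
def instrument_generate_from_context(
    backend: Any, action: Any, ctx: Any, format: Any = None, tool_calls: bool = False
) -> Iterator[_Span]:
    span = _Span()
    # Standard Gen-AI chat attributes from the semantic conventions.
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.system", "ollama")          # would come from get_system_name
    span.set_attribute("gen_ai.request.model", "granite")  # would come from get_model_id_str
    yield span


with instrument_generate_from_context(None, None, ctx=[]) as span:
    ...  # call the model here; the span closes when the block exits

print(span.attributes["gen_ai.operation.name"])  # -> chat
```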

FUNC start_generate_span

start_generate_span(backend: Any, action: Any, ctx: Any, format: Any = None, tool_calls: bool = False)
Start a backend trace span for generate_from_context (without auto-closing). Use this for async operations where the span should remain open until post-processing completes.
Args:
  • backend: Backend instance
  • action: Action component
  • ctx: Context
  • format: Response format (BaseModel subclass or None)
  • tool_calls: Whether tool calling is enabled
Returns:
  • Span object or None if tracing is disabled
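
The intended async call pattern is to end the span manually in a `finally` block once post-processing finishes; `start_generate_span` below is a stub and the caller names are assumptions:

```python
from __future__ import annotations

import asyncio
from typing import Any


class _Span:
    """Stand-in for an OpenTelemetry span, for illustration only."""

    def __init__(self) -> None:
        self.ended = False

    def end(self) -> None:
        self.ended = True


def start_generate_span(backend: Any, action: Any, ctx: Any,
                        format: Any = None, tool_calls: bool = False) -> _Span | None:
    # Would return None when tracing is disabled.
    return _Span()


async def generate(backend: Any, action: Any, ctx: Any) -> str:
    span = start_generate_span(backend, action, ctx)
    try:
        # Stand-in for the real model call.
        result = await asyncio.sleep(0, result="generated text")
        # ... record usage/metadata on `span` here, after the await ...
        return result
    finally:
        if span is not None:
            span.end()  # close only after post-processing completes


print(asyncio.run(generate(None, None, [])))  # -> generated text
```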

FUNC instrument_generate_from_raw

instrument_generate_from_raw(backend: Any, num_actions: int, format: Any = None, tool_calls: bool = False)
Create a backend trace span for generate_from_raw. Follows Gen-AI semantic conventions for text generation operations.
Args:
  • backend: Backend instance
  • num_actions: Number of actions in the batch
  • format: Response format (BaseModel subclass or None)
  • tool_calls: Whether tool calling is enabled
Returns:
  • Context manager for the trace span
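
Usage mirrors the chat variant above; the sketch differs mainly in the operation name (`text_completion` is a documented `gen_ai.operation.name` value) and in recording the batch size, for which the attribute name below is hypothetical, not part of the semconv:

```python
from contextlib import contextmanager
from typing import Any, Iterator


class _Span:
    """Stand-in for an OpenTelemetry span, for illustration only."""

    def __init__(self) -> None:
        self.attributes: dict[str, Any] = {}

    def set_attribute(self, key: str, value: Any) -> None:
        self.attributes[key] = value


@contextmanager
def instrument_generate_from_raw(backend: Any, num_actions: int,
                                 format: Any = None, tool_calls: bool = False) -> Iterator[_Span]:
    span = _Span()
    span.set_attribute("gen_ai.operation.name", "text_completion")
    # Hypothetical custom attribute for the batch size; not a semconv name.
    span.set_attribute("llm.batch_size", num_actions)
    yield span


with instrument_generate_from_raw(None, num_actions=4) as span:
    ...  # run the batch of raw generations here

print(span.attributes["gen_ai.operation.name"])  # -> text_completion
```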

FUNC record_token_usage

record_token_usage(span: Any, usage: Any) -> None
Record token usage metrics following Gen-AI semantic conventions.
Args:
  • span: The span object (may be None if tracing is disabled)
  • usage: Usage object or dict from the LLM response (e.g., OpenAI usage object)

FUNC record_response_metadata

record_response_metadata(span: Any, response: Any, model_id: str | None = None) -> None
Record response metadata following Gen-AI semantic conventions.
Args:
  • span: The span object (may be None if tracing is disabled)
  • response: Response object or dict from the LLM
  • model_id: Model ID used for the response (if different from request)
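
A sketch of what this plausibly records, using the semconv attributes `gen_ai.response.id`, `gen_ai.response.model`, and `gen_ai.response.finish_reasons`; the OpenAI-style response field names (`id`, `model`, `choices[].finish_reason`) are assumptions about what the real helper inspects:

```python
from __future__ import annotations

from typing import Any


def record_response_metadata(span: Any, response: Any, model_id: str | None = None) -> None:
    """Record response id, model, and finish reasons using semconv names."""
    if span is None:
        return  # tracing disabled
    get = response.get if isinstance(response, dict) else lambda k, d=None: getattr(response, k, d)
    if get("id") is not None:
        span.set_attribute("gen_ai.response.id", get("id"))
    # Prefer the model reported by the response; fall back to the request's.
    resolved = get("model") or model_id
    if resolved is not None:
        span.set_attribute("gen_ai.response.model", resolved)
    choices = get("choices") or []
    reasons = [c.get("finish_reason") for c in choices if c.get("finish_reason")]
    if reasons:
        span.set_attribute("gen_ai.response.finish_reasons", reasons)


class _Span:
    """Stand-in for an OpenTelemetry span, for illustration only."""

    def __init__(self) -> None:
        self.attributes: dict[str, Any] = {}

    def set_attribute(self, key: str, value: Any) -> None:
        self.attributes[key] = value


span = _Span()
record_response_metadata(span, {"id": "chatcmpl-1", "model": "gpt-4o-mini",
                                "choices": [{"finish_reason": "stop"}]})
print(span.attributes["gen_ai.response.finish_reasons"])  # -> ['stop']
```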