Prerequisites: `pip install "mellea[telemetry]"`, and Ollama running locally.
Mellea provides built-in OpenTelemetry instrumentation
across three independent pillars — tracing, metrics, and logging. Each can be enabled
separately. All telemetry is opt-in: if the `[telemetry]` extra is not installed,
every telemetry call is a silent no-op.
Note: OpenTelemetry is an optional dependency. Mellea works normally without it. Install with `pip install "mellea[telemetry]"` or `uv pip install "mellea[telemetry]"`.
## Configuration
All telemetry is configured via environment variables.

### General
| Variable | Description | Default |
|---|---|---|
| `OTEL_SERVICE_NAME` | Service name for all telemetry signals | `mellea` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint for all telemetry signals | none |
### Tracing variables
| Variable | Description | Default |
|---|---|---|
| `MELLEA_TRACE_APPLICATION` | Enable application-level tracing | `false` |
| `MELLEA_TRACE_BACKEND` | Enable backend-level tracing | `false` |
| `MELLEA_TRACE_CONSOLE` | Print traces to console (debugging) | `false` |
### Metrics variables
| Variable | Description | Default |
|---|---|---|
| `MELLEA_METRICS_ENABLED` | Enable metrics collection | `false` |
| `MELLEA_METRICS_CONSOLE` | Print metrics to console (debugging) | `false` |
| `MELLEA_METRICS_OTLP` | Enable OTLP metrics exporter | `false` |
| `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` | Metrics-specific OTLP endpoint (overrides general) | none |
| `MELLEA_METRICS_PROMETHEUS` | Enable Prometheus metric reader | `false` |
| `OTEL_METRIC_EXPORT_INTERVAL` | Export interval in milliseconds | `60000` |
### Logging variables
| Variable | Description | Default |
|---|---|---|
| `MELLEA_LOGS_OTLP` | Enable OTLP logs exporter | `false` |
| `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT` | Logs-specific OTLP endpoint (overrides general) | none |
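Taken together, a typical configuration might look like the following sketch. The endpoint URLs are placeholders for your own collector; set the variables before Mellea initializes telemetry:

```python
import os

# General settings shared by every signal.
os.environ["OTEL_SERVICE_NAME"] = "my-mellea-app"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

# Signal-specific endpoints override the general one when set.
os.environ["OTEL_EXPORTER_OTLP_METRICS_ENDPOINT"] = "http://localhost:4318/v1/metrics"
os.environ["OTEL_EXPORTER_OTLP_LOGS_ENDPOINT"] = "http://localhost:4318/v1/logs"

# Turn on the individual pillars (all default to "false").
os.environ["MELLEA_TRACE_APPLICATION"] = "true"
os.environ["MELLEA_METRICS_ENABLED"] = "true"
os.environ["MELLEA_METRICS_OTLP"] = "true"
os.environ["MELLEA_LOGS_OTLP"] = "true"
```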
## Quick start
Enable tracing and metrics with console output to verify everything works.

### Checking telemetry status programmatically
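As a smoke test, flip the console flags on and confirm they are visible to the process. The `telemetry_flags` helper below is purely illustrative — it only reads the environment variables documented above and is not part of Mellea's API:

```python
import os

# Enable console output for traces and metrics (debugging only).
os.environ["MELLEA_TRACE_APPLICATION"] = "true"
os.environ["MELLEA_TRACE_CONSOLE"] = "true"
os.environ["MELLEA_METRICS_ENABLED"] = "true"
os.environ["MELLEA_METRICS_CONSOLE"] = "true"

def telemetry_flags() -> dict[str, bool]:
    """Illustrative helper: report which Mellea telemetry flags are set."""
    names = [
        "MELLEA_TRACE_APPLICATION", "MELLEA_TRACE_BACKEND", "MELLEA_TRACE_CONSOLE",
        "MELLEA_METRICS_ENABLED", "MELLEA_METRICS_CONSOLE", "MELLEA_METRICS_OTLP",
        "MELLEA_METRICS_PROMETHEUS", "MELLEA_LOGS_OTLP",
    ]
    return {n: os.environ.get(n, "false").lower() == "true" for n in names}

print(telemetry_flags())
```

With these flags set, spans and metric readings should appear on stdout as your Mellea session runs.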
## Tracing
Mellea has two independent trace scopes:

- `mellea.application` — user-facing operations: session lifecycle, `@generative` calls, `instruct()` and `act()`, sampling strategies, and requirement validation.
- `mellea.backend` — LLM backend interactions following the OpenTelemetry Gen-AI Semantic Conventions. Records model calls, token usage, finish reasons, and API latency.
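The two scopes are toggled independently. For example, to record only backend LLM spans while skipping application-level spans (flag names are from the tables above):

```python
import os

# Trace backend LLM calls (Gen-AI semantic conventions) only;
# leave application-level spans such as instruct()/act() disabled.
os.environ["MELLEA_TRACE_BACKEND"] = "true"
os.environ["MELLEA_TRACE_APPLICATION"] = "false"
os.environ["MELLEA_TRACE_CONSOLE"] = "true"  # print spans while debugging
```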
## Metrics
Mellea automatically tracks token consumption across all backends using OpenTelemetry counters (`mellea.llm.tokens.input` and
`mellea.llm.tokens.output`). No code changes are required — the
`TokenMetricsPlugin` records metrics via the plugin hook system after each
LLM call completes.
The metrics API also exposes `create_counter`, `create_histogram`, and
`create_up_down_counter` for instrumenting your own application code.
Mellea supports three exporters that can run simultaneously:

- Console — print to stdout for debugging
- OTLP — export to production observability platforms
- Prometheus — register with `prometheus_client` for scraping
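Because each exporter is an independent flag, they can be combined; for example, console output for local debugging while also pushing to OTLP and exposing a Prometheus reader (export interval shortened from the 60000 ms default for illustration):

```python
import os

# All three metric exporters at once; each flag is independent.
os.environ["MELLEA_METRICS_ENABLED"] = "true"
os.environ["MELLEA_METRICS_CONSOLE"] = "true"       # stdout, for debugging
os.environ["MELLEA_METRICS_OTLP"] = "true"          # push to the OTLP endpoint
os.environ["MELLEA_METRICS_PROMETHEUS"] = "true"    # register with prometheus_client
os.environ["OTEL_METRIC_EXPORT_INTERVAL"] = "15000" # export every 15 s
```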
## Logging
Mellea uses a color-coded console logger (`FancyLogger`) by default. When the
`[telemetry]` extra is installed and `MELLEA_LOGS_OTLP=true` is set, Mellea
also exports logs to an OTLP collector alongside the existing console output.
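For example, to mirror console log output to a collector (the endpoint value is a placeholder for your own):

```python
import os

# Keep FancyLogger's console output and additionally export logs via OTLP.
os.environ["MELLEA_LOGS_OTLP"] = "true"
os.environ["OTEL_EXPORTER_OTLP_LOGS_ENDPOINT"] = "http://localhost:4318/v1/logs"
```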
See Logging for console logging
configuration, OTLP log export setup, and programmatic access via
`get_otlp_log_handler()`.
Full example: `docs/examples/telemetry/telemetry_example.py`
See also:
- Tracing — distributed traces with Gen-AI semantic conventions.
- Metrics — token usage metrics, exporters, and custom instruments.
- Logging — console logging and OTLP log export.
- Evaluate with LLM-as-a-Judge — automated quality evaluation correlated with trace data.