Classes
CLASS LiteLLMBackend
A generic LiteLLM-compatible backend.
Args:
model_id: The LiteLLM model identifier string, typically `"<provider>/<model_creator>/<model_name>"`.
formatter: Formatter for rendering components. Defaults to [TemplateFormatter](../formatters/template_formatter#class-templateformatter).
base_url: Base URL for the LLM API endpoint; defaults to the Ollama local endpoint.
model_options: Default model options for generation requests.
to_mellea_model_opts_map: Mapping from backend-specific option names to Mellea [ModelOption](model_options#class-modeloption) sentinel keys.
from_mellea_model_opts_map: Mapping from Mellea [ModelOption](model_options#class-modeloption) sentinel keys to backend-specific option names.
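The two option maps are plain name translations between backend-native option names and Mellea sentinel keys. A minimal sketch of how such a mapping can be applied (the helper, map contents, and key names below are illustrative assumptions, not Mellea's actual code):

```python
# Illustrative sketch of option-name translation. The sentinel key
# spellings and translate_opts helper are assumptions for illustration.

def translate_opts(opts: dict, name_map: dict) -> dict:
    """Rename known option keys via name_map; pass unknown keys through."""
    return {name_map.get(k, k): v for k, v in opts.items()}

# Hypothetical backend -> Mellea sentinel mapping.
to_mellea = {"max_tokens": "@max_new_tokens", "temperature": "@temperature"}

backend_opts = {"max_tokens": 256, "temperature": 0.7, "seed": 42}
mellea_opts = translate_opts(backend_opts, to_mellea)
# Unknown keys such as "seed" are passed through unchanged.
```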
FUNC processing
Processes a single LiteLLM ModelResponse (non-streaming) or ModelResponseStream chunk (streaming) into the output thunk. Tool call parsing is deferred to post_processing.
Args:
mot: The output thunk being populated.
chunk: A single response object or streaming chunk from LiteLLM.
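How a per-chunk processing hook typically behaves can be sketched in plain Python. This is a hedged illustration only: real LiteLLM streaming chunks are ModelResponseStream objects, and the dict shape and accumulator here are assumptions.

```python
# Illustrative accumulator for streamed response chunks; plain dicts
# stand in for LiteLLM's ModelResponseStream objects.

def process_chunk(accumulated: list, chunk: dict) -> list:
    """Append the text delta from one streaming chunk; skip empty deltas."""
    delta = chunk.get("delta") or ""
    if delta:
        accumulated.append(delta)
    return accumulated

parts: list = []
for chunk in [{"delta": "Hel"}, {"delta": "lo"}, {"delta": None}]:
    process_chunk(parts, chunk)

final_text = "".join(parts)
```

Accumulating deltas like this is why tool call parsing must wait for post_processing: a tool call may be split across several chunks and is only complete once the stream ends.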
FUNC post_processing
Args:
mot: The output thunk to finalize.
conversation: The chat conversation sent to the model, used for logging.
tools: Available tools, keyed by name.
thinking: The thinking/reasoning effort level passed to the model, or None if reasoning mode was not enabled.
_format: The structured output format class used during generation, if any.
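Because tools are keyed by name, resolving a parsed tool call reduces to a dictionary lookup followed by argument deserialization. A minimal sketch under that assumption (the call shape and the `add` tool are invented for illustration, not Mellea's internals):

```python
import json

# Illustrative tool-call resolution: look up the parsed call's name in
# the tools dict and invoke it with JSON-decoded arguments.

def run_tool_call(tools: dict, call: dict):
    fn = tools.get(call["name"])
    if fn is None:
        raise KeyError(f"unknown tool: {call['name']}")
    return fn(**json.loads(call["arguments"]))

tools = {"add": lambda a, b: a + b}
result = run_tool_call(tools, {"name": "add", "arguments": '{"a": 2, "b": 3}'})
```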
FUNC generate_from_raw
Args:
actions: Actions to generate completions for.
ctx: The current generation context.
format: Optional Pydantic model for structured output; passed as `guided_json` in the request body.
model_options: Per-call model options.
tool_calls: Ignored; tool calling is not supported on this endpoint.
Returns:
- list[ModelOutputThunk]: A list of model output thunks, one per action.
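Forwarding `format` as `guided_json` means the request body carries a JSON Schema constraining the completion. A sketch of what such a body might look like (the schema contents and the other request fields are assumptions for illustration; in Mellea the schema would come from the supplied Pydantic model):

```python
import json

# Hypothetical JSON Schema that a structured-output format class would
# serialize to.
answer_schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
}

# Illustrative request body with the schema attached under guided_json,
# the field name the docs above say this backend uses.
request_body = {
    "model": "openai/gpt-4o-mini",
    "prompt": "What is the capital of France?",
    "guided_json": answer_schema,
}

body_json = json.dumps(request_body)
```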