Classes
CLASS OpenAIBackend
A generic OpenAI-compatible backend.
Args:
- model_id: OpenAI-compatible model identifier. Defaults to `model_ids.OPENAI_GPT_5_1`.
- formatter: Formatter for rendering components. Defaults to [TemplateFormatter](../formatters/template_formatter#class-templateformatter).
- base_url: Base URL for the API endpoint; defaults to the standard OpenAI endpoint if not set.
- model_options: Default model options for generation requests.
- default_to_constraint_checking_alora: If `False`, deactivates aLoRA constraint checking; primarily for benchmarking and debugging.
- api_key: API key; falls back to the `OPENAI_API_KEY` env var.
- kwargs: Additional keyword arguments forwarded to the OpenAI client.
- to_mellea_model_opts_map_chats: Mapping from chat-endpoint option names to Mellea [ModelOption](model_options#class-modeloption) sentinel keys.
- from_mellea_model_opts_map_chats: Mapping from Mellea sentinel keys to chat-endpoint option names.
- to_mellea_model_opts_map_completions: Mapping from completions-endpoint option names to Mellea [ModelOption](model_options#class-modeloption) sentinel keys.
- from_mellea_model_opts_map_completions: Mapping from Mellea sentinel keys to completions-endpoint option names.
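The four mapping attributes above implement a simple key-translation step between Mellea's backend-agnostic option names and the names each OpenAI endpoint expects. A minimal sketch of that translation, using hypothetical option names (the real sentinel keys live on `ModelOption` and may differ):

```python
def remap_options(options: dict, mapping: dict) -> dict:
    """Rename option keys via a translation table; unknown keys pass through."""
    return {mapping.get(k, k): v for k, v in options.items()}

# Hypothetical mapping: Mellea sentinel key -> chat-endpoint parameter name.
from_mellea_chats = {"@max_tokens": "max_completion_tokens", "@temperature": "temperature"}

opts = {"@max_tokens": 256, "@temperature": 0.2, "seed": 7}
print(remap_options(opts, from_mellea_chats))
# -> {'max_completion_tokens': 256, 'temperature': 0.2, 'seed': 7}
```

Keeping both directions as plain dicts is what makes the backend "generic": swapping endpoints only swaps translation tables, not generation logic.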
FUNC filter_openai_client_kwargs
Args:
- kwargs: Arbitrary keyword arguments to filter.
- A dict containing only keys accepted by `openai.OpenAI.__init__`.
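One common way to implement this kind of filter (a sketch, not necessarily how Mellea does it) is to introspect the target callable's signature and keep only matching keys. Here `client_init` is a stand-in with a hypothetical subset of `openai.OpenAI.__init__`'s parameters:

```python
import inspect

def filter_kwargs_for(func, kwargs: dict) -> dict:
    """Keep only the kwargs that appear as named parameters of `func`."""
    accepted = {
        name
        for name, p in inspect.signature(func).parameters.items()
        if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)
    }
    return {k: v for k, v in kwargs.items() if k in accepted}

# Stand-in for openai.OpenAI.__init__ (hypothetical parameter subset).
def client_init(api_key=None, base_url=None, timeout=None):
    ...

print(filter_kwargs_for(client_init, {"api_key": "sk-...", "retries": 3}))
# -> {'api_key': 'sk-...'}
```

The same helper pattern would serve the two endpoint-specific filters below, pointed at the relevant `create` method instead.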
FUNC filter_chat_completions_kwargs
Args:
- model_options: Model options dict that may contain non-chat keys.
- A dict containing only keys accepted by `chat.completions.create`.
FUNC filter_completions_kwargs
Args:
- model_options: Model options dict that may contain non-completions keys.
- A dict containing only keys accepted by `completions.create`.
FUNC generate_from_chat_context
Formats the context and action into OpenAI-compatible chat messages using the backend's [Formatter](../core/formatter#class-formatter), submits the request asynchronously, and returns a thunk that lazily resolves the output.
Args:
- action: The component or content block to generate a completion for.
- ctx: The current generation context.
- _format: Optional Pydantic model class for structured output decoding.
- model_options: Per-call model options.
- tool_calls: If `True`, expose available tools and parse responses.
- tuple[ModelOutputThunk[C], Context]: A thunk holding the (lazy) model output and an updated context that includes `action` and the new output.
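The thunk returned here is a lazy handle: the request is submitted, but the value only materializes when something asks for it. A tiny illustrative sketch of that pattern (the class name and API here are made up; Mellea's actual `ModelOutputThunk` differs):

```python
class LazyThunk:
    """Lazily computes and caches a value the first time it is requested."""

    _UNSET = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = self._UNSET

    def value(self):
        if self._value is self._UNSET:
            self._value = self._compute()  # resolve on first access
        return self._value

calls = []
thunk = LazyThunk(lambda: calls.append("hit") or "generated text")
print(thunk.value())  # -> generated text
print(thunk.value())  # cached: the compute function ran only once
print(len(calls))     # -> 1
```

Returning the updated `Context` alongside the thunk lets callers chain further generations before the first result has resolved.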
FUNC processing
Handles a `ChatCompletion` (non-streaming) or `ChatCompletionChunk` (streaming) response. Tool call parsing is deferred to `post_processing`.
Args:
- mot: The output thunk being populated.
- chunk: A single response object or streaming delta from the OpenAI API.
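Supporting both a full response and streaming deltas usually reduces to accumulating text fragments onto the thunk as chunks arrive. A generic sketch of that accumulation step, with dict-shaped chunks standing in for `ChatCompletionChunk` objects:

```python
def process_chunk(mot: dict, chunk: dict) -> None:
    """Append a streaming delta's content to the output being built."""
    delta = chunk.get("choices", [{}])[0].get("delta", {})
    if delta.get("content") is not None:
        mot["text"] = mot.get("text", "") + delta["content"]

mot = {}
for chunk in [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}}]},  # e.g. a final chunk with no content
]:
    process_chunk(mot, chunk)
print(mot["text"])  # -> Hello
```

Deferring tool-call parsing to `post_processing` makes sense under this design: tool-call arguments may themselves arrive spread across several deltas and are only complete once the stream ends.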
FUNC post_processing
Args:
- mot: The output thunk to finalize.
- tools: Available tools, keyed by name.
- conversation: The chat conversation sent to the model, used for logging.
- thinking: The reasoning effort level passed to the model, or `None` if reasoning mode was not enabled.
- seed: The random seed used during generation, or `None`.
- _format: The structured output format class used during generation, if any.
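Since tools are keyed by name, resolving a parsed tool call comes down to a dictionary lookup plus JSON argument decoding. A hedged sketch (the call shape mimics OpenAI's tool-call JSON, not Mellea's internal types):

```python
import json

def resolve_tool_call(tools: dict, call: dict):
    """Look up a tool by name and invoke it with the decoded JSON arguments."""
    fn = tools.get(call["name"])
    if fn is None:
        raise KeyError(f"model requested unknown tool: {call['name']}")
    return fn(**json.loads(call["arguments"]))

tools = {"add": lambda a, b: a + b}
result = resolve_tool_call(tools, {"name": "add", "arguments": '{"a": 2, "b": 3}'})
print(result)  # -> 5
```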
FUNC generate_from_raw
Args:
- actions: Actions to generate completions for.
- ctx: The current generation context.
- format: Optional Pydantic model for structured output; passed as a guided-decoding parameter.
- model_options: Per-call model options.
- tool_calls: Ignored; tool calling is not supported on this endpoint.
- list[ModelOutputThunk]: A list of model output thunks, one per action.
Raises:
- openai.BadRequestError: If the request is invalid (e.g. when targeting an Ollama server that does not support batched completion requests).
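Because some OpenAI-compatible servers reject batched completion requests (the Ollama case noted above), a caller may want to fall back to per-action requests when the batch fails. A generic sketch of that fallback, with a stand-in `send` function and `ValueError` standing in for `openai.BadRequestError`:

```python
def generate_batch(send, prompts):
    """Try one batched request; on failure, fall back to one request per prompt."""
    try:
        return send(prompts)
    except ValueError:  # stand-in for openai.BadRequestError
        return [send([p])[0] for p in prompts]

# Fake server that only accepts single-prompt requests.
def fake_send(prompts):
    if len(prompts) > 1:
        raise ValueError("batching not supported")
    return [f"echo:{prompts[0]}"]

print(generate_batch(fake_send, ["a", "b"]))
# -> ['echo:a', 'echo:b']
```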
FUNC base_model_name
Returns the base model name, e.g. `granite-3.3-8b-instruct` for `ibm-granite/granite-3.3-8b-instruct`.
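The example above suggests the method strips the organization prefix from a Hugging-Face-style model id; that reduction is essentially a one-liner (a sketch of the behavior, not Mellea's implementation):

```python
def base_model_name(model_id: str) -> str:
    """Drop the 'org/' prefix from a Hugging-Face-style model id, if present."""
    return model_id.rsplit("/", 1)[-1]

print(base_model_name("ibm-granite/granite-3.3-8b-instruct"))
# -> granite-3.3-8b-instruct
print(base_model_name("gpt-4o"))  # no prefix: returned unchanged
# -> gpt-4o
```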