Functions
FUNC chat_response_delta_merge
Args:
mot: The ModelOutputThunk that the deltas are being used to populate.
delta: The most recent ollama ChatResponse.
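The merge step can be pictured with plain dictionaries. The field names below (`message`, `content`, `done`) follow the general shape of an Ollama chat response, but this is an illustrative sketch, not the actual implementation:

```python
def chat_response_delta_merge(aggregate: dict, delta: dict) -> dict:
    """Illustrative sketch: fold one streamed delta into the running response.

    Assumes a simplified dict shape with a nested "message" holding "content";
    the real ollama.ChatResponse is a typed object.
    """
    merged = dict(aggregate)
    message = dict(merged.get("message", {"content": ""}))
    # Concatenate streamed text content.
    message["content"] = message.get("content", "") + delta.get("message", {}).get("content", "")
    merged["message"] = message
    # Later metadata fields (e.g. done, eval_count) overwrite earlier ones.
    for key, value in delta.items():
        if key != "message":
            merged[key] = value
    return merged
```

Each delta contributes a text fragment; non-message metadata from the final chunk wins.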
Classes
CLASS OllamaModelBackend
A model that uses the Ollama Python SDK for local inference.
Args:
model_id: Ollama model ID. If a [ModelIdentifier](model_ids#class-modelidentifier) is passed, its ollama_name attribute must be set.
formatter: Formatter for rendering components. Defaults to [TemplateFormatter](../formatters/template_formatter#class-templateformatter).
base_url: Ollama server endpoint; defaults to env(OLLAMA_HOST) or http://localhost:11434.
model_options: Default model options for generation requests.
to_mellea_model_opts_map: Mapping from Ollama-specific option names to Mellea [ModelOption](model_options#class-modeloption) sentinel keys.
from_mellea_model_opts_map: Mapping from Mellea [ModelOption](model_options#class-modeloption) sentinel keys to Ollama-specific option names.
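The two option maps are inverses of each other. A minimal sketch, in which the sentinel keys are hypothetical stand-ins for Mellea's ModelOption constants (only "num_predict" and "temperature" are real Ollama option names):

```python
# Hypothetical sentinel keys standing in for Mellea's ModelOption constants.
MAX_NEW_TOKENS = "@max_new_tokens"
TEMPERATURE = "@temperature"

# Ollama option name -> Mellea sentinel key.
to_mellea_model_opts_map = {
    "num_predict": MAX_NEW_TOKENS,
    "temperature": TEMPERATURE,
}

# Mellea sentinel key -> Ollama option name (the inverse direction).
from_mellea_model_opts_map = {v: k for k, v in to_mellea_model_opts_map.items()}


def translate(opts: dict, mapping: dict) -> dict:
    """Rename option keys via the mapping; unknown keys pass through unchanged."""
    return {mapping.get(k, k): v for k, v in opts.items()}
```

Translating in one direction and then the other round-trips the original options.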
FUNC is_model_available
Args:
model_name: The name of the model to check for (e.g., "llama2").
- True if the model is available, False otherwise.
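A rough sketch of what such a check involves, using an assumed list of installed model tags rather than a live ollama.Client (the bare-name-matches-any-tag rule is an assumption, not the documented behavior):

```python
def is_model_available(installed_tags: list[str], model_name: str) -> bool:
    """Illustrative only: treat a bare name like "llama2" as matching any tag
    of that model (e.g. "llama2:latest"); the real method queries the Ollama
    server for its installed models."""
    for tag in installed_tags:
        # Exact match, or match on the portion before the ":" tag separator.
        if tag == model_name or tag.split(":", 1)[0] == model_name:
            return True
    return False
```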
FUNC generate_from_chat_context
Interprets the [Context](../core/base#class-context) as a chat history and uses the ollama.Client.chat()
interface to generate a completion. Returns a thunk that lazily resolves
to the model output.
Args:
action: The component or content block to generate a completion for.
ctx: The current generation context (must be a chat context).
_format: Optional Pydantic model class for structured output decoding.
model_options: Per-call model options.
tool_calls: If True, expose available tools and parse responses.
- ModelOutputThunk[C]: A thunk holding the (lazy) model output.
Raises:
RuntimeError: If not called from a thread with a running event loop.
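The thunk returned here follows the familiar lazy-evaluation pattern. A minimal stand-in (not the real ModelOutputThunk interface) can be sketched as a wrapper that defers its computation until first access:

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class LazyThunk(Generic[T]):
    """Minimal sketch of a lazily resolved output: the compute callable runs
    once, on first access, and the result is cached for later reads."""

    def __init__(self, compute: Callable[[], T]) -> None:
        self._compute = compute
        self._resolved = False
        self._value: Optional[T] = None

    def value(self) -> T:
        if not self._resolved:
            self._value = self._compute()
            self._resolved = True
        return self._value
```

Construction is cheap; the (potentially expensive) model call happens only when the value is actually needed.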
FUNC generate_from_raw
Args:
actions: Actions to generate completions for.
ctx: The current generation context.
format: Optional Pydantic model for structured output decoding.
model_options: Per-call model options.
tool_calls: Ignored; tool calling is not supported on this endpoint.
- list[ModelOutputThunk]: A list of model output thunks, one per action.
FUNC processing
Processes a single streamed ollama.ChatResponse chunk. Also
extracts tool call requests inline and merges the chunk into the running
aggregated response stored in mot._meta["chat_response"].
Args:
mot: The output thunk being populated.
chunk: A single chat response object from Ollama.
tools: Available tools, keyed by name, used for extracting tool call requests from the response.
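A simplified picture of the per-chunk processing, with plain dicts standing in for the real thunk and chunk objects (all attribute names here are assumptions for illustration):

```python
def processing(mot: dict, chunk: dict, tools: dict) -> None:
    """Illustrative sketch: append streamed text, record tool-call requests
    for known tools, and keep an aggregated copy of the response in
    mot["_meta"]["chat_response"]."""
    message = chunk.get("message", {})
    # Accumulate streamed text.
    mot["value"] = mot.get("value", "") + message.get("content", "")
    # Extract tool-call requests whose names match an available tool.
    for call in message.get("tool_calls", []):
        name = call.get("name")
        if name in tools:
            mot.setdefault("tool_calls", {})[name] = call.get("arguments", {})
    # Merge the chunk into the running aggregated response.
    meta = mot.setdefault("_meta", {}).setdefault("chat_response", {})
    meta.update({k: v for k, v in chunk.items() if k != "message"})
    meta.setdefault("message", {"content": ""})
    meta["message"]["content"] = mot["value"]
```

Feeding two chunks through leaves the thunk holding the full text, the parsed tool call, and the merged metadata.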
FUNC post_processing
Args:
mot: The output thunk to finalize.
conversation: The chat conversation sent to the model, used for logging.
tools: Available tools, keyed by name.
_format: The structured output format class used during generation, if any.
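Finalization with a structured output format can be sketched as parsing the accumulated text against the requested schema. Plain JSON decoding stands in here for Pydantic validation (with Pydantic this would be `_format.model_validate_json(...)`), and the dict-based thunk is illustrative:

```python
import json


def post_processing(mot: dict, conversation: list, tools: dict, _format=None) -> None:
    """Illustrative: note the conversation size for logging purposes and,
    if a structured format was requested, parse the raw text into it
    (simplified here to plain JSON decoding)."""
    mot["log"] = f"{len(conversation)} messages -> {len(mot.get('value', ''))} chars"
    if _format is not None:
        mot["parsed"] = json.loads(mot["value"])
```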