LLM tool definitions, parsing, and validation for mellea backends. Provides the MelleaTool class (and the @tool decorator shorthand) for wrapping Python callables as OpenAI-compatible tool schemas, with factory methods for LangChain and smolagents interoperability. Also includes helpers for converting tool lists to JSON, extracting tool call requests from raw LLM output strings, and validating/coercing tool arguments against the tool’s JSON schema using Pydantic.

Functions

FUNC tool

tool(func: Callable | None = None, name: str | None = None) -> MelleaTool | Callable[[Callable], MelleaTool]
Decorator to mark a function as a Mellea tool. This decorator wraps a function to make it usable as a tool without requiring explicit MelleaTool.from_callable() calls. The decorated function returns a MelleaTool instance that must be called via .run(). Args:
  • func: The function to decorate (when used without arguments)
  • name: Optional custom name for the tool (defaults to function name)
Returns:
  • A MelleaTool instance. Use .run() to invoke the tool.
  • The returned object passes isinstance(result, MelleaTool) checks.
Examples: Basic usage:
>>> @tool
... def get_weather(location: str, days: int = 1) -> dict:
...     '''Get weather forecast.
...
...     Args:
...         location: City name
...         days: Number of days to forecast
...     '''
...     return {"location": location, "forecast": "sunny"}
>>>
>>> # The decorated function IS a MelleaTool
>>> isinstance(get_weather, MelleaTool)  # True
>>>
>>> # Can be used directly in tools list (no extraction needed)
>>> tools = [get_weather]
>>>
>>> # Must use .run() to invoke the tool
>>> result = get_weather.run(location="Boston")
With custom name (as decorator):
>>> @tool(name="weather_api")
... def get_weather(location: str) -> dict:
...     return {"location": location}
>>>
>>> result = get_weather.run(location="New York")
With custom name (as function):
>>> def new_tool(): ...
>>> differently_named_tool = tool(new_tool, name="different_name")

FUNC add_tools_from_model_options

add_tools_from_model_options(tools_dict: dict[str, AbstractMelleaTool], model_options: dict[str, Any])
If model_options has tools, add those tools to the tools_dict. Accepts MelleaTool instances or @tool decorated functions. Args:
  • tools_dict: Mutable mapping of tool name to tool instance; modified in-place.
  • model_options: Model options dict that may contain a ModelOption.TOOLS entry (either a list of MelleaTool or a dict[str, MelleaTool]).
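The merge behavior can be sketched with plain dicts. This is an illustration, not the library's implementation: the literal "tools" key stands in for ModelOption.TOOLS, and the function name is hypothetical.

```python
def add_tools_sketch(tools_dict, model_options, tools_key="tools"):
    """Merge tools from a model-options dict into tools_dict, in place.

    Mirrors the documented behavior: accepts either a list of tool objects
    (keyed by their .name attribute) or an already-keyed dict of tools.
    """
    tools = model_options.get(tools_key)
    if tools is None:
        return  # no ModelOption.TOOLS entry: leave tools_dict untouched
    if isinstance(tools, dict):
        tools_dict.update(tools)  # dict[str, MelleaTool] case
    else:
        for t in tools:  # list[MelleaTool] case
            tools_dict[t.name] = t
```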

FUNC add_tools_from_context_actions

add_tools_from_context_actions(tools_dict: dict[str, AbstractMelleaTool], ctx_actions: list[Component | CBlock] | None)
If any of the actions in ctx_actions have tools in their template_representation, add those to the tools_dict. Args:
  • tools_dict: Mutable mapping of tool name to tool instance; modified in-place.
  • ctx_actions: List of [Component](../core/base#class-component) or [CBlock](../core/base#class-cblock) objects whose template representations may declare tools, or None to skip.

FUNC convert_tools_to_json

convert_tools_to_json(tools: dict[str, AbstractMelleaTool]) -> list[dict]
Convert tools to json dict representation. Args:
  • tools: Mapping of tool name to AbstractMelleaTool instance.
Returns:
  • List of OpenAI-compatible JSON tool schema dicts, one per tool.
Notes:
  • The Hugging Face transformers library lets you pass in an array of functions but does not accept methods.
  • Watsonx.ai demos use convert_to_openai_tool from langchain_ibm.chat_models, which produces the same values.
  • OpenAI uses the same format / schema.
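For reference, each entry in the returned list follows the OpenAI function-calling shape. The dict below is an illustrative example of that shape (field values are made up, not output of the library):

```python
# One entry of the list returned by convert_tools_to_json, in the
# OpenAI-compatible "function" tool format.
weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather forecast.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "days": {"type": "integer", "description": "Number of days"},
            },
            "required": ["location"],  # params without defaults
        },
    },
}
```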

FUNC json_extraction

json_extraction(text: str) -> Generator[dict, None, None]
Yield the next valid JSON object found in a given string. Args:
  • text: Input string potentially containing one or more JSON objects.
Returns:
  • A generator that yields each valid JSON object found in text, in order of appearance.
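The scanning behavior can be approximated with a stdlib-only sketch built on json.JSONDecoder.raw_decode, which parses one JSON value from a given offset. This is an illustration of the technique, not the library's actual implementation:

```python
import json

def iter_json_objects(text):
    """Yield each parseable JSON object embedded in text, in order."""
    decoder = json.JSONDecoder()
    idx = 0
    while (start := text.find("{", idx)) != -1:
        try:
            # raw_decode parses a complete JSON value starting at `start`
            # and returns (object, index-just-past-it).
            obj, end = decoder.raw_decode(text, start)
        except json.JSONDecodeError:
            idx = start + 1  # not valid JSON here; keep scanning
            continue
        yield obj
        idx = end
```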

FUNC find_func

find_func(d) -> tuple[str | None, Mapping | None]
Find the first function in a json-like dictionary. Most LLMs output tool requests in the form ...{"name": string, "arguments": {}}... Args:
  • d: A JSON-like Python object (typically a dict) to search for a function call record.
Returns:
  • A (name, args) tuple where name is the tool name string and args is the arguments mapping, or (None, None) if no function call was found.
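A minimal recursive search over a JSON-like structure illustrates the idea: look for a dict carrying both a "name" string and an "arguments" mapping, descending into nested dicts and lists. A sketch, not the library's code:

```python
def find_func_sketch(d):
    """Return the first (name, arguments) pair found in a JSON-like object."""
    if isinstance(d, dict):
        # A function-call record has a string "name" and a dict "arguments".
        if isinstance(d.get("name"), str) and isinstance(d.get("arguments"), dict):
            return d["name"], d["arguments"]
        for v in d.values():  # descend into nested values
            name, args = find_func_sketch(v)
            if name is not None:
                return name, args
    elif isinstance(d, list):
        for v in d:
            name, args = find_func_sketch(v)
            if name is not None:
                return name, args
    return None, None
```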

FUNC parse_tools

parse_tools(llm_response: str) -> list[tuple[str, Mapping]]
A simple parser that scans a string for tool calls and attempts to extract them; it only works for JSON-based outputs. Args:
  • llm_response: Raw string output from a language model.
Returns:
  • List of (tool_name, arguments) tuples for each tool call found.

FUNC validate_tool_arguments

validate_tool_arguments(tool: AbstractMelleaTool, args: Mapping[str, Any], coerce_types: bool = True, strict: bool = False) -> dict[str, Any]
Validate and optionally coerce tool arguments against the tool’s JSON schema. This function validates tool call arguments extracted from LLM responses against the tool’s JSON schema from as_json_tool. It can automatically coerce common type mismatches (e.g., string “30” to int 30) and provides detailed error messages. Args:
  • tool: The MelleaTool instance to validate against
  • args: Raw arguments from model (post-JSON parsing)
  • coerce_types: If True, attempt type coercion for common cases (default: True)
  • strict: If True, raise ValidationError on failures; if False, log warnings and return original args (default: False)
Returns:
  • Validated and optionally coerced arguments dict
Raises:
  • ValidationError: If strict=True and validation fails
Examples:
>>> def get_weather(location: str, days: int = 1) -> dict:
...     return {"location": location, "days": days}
>>> tool = MelleaTool.from_callable(get_weather)
>>> # LLM returns days as string
>>> args = {"location": "Boston", "days": "3"}
>>> validated = validate_tool_arguments(tool, args)
>>> validated
{'location': 'Boston', 'days': 3}
>>> # Strict mode raises on validation errors
>>> bad_args = {"location": "Boston", "days": "not_a_number"}
>>> validate_tool_arguments(tool, bad_args, strict=True)
Traceback (most recent call last):
...
pydantic.ValidationError: ...

FUNC convert_function_to_ollama_tool

convert_function_to_ollama_tool(func: Callable, name: str | None = None) -> OllamaTool
Convert a Python callable to an Ollama-compatible tool schema. Imported from Ollama. Args:
  • func: The Python callable to convert.
  • name: Optional override for the tool name; defaults to func.__name__.
Returns:
  • An OllamaTool instance representing the function as an OpenAI-compatible tool schema.

Classes

CLASS MelleaTool

Tool class to represent a callable tool with an OpenAI-compatible JSON schema. Wraps a Python callable alongside its JSON schema representation so it can be registered with backends that support tool calling (OpenAI, Ollama, HuggingFace, etc.). Args:
  • name: The tool name used for identification and lookup.
  • tool_call: The underlying Python callable to invoke when the tool is run.
  • as_json_tool: The OpenAI-compatible JSON schema dict describing the tool’s parameters.
Methods:

FUNC run

run(self, *args, **kwargs) -> Any
Run the tool with the given arguments. Args:
  • args: Positional arguments forwarded to the underlying callable.
  • kwargs: Keyword arguments forwarded to the underlying callable.
Returns:
  • The return value of the underlying callable.

FUNC as_json_tool

as_json_tool(self) -> dict[str, Any]
Return the tool converted to an OpenAI-compatible JSON object.

FUNC from_langchain

from_langchain(cls, tool: Any)
Create a MelleaTool from a LangChain tool object. Args:
  • tool: A langchain_core.tools.BaseTool instance to wrap.
Returns:
  • A Mellea tool wrapping the LangChain tool.
Raises:
  • ImportError: If langchain-core is not installed.
  • ValueError: If tool is not a BaseTool instance.

FUNC from_smolagents

from_smolagents(cls, tool: Any)
Create a Tool from a HuggingFace smolagents tool object. Args:
  • tool: A smolagents.Tool instance
Returns:
  • A Mellea tool wrapping the smolagents tool
Raises:
  • ImportError: If smolagents is not installed
  • ValueError: If tool is not a smolagents Tool instance

FUNC from_callable

from_callable(cls, func: Callable, name: str | None = None)
Create a MelleaTool from a plain Python callable. Introspects the callable’s signature and docstring to build an OpenAI-compatible JSON schema automatically. Args:
  • func: The Python callable to wrap as a tool.
  • name: Optional name override; defaults to func.__name__.
Returns:
  • A Mellea tool wrapping the callable.
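The signature-introspection step can be sketched with the stdlib inspect module. This is a simplified illustration of the technique (the real from_callable also parses docstrings for parameter descriptions); the function and type table below are hypothetical:

```python
import inspect

# Map Python annotations to JSON-schema type names (subset, for illustration).
_JSON_TYPES = {str: "string", int: "integer", float: "number",
               bool: "boolean", dict: "object", list: "array"}

def schema_from_callable(func, name=None):
    """Build an OpenAI-compatible tool schema from a callable's signature."""
    sig = inspect.signature(func)
    props, required = {}, []
    for pname, param in sig.parameters.items():
        props[pname] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(pname)  # no default => required argument
    return {
        "type": "function",
        "function": {
            "name": name or func.__name__,
            "description": inspect.getdoc(func) or "",
            "parameters": {"type": "object", "properties": props,
                           "required": required},
        },
    }
```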

CLASS SubscriptableBaseModel

Pydantic BaseModel subclass that also supports subscript ([]) access. Imported from the Ollama Python client. Allows model fields to be accessed via model["field"] in addition to model.field, which is required for compatibility with Ollama’s internal response parsing.
Methods:

FUNC get

get(self, key: str, default: Any = None) -> Any
Return the value of a field by name, or a default if the field does not exist. Args:
  • key: The field name to look up on the model.
  • default: Value to return when key is not a field on the model. Defaults to None.
Returns:
  • The field value if the attribute exists, otherwise default.
>>> msg = Message(role='user')
>>> msg.get('role')
'user'
>>> msg = Message(role='user')
>>> msg.get('nonexistent')
>>> msg = Message(role='user')
>>> msg.get('nonexistent', 'default')
'default'
>>> msg = Message(role='user', tool_calls=[Message.ToolCall(function=Message.ToolCall.Function(name='foo', arguments={}))])
>>> msg.get('tool_calls')[0]['function']['name']
'foo'
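The dual access pattern this class provides can be illustrated with a minimal stand-alone sketch (a plain class here, not a Pydantic model; the class name is hypothetical):

```python
class SubscriptableSketch:
    """Object whose fields are reachable both as obj.field and obj["field"]."""

    def __init__(self, **fields):
        self.__dict__.update(fields)

    def __getitem__(self, key):
        # Subscript access simply delegates to attribute access.
        return getattr(self, key)

    def get(self, key, default=None):
        """Return the field value if it exists, otherwise default."""
        return getattr(self, key, default)
```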

CLASS OllamaTool

Pydantic model for an Ollama-compatible tool schema, imported from the Ollama Python SDK. Represents the JSON structure that Ollama (and OpenAI-compatible endpoints) expect when a tool is passed to the chat API. Mellea builds these objects internally via convert_function_to_ollama_tool and never exposes them to end users directly. Attributes:
  • type: Tool type; always "function" for function-calling tools.
  • function: Nested object containing the function name, description, and parameters schema.