
mellea.backends.litellm

A generic LiteLLM-compatible backend that wraps the OpenAI Python SDK.
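
To make the wrapper concrete, here is a minimal construction sketch. It assumes the constructor accepts a LiteLLM-style model identifier through a `model_id` parameter; verify the actual signature against your installed version.

```python
# Minimal construction sketch. Assumption: the constructor takes a
# LiteLLM-style model identifier via a `model_id` parameter.
from mellea.backends.litellm import LiteLLMBackend

backend = LiteLLMBackend(model_id="ollama/llama3.2")
```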

Classes

LiteLLMBackend

A generic LiteLLM-compatible backend. Methods:

generate_from_context

generate_from_context(self, action: Component | CBlock, ctx: Context)
See generate_from_chat_context.
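
A hedged call sketch follows. The import path for `CBlock` and `ChatContext`, and the exact return shape (a `ModelOutputThunk`, possibly paired with an updated `Context`), are assumptions to check against your installed version.

```python
# Hedged usage sketch. Assumptions: CBlock and ChatContext live in
# mellea.stdlib.base, and the call returns something containing a
# ModelOutputThunk; verify both against your mellea version.
from mellea.backends.litellm import LiteLLMBackend
from mellea.stdlib.base import CBlock, ChatContext  # assumed locations

backend = LiteLLMBackend(model_id="ollama/llama3.2")
result = backend.generate_from_context(CBlock("Say hello."), ChatContext())
```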

processing

processing(self, mot: ModelOutputThunk, chunk: litellm.ModelResponse | litellm.ModelResponseStream)
Called during generation to add information from a single ModelResponse, or from a streaming chunk (ModelResponseStream), to the ModelOutputThunk. For LiteLLM, tool-call parsing is handled in the post-processing step.
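
The following is an illustrative sketch, not mellea's implementation: it shows the kind of chunk accumulation such a processing step performs, distinguishing a streaming delta from a complete response. The accumulator list stands in for the ModelOutputThunk's internal state.

```python
# Illustrative sketch only (not mellea's implementation). A streaming
# chunk carries new text in choices[0].delta; a full ModelResponse
# carries the whole message in choices[0].message.
import litellm

def processing_sketch(pieces: list[str],
                      chunk: "litellm.ModelResponse | litellm.ModelResponseStream") -> None:
    """Append the text carried by one response (or stream chunk) to `pieces`."""
    choice = chunk.choices[0]
    delta = getattr(choice, "delta", None)
    if delta is not None:
        # Streaming chunk: the new text arrives as a delta (may be empty).
        if delta.content:
            pieces.append(delta.content)
    elif choice.message.content:
        # Complete response: the whole message arrives at once.
        pieces.append(choice.message.content)
```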

post_processing

post_processing(self, mot: ModelOutputThunk, conversation: list[dict], tools: dict[str, Callable], thinking, format)
Called when generation is done.
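
Since tool-call parsing is deferred to this step, here is an illustrative sketch of how tool calls might be extracted from a finished litellm response. The `tools` mapping matches the signature above (tool names to callables); everything else, including the function name, is an assumption.

```python
# Illustrative sketch only: extracting and invoking tool calls from a
# completed litellm response, which follows the OpenAI tool-call format.
import json
from typing import Any, Callable

def parse_tool_calls_sketch(response, tools: dict[str, Callable]) -> list[Any]:
    results = []
    message = response.choices[0].message
    for call in message.tool_calls or []:
        fn = tools.get(call.function.name)
        if fn is not None:
            # Arguments arrive as a JSON-encoded string.
            args = json.loads(call.function.arguments or "{}")
            results.append(fn(**args))  # invoke the matched tool
    return results
```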