mellea.stdlib.session
Mellea Sessions.
Functions
get_session
Return the currently active session.
Raises:
RuntimeError: If no session is currently active.
backend_name_to_class
start_session
Create a new Mellea session. When used as a context manager (i.e., in a with statement), it automatically
sets the session as the current active session for use with convenience functions
like instruct(), chat(), query(), and transform(). When called directly,
it returns a session object whose methods can be used directly.
Args:
backend_name: The backend to use. Options are:
- "ollama": Use Ollama backend for local models
- "hf" or "huggingface": Use HuggingFace transformers backend
- "openai": Use OpenAI API backend
- "watsonx": Use IBM WatsonX backend
- "litellm": Use the LiteLLM backend
model_id: Model identifier or name. Can be a ModelIdentifier from mellea.backends.model_ids or a string model name.
ctx: Context manager for conversation history. Defaults to SimpleContext(). Use ChatContext() for chat-style conversations.
model_options: Additional model configuration options that will be passed to the backend (e.g., temperature, max_tokens, etc.).
**backend_kwargs: Additional keyword arguments passed to the backend constructor.
- A session object that can be used as a context manager or called directly with session methods.
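The two calling styles described above can be sketched as follows. This is illustrative only: the backend name is one of the documented options, and the function is defined but not executed here, since running it requires mellea to be installed and a local Ollama server.

```python
def summarize(text: str):
    """Sketch of the two documented usage styles of start_session.

    Requires mellea and a running Ollama server, so this function is
    defined here but not called.
    """
    import mellea  # imported lazily so the sketch parses without mellea installed

    # Style 1: context manager. The session becomes the current active
    # session, so convenience functions like instruct() and chat() find it.
    with mellea.start_session(backend_name="ollama") as m:
        summary = m.instruct(f"Summarize in one sentence: {text}")

    # Style 2: direct call. The returned session object is used directly.
    m = mellea.start_session(backend_name="ollama")
    return m.instruct(f"Summarize in one sentence: {text}")
```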
Classes
MelleaSession
Mellea sessions are a THIN wrapper around m convenience functions with NO special semantics.
Using a Mellea session is not required, but it does represent the "happy path" of Mellea programming. Some nice things about using a MelleaSession:
- In most cases you want to keep a Context together with the Backend from which it came.
- You can directly run an instruction or send a chat, instead of first creating the Instruction or Chat object and then later calling backend.generate on the object.
- The context is "threaded through" for you, which allows you to issue a sequence of commands instead of first calling backend.generate on something and then appending it to your context.
You can also forgo MelleaSessions and manage your Context and Backend directly.
Note: we put the instruct, validate, and other convenience functions here instead of in Context or Backend to avoid import resolution issues.
Methods:
reset
cleanup
act
Args:
action: The Component from which to generate.
requirements: Used as additional requirements when a sampling strategy is provided.
strategy: A SamplingStrategy that describes the strategy for validating and repairing/retrying for the instruct-validate-repair pattern. None means that no particular sampling strategy is used.
return_sampling_results: Attach the (successful and failed) sampling attempts to the results.
format: If set, the BaseModel to use for constrained decoding.
model_options: Additional model options, which will upsert into the model/backend's defaults.
tool_calls: If true, tool calling is enabled.
- A ModelOutputThunk if return_sampling_results is False, else a SamplingResult.
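As a sketch, the instruct-validate-repair pattern with act might look like the following. RejectionSamplingStrategy and its loop_budget argument are assumed names from mellea's sampling module; the function is defined but not executed, since it needs an active session and a live backend.

```python
def act_with_sampling(m, component):
    """Sketch: act() with a sampling strategy and sampling results attached.

    `m` is an active MelleaSession and `component` a Component instance.
    RejectionSamplingStrategy / loop_budget are assumed names; adjust to
    your installed mellea version.
    """
    from mellea.stdlib.sampling import RejectionSamplingStrategy  # assumed import path

    result = m.act(
        component,
        strategy=RejectionSamplingStrategy(loop_budget=3),  # retry/repair up to 3 times
        return_sampling_results=True,  # yields a SamplingResult rather than a ModelOutputThunk
    )
    return result
```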
instruct
Args:
description: The description of the instruction.
requirements: A list of requirements that the instruction can be validated against.
icl_examples: A list of in-context-learning examples that the instruction can be validated against.
grounding_context: A list of grounding contexts that the instruction can use. They can bind as variables using a (key: str, value: str | ContentBlock) tuple.
user_variables: A dict of user-defined variables used to fill in Jinja placeholders in other parameters. This requires that all other provided parameters are provided as strings.
prefix: A prefix string or ContentBlock to use when generating the instruction.
output_prefix: A string or ContentBlock that defines a prefix for the output generation. Usually you do not need this.
strategy: A SamplingStrategy that describes the strategy for validating and repairing/retrying for the instruct-validate-repair pattern. None means that no particular sampling strategy is used.
return_sampling_results: Attach the (successful and failed) sampling attempts to the results.
format: If set, the BaseModel to use for constrained decoding.
model_options: Additional model options, which will upsert into the model/backend's defaults.
tool_calls: If true, tool calling is enabled.
images: A list of images to be used in the instruction, or None.
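A minimal sketch of instruct() with requirements and user_variables, per the parameter descriptions above. The prompt and requirement strings are illustrative; the function is defined but not executed, since it needs an active session and a live backend.

```python
def write_bio(m):
    """Sketch: instruct() with a requirement and Jinja user_variables.

    `m` is an active MelleaSession. Because user_variables is used, the
    other provided parameters are plain strings containing {{...}}
    placeholders, as the docs above require.
    """
    return m.instruct(
        "Write a short bio of {{name}}.",
        requirements=["The bio must be under 50 words."],
        # Fills the {{name}} placeholder in the description above.
        user_variables={"name": "Ada Lovelace"},
    )
```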
chat
validate
query
Args:
obj: The object to be queried. It should be an instance of MObject or can be converted to one if necessary.
query: The string representing the query to be executed against the object.
format: Format for output parsing.
model_options: Model options to pass to the backend.
tool_calls: If true, the model may make tool calls. Defaults to False.
- The result of the query as processed by the backend.
transform
Args:
obj: The object to be transformed. It should be an instance of MObject or can be converted to one if necessary.
transformation: The string representing the transformation to be executed against the object.
format: Format for output parsing; usually not needed with transform.
model_options: Model options to pass to the backend.
- ModelOutputThunk | Any: The result of the transformation as processed by the backend. If no tools were called, the return type will always be ModelOutputThunk. If a tool was called, the return type will be the return type of the function called, usually the type of the object passed in.
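Taken together, query() and transform() might be used as sketched below. The object and prompt strings are illustrative; the function is defined but not executed, since it needs an active session, a suitable object, and a live backend.

```python
def inspect_report(m, report):
    """Sketch: query() asks about an object; transform() rewrites it.

    `m` is an active MelleaSession; `report` is an MObject (or an object
    that can be converted to one, per the docs above).
    """
    # query(): a read-only question about the object.
    findings = m.query(report, "What are the three main findings?")

    # transform(): returns a ModelOutputThunk, or, if a tool was called,
    # the tool's return value (usually the type of the object passed in).
    shorter = m.transform(report, "Shorten this report to one paragraph.")
    return findings, shorter
```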
last_prompt
- A string if the last prompt was a raw call to the model, or a list of messages (as role-message dicts); None if no prompt could be found.