# Mellea

## Docs

- [Building Custom Components](https://docs.mellea.ai/advanced/custom-components.md): Implement the Component Protocol to create reusable, testable generative building blocks.
- [Inference-Time Scaling](https://docs.mellea.ai/advanced/inference-time-scaling.md): Control how Mellea generates and validates outputs: rejection sampling, SOFAI, budget forcing, and majority voting.
- [Intrinsics](https://docs.mellea.ai/advanced/intrinsics.md): Adapter-accelerated RAG quality checks using LoRA/aLoRA adapters with Granite models.
- [LoRA and aLoRA adapters](https://docs.mellea.ai/advanced/lora-and-alora-adapters.md): Train lightweight adapters on your own labeled data and use them as requirement validators in Mellea programs.
- [Mellea Core Internals](https://docs.mellea.ai/advanced/mellea-core-internals.md): The three core data structures and abstraction layers underlying every Mellea program.
- [Prefix Caching and KV Blocks](https://docs.mellea.ai/advanced/prefix-caching-and-kv-blocks.md): Reuse KV cache state across calls to eliminate redundant prefill work on LocalHFBackend.
- [Security and Taint Tracking](https://docs.mellea.ai/advanced/security-and-taint-tracking.md): Use GuardianCheck with IBM Granite Guardian to validate LLM outputs for safety risks.
- [Template formatting](https://docs.mellea.ai/advanced/template-formatting.md): How Mellea's TemplateFormatter converts Python objects into model-ready text using Jinja2 templates.
- [Mellea API Reference](https://docs.mellea.ai/api-reference.md): Complete reference for Mellea's Python API and command-line tools.
- [cli.alora.commands](https://docs.mellea.ai/api/cli/alora/commands.md): Typer sub-application for the `m alora` command group.
- [cli.alora.intrinsic_uploader](https://docs.mellea.ai/api/cli/alora/intrinsic_uploader.md): Upload a trained adapter to Hugging Face Hub in the intrinsic directory layout.
- [cli.alora.readme_generator](https://docs.mellea.ai/api/cli/alora/readme_generator.md): LLM-assisted generator for adapter intrinsic README files.
- [cli.alora.train](https://docs.mellea.ai/api/cli/alora/train.md): Fine-tune a causal language model to produce a LoRA or aLoRA adapter.
- [cli.alora.upload](https://docs.mellea.ai/api/cli/alora/upload.md): Upload a trained LoRA or aLoRA adapter to Hugging Face Hub.
- [cli.decompose.decompose](https://docs.mellea.ai/api/cli/decompose/decompose.md): Implementation of the `m decompose run` CLI command.
- [cli.eval.commands](https://docs.mellea.ai/api/cli/eval/commands.md): Use the eval command to run LLM-as-a-judge evaluation over one or more test files of prompts, instructions, and optional targets.
- [cli.eval](https://docs.mellea.ai/api/cli/eval/eval.md): CLI package for test-based LLM evaluation.
- [cli.eval.runner](https://docs.mellea.ai/api/cli/eval/runner.md): Execution engine for the test-based LLM evaluation pipeline.
- [cli.fix.commands](https://docs.mellea.ai/api/cli/fix/commands.md): CLI command for `m fix async`.
- [cli.fix.fixer](https://docs.mellea.ai/api/cli/fix/fixer.md): AST-based detection and source transformation for async call fixes.
- [cli.m](https://docs.mellea.ai/api/cli/m.md): Entrypoint for the `m` command-line tool.
- [mellea.backends.adapters.adapter](https://docs.mellea.ai/api/mellea/backends/adapters/adapter.md): Adapter classes for adding fine-tuned modules to inference backends.
- [mellea.backends.backend](https://docs.mellea.ai/api/mellea/backends/backend.md): `FormatterBackend`: base class for prompt-engineering backends.
- [mellea.backends](https://docs.mellea.ai/api/mellea/backends/backends.md): Backend implementations for the mellea inference layer.
- [mellea.backends.cache](https://docs.mellea.ai/api/mellea/backends/cache.md): Cache abstractions and implementations for model state.
- [mellea.backends.model_ids](https://docs.mellea.ai/api/mellea/backends/model_ids.md): `ModelIdentifier` dataclass and a catalog of pre-defined model IDs.
- [mellea.backends.model_options](https://docs.mellea.ai/api/mellea/backends/model_options.md): Common ModelOptions for backend generation.
- [mellea.backends.tools](https://docs.mellea.ai/api/mellea/backends/tools.md): LLM tool definitions, parsing, and validation for mellea backends.
- [mellea.core.backend](https://docs.mellea.ai/api/mellea/core/backend.md): Abstract `Backend` interface and generation-walk utilities.
- [mellea.core.base](https://docs.mellea.ai/api/mellea/core/base.md): Foundational data structures for mellea's generative programming model.
- [mellea.core](https://docs.mellea.ai/api/mellea/core/core.md): Core abstractions for the mellea library.
- [mellea.core.formatter](https://docs.mellea.ai/api/mellea/core/formatter.md): Abstract `Formatter` interface for rendering components to strings.
- [mellea.core.requirement](https://docs.mellea.ai/api/mellea/core/requirement.md): `Requirement` interface for constrained and validated generation.
- [mellea.core.sampling](https://docs.mellea.ai/api/mellea/core/sampling.md): Abstract interfaces for sampling strategies and their results.
- [mellea.core.utils](https://docs.mellea.ai/api/mellea/core/utils.md): Logging utilities for the mellea core library.
- [mellea.formatters.chat_formatter](https://docs.mellea.ai/api/mellea/formatters/chat_formatter.md): `ChatFormatter` for converting context histories to chat-message lists.
- [mellea.formatters](https://docs.mellea.ai/api/mellea/formatters/formatters.md): Formatters for converting components into model-ready prompts.
- [mellea.formatters.granite.base](https://docs.mellea.ai/api/mellea/formatters/granite/base/base.md): Shared data structures and functions for formatting code.
- [mellea.formatters.granite.base.io](https://docs.mellea.ai/api/mellea/formatters/granite/base/io.md): Input and output processing for chat completions-like APIs.
- [mellea.formatters.granite.base.optional](https://docs.mellea.ai/api/mellea/formatters/granite/base/optional.md): Context-manager helpers for gracefully handling optional import dependencies.
- [mellea.formatters.granite.base.types](https://docs.mellea.ai/api/mellea/formatters/granite/base/types.md): Common Pydantic types shared across the Granite formatter package.
- [mellea.formatters.granite.base.util](https://docs.mellea.ai/api/mellea/formatters/granite/base/util.md): Common utility functions for the library and tests.
- [mellea.formatters.granite.granite3.granite32.input](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite32/input.md): Input and output processing for the Granite 3.2 family of models.
- [mellea.formatters.granite.granite3.granite32.output](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite32/output.md): Parser for Granite 3.2 model output.
- [mellea.formatters.granite.granite3.granite32.types](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite32/types.md): Dataclasses for the Granite 3.2 family of models.
- [mellea.formatters.granite.granite3.granite33.input](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite33/input.md): Input and output string processing for the Granite 3.3 family of models.
- [mellea.formatters.granite.granite3.granite33.output](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite33/output.md): Parser that splits Granite 3.3 model output into its constituents.
- [mellea.formatters.granite.granite3.granite33.types](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/granite33/types.md): Dataclasses specific to the Granite 3.3 family of models.
- [mellea.formatters.granite.granite3.types](https://docs.mellea.ai/api/mellea/formatters/granite/granite3/types.md): Type definitions shared within the Granite 3 family of models.
- [mellea.formatters.granite.intrinsics.input](https://docs.mellea.ai/api/mellea/formatters/granite/intrinsics/input.md): Common aspects of input processing for intrinsics.
- [mellea.formatters.granite.intrinsics.output](https://docs.mellea.ai/api/mellea/formatters/granite/intrinsics/output.md): Common aspects of output processing for intrinsics.
- [mellea.formatters.granite.intrinsics.util](https://docs.mellea.ai/api/mellea/formatters/granite/intrinsics/util.md): Common utility functions for the intrinsics package.
- [mellea.formatters.template_formatter](https://docs.mellea.ai/api/mellea/formatters/template_formatter.md): `TemplateFormatter`: Jinja2-template-based formatter for legacy backends.
- [mellea.helpers.async_helpers](https://docs.mellea.ai/api/mellea/helpers/async_helpers.md): Async helper functions for managing concurrent model output thunks.
- [mellea.helpers](https://docs.mellea.ai/api/mellea/helpers/helpers.md): Low-level helpers and utilities supporting mellea backends.
- [mellea.helpers.openai_compatible_helpers](https://docs.mellea.ai/api/mellea/helpers/openai_compatible_helpers.md): Helper functions for working with OpenAI-compatible APIs.
- [mellea.helpers.server_type](https://docs.mellea.ai/api/mellea/helpers/server_type.md): Utilities for detecting and classifying the target inference server.
- [mellea.plugins.base](https://docs.mellea.ai/api/mellea/plugins/base.md): Base types for the Mellea plugin system.
- [mellea.plugins.decorators](https://docs.mellea.ai/api/mellea/plugins/decorators.md): Mellea hook decorator.
- [mellea.plugins.hooks.component](https://docs.mellea.ai/api/mellea/plugins/hooks/component.md): Component lifecycle hook payloads.
- [mellea.plugins.hooks.generation](https://docs.mellea.ai/api/mellea/plugins/hooks/generation.md): Generation pipeline hook payloads.
- [mellea.plugins.hooks.sampling](https://docs.mellea.ai/api/mellea/plugins/hooks/sampling.md): Sampling pipeline hook payloads.
- [mellea.plugins.hooks.session](https://docs.mellea.ai/api/mellea/plugins/hooks/session.md): Session lifecycle hook payloads.
- [mellea.plugins.hooks.tool](https://docs.mellea.ai/api/mellea/plugins/hooks/tool.md): Tool execution hook payloads.
- [mellea.plugins.hooks.validation](https://docs.mellea.ai/api/mellea/plugins/hooks/validation.md): Validation hook payloads.
- [mellea.plugins](https://docs.mellea.ai/api/mellea/plugins/plugins.md): Mellea Plugin System — extension points for policy enforcement, observability, and customization.
- [mellea.plugins.pluginset](https://docs.mellea.ai/api/mellea/plugins/pluginset.md): PluginSet — composable groups of hooks and plugins.
- [mellea.plugins.registry](https://docs.mellea.ai/api/mellea/plugins/registry.md): Plugin registration and helpers.
- [mellea.plugins.types](https://docs.mellea.ai/api/mellea/plugins/types.md): Mellea hook type enum and hook registration.
- [mellea.stdlib.components.chat](https://docs.mellea.ai/api/mellea/stdlib/components/chat.md): Chat primitives: the `Message` and `ToolMessage` components.
- [mellea.stdlib.components.docs.document](https://docs.mellea.ai/api/mellea/stdlib/components/docs/document.md): `Document` component for grounding model inputs with text passages.
- [mellea.stdlib.components.instruction](https://docs.mellea.ai/api/mellea/stdlib/components/instruction.md): `Instruction` component for instruct/validate/repair loops.
- [mellea.stdlib.components.intrinsic.intrinsic](https://docs.mellea.ai/api/mellea/stdlib/components/intrinsic/intrinsic.md): `Intrinsic` component for invoking fine-tuned adapter capabilities.
- [mellea.stdlib.components.mify](https://docs.mellea.ai/api/mellea/stdlib/components/mify.md): The `@mify` decorator for turning Python objects into [`Component`](../../core/base#class-component)s.
- [mellea.stdlib.components.mobject](https://docs.mellea.ai/api/mellea/stdlib/components/mobject.md): `MObject`, `Query`, `Transform`, and `MObjectProtocol` for query/transform workflows.
- [mellea.stdlib.components.simple](https://docs.mellea.ai/api/mellea/stdlib/components/simple.md): `SimpleComponent`: a lightweight named-span component.
- [mellea.stdlib.context](https://docs.mellea.ai/api/mellea/stdlib/context.md): Concrete [`Context`](../core/base#class-context) implementations for common conversation patterns.
- [mellea.stdlib.frameworks.react](https://docs.mellea.ai/api/mellea/stdlib/frameworks/react.md): ReACT (Reason + Act) agentic pattern implementation.
- [mellea.stdlib.functional](https://docs.mellea.ai/api/mellea/stdlib/functional.md): Functions for Mellea operations such as Instruct and Chat.
- [mellea.stdlib.requirements.md](https://docs.mellea.ai/api/mellea/stdlib/requirements/md.md): Requirements for validating Markdown-formatted files.
- [mellea.stdlib.requirements.python_reqs](https://docs.mellea.ai/api/mellea/stdlib/requirements/python_reqs.md): Requirements for Python code generation validation.
- [mellea.stdlib.requirements.requirement](https://docs.mellea.ai/api/mellea/stdlib/requirements/requirement.md): Requirements are a special type of Component used as input to the "validate" step in Instruct/Validate/Repair design patterns.
- [mellea.stdlib.requirements.safety.guardian](https://docs.mellea.ai/api/mellea/stdlib/requirements/safety/guardian.md): Risk checking with Granite Guardian models via existing backends.
- [mellea.stdlib.requirements.tool_reqs](https://docs.mellea.ai/api/mellea/stdlib/requirements/tool_reqs.md): [`Requirement`](../../core/requirement#class-requirement) factories for tool-use validation.
- [mellea.stdlib.sampling.base](https://docs.mellea.ai/api/mellea/stdlib/sampling/base.md): Base sampling strategies.
- [mellea.stdlib.sampling.sampling_algos.budget_forcing_alg](https://docs.mellea.ai/api/mellea/stdlib/sampling/sampling_algos/budget_forcing_alg.md): Budget-forcing generation algorithm for thinking models.
- [mellea.stdlib.sampling.sofai](https://docs.mellea.ai/api/mellea/stdlib/sampling/sofai.md): SOFAI (Slow and Fast AI) sampling strategy.
- [mellea.stdlib.session](https://docs.mellea.ai/api/mellea/stdlib/session.md): `MelleaSession`: the primary entry point for running generative programs.
- [mellea.stdlib](https://docs.mellea.ai/api/mellea/stdlib/stdlib.md): The mellea standard library of components, sessions, and sampling strategies.
- [mellea.stdlib.tools.interpreter](https://docs.mellea.ai/api/mellea/stdlib/tools/interpreter.md): Code interpreter tool and execution environments for agentic workflows.
- [mellea.telemetry.logging](https://docs.mellea.ai/api/mellea/telemetry/logging.md): OpenTelemetry logging instrumentation for Mellea.
- [mellea.telemetry.metrics](https://docs.mellea.ai/api/mellea/telemetry/metrics.md): OpenTelemetry metrics instrumentation for Mellea.
- [mellea.telemetry](https://docs.mellea.ai/api/mellea/telemetry/telemetry.md): OpenTelemetry instrumentation for Mellea.
- [mellea.telemetry.tracing](https://docs.mellea.ai/api/mellea/telemetry/tracing.md): OpenTelemetry tracing instrumentation for Mellea.
- [Building Extensions](https://docs.mellea.ai/community/building-extensions.md): Create custom components, backends, sampling strategies, and requirements to extend Mellea.
- [Code of Conduct](https://docs.mellea.ai/community/code-of-conduct.md): Standards and enforcement for the Mellea community.
- [Contributing to Mellea](https://docs.mellea.ai/community/contributing-guide.md): Development setup, coding standards, and PR process for Mellea contributors.
- [Mellea vs Orchestration Frameworks](https://docs.mellea.ai/concepts/architecture-vs-agents.md): What makes Mellea different from LangChain, smolagents, and other agent frameworks — and how they work together.
- [Context and Sessions](https://docs.mellea.ai/concepts/context-and-sessions.md): How Component, Backend, Context, and Session fit together in Mellea's architecture.
- [Generative Functions](https://docs.mellea.ai/concepts/generative-functions.md): How the @generative decorator turns a Python function signature into an LLM-backed implementation.
- [Generative Programming](https://docs.mellea.ai/concepts/generative-programming.md): The ideas behind Mellea — what generative programs are, why they're hard, and how Mellea addresses those challenges.
- [The Instruction Model](https://docs.mellea.ai/concepts/instruct-validate-repair.md): How instruct(), requirements, and the IVR loop work in Mellea.
- [MObjects and mify](https://docs.mellea.ai/concepts/mobjects-and-mify.md): How the @mify decorator turns any Python class into an LLM-queryable object with controlled field and method exposure.
- [Plugins & Hooks](https://docs.mellea.ai/concepts/plugins.md): Intercept and customize Mellea's execution pipeline with plugins — enforce policies, add observability, and transform payloads at well-defined hook points.
- [The Requirements System](https://docs.mellea.ai/concepts/requirements-system.md): How Requirement, ValidationResult, and the IVR loop work together to enforce constraints on generative output.
- [Evaluate with LLM-as-a-Judge](https://docs.mellea.ai/evaluation-and-observability/evaluate-with-llm-as-a-judge.md): Use the LLM itself to evaluate output quality — inline as a requirement, or as a standalone validation pass.
- [Logging](https://docs.mellea.ai/evaluation-and-observability/logging.md): Configure Mellea's console logging and export logs to OTLP collectors.
- [Metrics](https://docs.mellea.ai/evaluation-and-observability/metrics.md): Collect token usage metrics and instrument your own code with OpenTelemetry counters, histograms, and up-down counters.
- [Telemetry](https://docs.mellea.ai/evaluation-and-observability/telemetry.md): Add OpenTelemetry tracing, metrics, and logging to Mellea programs.
- [Tracing](https://docs.mellea.ai/evaluation-and-observability/tracing.md): Export distributed traces from Mellea using OpenTelemetry semantic conventions.
- [Data Extraction Pipeline](https://docs.mellea.ai/examples/data-extraction-pipeline.md): Use the @generative decorator with a typed return value to extract structured data from unstructured text in a single declarative function.
- [Examples](https://docs.mellea.ai/examples/index.md): Complete working programs demonstrating Mellea patterns in production-like scenarios.
- [Legacy Code Integration with @mify](https://docs.mellea.ai/examples/legacy-code-integration.md): Apply the @mify decorator to existing Python classes so a Mellea session can act on, query, and transform your objects without rewriting them.
- [Resilient RAG with Fallback Filtering](https://docs.mellea.ai/examples/resilient-rag-fallback.md): Build a retrieval-augmented generation pipeline that uses FAISS for vector search and a @generative relevance filter to remove noise before generation.
- [Traced Generation Loop](https://docs.mellea.ai/examples/traced-generation-loop.md): Enable OpenTelemetry tracing for a multi-operation Mellea session using environment variables, and export spans to Jaeger or any OTLP backend.
- [Installation](https://docs.mellea.ai/getting-started/installation.md): Install Mellea and set up your Python environment.
- [Quick Start](https://docs.mellea.ai/getting-started/quickstart.md): Run your first generative program in minutes.
- [act() and aact()](https://docs.mellea.ai/guide/act-and-aact.md): Work directly with Components using act(), aact(), and the functional API.
- [Backends and Configuration](https://docs.mellea.ai/guide/backends-and-configuration.md): Configure Mellea to use Ollama, OpenAI, LiteLLM, HuggingFace, or WatsonX backends.
- [Generative Functions](https://docs.mellea.ai/guide/generative-functions.md): Define type-safe LLM functions with @generative and Pydantic structured output.
- [Glossary](https://docs.mellea.ai/guide/glossary.md): Definitions of Mellea-specific terms and concepts.
- [m decompose](https://docs.mellea.ai/guide/m-decompose.md): Break complex tasks into ordered, executable subtasks with the m decompose CLI.
- [Tools and Agents](https://docs.mellea.ai/guide/tools-and-agents.md): Give LLMs access to tools, build ReACT agents, and validate tool call arguments.
- [Working with Data](https://docs.mellea.ai/guide/working-with-data.md): Ground instructions with documents, build RAG pipelines, and use MObjects and RichDocument.
- [Build a RAG Pipeline](https://docs.mellea.ai/how-to/build-a-rag-pipeline.md): Combine vector retrieval with Mellea's generative filtering and grounded generation to build a reliable retrieval-augmented generation system.
- [Configure model options](https://docs.mellea.ai/how-to/configure-model-options.md): Set temperature, seed, max tokens, system prompts, and other backend parameters at session level or per call.
- [Enforce Structured Output](https://docs.mellea.ai/how-to/enforce-structured-output.md): Get JSON, Pydantic models, and typed values from LLM calls using @generative and instruct(format=...).
- [Handling Exceptions and Failures](https://docs.mellea.ai/how-to/handling-exceptions.md): Handle SamplingResult failures, PreconditionException, and parse errors gracefully in Mellea programs.
- [Refactor Prompts with the CLI](https://docs.mellea.ai/how-to/refactor-prompts-with-cli.md): Use m decompose to break a complex prompt into typed, validated generative functions.
- [Unit Test Generative Code](https://docs.mellea.ai/how-to/unit-test-generative-code.md): Write reliable tests for @generative functions using pytest markers and output validation.
- [Async and Streaming](https://docs.mellea.ai/how-to/use-async-and-streaming.md): Use async methods, parallel generation, and streaming output with Mellea.
- [Context and Sessions](https://docs.mellea.ai/how-to/use-context-and-sessions.md): Extend MelleaSession to add custom validation, logging, and filtering behavior.
- [Use Images and Vision Models](https://docs.mellea.ai/how-to/use-images-and-vision.md): Pass images to instruct() and chat() calls, and configure vision-capable backends.
- [Write Custom Verifiers](https://docs.mellea.ai/how-to/write-custom-verifiers.md): Write validation functions that inspect LLM output and return pass/fail results with repair guidance.
- [AWS Bedrock](https://docs.mellea.ai/integrations/bedrock.md): Run Mellea with AWS Bedrock models using the Bedrock Mantle backend or LiteLLM.
- [HuggingFace Transformers](https://docs.mellea.ai/integrations/huggingface.md): Run Mellea on local hardware with LocalHFBackend and HuggingFace Transformers.
- [LangChain](https://docs.mellea.ai/integrations/langchain.md): Use LangChain tools inside Mellea and seed a Mellea session with LangChain message history.
- [m serve](https://docs.mellea.ai/integrations/m-serve.md): Run a Mellea program as an OpenAI-compatible chat endpoint with m serve.
- [MCP Integration](https://docs.mellea.ai/integrations/mcp.md): Expose Mellea functions as Model Context Protocol tools, callable from Claude Desktop, Cursor, and any MCP-compatible client.
- [Ollama](https://docs.mellea.ai/integrations/ollama.md): Run Mellea with local models via Ollama — the default backend.
- [OpenAI and OpenAI-Compatible APIs](https://docs.mellea.ai/integrations/openai.md): Use Mellea with OpenAI's API and any OpenAI-compatible endpoint — LM Studio, vLLM, Anthropic, and more.
- [smolagents](https://docs.mellea.ai/integrations/smolagents.md): Use HuggingFace smolagents tools inside a Mellea session.
- [Vertex AI](https://docs.mellea.ai/integrations/vertex-ai.md): Connect Mellea to Google Vertex AI models via LiteLLM.
- [IBM WatsonX](https://docs.mellea.ai/integrations/watsonx.md): Run Mellea with IBM WatsonX AI using the WatsonxAIBackend.
- [Common Errors](https://docs.mellea.ai/troubleshooting/common-errors.md): Common errors, diagnostic steps, and fixes for Mellea programs.
- [FAQ](https://docs.mellea.ai/troubleshooting/faq.md): Answers to frequently asked questions about Mellea installation, backends, and generative functions.
- [Tutorial: Your First Generative Program](https://docs.mellea.ai/tutorials/01-your-first-generative-program.md): Build a document analysis pipeline step by step — from a single instruct() call to a composed, typed, validated generative program.
- [Tutorial: Streaming and Async](https://docs.mellea.ai/tutorials/02-streaming-and-async.md): Make LLM calls non-blocking, stream tokens as they arrive, and process batches concurrently.
- [Tutorial: Using Generative Slots](https://docs.mellea.ai/tutorials/03-using-generative-slots.md): Replace ad-hoc instruct() calls with typed, composable @generative functions.
- [Tutorial: Making Agents Reliable](https://docs.mellea.ai/tutorials/04-making-agents-reliable.md): Add requirements validation and Guardian safety checks to a ReACT tool-using agent.
- [Tutorial: Mifying Legacy Code](https://docs.mellea.ai/tutorials/05-mifying-legacy-code.md): Add LLM query and transform capabilities to existing Python classes without rewriting them.

## OpenAPI Specs

- [openapi](https://docs.mellea.ai/api-reference/openapi.json)