Input and output processing for the Granite 3.2 family of models.

Classes

CLASS Granite32InputProcessor

Input processor for version 3.2 of the main Granite models, all sizes. This input processor is based on the Jinja template that was used during supervised fine-tuning of these models. The template is as follows:
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set system_message = "Knowledge Cutoff Date: April 2024.\nToday's Date: "
      + strftime_now('%B %d, %Y') + ".\nYou are Granite, developed by IBM." %}
    {%- if tools and documents %}
            {%- set system_message = system_message + " You are a helpful AI
            assistant with access to the following tools.
              When a tool is required to answer the user's query, respond with
              <|tool_call|> followed by a JSON list of tools used. If a tool does
              not exist in the provided list of tools, notify the user that you do
              not have the ability to fulfill the request.\n\nWrite the response to
              the user's input by strictly aligning with the facts in the provided
              documents. If the information needed to answer the question is not
              available in the documents, inform the user that the question cannot
              be answered based on the available data." %}
    {%- elif tools %}
            {%- set system_message = system_message + " You are a helpful AI
            assistant with access to the following tools. When a tool is required to
            answer the user's query, respond with <|tool_call|> followed by a JSON
            list of tools used. If a tool does not exist in the provided list of
            tools, notify the user that you do not have the ability to fulfill the
            request." %}
    {%- elif documents %}
            {%- set system_message = system_message + " Write the response to the
            user's input by strictly aligning with the facts in the provided
            documents. If the information needed to answer the question is not
            available in the documents, inform the user that the question cannot be
            answered based on the available data." %}
    {%- elif thinking %}
            {%- set system_message = system_message + " You are a helpful AI
            assistant.\nRespond to every user query in a comprehensive and detailed
            way. You can write down your thoughts and reasoning process before
            responding. In the thought process, engage in a comprehensive cycle of
            analysis, summarization, exploration, reassessment, reflection,
            backtracing, and iteration to develop well-considered thinking process.
            In the response section, based on various attempts, explorations, and
            reflections from the thoughts section, systematically present the final
            solution that you deem correct. The response should summarize the
            thought process. Write your thoughts after 'Here is my thought process:'
            and write your response after 'Here is my response:' for each user
            query." %}
    {%- else %}
            {%- set system_message = system_message + " You are a helpful AI
            assistant." %}
    {%- endif %}
    {%- if 'citations' in controls and documents %}
        {%- set system_message = system_message + '\n\nIn your response, use the
        symbols <co> and </co> to indicate when a fact comes from a document in the
        search result, e.g <co>0</co> for a fact from document 0. Afterwards, list
        all the citations with their corresponding documents in an ordered list.' %}
    {%- endif %}
    {%- if 'hallucinations' in controls and documents %}
        {%- set system_message = system_message + '\n\nFinally, after the response
        is written, include a numbered list of sentences from the response that are
        potentially hallucinated and not based in the documents.' %}
    {%- endif %}
    {%- set loop_messages = messages %}
{%- endif %}
{{- '<|start_of_role|>system<|end_of_role|>' + system_message +
    '<|end_of_text|>\n' }}
{%- if tools %}
    {{- '<|start_of_role|>tools<|end_of_role|>' }}
    {{- tools | tojson(indent=4) }}
    {{- '<|end_of_text|>\n' }}
{%- endif %}
{%- if documents %}
    {{- '<|start_of_role|>documents<|end_of_role|>' }}
    {%- for document in documents %}
        {{- 'Document ' + loop.index0 | string + '\n' }}
        {{- document['text'] }}
        {%- if not loop.last %}
            {{- '\n\n' }}
        {%- endif %}
    {%- endfor %}
    {{- '<|end_of_text|>\n' }}
{%- endif %}
{%- for message in loop_messages %}
    {{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' +
    message['content'] + '<|end_of_text|>\n' }}
    {%- if loop.last and add_generation_prompt %}
        {{- '<|start_of_role|>assistant' }}
        {%- if controls %}
            {{- ' ' + controls | tojson() }}
        {%- endif %}
        {{- '<|end_of_role|>' }}
    {%- endif %}
{%- endfor %}
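For orientation, the core of the template (default system message, per-message role blocks, and the trailing generation prompt) can be sketched in plain Python. This is an illustrative re-implementation, not the library's code; the `build_prompt` name is hypothetical, and tools, documents, and controls handling are omitted.

```python
from datetime import datetime

# Default system message from the template's else-branch, plus the
# plain-assistant suffix used when no tools/documents/thinking are set.
SYSTEM_DEFAULT = (
    "Knowledge Cutoff Date: April 2024.\nToday's Date: "
    + datetime.now().strftime("%B %d, %Y")
    + ".\nYou are Granite, developed by IBM."
    + " You are a helpful AI assistant."
)

def build_prompt(messages, add_generation_prompt=True):
    """Sketch of the template's message loop: each message becomes a
    <|start_of_role|>role<|end_of_role|>content<|end_of_text|> block."""
    if messages and messages[0]["role"] == "system":
        system, rest = messages[0]["content"], messages[1:]
    else:
        system, rest = SYSTEM_DEFAULT, messages
    parts = [f"<|start_of_role|>system<|end_of_role|>{system}<|end_of_text|>\n"]
    for m in rest:
        parts.append(
            f"<|start_of_role|>{m['role']}<|end_of_role|>"
            f"{m['content']}<|end_of_text|>\n"
        )
    if add_generation_prompt:
        # Trailing assistant header with no <|end_of_text|> triggers generation.
        parts.append("<|start_of_role|>assistant<|end_of_role|>")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "What is 2 + 2?"}])
print(prompt)
```

Note that the generation prompt ends after `<|end_of_role|>` with no closing `<|end_of_text|>`, which is what cues the model to continue in the assistant role.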
Methods:

FUNC sanitize

sanitize(cls, chat_completion, parts='all')
Sanitize the chat completion by removing Granite 3.2 special tokens. Args:
  • chat_completion: The chat completion request to sanitize.
  • parts: Which parts of the chat completion to sanitize; defaults to "all".
Returns:
  • The sanitized chat completion with all Granite 3.2 special tokens removed from the specified parts.
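The token-stripping step can be illustrated on a raw string. The real sanitize() operates on a chat completion request object, not a string; the regex and the strip_granite_tokens helper below are illustrative assumptions covering the special tokens the template emits.

```python
import re

# Special tokens used by the Granite 3.2 template (illustrative list).
SPECIAL_TOKENS = re.compile(
    r"<\|(?:start_of_role|end_of_role|end_of_text|tool_call)\|>"
)

def strip_granite_tokens(text: str) -> str:
    """Remove Granite 3.2 special tokens from a string."""
    return SPECIAL_TOKENS.sub("", text)

print(strip_granite_tokens(
    "<|start_of_role|>user<|end_of_role|>Hi<|end_of_text|>"
))  # -> "userHi"
```

Stripping these tokens from user-supplied content prevents a message from smuggling in role markers that would alter the prompt structure.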

FUNC transform

transform(self, chat_completion: ChatCompletion, add_generation_prompt: bool = True) -> str
Transform the chat completion request into a Granite 3.2 prompt string. Args:
  • chat_completion: The structured chat completion request to convert into a tokenizer-ready prompt string.
  • add_generation_prompt: When True, appends the assistant role header to the end of the prompt to trigger generation. Defaults to True.
Returns:
  • The prompt string formatted for the Granite 3.2 model tokenizer.
Raises:
  • ValueError: If conflicting options are specified, such as enabling thinking mode together with documents, tools, or a custom system message; or enabling citations or hallucinations with a custom system message.
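As the template's final loop shows, any controls are serialized as JSON after the assistant role header of the generation prompt. A minimal sketch of that step, assuming a hypothetical generation_prompt helper:

```python
import json

def generation_prompt(controls=None):
    """Sketch: append controls as JSON after the assistant role header,
    mirroring the `controls | tojson()` step in the template above."""
    header = "<|start_of_role|>assistant"
    if controls:
        header += " " + json.dumps(controls)
    return header + "<|end_of_role|>"

print(generation_prompt())
# -> <|start_of_role|>assistant<|end_of_role|>
print(generation_prompt({"citations": True}))
# -> <|start_of_role|>assistant {"citations": true}<|end_of_role|>
```

This is how options such as citations or hallucinations detection are communicated to the model at generation time, which is also why they conflict with a custom system message: the matching instructions are only added to the default system message.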