Typer sub-application for the m alora command group. Provides three commands: train (fine-tune a base causal language model on a JSONL dataset to produce a LoRA or aLoRA adapter), upload (push adapter weights to Hugging Face Hub, optionally packaging the adapter as an intrinsic with an io.yaml configuration), and add-readme (use an LLM to auto-generate and upload an INTRINSIC_README.md for the trained adapter).

Functions

FUNC alora_train

alora_train(datafile: str = typer.Argument(..., help='JSONL file with item/label pairs'), basemodel: str = typer.Option(..., help='Base model ID or path'), outfile: str = typer.Option(..., help='Path to save adapter weights'), promptfile: str = typer.Option(None, help='Path to load the prompt format file'), adapter: str = typer.Option('alora', help='Adapter type: alora or lora'), device: str = typer.Option('auto', help='Device: auto, cpu, cuda, or mps'), epochs: int = typer.Option(6, help='Number of training epochs'), learning_rate: float = typer.Option(6e-06, help='Learning rate'), batch_size: int = typer.Option(2, help='Per-device batch size'), max_length: int = typer.Option(1024, help='Max sequence length'), grad_accum: int = typer.Option(4, help='Gradient accumulation steps'))
Train an aLoRA or LoRA adapter on your dataset. Args:
  • datafile: JSONL file with item/label pairs for training.
  • basemodel: Base model ID or path.
  • outfile: Path to save adapter weights.
  • promptfile: Path to the prompt format file, or None.
  • adapter: Adapter type: "alora" or "lora".
  • device: Device to train on: "auto", "cpu", "cuda", or "mps".
  • epochs: Number of training epochs.
  • learning_rate: Learning rate for the optimizer.
  • batch_size: Per-device training batch size.
  • max_length: Maximum sequence length.
  • grad_accum: Number of gradient accumulation steps.
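A minimal sketch of preparing a dataset and invoking the command. The exact JSONL key names are an assumption based on the "item/label pairs" help text, and the base model and output paths are placeholders; the training invocation is shown commented rather than executed.

```shell
# Assumed schema: one JSON object per line with "item" (input) and
# "label" (target) keys — the help text says "item/label pairs", but
# the exact key names may differ in your installation.
cat > train.jsonl <<'EOF'
{"item": "A bowl of rice with beans.", "label": "contains carbs"}
{"item": "A glass of water.", "label": "no carbs"}
EOF

# Then, with the package installed (model ID and output path are placeholders):
# m alora train train.jsonl --basemodel org/base-model --outfile ./my-adapter \
#     --adapter alora --device auto --epochs 6 --batch-size 2
```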

FUNC alora_upload

alora_upload(weight_path: str = typer.Argument(..., help='Path to saved adapter weights'), name: str = typer.Option(..., help='Destination model name (e.g., acme/carbchecker-alora)'), intrinsic: bool = typer.Option(default=False, help='True if the uploaded adapter implements an intrinsic. If true, the caller must provide an io.yaml file.'), io_yaml: str = typer.Option(default=None, help='Location of the io.yaml file that configures input and output processing if the model is invoked as an intrinsic.'))
Upload a trained adapter to the Hugging Face Hub. Args:
  • weight_path: Path to saved adapter weights directory.
  • name: Destination model name on Hugging Face Hub (e.g. "acme/carbchecker-alora").
  • intrinsic: If True, the adapter implements an intrinsic and an io.yaml file must also be provided.
  • io_yaml: Path to the io.yaml file configuring input/output processing when the model is invoked as an intrinsic.
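A dry-run sketch of the two upload modes. The adapter directory, Hub repo name, and io.yaml path are placeholders, so the commands are built and echoed rather than executed.

```shell
# Plain adapter upload (placeholders: adapter directory and Hub repo name).
upload_cmd="m alora upload ./my-adapter --name acme/carbchecker-alora"
echo "$upload_cmd"

# When the adapter implements an intrinsic, --intrinsic must be paired with
# an io.yaml that configures input/output processing.
intrinsic_cmd="$upload_cmd --intrinsic --io-yaml ./io.yaml"
echo "$intrinsic_cmd"
```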

FUNC alora_add_readme

alora_add_readme(datafile: str = typer.Argument(..., help='JSONL file with item/label pairs'), basemodel: str = typer.Option(..., help='Base model ID or path'), promptfile: str = typer.Option(None, help='Path to load the prompt format file'), name: str = typer.Option(..., help='Destination model name (e.g., acme/carbchecker-alora)'), hints: str = typer.Option(default=None, help='File containing any additional hints.'), io_yaml: str = typer.Option(default=None, help='Location of the io.yaml file that configures input and output processing if the model is invoked as an intrinsic.'))
Generate and upload an INTRINSIC_README.md for a trained adapter. Args:
  • datafile: JSONL file with item/label pairs used to train the adapter.
  • basemodel: Base model ID or path.
  • promptfile: Path to the prompt format file, or None.
  • name: Destination model name on Hugging Face Hub.
  • hints: Path to a file containing additional domain hints, or None.
  • io_yaml: Path to the io.yaml intrinsic configuration file, or None.
Raises:
  • OSError: If no Hugging Face authentication token is found.
  • SystemExit: If the user declines to upload the generated README.
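Given the OSError above, it is worth checking for a Hub token before running. A dry-run sketch: all names are placeholders, and checking the HF_TOKEN environment variable is an assumption — the command may instead read the standard Hugging Face credential cache populated by `huggingface-cli login`.

```shell
# add-readme needs a Hugging Face token (it raises OSError without one).
# Assumed lookup: the HF_TOKEN env var; the credential cache may also work.
if [ -z "${HF_TOKEN:-}" ]; then
    echo "warning: HF_TOKEN not set; add-readme may fail with OSError" >&2
fi

# Dry run: echo the command rather than execute it (placeholders throughout).
readme_cmd="m alora add-readme train.jsonl --basemodel org/base-model --name acme/carbchecker-alora --io-yaml ./io.yaml"
echo "$readme_cmd"
```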