The `m alora` command group.
Provides three commands: train (fine-tune a base causal language model on a JSONL
dataset to produce a LoRA or aLoRA adapter), upload (push adapter weights to
Hugging Face Hub, optionally packaging the adapter as an intrinsic with an
io.yaml configuration), and add-readme (use an LLM to auto-generate and
upload an INTRINSIC_README.md for the trained adapter).
Functions
FUNC alora_train
datafile: JSONL file with item/label pairs for training.
basemodel: Base model ID or path.
outfile: Path to save adapter weights.
promptfile: Path to load the prompt format file.
adapter: Adapter type; "alora" or "lora".
device: Device to train on: "auto", "cpu", "cuda", or "mps".
epochs: Number of training epochs.
learning_rate: Learning rate for the optimizer.
batch_size: Per-device training batch size.
max_length: Maximum sequence length.
grad_accum: Number of gradient accumulation steps.
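The training data is a JSONL file of item/label pairs: one JSON object per line. A minimal sketch of preparing such a file; the field names `item` and `label` and the example records are assumptions for illustration, taken from the parameter description above:

```python
import json

# Hypothetical training examples: each record pairs an input text
# ("item") with its target output ("label").
examples = [
    {"item": "The cake contains 45g of sugar per slice.", "label": "high-carb"},
    {"item": "Steamed broccoli with olive oil.", "label": "low-carb"},
]

def write_jsonl(records, path):
    # Serialize one JSON object per line, the layout the train command expects.
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(examples, "train.jsonl")
```

The resulting `train.jsonl` can then be passed as the `datafile` argument.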
FUNC alora_upload
weight_path: Path to saved adapter weights directory.
name: Destination model name on Hugging Face Hub (e.g. "acme/carbchecker-alora").
intrinsic: If True, the adapter implements an intrinsic and an io.yaml file must also be provided.
io_yaml: Path to the io.yaml file configuring input/output processing when the model is invoked as an intrinsic.
FUNC alora_add_readme
datafile: JSONL file with item/label pairs used to train the adapter.
basemodel: Base model ID or path.
promptfile: Path to the prompt format file, or None.
name: Destination model name on Hugging Face Hub.
hints: Path to a file containing additional domain hints, or None.
io_yaml: Path to the io.yaml intrinsic configuration file, or None.
Raises
OSError: If no Hugging Face authentication token is found.
SystemExit: If the user declines to upload the generated README.
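The OSError above is raised when no Hugging Face authentication token can be found before uploading. A minimal sketch of that kind of check; the lookup locations (the `HF_TOKEN` environment variable and the default cached token file) are common conventions, not necessarily this command's actual logic:

```python
import os
from pathlib import Path

def find_hf_token():
    # Check the HF_TOKEN environment variable first, then fall back to
    # the token file written by `huggingface-cli login`.
    token = os.environ.get("HF_TOKEN")
    if token:
        return token
    token_file = Path.home() / ".cache" / "huggingface" / "token"
    if token_file.is_file():
        return token_file.read_text().strip()
    raise OSError(
        "No Hugging Face authentication token found; "
        "run `huggingface-cli login` or set HF_TOKEN."
    )

# Hypothetical token for demonstration only.
os.environ["HF_TOKEN"] = "hf_example_token"
print(find_hf_token())
```

Authenticating once up front avoids a failed upload after a long training run.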