PyTorch LoRA Library (loralib)

0.1.2 · active · verified Thu Apr 16

loralib provides a PyTorch implementation of Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method for large deep learning models. It adapts models with performance comparable to full fine-tuning while training only a small fraction of the parameters, which sharply reduces optimizer state and checkpoint size. The library is currently at version 0.1.2 and is maintained by Microsoft; PyPI releases are infrequent, with ongoing development visible mainly in GitHub activity such as checkpoint releases.
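The core idea can be sketched in plain PyTorch: the frozen pretrained weight W0 is augmented with a trainable low-rank product (alpha/r) * B A, so only A and B need gradients. The sizes below are illustrative, not loralib defaults; loralib itself initializes A with Kaiming init and B with zeros so training starts from the unmodified pretrained function.

```python
import torch

d_in, d_out, r = 1024, 1024, 8
alpha = 16                              # LoRA scaling factor

W0 = torch.randn(d_out, d_in)           # frozen pretrained weight
A = torch.zeros(r, d_in)                # trainable low-rank factors
B = torch.zeros(d_out, r)               # (zeros here, so the LoRA term starts at 0)

x = torch.randn(4, d_in)
# LoRA forward pass: h = W0 x + (alpha/r) * B A x
h = x @ W0.T + (alpha / r) * (x @ A.T @ B.T)

full = W0.numel()                       # parameters in the full weight
lora_params = A.numel() + B.numel()     # parameters LoRA actually trains
print(full, lora_params)                # 1048576 vs 16384: a 64x reduction
```

With r much smaller than the layer dimensions, the trainable parameter count scales as r * (d_in + d_out) instead of d_in * d_out.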

Install
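loralib is published on PyPI; installing from the GitHub repository is an alternative for the latest changes.

```shell
pip install loralib
# Alternatively, install the latest revision from the GitHub repository:
# pip install git+https://github.com/microsoft/LoRA
```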

Imports

Quickstart

This quickstart demonstrates how to integrate loralib into an existing PyTorch model. It involves replacing target `nn.Linear` (or `nn.Embedding`, `nn.Conv2d`) layers with their loralib counterparts (`lora.Linear`, `lora.Embedding`, `lora.Conv2d`), marking only the newly introduced LoRA parameters as trainable, and finally saving only these LoRA-specific weights for efficient deployment.

import torch
import torch.nn as nn
import loralib as lora

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 20)
        self.linear2 = nn.Linear(20, 5)

    def forward(self, x):
        return self.linear2(self.linear1(x))

# 1. Instantiate the base model
base_model = MyModel()

# 2. Convert a layer to its LoRA equivalent (rank r=4)
# Note: lora.Linear creates freshly initialized weights, so copy the
# pretrained weight and bias over when adapting an existing model.
old = base_model.linear1
base_model.linear1 = lora.Linear(old.in_features, old.out_features, r=4)
base_model.linear1.weight.data.copy_(old.weight.data)
base_model.linear1.bias.data.copy_(old.bias.data)

# (Optional: Convert more layers)
# base_model.linear2 = lora.Linear(base_model.linear2.in_features, base_model.linear2.out_features, r=4)

# 3. Mark only LoRA parameters as trainable
lora.mark_only_lora_as_trainable(base_model)

# Verify trainable parameters
print("Trainable parameters after LoRA conversion:")
for name, param in base_model.named_parameters():
    if param.requires_grad:
        print(f"  {name}: {param.shape}")

# Example usage (forward pass)
input_tensor = torch.randn(1, 10)
output_tensor = base_model(input_tensor)
print(f"Output shape: {output_tensor.shape}")

# 4. Save only the LoRA-specific state_dict
lora_weights = lora.lora_state_dict(base_model)
# torch.save(lora_weights, 'my_model_lora.pt')
