{"id":8296,"library":"loralib","title":"PyTorch LoRA Library (loralib)","description":"loralib provides a PyTorch implementation of Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method for large deep learning models. It enables adapting models with performance comparable to full fine-tuning while significantly reducing trainable parameters and memory footprint. The library is currently at version 0.1.2 and is maintained by Microsoft; PyPI releases are infrequent, with ongoing development primarily visible on GitHub (e.g., new checkpoint releases).","status":"active","version":"0.1.2","language":"en","source_language":"en","source_url":"https://github.com/microsoft/LoRA","tags":["pytorch","deep-learning","fine-tuning","lora","parameter-efficient-learning","nlp","computer-vision"],"install":[{"cmd":"pip install loralib","lang":"bash","label":"Install from PyPI"}],"dependencies":[{"reason":"Core deep learning framework for LoRA implementation.","package":"torch","optional":false}],"imports":[{"note":"The standard and recommended alias for loralib functions and layers.","symbol":"loralib as lora","correct":"import loralib as lora"},{"note":"To apply LoRA, replace standard PyTorch layers like `nn.Linear` with their `loralib` counterparts.","wrong":"from torch.nn import Linear","symbol":"lora.Linear","correct":"from loralib import Linear"},{"note":"This function freezes all non-LoRA parameters (setting `requires_grad=False`), leaving only the introduced LoRA parameters trainable, which is crucial for parameter-efficient training.","symbol":"lora.mark_only_lora_as_trainable","correct":"import loralib as lora\nlora.mark_only_lora_as_trainable(model)"},{"note":"Used to extract and save only the LoRA-specific parameters from a model, significantly reducing checkpoint size.","symbol":"lora.lora_state_dict","correct":"import torch\nimport loralib as lora\ntorch.save(lora.lora_state_dict(model), 'lora_weights.pt')"}],"quickstart":{"code":"import torch\nimport torch.nn as nn\nimport loralib as lora\n\nclass MyModel(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.linear1 = nn.Linear(10, 20)\n        self.linear2 = nn.Linear(20, 5)\n\n    def forward(self, x):\n        return self.linear2(self.linear1(x))\n\n# 1. Instantiate the base model\nbase_model = MyModel()\n\n# 2. Convert a layer to its LoRA equivalent\n# Replace nn.Linear with lora.Linear, specifying rank 'r'\n# Here, we convert linear1 to a LoRA-enabled layer\nbase_model.linear1 = lora.Linear(base_model.linear1.in_features, base_model.linear1.out_features, r=4)\n\n# (Optional: Convert more layers)\n# base_model.linear2 = lora.Linear(base_model.linear2.in_features, base_model.linear2.out_features, r=4)\n\n# 3. Mark only LoRA parameters as trainable\nlora.mark_only_lora_as_trainable(base_model)\n\n# Verify trainable parameters\nprint(\"Trainable parameters after LoRA conversion:\")\nfor name, param in base_model.named_parameters():\n    if param.requires_grad:\n        print(f\"  {name}: {param.shape}\")\n\n# Example usage (forward pass)\ninput_tensor = torch.randn(1, 10)\noutput_tensor = base_model(input_tensor)\nprint(f\"Output shape: {output_tensor.shape}\")\n\n# 4. Save only the LoRA-specific state_dict\nlora_weights = lora.lora_state_dict(base_model)\n# torch.save(lora_weights, 'my_model_lora.pt')","lang":"python","description":"This quickstart demonstrates how to integrate loralib into an existing PyTorch model. 
It involves replacing target `nn.Linear` (or `nn.Embedding`, `nn.Conv2d`) layers with their `loralib` counterparts (`lora.Linear`, `lora.Embedding`, `lora.Conv2d`), then marking only the newly introduced LoRA parameters as trainable, and finally saving only these LoRA-specific weights for efficient deployment."},"warnings":[{"fix":"Ensure you are searching specifically for 'loralib python' or 'loralib pytorch' for deep learning applications.","message":"The `loralib` Python package should not be confused with `LoRaLib`, which is an Arduino library for LoRa radio modules. They serve entirely different purposes, and searching broadly for 'LoRa library' can yield irrelevant results.","severity":"gotcha","affected_versions":"All"},{"fix":"For unsupported layer types, either manually implement LoRA adaptation, contribute to loralib, or evaluate alternative PEFT libraries.","message":"loralib directly supports `nn.Linear`, `nn.Embedding`, and `nn.Conv2d` layers for adaptation. If your model contains other types of layers that you wish to apply LoRA to, you might need to implement custom wrappers or manual adaptation, or consider other libraries such as Hugging Face's PEFT, which may offer broader layer support.","severity":"gotcha","affected_versions":"All"},{"fix":"Always keep the original pre-trained model weights. 
LoRA checkpoints are typically small and are 'added' to the base model.","message":"When using `loralib`, you still require the original pre-trained model checkpoint to perform inference or further training, as `loralib` only adds low-rank update matrices and does not store the original model weights.","severity":"gotcha","affected_versions":"All"},{"fix":"Consider migrating to or starting new projects with Hugging Face's `PEFT` library for enhanced features, broader model support, and active development, especially when working with Transformer models.","message":"Hugging Face's `PEFT` (Parameter-Efficient Fine-Tuning) library now offers robust LoRA implementations and is often recommended for integrating LoRA with Hugging Face Transformers models, potentially providing more comprehensive features and broader model compatibility than the original `loralib` package.","severity":"deprecated","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Ensure the `nn.Linear` layer has been replaced with `loralib.Linear` (or `loralib.Embedding`/`loralib.Conv2d`) and that it was constructed with a rank `r > 0`; with `r=0`, `loralib.Linear` creates no `lora_A`/`lora_B` parameters and behaves like a standard layer.","cause":"Attempting to access LoRA-specific parameters (like `lora_A` or `lora_B`) on a standard `torch.nn.Linear` layer that has not been converted to `loralib.Linear`.","error":"AttributeError: 'Linear' object has no attribute 'lora_A'"},{"fix":"After converting your desired layers to `loralib` variants, call `loralib.mark_only_lora_as_trainable(model)` to freeze the base model weights so that only the LoRA-specific parameters retain `requires_grad=True`. 
Verify with `print([n for n, p in model.named_parameters() if p.requires_grad])`.","cause":"This usually indicates that no parameters in the model have `requires_grad=True`, meaning the optimizer has nothing to update, often due to forgetting to call `loralib.mark_only_lora_as_trainable` or converting the wrong layers.","error":"RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn"},{"fix":"If loading LoRA weights, first build the model with the relevant layers replaced by their `loralib` versions, then load the original pre-trained checkpoint with `model.load_state_dict(pretrained_weights, strict=False)` (the converted model also expects the LoRA parameters), and finally load the LoRA state_dict with `model.load_state_dict(lora_weights, strict=False)`. If saving, decide whether you need the full model state_dict or only the LoRA deltas.","cause":"You are trying to load a checkpoint containing only LoRA weights (saved with `lora.lora_state_dict()`) into a full model expecting all original weights, or vice versa.","error":"KeyError: 'SomeLayer.weight' when loading a state_dict saved using lora.lora_state_dict(model)"}]}