Alias-Free Torch
Alias-Free Torch is a Python library that provides simple PyTorch module implementations of Alias-Free GAN (generative adversarial network) concepts: lowpass sinc filters, up/downsampling, and alias-free activation modules. By integrating signal-processing-based alias-free resampling, it aims to reduce aliasing artifacts in generated outputs, which matters for generative models such as GANs and diffusion architectures. The current version is 0.0.6. The project is actively maintained, but it is described as an unofficial implementation and may not align perfectly with the official StyleGAN3 implementation.
Common errors
- ValueError: Minimum cutoff must be larger than zero. / ValueError: A cutoff above 0.5 does not make sense.
  - Cause: The `cutoff` parameter for `LowPassFilter1d` or `LowPassFilter2d` was set to a value less than or equal to zero, or greater than 0.5.
  - Fix: Set `cutoff` to a floating-point value in the range (0, 0.5], e.g., `LowPassFilter1d(cutoff=0.4)`.
- RuntimeError: "kaiser_window" is not implemented for the type ...
  - Cause: The installed PyTorch version is older than 1.7.0, which is when `torch.kaiser_window` and `torch.i0` were introduced.
  - Fix: Upgrade PyTorch to version 1.7.0 or higher, e.g., `pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118` (adjust the CUDA version as needed).
- RuntimeError: Expected 4-dimensional input for 2D convolution, but got 3-dimensional input of size...
  - Cause: The input tensor's shape does not match the dimensions expected by the 1D or 2D operation. The library's TODOs indicate that some operations may only support specific channel dimensions (e.g., `[B, 1, T]` or `[B, 1, H, W]`).
  - Fix: Reshape the input to the expected format. For a 1D operation expecting `[B, C, L]`, pass `[Batch, Channels, Length]`; if an operation expects a single channel, reshape to `[B, 1, L]` or `[B, 1, H, W]` before calling the module.
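Adding the missing channel dimension is usually a one-line fix. The sketch below uses plain PyTorch (`unsqueeze`) with hypothetical tensor sizes:

```python
import torch

# A 1D signal batch missing its channel dimension: [B, L] instead of [B, C, L]
x = torch.randn(4, 64)
x_3d = x.unsqueeze(1)  # -> [B, 1, L], i.e. [4, 1, 64]

# A 2D image batch missing its channel dimension: [B, H, W] instead of [B, C, H, W]
img = torch.randn(4, 32, 32)
img_4d = img.unsqueeze(1)  # -> [B, 1, H, W], i.e. [4, 1, 32, 32]

print(x_3d.shape, img_4d.shape)
```

`unsqueeze(1)` inserts a size-1 channel axis without copying data, so it is cheap to apply right before the module call.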
Warnings
- gotcha The library explicitly states it is an 'unofficial implementation' and its filters and upsample/downsample behavior 'could be different with official implementation' (e.g., StyleGAN3). This may lead to subtle differences in behavior or results compared to original research papers.
- breaking Due to the `v0.0.x` versioning and rapid development, API changes and behavioral modifications are common. For instance, `v0.0.2` involved a 'Rewrite upsample, jinc applied', and `v0.0.3` included 'Bug fix for torch.special / remove print / split pad from conv_transpose', which can be breaking changes in functionality or required arguments.
- gotcha The library requires PyTorch version `torch>=1.7.0` because it depends on `torch.kaiser_window` and `torch.i0`. Pip's dependency checker may not enforce this for 'custom torch users', leading to runtime errors if an older PyTorch version is installed.
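Since pip may not enforce the `torch>=1.7.0` requirement for custom torch builds, a manual guard can fail fast with a clear message. The helper below is a hypothetical sketch, not part of the library's API:

```python
def meets_min_torch(version: str, required=(1, 7)) -> bool:
    """Check a torch version string against a (major, minor) minimum.

    Compares only major.minor and ignores local suffixes like "+cu118";
    pre-release tags are not handled.
    """
    major, minor = version.split("+")[0].split(".")[:2]
    return (int(major), int(minor)) >= required

# Usage: guard before importing alias_free_torch
# import torch
# assert meets_min_torch(torch.__version__), "alias-free-torch needs torch>=1.7.0"
print(meets_min_torch("1.6.0"))        # -> False
print(meets_min_torch("2.1.0+cu118"))  # -> True
```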
Install
- pip install alias-free-torch
Imports
- LowPassFilter1d
from alias_free_torch.filter import LowPassFilter1d
- LowPassFilter2d
from alias_free_torch.filter import LowPassFilter2d
- UpSample1d
from alias_free_torch.resample import UpSample1d
- DownSample1d
from alias_free_torch.resample import DownSample1d
- Activation1d
from alias_free_torch import Activation1d
from alias_free_torch.act import Activation1d
Quickstart
import torch
import torch.nn as nn
from alias_free_torch.act import Activation1d
# Define a simple 1D activation module with ReLU as the base activation
# This will upsample, apply ReLU, then downsample to combat aliasing
activation_module = Activation1d(
activation=nn.ReLU(),
up_ratio=2,
down_ratio=2,
up_kernel_size=12,
down_kernel_size=12
)
# Create a dummy 1D input tensor (Batch, Channels, Length)
# Current versions often expect channel dimension to be 1 for many operations
input_tensor = torch.randn(1, 1, 64)
# Pass the input through the alias-free activation module
output_tensor = activation_module(input_tensor)
print(f"Input tensor shape: {input_tensor.shape}")
print(f"Output tensor shape: {output_tensor.shape}")
# Expected output shape: (1, 1, 64) if up_ratio and down_ratio cancel out