{"id":7917,"library":"alias-free-torch","title":"Alias-Free Torch","description":"Alias-Free Torch is a Python library providing a simple PyTorch module implementation of Alias-Free GAN (Generative Adversarial Network) concepts. It includes Alias-Free GAN style lowpass sinc filters, up/downsampling, and activation modules. The library aims to reduce the aliasing artifacts introduced by pointwise nonlinearities and naive resampling, a concern for generative models such as GANs and diffusion architectures, by applying signal-processing-based alias-free resampling. The current version is 0.0.6 and the project is actively maintained, though it is an unofficial implementation and may not perfectly match the official StyleGAN3 implementation.","status":"active","version":"0.0.6","language":"en","source_language":"en","source_url":"https://github.com/junjun3518/alias-free-torch","tags":["PyTorch","computer-vision","signal-processing","deep-learning","GAN","aliasing"],"install":[{"cmd":"pip install alias-free-torch","lang":"bash","label":"Install stable version"}],"dependencies":[{"reason":"Core deep learning framework; requires `torch.kaiser_window` and `torch.i0`, which were introduced in version 1.7.0.","package":"torch","optional":false}],"imports":[{"symbol":"LowPassFilter1d","correct":"from alias_free_torch.filter import LowPassFilter1d"},{"symbol":"LowPassFilter2d","correct":"from alias_free_torch.filter import LowPassFilter2d"},{"symbol":"UpSample1d","correct":"from alias_free_torch.resample import UpSample1d"},{"symbol":"DownSample1d","correct":"from alias_free_torch.resample import DownSample1d"},{"note":"Classes are organized into submodules (e.g., filter, resample, act) and should be imported directly from them.","wrong":"from alias_free_torch import Activation1d","symbol":"Activation1d","correct":"from alias_free_torch.act import Activation1d"}],"quickstart":{"code":"import torch\nimport torch.nn as nn\nfrom alias_free_torch.act import Activation1d\n\n# Define a simple 1D activation module with ReLU as the base activation\n# This will upsample, apply ReLU, then downsample to combat aliasing\nactivation_module = Activation1d(\n    activation=nn.ReLU(),\n    up_ratio=2,\n    down_ratio=2,\n    up_kernel_size=12,\n    down_kernel_size=12\n)\n\n# Create a dummy 1D input tensor (Batch, Channels, Length)\n# Current versions often expect the channel dimension to be 1 for many operations\ninput_tensor = torch.randn(1, 1, 64)\n\n# Pass the input through the alias-free activation module\noutput_tensor = activation_module(input_tensor)\n\nprint(f\"Input tensor shape: {input_tensor.shape}\")\nprint(f\"Output tensor shape: {output_tensor.shape}\")\n# Expected output shape: (1, 1, 64) since up_ratio and down_ratio cancel out","lang":"python","description":"This quickstart demonstrates how to instantiate and use an `Activation1d` module with a standard PyTorch activation like ReLU. It showcases the basic tensor flow through an alias-free processing block, which upsamples the signal, applies the activation, then downsamples to mitigate aliasing. The example uses a 1D input tensor; the library also provides 2D counterparts."},"warnings":[{"fix":"Be aware that results might not perfectly match official implementations. Validate thoroughly if replicating specific research outcomes, and compare against the official source code where available.","message":"The library explicitly states it is an 'unofficial implementation' and that its filters and upsample/downsample behavior 'could be different with official implementation' (e.g., StyleGAN3). This may lead to subtle differences in behavior or results compared to the original research papers.","severity":"gotcha","affected_versions":"All versions (0.0.1 - 0.0.6)"},{"fix":"Always pin the exact version in `requirements.txt`. Review GitHub release notes and commit history carefully when upgrading between minor versions to identify specific changes.","message":"Due to the `v0.0.x` versioning and rapid development, API and behavioral changes are common. For instance, `v0.0.2` involved a 'Rewrite upsample, jinc applied', and `v0.0.3` included 'Bug fix for torch.special / remove print / split pad from conv_transpose'; such changes can break functionality or alter required arguments.","severity":"breaking","affected_versions":"All versions before 0.0.6"},{"fix":"Manually ensure your PyTorch installation satisfies `torch>=1.7.0` before installing `alias-free-torch`. For example: `pip install \"torch>=1.7.0\"` (quoted so the shell does not treat `>=` as a redirection), then `pip install alias-free-torch`.","message":"The library requires `torch>=1.7.0` because it depends on `torch.kaiser_window` and `torch.i0`. Pip's dependency check may not enforce this for 'custom torch users', leading to runtime errors if an older PyTorch version is installed.","severity":"gotcha","affected_versions":"All versions (0.0.1 - 0.0.6)"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Set the `cutoff` parameter to a floating-point value greater than 0 and at most 0.5 (frequencies are normalized, so 0.5 corresponds to the Nyquist frequency), e.g., `LowPassFilter1d(cutoff=0.4)`.","cause":"The `cutoff` parameter for `LowPassFilter1d` or `LowPassFilter2d` was set to a value less than or equal to zero, or greater than 0.5.","error":"ValueError: Minimum cutoff must be larger than zero. / ValueError: A cutoff above 0.5 does not make sense."},{"fix":"Upgrade your PyTorch installation to version 1.7.0 or higher. Example: `pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118` (adjust the CUDA version as needed).","cause":"The installed PyTorch version is older than 1.7.0; `torch.kaiser_window` and `torch.i0` were only introduced in 1.7.0.","error":"RuntimeError: \"kaiser_window\" is not implemented for the type ..."},{"fix":"Reshape your input tensor to match the expected format. For example, for a 1D operation expecting `[B, C, L]`, ensure it's `[Batch, Channels, Length]`. If a specific operation expects a single channel, ensure the tensor is `[B, 1, L]` or `[B, 1, H, W]` before passing it through the module.","cause":"Input tensor shapes do not align with the expected dimensions for 1D or 2D operations. The library's TODOs indicate that some operations may only support specific channel layouts (e.g., `[B, 1, T]` or `[B, 1, H, W]`).","error":"RuntimeError: Expected 4-dimensional input for 2D convolution, but got 3-dimensional input of size..."}]}