Monotonic Alignment Search
Independent package implementing the monotonic alignment search algorithm from Glow-TTS for aligning text and speech. Currently at v0.2.1, supports PyTorch (CPU/GPU) and NumPy backends. Requires Python >=3.10.
pip install monotonic-alignment-search

Common errors
error: ModuleNotFoundError: No module named 'monotonic_alignment_search'
cause: Package not installed, or installed under a different name.
fix: pip install monotonic-alignment-search
error: ImportError: cannot import name 'maximum_path_numpy' from 'monotonic_alignment_search'
cause: Attempting to use the NumPy backend on v0.2.0+, where it may have been removed or renamed.
fix: Use maximum_path (PyTorch backend) instead. If you need NumPy, downgrade to v0.1.1: pip install 'monotonic-alignment-search==0.1.1'
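If your code must run on both v0.1.1 and v0.2.x, a version-tolerant lookup can paper over the difference. A minimal sketch, assuming only the two function names documented above; the `resolve_mas` helper itself is hypothetical, not part of the package:

```python
import importlib


def resolve_mas():
    """Hypothetical helper: return the NumPy backend if present,
    otherwise fall back to the PyTorch backend (maximum_path)."""
    mod = importlib.import_module("monotonic_alignment_search")
    # getattr with a None default avoids the ImportError on v0.2.0+
    return getattr(mod, "maximum_path_numpy", None) or mod.maximum_path
```

Call `resolve_mas()` once at startup and use the returned function everywhere, so the rest of your code is agnostic to the installed version.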
Warnings
gotcha: Input attention must be non-negative and monotonic; the algorithm does not enforce this. Invalid inputs can fail silently or yield suboptimal alignments.
fix: Verify that your attention matrix is non-negative and monotonic before calling maximum_path.
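One way to guard against this is a quick pre-flight check. A sketch in NumPy, taking the warning literally (non-negative values, non-decreasing along the target axis); the `check_scores` helper is ours, not part of the package:

```python
import numpy as np


def check_scores(value):
    """Hypothetical pre-flight check: scores must be non-negative and
    monotonically non-decreasing along the last (target) axis."""
    value = np.asarray(value)
    if (value < 0).any():
        raise ValueError("scores must be non-negative")
    if (np.diff(value, axis=-1) < 0).any():
        raise ValueError("scores must be non-decreasing along the target axis")
    return value


# A cumulative sum of non-negative numbers satisfies both properties
check_scores(np.random.rand(1, 1, 5).cumsum(-1))
```

The check is O(n) over the matrix, so it is cheap enough to leave enabled outside of tight training loops.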
deprecated: The NumPy backend (maximum_path_numpy) was experimental in v0.1.1 and may be removed in a future release.
fix: Prefer the PyTorch backend (maximum_path) for stability. If you need NumPy, pin to v0.1.1.
breaking: The PyTorch backend has not been removed or renamed, but v0.2.0 introduced CPU/GPU backend selection; make sure your PyTorch installation matches your hardware.
fix: Install PyTorch separately for your hardware (CPU or CUDA) before using this package.
Imports
- maximum_path
  wrong: from mas import maximum_path
  correct: from monotonic_alignment_search import maximum_path
- maximum_path_numpy
  from monotonic_alignment_search import maximum_path_numpy
Quickstart
import torch
from monotonic_alignment_search import maximum_path
# Example: random monotone, non-negative score matrix
# (shape (1, 1, 1, 5): batch=1, one channel, 1 source, 5 target);
# cumsum of non-negative values keeps it monotonic
attn = torch.rand(1, 1, 1, 5).cumsum(3)
# Perform MAS; a mask of ones marks every position as valid
# (an all-zero mask would exclude every position)
path = maximum_path(attn, torch.ones_like(attn))
print(path.shape)  # torch.Size([1, 1, 1, 5])
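For intuition about what maximum_path computes, here is a small NumPy sketch of the underlying dynamic program from the Glow-TTS paper. This is our own illustrative implementation, not this package's code; it assumes a 2-D score matrix of shape (t_x, t_y) with t_x <= t_y, where each target frame attends to exactly one source token and the token index never decreases:

```python
import numpy as np


def mas(value):
    """Illustrative MAS: value is (t_x, t_y); returns a 0/1 alignment
    path of the same shape maximizing the summed score along a
    monotone path from (0, 0) to (t_x - 1, t_y - 1)."""
    t_x, t_y = value.shape
    # q[i, j] = best cumulative score ending at token i, frame j
    q = np.full((t_x, t_y), -np.inf)
    q[0, 0] = value[0, 0]
    for j in range(1, t_y):
        for i in range(min(j + 1, t_x)):  # path can reach token i only if i <= j
            stay = q[i, j - 1]                          # keep the same token
            move = q[i - 1, j - 1] if i > 0 else -np.inf  # advance one token
            q[i, j] = value[i, j] + max(stay, move)
    # Backtrack from the last token at the last frame
    path = np.zeros_like(value)
    i = t_x - 1
    for j in range(t_y - 1, -1, -1):
        path[i, j] = 1.0
        # Step down when forced (i == j) or when it scored better
        if i != 0 and (i == j or q[i - 1, j - 1] >= q[i, j - 1]):
            i -= 1
    return path
```

The library's maximum_path implements this same search batched and vectorized; the sketch is only meant to make the recurrence and backtracking step concrete.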