QUick and DIrty Domain Adaptation (QuDiDA)
QuDiDA is a micro library for quick and naive pixel-level image domain adaptation. It leverages scikit-learn transformers for its operations and is primarily intended as an image augmentation technique. The current version is 0.0.4; the last release was in August 2021, so the library is low-maintenance but functional.
Warnings
- gotcha OpenCV, used internally by QuDiDA for image handling, reads images in BGR color format by default, not RGB. If your `scikit-learn` transformer or subsequent processing expects RGB input, you must explicitly convert the image after loading with `cv2.cvtColor(image, cv2.COLOR_BGR2RGB)` before passing it to the adapter or processing its output.
- gotcha QuDiDA is described as a 'micro library for very naive' adaptation that 'was not tested in public benchmarks.' This implies it may not be suitable for production-critical applications or highly complex domain adaptation challenges where robust performance and benchmarked results are required. Users should perform their own thorough evaluations.
- gotcha The library's last release was in August 2021, and its GitHub repository shows minimal activity (e.g., 1 issue, 0 pull requests). While functional, this suggests limited ongoing maintenance or feature development. Users should be aware that active support or new features might not be readily available.
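The BGR/RGB pitfall above is easy to demonstrate with a plain NumPy channel reversal, which for a 3-channel image is equivalent to `cv2.cvtColor(image, cv2.COLOR_BGR2RGB)`. This sketch uses NumPy only so it runs without OpenCV installed:

```python
import numpy as np

# A 1x2 "image" in OpenCV's BGR channel order:
# first pixel pure blue, second pixel pure red.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reversing the last (channel) axis converts BGR -> RGB (and back).
# cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB) produces the same array.
rgb = bgr[..., ::-1]

print(rgb[0, 0])  # [  0   0 255] -> the blue pixel, blue now in the last slot
print(rgb[0, 1])  # [255   0   0] -> the red pixel, red now in the first slot
```

If you skip this conversion, any transformer (or visualization) expecting RGB will silently operate with red and blue swapped.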
Install
pip install qudida
Imports
- DomainAdapter
from qudida import DomainAdapter
Quickstart
import os

import cv2
import numpy as np
from sklearn.decomposition import PCA

from qudida import DomainAdapter

source_img_path = 'source.png'
ref_img_path = 'target.png'

# Create dummy source and target images for demonstration if they don't exist.
# In a real scenario, these would be your own source and reference images.
if not os.path.exists(source_img_path):
    rng = np.random.default_rng(0)
    dummy_source = rng.integers(0, 128, size=(100, 100, 3), dtype=np.uint8)  # dark image
    cv2.imwrite(source_img_path, dummy_source)
if not os.path.exists(ref_img_path):
    rng = np.random.default_rng(1)
    dummy_target = rng.integers(128, 256, size=(100, 100, 3), dtype=np.uint8)  # bright image
    cv2.imwrite(ref_img_path, dummy_target)

# Initialize the DomainAdapter with a scikit-learn transformer and a reference (target) image
adapter = DomainAdapter(transformer=PCA(n_components=3), ref_img=cv2.imread(ref_img_path))
# Load the source image
source = cv2.imread(source_img_path)
# Apply domain adaptation
result = adapter(source)
# Save the result (optional, for verification)
# cv2.imwrite('result_adapted.png', result)
print("Domain adaptation applied successfully. Result image shape:", result.shape)
# Clean up dummy files
os.remove(source_img_path)
os.remove(ref_img_path)
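To build intuition for what "naive pixel-level adaptation" means, here is a self-contained sketch that matches each source channel's mean and standard deviation to the target's. This is not QuDiDA's actual algorithm (QuDiDA runs the pixels through the fitted scikit-learn transformer); `match_channel_stats` is a hypothetical helper used purely for illustration, and it needs only NumPy:

```python
import numpy as np

def match_channel_stats(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale each source channel so its mean/std match the target's.

    Hypothetical helper, not part of QuDiDA: a minimal stand-in for what a
    per-pixel distribution transformer achieves.
    """
    src = source.reshape(-1, 3).astype(np.float64)
    tgt = target.reshape(-1, 3).astype(np.float64)
    # Standardize source pixels, then re-project into the target's statistics.
    adapted = (src - src.mean(axis=0)) / (src.std(axis=0) + 1e-8)
    adapted = adapted * tgt.std(axis=0) + tgt.mean(axis=0)
    return adapted.clip(0, 255).reshape(source.shape).astype(np.uint8)

rng = np.random.default_rng(0)
dark = rng.integers(0, 100, size=(64, 64, 3), dtype=np.uint8)      # "source" domain
bright = rng.integers(150, 256, size=(64, 64, 3), dtype=np.uint8)  # "target" domain

adapted = match_channel_stats(dark, bright)
print(adapted.mean(), bright.mean())  # adapted mean is now close to the target's
```

The shape and dtype of the input are preserved, mirroring how the real adapter returns an image of the same size as its input.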