FaceXlib: Basic Face Library
FaceXlib is a Python library that provides ready-to-use face-related functions built on state-of-the-art open-source methods, covering detection, alignment, recognition, parsing, and restoration. The current release is v0.3.0; the release cadence is active but somewhat irregular, with recent updates focused on stability and functionality improvements.
Warnings
- gotcha Pre-trained models are downloaded automatically on the first inference. Users with unstable network connections may experience issues. It's recommended to pre-download models if connectivity is a concern.
- breaking Starting from v0.2.5, the library changed the default model root path, so `facexlib` no longer needs to save downloaded models inside the `site-packages` directory. This may affect existing setups that relied on the old implicit path.
- gotcha Prior to v0.2.5, `FaceRestoreHelper` might have implicitly required GPU for operation. CPU-only usage was explicitly supported from v0.2.5 onwards.
- gotcha Version v0.2.2 updated the `cv2.estimateAffinePartial2D` method to use `cv2.LMEDS` for affine transformation estimation. This change aims for equivalence with skimage transform but might lead to subtle differences in face alignment results compared to earlier versions.
- gotcha Users sometimes encounter `ModuleNotFoundError: No module named 'facexlib'` even after installation, indicating potential environment or installation issues.
Install
pip install facexlib
Imports
- FaceRestoreHelper
from facexlib.utils.face_restoration_helper import FaceRestoreHelper
- init_detection_model
from facexlib.detection import init_detection_model
- init_parsing_model
from facexlib.parsing import init_parsing_model
Quickstart
import numpy as np
import cv2
from facexlib.utils.face_restoration_helper import FaceRestoreHelper
# Create a dummy image (a black canvas) with a white square.
# Note: a synthetic square will generally NOT be detected as a real face;
# substitute a real photo to see detection succeed.
img = np.zeros((512, 512, 3), dtype=np.uint8)
img[200:300, 200:300] = 255
# Initialize FaceRestoreHelper
# upscale_factor: The factor to upscale the face. Set to 1 if no upscale needed
# det_model: The detection model to use, e.g., 'retinaface_resnet50' or 'retinaface_mobile0.25'
# device: 'cuda' or 'cpu'
face_helper = FaceRestoreHelper(upscale_factor=1, det_model='retinaface_resnet50', device='cpu')
# Read the image (can also be a path)
face_helper.read_image(img)
# Detect faces and extract 5 facial landmarks per face
face_helper.get_face_landmarks_5(only_keep_largest=True)
# Align and crop the detected faces
# (align_warp_face accepts an optional save_cropped_path to save the crops)
face_helper.align_warp_face()
# Process the aligned faces (e.g., feed to a restoration model)
# This example just shows the aligned face
if len(face_helper.cropped_faces) > 0:
    aligned_face = face_helper.cropped_faces[0]
    print(f"Detected and aligned face of shape: {aligned_face.shape}")
    # In a real scenario, you'd feed aligned_face to a restoration network
    # For this quickstart, we'll just show its dimensions.
    # cv2.imwrite('aligned_face.png', aligned_face)  # Uncomment to save
else:
    print("No faces detected in the image.")
# Clean up (release models if no longer needed)
del face_helper