FVCore: FAIR Computer Vision Core Library

Version 0.1.5.post20221221

fvcore is a lightweight core library providing the common, essential functionality shared across the computer vision frameworks developed by FAIR (Facebook AI Research), such as Detectron2, PySlowFast, and ClassyVision. It includes PyTorch layers, hierarchical flop and parameter counting tools, checkpointing utilities, configuration management, and hyperparameter schedulers. The library is actively maintained by the FAIR computer vision team; its components are type-annotated, tested, and benchmarked.

Quickstart

This quickstart demonstrates how to use `fvcore.nn.FlopCountAnalysis` to count the FLOPs (floating-point operations) of a simple PyTorch model. It initializes a model, creates a dummy input, and reports the total FLOPs as well as FLOPs aggregated by operator and by module. Note that fvcore counts one fused multiply-add as one FLOP, and operators without a registered flop handle (such as ReLU and pooling) are skipped with a warning.

import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc = nn.Linear(16 * 16 * 16, 10) # Assuming input size 3x32x32

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

model = SimpleModel()
inputs = (torch.randn(1, 3, 32, 32),) # Batch size 1, 3 channels, 32x32 image

flops = FlopCountAnalysis(model, inputs)
print(f"Total FLOPs: {flops.total()}")
print(f"FLOPs by operator: {flops.by_operator()}")
print(f"FLOPs by module: {flops.by_module()}")
