Cog (Replicate)

0.17.2 · active · verified Fri Apr 10

Cog is an open-source tool for packaging machine learning models into standard Docker containers. It allows you to define a Python `Predictor` class with `setup` and `predict` methods, which Cog then uses to build a Docker image for local testing or deployment to platforms like Replicate. The current version is 0.17.2, and it receives frequent minor updates with occasional major releases introducing significant architectural changes.

Warnings

Install
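One common way to install the Cog CLI on macOS or Linux is to download the release binary, as described in the project's README; check the upstream instructions for your platform (on macOS, `brew install cog` is an alternative).

```shell
# Download the latest Cog release binary for this OS/architecture
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
# Make it executable
sudo chmod +x /usr/local/bin/cog
```

Note that `pip install cog` installs only the Python library (providing `BasePredictor`, `Input`, and `Path`), not the CLI used for building and running containers.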

Imports

Quickstart

Define a `Predictor` class inheriting from `BasePredictor`. Implement `setup()` to load your model and `predict()` to handle inference. `Input` specifies prediction inputs with types, descriptions, and defaults. `Path` is used for file-based inputs/outputs.

from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        # Example: self.model = torch.load("./weights.pth")
        self.model = "a dummy model"

    def predict(
        self,
        text_input: str = Input(description="A text input"),
        scale: float = Input(description="Factor to scale by", default=1.5)
    ) -> str:
        """Run a single prediction on the model"""
        # Example: processed_input = self.preprocess(text_input)
        # output = self.model(processed_input, scale)
        output = f"Processed '{text_input}' with scale {scale}"
        return output

# To run locally with Cog CLI:
# 1. Create a cog.yaml file: `cog init`
# 2. Add dependencies (e.g., torch) under `python_packages` in cog.yaml
# 3. Run prediction: `cog predict -i text_input="hello" -i scale=2.0`
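A minimal cog.yaml to accompany the predictor above might look like the following; the pinned Python and package versions are illustrative, not prescribed by Cog.

```yaml
build:
  # Python version used inside the container (illustrative choice)
  python_version: "3.11"
  # Python dependencies installed at build time (version pin is an example)
  python_packages:
    - "torch==2.1.0"
# Points Cog at the Predictor class: <file>:<class>
predict: "predict.py:Predictor"
```

The `predict` key tells Cog which file and class implement the prediction interface, so the file does not have to be named `predict.py` as long as this reference matches.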
