scorebook.inference_pipeline

Inference pipeline implementation for processing items through model inference.

This module provides a pipeline structure for handling model inference tasks, supporting preprocessing, model inference, and postprocessing steps in a configurable way.

InferencePipeline Objects

class InferencePipeline()

A pipeline for processing items through model inference.

This class implements a three-stage pipeline that handles:

  1. Preprocessing of input items
  2. Model inference
  3. Postprocessing of model outputs

Attributes:

  • model - Name or identifier of the model being used
  • preprocessor - Function to prepare items for model inference
  • inference_function - Function that performs the actual model inference
  • postprocessor - Function to process the model outputs
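The three stages described above can be sketched as a minimal, self-contained implementation. This is an illustration, not the library's actual code: it assumes the preprocessor and postprocessor are applied per item and that the inference function receives the model identifier, the prepared batch, and any hyperparameters.

```python
import asyncio
from typing import Any, Callable, Dict, List, Optional


class MiniPipeline:
    """Minimal sketch of a three-stage inference pipeline (illustrative only)."""

    def __init__(self, model: str, inference_function: Callable,
                 preprocessor: Optional[Callable] = None,
                 postprocessor: Optional[Callable] = None) -> None:
        self.model = model
        self.inference_function = inference_function
        self.preprocessor = preprocessor
        self.postprocessor = postprocessor

    async def run(self, items: List[Dict[str, Any]], **hyperparameters: Any) -> List[Any]:
        # Stage 1: preprocess each item (identity when no preprocessor is given)
        prepared = [self.preprocessor(i) if self.preprocessor else i for i in items]
        # Stage 2: model inference over the prepared batch
        # (assumption: the inference function takes model, items, and kwargs)
        outputs = await self.inference_function(self.model, prepared, **hyperparameters)
        # Stage 3: postprocess each raw output
        return [self.postprocessor(o) if self.postprocessor else o for o in outputs]


async def echo_inference(model: str, items: List[Any], **hp: Any) -> List[Any]:
    # Stand-in inference function: uppercases each prompt
    return [item["prompt"].upper() for item in items]


pipeline = MiniPipeline("demo-model", echo_inference,
                        postprocessor=lambda out: {"completion": out})
results = asyncio.run(pipeline.run([{"prompt": "hello"}]))
print(results)  # [{'completion': 'HELLO'}]
```

Keeping the three stages as separate callables means the same inference function can be reused with different pre- and postprocessing for different datasets.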

__init__

def __init__(model: str,
             inference_function: Callable,
             preprocessor: Optional[Callable] = None,
             postprocessor: Optional[Callable] = None) -> None

Initialize the inference pipeline.

Arguments:

  • model - Name or identifier of the model to use
  • inference_function - Function that performs model inference
  • preprocessor - Optional function to prepare items for inference
  • postprocessor - Optional function to process model outputs

run

async def run(items: List[Dict[str, Any]],
              **hyperparameters: Any) -> List[Any]

Execute the complete inference pipeline on a list of items.

Arguments:

  • items - List of items to process through the pipeline
  • **hyperparameters - Model-specific parameters for inference

Returns:

List of processed outputs after running through the complete pipeline
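Because **hyperparameters are forwarded to the inference function as keyword arguments, model-specific settings can vary per call. A small sketch of that forwarding mechanism (the function name and parameters here are hypothetical, not part of the library):

```python
import asyncio


async def fake_inference(model, items, temperature=1.0, **extra):
    # The pipeline forwards **hyperparameters straight through as kwargs,
    # so `temperature` arrives here without the pipeline knowing about it.
    return [f"{model}@{temperature}: {item['prompt']}" for item in items]


# Simulates what run(items, temperature=0.2) would pass to the inference function:
out = asyncio.run(fake_inference("demo-model", [{"prompt": "hi"}], temperature=0.2))
print(out)  # ['demo-model@0.2: hi']
```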

__call__

async def __call__(items: List[Dict[str, Any]],
                   **hyperparameters: Any) -> List[Any]

Make the pipeline instance callable by wrapping the run method.

Arguments:

  • items - List of items to process through the pipeline
  • **hyperparameters - Model-specific parameters for inference

Returns:

List of processed outputs after running through the complete pipeline
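The equivalence between calling the instance and calling run can be shown with a self-contained sketch (a hypothetical stand-in class, not the library's code):

```python
import asyncio


class CallablePipeline:
    """Sketch: __call__ delegates to run, so the instance itself can be awaited as a function."""

    async def run(self, items, **hyperparameters):
        # Toy stand-in for the real three-stage pipeline
        return [i * 2 for i in items]

    async def __call__(self, items, **hyperparameters):
        return await self.run(items, **hyperparameters)


p = CallablePipeline()
result_call = asyncio.run(p([1, 2, 3]))      # instance called directly
result_run = asyncio.run(p.run([1, 2, 3]))   # run invoked explicitly
print(result_call == result_run, result_call)  # True [2, 4, 6]
```

This makes a pipeline instance usable anywhere an async inference callable is expected, without callers needing to know about the run method.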