scorebook.inference.openai

OpenAI inference implementation for Scorebook.

This module provides utilities for running inference with OpenAI models, supporting both single-response and batch inference operations. It handles API communication, request formatting, and response processing.

responses

async def responses(items: List[Any],
                    model: str = "gpt-4.1-nano",
                    client: Any = None,
                    **hyperparameters: Any) -> List[Any]

Process multiple inference requests using OpenAI's Async API.

This asynchronous function handles multiple inference requests, manages the API communication, and processes the responses.

Arguments:

  • items - List of preprocessed items to process.
  • model - OpenAI model to use.
  • client - Optional OpenAI client instance.
  • hyperparameters - Additional keyword arguments forwarded as inference hyperparameters.

Returns:

A list of raw model responses.

Raises:

  • NotImplementedError - This function is not yet implemented.
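
Although responses is not yet implemented, a minimal sketch of the intended call pattern may help. Everything below beyond the documented signature is an assumption: the chat-message item format, the AsyncOpenAI client, and the temperature hyperparameter are illustrative, not part of Scorebook's documented API.

```python
import asyncio

from openai import AsyncOpenAI

from scorebook.inference import openai as openai_inference


async def main() -> None:
    # Hypothetical item format: each preprocessed item is a list of chat
    # messages. The exact shape Scorebook expects is an assumption here.
    items = [
        [{"role": "user", "content": "What is the capital of France?"}],
        [{"role": "user", "content": "What is 2 + 2?"}],
    ]
    client = AsyncOpenAI()  # Reads OPENAI_API_KEY from the environment.
    # Extra keyword arguments (here, temperature) are collected into
    # **hyperparameters and forwarded to the API.
    results = await openai_inference.responses(
        items, model="gpt-4.1-nano", client=client, temperature=0.0
    )
    for raw in results:
        print(raw)


asyncio.run(main())
```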

batch

async def batch(items: List[Any],
                model: str = "gpt-4.1-nano",
                client: Any = None,
                **hyperparameters: Any) -> List[Any]

Process multiple inference requests in batch using OpenAI's API.

This asynchronous function handles batch processing of inference requests, optimizing for throughput while respecting API rate limits.

Arguments:

  • items - List of preprocessed items to process.
  • model - OpenAI model to use.
  • client - Optional OpenAI client instance.
  • hyperparameters - Additional keyword arguments forwarded as inference hyperparameters.

Returns:

A list of raw model responses.

Raises:

  • NotImplementedError - This function is not yet implemented.
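
The same caveats apply to batch, which also raises NotImplementedError today. The sketch below assumes the same illustrative item format and shows only the documented call shape, with a larger workload where trading latency for throughput makes sense.

```python
import asyncio

from openai import AsyncOpenAI

from scorebook.inference import openai as openai_inference


async def main() -> None:
    # Illustrative workload: many items, the case the batch path is meant for.
    items = [
        [{"role": "user", "content": f"Summarize topic #{i} in one sentence."}]
        for i in range(100)
    ]
    client = AsyncOpenAI()
    # Hyperparameters are forwarded the same way as for responses().
    results = await openai_inference.batch(
        items, model="gpt-4.1-nano", client=client
    )
    print(f"Received {len(results)} raw responses")


asyncio.run(main())
```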