scorebook.inference.portkey

Portkey inference implementation for Scorebook.

This module provides utilities for running inference using Portkey's API, supporting both single response and batch inference operations. It handles API communication, request formatting, and response processing.

responses

async def responses(items: List[Any],
                    model: str,
                    client: Optional[AsyncPortkey] = None,
                    **hyperparameters: Any) -> List[Any]

Process multiple inference requests using Portkey's API.

This asynchronous function handles multiple inference requests, manages the API communication, and processes the responses.

Arguments:

  • items - List of preprocessed items to process.
  • model - Model to use via Portkey.
  • client - Optional Portkey client instance.
  • hyperparameters - Additional keyword arguments forwarded to the model as inference hyperparameters.

Returns:

List of raw model responses.
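
The call pattern can be sketched with asyncio fan-out. This is a minimal sketch, not the library's implementation: `StubClient` and its `complete` method stand in for an `AsyncPortkey` client, and the item/response shapes are assumptions.

```python
import asyncio
from typing import Any, List, Optional

class StubClient:
    """Hypothetical stand-in for AsyncPortkey, used only to illustrate the flow."""
    async def complete(self, model: str, item: Any, **hyperparameters: Any) -> str:
        # A real client would call Portkey's chat completions API here.
        return f"{model}:{item}"

async def responses_sketch(items: List[Any],
                           model: str,
                           client: Optional[StubClient] = None,
                           **hyperparameters: Any) -> List[Any]:
    # One request per preprocessed item, issued concurrently; gather preserves
    # input order, so the i-th response corresponds to the i-th item.
    client = client or StubClient()
    tasks = [client.complete(model, item, **hyperparameters) for item in items]
    return await asyncio.gather(*tasks)

results = asyncio.run(responses_sketch(["hi", "bye"], model="gpt-4o", temperature=0.2))
```

Hyperparameters such as `temperature` travel through `**hyperparameters` unchanged, so each per-item request sees the same inference settings.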

batch

async def batch(items: List[Any],
                model: str,
                client: Optional[AsyncPortkey] = None,
                **hyperparameters: Any) -> List[Any]

Process multiple inference requests in batch using Portkey's API.

This asynchronous function handles batch processing of inference requests, optimizing for throughput while respecting API rate limits.

Arguments:

  • items - List of preprocessed items to process.
  • model - Model to use via Portkey.
  • client - Optional Portkey client instance.
  • hyperparameters - Additional keyword arguments forwarded to the model as inference hyperparameters.

Returns:

List of raw model responses.
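
Batching for throughput under a rate limit can be sketched as chunking plus a semaphore. The `batch_size`, `max_concurrency`, and response format below are assumptions for illustration, not the function's actual parameters:

```python
import asyncio
from typing import Any, List

def chunk(items: List[Any], size: int) -> List[List[Any]]:
    # Split items into fixed-size groups; the final group may be smaller.
    return [items[i:i + size] for i in range(0, len(items), size)]

async def batch_sketch(items: List[Any],
                       model: str,
                       batch_size: int = 2,
                       max_concurrency: int = 4,
                       **hyperparameters: Any) -> List[Any]:
    # The semaphore caps in-flight groups, respecting API rate limits.
    sem = asyncio.Semaphore(max_concurrency)

    async def run_group(group: List[Any]) -> List[Any]:
        async with sem:
            # A real implementation would submit the group to Portkey here.
            return [f"{model}:{item}" for item in group]

    groups = await asyncio.gather(*(run_group(g) for g in chunk(items, batch_size)))
    # Flatten group results back into one list, preserving input order.
    return [r for group in groups for r in group]

out = asyncio.run(batch_sketch(["a", "b", "c"], model="gpt-4o"))
```

Because `gather` preserves the order of the chunked groups and each group preserves its own item order, the flattened output lines up with the input list.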