# Cloud Inference

Integrating cloud-based inference providers with Scorebook.
Scorebook's `evaluate` function accepts both synchronous and asynchronous inference callables, which makes it possible to integrate cloud-based inference providers such as OpenAI, Anthropic, Google, and AWS.
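As a minimal sketch, a custom asynchronous callable might look like the following. It assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the `evaluate` import path and call shown in the trailing comment are assumptions rather than confirmed Scorebook API.

```python
from openai import AsyncOpenAI  # official OpenAI Python SDK

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def openai_inference(prompt: str) -> str:
    """Hypothetical async inference callable: one prompt in, one completion out."""
    response = await client.responses.create(
        model="gpt-4.1-nano",
        input=prompt,
    )
    return response.output_text


# Hypothetical usage; the import path and signature are assumptions:
# from scorebook import evaluate
# results = evaluate(openai_inference, ...)
```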
The `scorebook.inference` module ships pre-defined inference functions for the APIs of several such providers.
## Inference Providers
Scorebook's `inference` module includes inference functions for the following providers:
- OpenAI
- AWS Bedrock
- Google Vertex
- Portkey
## OpenAI Responses Example
```python
from scorebook.inference.openai import responses
from scorebook.inference_pipeline import InferencePipeline

inference_pipeline = InferencePipeline(
    model="gpt-4.1-nano",
    preprocessor=openai_preprocessor,    # user-defined; see the sketch below
    inference_function=responses,        # calls the OpenAI Responses API
    postprocessor=openai_postprocessor,  # user-defined; see the sketch below
)
```
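Here `openai_preprocessor` and `openai_postprocessor` are user-supplied helpers rather than part of `scorebook.inference`. A minimal sketch, assuming the preprocessor maps a dataset item to Responses API input and the postprocessor extracts the generated text (the `"question"` field name and the exact pipeline contract are assumptions):

```python
def openai_preprocessor(item: dict) -> list[dict]:
    # Sketch: map an eval item to Responses-API-style input messages.
    # "question" is a placeholder field name for your dataset.
    return [{"role": "user", "content": item["question"]}]


def openai_postprocessor(response) -> str:
    # Sketch: pull the generated text out of a Responses API result.
    return response.output_text
```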
To run an evaluation with this pipeline and view its results, run Scorebook's Example 5.
## OpenAI Batch Example
```python
from scorebook.inference.openai import batch
from scorebook.inference_pipeline import InferencePipeline

inference_pipeline = InferencePipeline(
    model="gpt-4.1-nano",
    preprocessor=openai_preprocessor,    # same user-defined helpers as above
    inference_function=batch,            # submits requests via the OpenAI Batch API
    postprocessor=openai_postprocessor,
)
```
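Compared with `responses`, the `batch` function trades latency for cost and throughput: OpenAI's Batch API queues requests for asynchronous processing, typically completing within 24 hours at a discounted per-token rate, which suits large, non-interactive evaluation runs.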
To run an evaluation with this pipeline and view its results, run Scorebook's Example 6.