This guide covers model selection, defaults, and how models are resolved at runtime.
## Overview

Model configuration is controlled in three places:

- Agent default model (`Agent.model`)
- Run-time override (`RunConfig.model`)
- Provider behavior (`ModelProvider`)
## Defaults and Overrides

Model selection follows this order:

1. `RunConfig.model`, if provided.
2. `Agent.model`, if provided.
3. The `OPENAI_MODEL` environment variable.
4. Fallback to `gpt-4.1`.
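The resolution order above can be sketched in plain Java. This is illustrative only: `resolveModel` and its parameters are hypothetical names for this sketch, not SDK API.

```java
import java.util.Optional;

public class ModelResolution {
    static final String FALLBACK_MODEL = "gpt-4.1";

    // runConfigModel and agentModel may be null when unset;
    // envModel stands in for the OPENAI_MODEL environment variable.
    static String resolveModel(String runConfigModel, String agentModel, String envModel) {
        return Optional.ofNullable(runConfigModel)     // 1. RunConfig.model
            .or(() -> Optional.ofNullable(agentModel)) // 2. Agent.model
            .or(() -> Optional.ofNullable(envModel))   // 3. OPENAI_MODEL
            .orElse(FALLBACK_MODEL);                   // 4. fallback
    }

    public static void main(String[] args) {
        System.out.println(resolveModel("gpt-4.1", "gpt-4.1-mini", null)); // gpt-4.1
        System.out.println(resolveModel(null, "gpt-4.1-mini", null));      // gpt-4.1-mini
        System.out.println(resolveModel(null, null, null));                // gpt-4.1
    }
}
```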
```java
// Agent.model sets the agent's default model.
Agent<UnknownContext, TextOutput> agent =
    Agent.<UnknownContext, TextOutput>builder()
        .name("Assistant")
        .instructions("You are a helpful assistant.")
        .model("gpt-4.1-mini")
        .build();

// RunConfig.model overrides the agent default for this run.
RunConfig config = RunConfig.builder().model("gpt-4.1").build();
RunResult<UnknownContext, ?> result = Runner.run(agent, "Hello", config);
```
## Model Providers

The SDK uses a `ModelProvider` to resolve a `Model` by name.

Defaults:

- Provider: `OpenAIProvider`
- API key: `OPENAI_API_KEY` environment variable
- Implementation: `OpenAIResponsesModel` (OpenAI Responses API)
Custom provider example:

```java
public class MyProvider implements ModelProvider {
    @Override
    public CompletableFuture<Model> getModel(String modelName) {
        // Resolve the name to your own Model implementation.
        return CompletableFuture.completedFuture(new MyModel(modelName));
    }
}

RunConfig config =
    RunConfig.builder().modelProvider(new MyProvider()).model("my-model").build();
```
## Model Timeouts

- `RunConfig.modelTimeoutMs` sets the per-call timeout (default: 60000 ms).
- `RunConfig.timeoutMs` sets the overall timeout for the entire run.
```java
RunConfig config =
    RunConfig.builder()
        .modelTimeoutMs(30000L) // each model call may take up to 30s
        .timeoutMs(120000L)     // the whole run may take up to 120s
        .build();
```
## Structured Outputs

When you use `JsonSchemaOutput`, the OpenAI provider uses structured responses under the hood. This lets you return typed data instead of raw text.
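For reference, structured responses constrain the model's output to a JSON Schema derived from your output type. A schema for a small typed result might look roughly like the following (illustrative field names; the exact schema the SDK generates depends on your output class):

```json
{
  "type": "object",
  "properties": {
    "city": { "type": "string" },
    "temperature_c": { "type": "number" }
  },
  "required": ["city", "temperature_c"],
  "additionalProperties": false
}
```

Note that the OpenAI structured-output feature requires every property to be listed in `required` and `additionalProperties` to be `false`.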
## Model Settings (Advanced)

`ModelSettings` exists and is plumbed through `Agent`, but it is not fully exposed yet. Expect this surface to expand as more configuration is wired up.
## Notes

- Model availability is account-specific, and OpenAI model IDs may change over time.
- If a model name is rejected, verify the ID in your OpenAI account and ensure your provider supports it.