Note: The OpenAI Java API Library is currently in alpha, and there may be frequent breaking changes. Have thoughts or feedback? File an issue or comment on this thread.
The OpenAI Java SDK provides convenient access to the OpenAI REST API from applications written in Java. It includes helper classes with helpful types and documentation for every request and response property.
The OpenAI Java SDK is similar to the OpenAI Kotlin SDK but with minor differences that make it more ergonomic for use in Java, such as `Optional` instead of nullable values, `Stream` instead of `Sequence`, and `CompletableFuture` instead of suspend functions.
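As a quick illustration of those differences, here is a minimal sketch of asynchronous usage, assuming an asynchronous client exists alongside the synchronous one (the pagination examples below refer to it as `asyncClient`); the exact class names may differ between SDK versions:
import com.openai.client.OpenAIClientAsync;
import com.openai.client.okhttp.OpenAIOkHttpClientAsync;
import com.openai.models.ChatCompletion;
import com.openai.models.ChatCompletionCreateParams;
import com.openai.models.ChatCompletionMessageParam;
import com.openai.models.ChatCompletionUserMessageParam;
import java.util.concurrent.CompletableFuture;
// Assumed asynchronous entry point; reads the API key from the environment as described below.
OpenAIClientAsync asyncClient = OpenAIOkHttpClientAsync.fromEnv();
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addMessage(ChatCompletionMessageParam.ofChatCompletionUserMessageParam(
        ChatCompletionUserMessageParam.builder()
            .role(ChatCompletionUserMessageParam.Role.USER)
            .content(ChatCompletionUserMessageParam.Content.ofTextContent("Say this is a test"))
            .build()))
    .model("gpt-4o")
    .build();
// CompletableFuture instead of a suspend function:
CompletableFuture<ChatCompletion> future = asyncClient.chat().completions().create(params);
// Optional instead of a nullable value when reading the reply:
future.thenAccept(completion -> completion.choices()
    .forEach(choice -> choice.message().content().ifPresent(System.out::println)));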
The REST API documentation can be found on platform.openai.com.
implementation("com.openai:openai-java:0.8.1")
<dependency>
<groupId>com.openai</groupId>
<artifactId>openai-java</artifactId>
<version>0.8.1</version>
</dependency>
Use `OpenAIOkHttpClient.builder()` to configure the client. At a minimum you need to set `.apiKey()`:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
OpenAIClient client = OpenAIOkHttpClient.builder()
.apiKey("My API Key")
.build();
Alternatively, set the environment variables `OPENAI_API_KEY`, `OPENAI_ORG_ID`, or `OPENAI_PROJECT_ID`, and use `OpenAIOkHttpClient.fromEnv()` to read them from the environment:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
OpenAIClient client = OpenAIOkHttpClient.fromEnv();
// Note: you can also call fromEnv() from the client builder, for example if you need to set additional properties
OpenAIClient client = OpenAIOkHttpClient.builder()
.fromEnv()
// ... set properties on the builder
.build();
| Property     | Environment variable | Required | Default value |
| ------------ | -------------------- | -------- | ------------- |
| apiKey       | `OPENAI_API_KEY`     | true     | —             |
| organization | `OPENAI_ORG_ID`      | false    | —             |
| project      | `OPENAI_PROJECT_ID`  | false    | —             |
Read the documentation for more configuration options.
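For example, the properties in the table above can also be set directly on the client builder (a minimal sketch; the setter names are assumed to mirror the property names, so check the builder's documentation for the exact methods):
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
OpenAIClient client = OpenAIOkHttpClient.builder()
    .apiKey("My API Key")
    // Assumed setters mirroring the `organization` and `project` properties in the table above.
    .organization("My Organization ID")
    .project("My Project ID")
    .build();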
To create a new chat completion, first use the `ChatCompletionCreateParams` builder to specify its attributes, then pass that to the `create` method of the `completions` service.
import com.openai.models.ChatCompletion;
import com.openai.models.ChatCompletionCreateParams;
import com.openai.models.ChatCompletionMessageParam;
import com.openai.models.ChatCompletionUserMessageParam;
import com.openai.models.ChatModel;
import java.util.List;
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
.messages(List.of(ChatCompletionMessageParam.ofChatCompletionUserMessageParam(ChatCompletionUserMessageParam.builder()
.role(ChatCompletionUserMessageParam.Role.USER)
.content(ChatCompletionUserMessageParam.Content.ofTextContent("Say this is a test"))
.build())))
.model(ChatModel.O1)
.build();
ChatCompletion chatCompletion = client.chat().completions().create(params);
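The returned `ChatCompletion` can then be inspected in the usual way, for instance by iterating over its choices (as in the Azure example later in this document):
// Print the assistant's reply from each choice; content() returns an Optional.
for (ChatCompletion.Choice choice : chatCompletion.choices()) {
    choice.message().content().ifPresent(System.out::println);
}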
The OpenAI API provides a `list` method to get a paginated list of fine-tuning jobs. You can retrieve the first page as follows:
import com.openai.models.FineTuningJob;
import com.openai.models.FineTuningJobListPage;
FineTuningJobListPage page = client.fineTuning().jobs().list();
for (FineTuningJob job : page.data()) {
System.out.println(job);
}
Use the `FineTuningJobListParams` builder to set parameters:
import com.openai.models.FineTuningJobListPage;
import com.openai.models.FineTuningJobListParams;
FineTuningJobListParams params = FineTuningJobListParams.builder()
.after("after")
.limit(20L)
.build();
FineTuningJobListPage page1 = client.fineTuning().jobs().list(params);
// Using the `from` method of the builder you can reuse previous params values:
FineTuningJobListPage page2 = client.fineTuning().jobs().list(FineTuningJobListParams.builder()
.from(params)
.build());
// Or easily get params for the next page by using the helper `getNextPageParams`:
FineTuningJobListPage page3 = client.fineTuning().jobs().list(params.getNextPageParams(page2));
See Pagination below for more information on transparently working with lists of objects without worrying about fetching each page.
To make a request to the OpenAI API, you generally build an instance of the appropriate `Params` class.
In Example: creating a resource above, we used `ChatCompletionCreateParams.builder()` to construct the params passed to the `create` method of the `completions` service.
Sometimes, the API may support other properties that are not yet supported in the Java SDK types. In that case, you can attach them using the `putAdditionalProperty` method:
import com.openai.core.JsonValue;
import com.openai.models.ChatCompletionCreateParams;
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
// ... normal properties
.putAdditionalProperty("secret_param", JsonValue.from("4242"))
.build();
When receiving a response, the OpenAI Java SDK will deserialize it into instances of the typed model classes. In rare cases, the API may return a response property that doesn't match the expected Java type. If you directly access the mismatched property, the SDK will throw an unchecked `OpenAIInvalidDataException` at runtime. If you would prefer to check in advance that the response is completely well-typed, call `.validate()` on the returned model.
import com.openai.models.ChatCompletion;
ChatCompletion chatCompletion = client.chat().completions().create(params).validate();
In rare cases, you may want to access the underlying JSON value for a response property rather than using the typed version provided by this SDK. Each model property has a corresponding JSON version, with an underscore before the method name, which returns a `JsonField` value.
import com.openai.core.JsonField;
import java.util.Optional;
JsonField field = responseObj._field();
if (field.isMissing()) {
// Value was not specified in the JSON response
} else if (field.isNull()) {
// Value was provided as a literal null
} else {
// See if value was provided as a string
Optional<String> jsonString = field.asString();
// If the value given by the API did not match the shape that the SDK expects
// you can deserialise into a custom type
MyClass myObj = responseObj._field().asUnknown().orElseThrow().convert(MyClass.class);
}
Sometimes, the server response may include additional properties that are not yet available in this library's types. You can access them using the model's `_additionalProperties` method:
import com.openai.core.JsonValue;
JsonValue secret = errorObject._additionalProperties().get("secret_field");
For methods that return a paginated list of results, this library provides convenient ways to access the results either one page at a time or item-by-item across all pages.
To iterate through all results across all pages, you can use `autoPager`, which automatically handles fetching more pages for you:
import com.openai.models.FineTuningJob;
import com.openai.models.FineTuningJobListPage;
// As an Iterable:
FineTuningJobListPage page = client.fineTuning().jobs().list(params);
for (FineTuningJob job : page.autoPager()) {
System.out.println(job);
}
// As a Stream:
client.fineTuning().jobs().list(params).autoPager().stream()
.limit(50)
.forEach(job -> System.out.println(job));
// Using forEach, which returns CompletableFuture<Void>:
asyncClient.fineTuning().jobs().list(params).autoPager()
.forEach(job -> System.out.println(job), executor);
If none of the above helpers meet your needs, you can also manually request pages one-by-one. A page of results has a `data()` method to fetch the list of objects, as well as top-level `response` and other methods to fetch top-level data about the page. It also has `hasNextPage`, `getNextPage`, and `getNextPageParams` methods to help with pagination.
import com.openai.models.FineTuningJob;
import com.openai.models.FineTuningJobListPage;
FineTuningJobListPage page = client.fineTuning().jobs().list(params);
while (page != null) {
for (FineTuningJob job : page.data()) {
System.out.println(job);
}
page = page.getNextPage().orElse(null);
}
This library throws exceptions in a single hierarchy for easy handling:

- `OpenAIException` - Base exception for all exceptions

- `OpenAIServiceException` - HTTP errors with a well-formed response body we were able to parse. The exception message and the `.debuggingRequestId()` will be set by the server.

  | Status code | Exception                       |
  | ----------- | ------------------------------- |
  | 400         | `BadRequestException`           |
  | 401         | `AuthenticationException`       |
  | 403         | `PermissionDeniedException`     |
  | 404         | `NotFoundException`             |
  | 422         | `UnprocessableEntityException`  |
  | 429         | `RateLimitException`            |
  | 5xx         | `InternalServerException`       |
  | others      | `UnexpectedStatusCodeException` |

- `OpenAIIoException` - I/O networking errors

- `OpenAIInvalidDataException` - any other exceptions on the client side, e.g.:
  - We failed to serialize the request body
  - We failed to parse the response body (has access to response code and body)
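For example, you can catch the more specific exceptions first and fall back to the base type. The sketch below is a minimal illustration; the `com.openai.errors` package name is an assumption, so adjust the imports to match your SDK version:
import com.openai.errors.OpenAIException;
import com.openai.errors.OpenAIServiceException;
import com.openai.errors.RateLimitException;
import com.openai.models.ChatCompletion;
try {
    // `client` and `params` as built in the chat completion example above.
    ChatCompletion chatCompletion = client.chat().completions().create(params);
} catch (RateLimitException e) {
    // 429: the client already retries a few times by default (see below), so back off before retrying again.
    System.err.println("Rate limited: " + e.getMessage());
} catch (OpenAIServiceException e) {
    // Any other HTTP error with a well-formed response body.
    System.err.println("API error: " + e.getMessage());
} catch (OpenAIException e) {
    // I/O errors, invalid data, and any other client-side failures.
    System.err.println("Request failed: " + e.getMessage());
}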
To use this library with Azure OpenAI, use the same OpenAI client builder but with the Azure-specific configuration.
OpenAIOkHttpClient.Builder clientBuilder = OpenAIOkHttpClient.builder();
/* Azure-specific code starts here */
// You can either set the endpoint directly in the builder,
// or set the env var "AZURE_OPENAI_ENDPOINT" and use the fromEnv() method instead.
clientBuilder
.baseUrl(System.getenv("AZURE_OPENAI_ENDPOINT"))
.credential(BearerTokenCredential.create(
AuthenticationUtil.getBearerTokenSupplier(
new DefaultAzureCredentialBuilder().build(), "https://cognitiveservices.azure.com/.default")
));
/* Azure-specific code ends here */
OpenAIClient client = clientBuilder.build();
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
.addMessage(ChatCompletionMessageParam.ofChatCompletionUserMessageParam(
ChatCompletionUserMessageParam.builder()
.role(ChatCompletionUserMessageParam.Role.USER)
.content(ChatCompletionUserMessageParam.Content.ofTextContent("Who won the world series in 2020?"))
.build()))
.model("gpt-4o")
.build();
ChatCompletion chatCompletion = client.chat().completions().create(params);
List<ChatCompletion.Choice> choices = chatCompletion.choices();
for (ChatCompletion.Choice choice : choices) {
System.out.println("Choice content: " + choice.message().content().get());
}
See the complete Azure OpenAI example for more details.
Requests that experience certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default. You can provide a `maxRetries` value on the client builder to configure this:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
OpenAIClient client = OpenAIOkHttpClient.builder()
.fromEnv()
.maxRetries(4)
.build();
Requests time out after 10 minutes by default. You can configure this on the client builder:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import java.time.Duration;
OpenAIClient client = OpenAIOkHttpClient.builder()
.fromEnv()
.timeout(Duration.ofSeconds(30))
.build();
Requests can be routed through a proxy. You can configure this on the client builder:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import java.net.InetSocketAddress;
import java.net.Proxy;
OpenAIClient client = OpenAIOkHttpClient.builder()
.fromEnv()
.proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress("example.com", 8080)))
.build();
This library is typed for convenient access to the documented API. If you need to access undocumented params or response properties, the library can still be used.
To make requests using undocumented parameters, you can provide or override parameters on the params object while building it.
FooCreateParams address = FooCreateParams.builder()
.id("my_id")
.putAdditionalProperty("secret_prop", JsonValue.from("hello"))
.build();
To access undocumented response properties, you can use `res._additionalProperties()` on a response object to get a map of untyped fields of type `Map<String, JsonValue>`. You can then access fields like `res._additionalProperties().get("secret_prop").asString()` or use other helpers defined on the `JsonValue` class to extract it to a desired type.
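A minimal sketch of what that looks like (here `secret_prop` is a hypothetical field used purely for illustration, and `asString()` is assumed to behave like the `JsonField` accessor shown earlier):
import com.openai.core.JsonValue;
import java.util.Map;
// `chatCompletion` as returned by the chat completion example above.
Map<String, JsonValue> extraFields = chatCompletion._additionalProperties();
JsonValue secret = extraFields.get("secret_prop"); // null if the field was not present
if (secret != null) {
    secret.asString().ifPresent(System.out::println);
}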
We use the standard OkHttp logging interceptor.
You can enable logging by setting the environment variable `OPENAI_LOG` to `info`:
$ export OPENAI_LOG=info
Or set it to `debug` for more verbose logging:
$ export OPENAI_LOG=debug
This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals).
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
This library requires Java 8 or later.