@@ -1,7 +1,7 @@
- ![Maven Central](https://img.shields.io/maven-central/v/com.theokanning.openai-gpt3-java/client?color=blue)
+ ![Maven Central](https://img.shields.io/maven-central/v/com.launchableinc.openai-java/client?color=blue)

> ⚠️OpenAI has deprecated all Engine-based APIs.
- > See [Deprecated Endpoints](https://github.com/TheoKanning/openai-java#deprecated-endpoints) below
+ > See [Deprecated Endpoints](https://github.com/launchableinc/openai-java#deprecated-endpoints) below
> for more info.

# OpenAI-Java
@@ -40,14 +40,14 @@ as well as an example project using the service.

### Gradle

- `implementation 'com.theokanning.openai-gpt3-java:<api|client|service>:<version>'`
+ `implementation 'com.launchableinc.openai-java:<api|client|service>:<version>'`

### Maven

```xml

<dependency>
-   <groupId>com.launchableinc.openai-gpt3-java</groupId>
+   <groupId>com.launchableinc.openai-java</groupId>
    <artifactId>{api|client|service}</artifactId>
    <version>version</version>
</dependency>
@@ -63,7 +63,7 @@ Your client will need to use snake case to work with the OpenAI API.
### Retrofit client

If you're using retrofit, you can import the `client` module and use
- the [OpenAiApi](client/src/main/java/com/theokanning/openai/OpenAiApi.java).
+ the [OpenAiApi](client/src/main/java/com/launchableinc/openai/OpenAiApi.java).
You'll have to add your auth token as a header (
see [AuthenticationInterceptor](client/src/main/java/com/theokanning/openai/AuthenticationInterceptor.java))
and set your converter factory to use snake case and only include non-null fields.
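
To make the auth-header and converter-factory requirements in the hunk above concrete, here is a minimal sketch of wiring `OpenAiApi` up with Retrofit, Jackson, and OkHttp. The base URL, the `OPENAI_TOKEN`-style token parameter, and the `com.theokanning.openai` package name are assumptions (this fork may have renamed packages), so treat it as an illustration rather than the library's own setup code.

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.PropertyNamingStrategies;
import com.theokanning.openai.OpenAiApi; // package may differ in this fork
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.jackson.JacksonConverterFactory;

public class RetrofitClientSketch {

    public static OpenAiApi buildApi(String token) {
        // Snake-case property names and no null fields, as the OpenAI API expects
        ObjectMapper mapper = new ObjectMapper()
                .setPropertyNamingStrategy(PropertyNamingStrategies.SNAKE_CASE)
                .setSerializationInclusion(JsonInclude.Include.NON_NULL);

        // Attach the bearer token to every request, like the library's AuthenticationInterceptor
        OkHttpClient client = new OkHttpClient.Builder()
                .addInterceptor(chain -> chain.proceed(
                        chain.request().newBuilder()
                                .header("Authorization", "Bearer " + token)
                                .build()))
                .build();

        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://api.openai.com/")
                .client(client)
                .addConverterFactory(JacksonConverterFactory.create(mapper))
                // upstream OpenAiApi methods return RxJava2 Single, hence the call adapter
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .build();

        return retrofit.create(OpenAiApi.class);
    }
}
```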
@@ -217,7 +217,7 @@ stream: [OpenAiApiFunctionsWithStreamExample.java](example/src/main/java/example
### Streaming thread shutdown

If you want to shut down your process immediately after streaming responses,
- call `OpenAiService.shutdownExecutor()`.
+ call `OpenAiService.shutdownExecutor()`.
This is not necessary for non-streaming calls.

## Running the example project
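
For context on the `shutdownExecutor()` line touched in the streaming-shutdown hunk above, here is a small sketch of a streaming call followed by the shutdown. The model name, the `OPENAI_TOKEN` environment variable, and the package names are assumptions borrowed from the upstream library; verify them against this fork before relying on the snippet.

```java
import java.util.List;

import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.service.OpenAiService;

public class StreamingShutdownSketch {

    public static void main(String[] args) {
        // Token and model are placeholders; packages may differ in this fork
        OpenAiService service = new OpenAiService(System.getenv("OPENAI_TOKEN"));

        ChatCompletionRequest request = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo")
                .messages(List.of(new ChatMessage("user", "Say hello")))
                .build();

        // Print each streamed chunk as it arrives
        service.streamChatCompletion(request)
                .doOnError(Throwable::printStackTrace)
                .blockingForEach(System.out::println);

        // Release the streaming executor so the JVM can exit promptly;
        // not needed for non-streaming calls
        service.shutdownExecutor();
    }
}
```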
@@ -252,7 +252,7 @@ Or functions with 'stream' mode enabled:
### Does this support GPT-4?

Yes! GPT-4 uses the ChatCompletion Api, and you can see the latest model
- options [here](https://platform.openai.com/docs/models/gpt-4).
+ options [here](https://platform.openai.com/docs/models/gpt-4).
GPT-4 is currently in a limited beta (as of 4/1/23), so make sure you have access before trying to
use it.

@@ -270,8 +270,8 @@ Make sure that OpenAI is available in your country.

### Why doesn't OpenAiService support x configuration option?

- Many projects use OpenAiService, and in order to support them best I've kept it extremely simple.
- You can create your own OpenAiApi instance to customize headers, timeouts, base urls etc.
+ Many projects use OpenAiService, and in order to support them best I've kept it extremely simple.
+ You can create your own OpenAiApi instance to customize headers, timeouts, base urls etc.
If you want features like retry logic and async calls, you'll have to make an `OpenAiApi` instance
and call it directly instead of using `OpenAiService`

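As an illustration of the customization path described in the hunk above, here is a sketch of building your own `OpenAiApi` and handing it to `OpenAiService`. It assumes the `defaultClient`, `defaultObjectMapper`, and `defaultRetrofit` helpers that the upstream service module exposes, plus the upstream package names, so confirm those exist in this fork before copying it.

```java
import static com.theokanning.openai.service.OpenAiService.defaultClient;
import static com.theokanning.openai.service.OpenAiService.defaultObjectMapper;
import static com.theokanning.openai.service.OpenAiService.defaultRetrofit;

import java.time.Duration;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.theokanning.openai.OpenAiApi;
import com.theokanning.openai.service.OpenAiService;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;

public class CustomOpenAiServiceSketch {

    public static void main(String[] args) {
        String token = System.getenv("OPENAI_TOKEN"); // placeholder env var

        // Start from the library defaults, then tweak what OpenAiService itself doesn't expose
        ObjectMapper mapper = defaultObjectMapper();
        OkHttpClient client = defaultClient(token, Duration.ofSeconds(60))
                .newBuilder()
                // e.g. add interceptors, proxies, or different timeouts here
                .build();
        Retrofit retrofit = defaultRetrofit(client, mapper);

        OpenAiApi api = retrofit.create(OpenAiApi.class);
        OpenAiService service = new OpenAiService(api);
        // use service as usual, or call api directly if you need retry logic or async handling
    }
}
```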