Handling non-JSON payloads with asDecodedServerSentEventsWithJSONData() #622
I've just downloaded the latest OpenAPI document for OpenAI, and I think we might need some changes. It currently provides responses:

```yaml
"200":
  description: OK
  content:
    application/json:
      schema:
        $ref: "#/components/schemas/CreateChatCompletionResponse"
```

It returns the following body:

The problem is that our streaming helpers in OpenAPI runtime operate as extensions on `AsyncSequence`. This works fine when the documented content type is `text/event-stream`, but here only `application/json` is documented. This appears to be an OpenAPI doc issue because the API is actually returning a `text/event-stream` response.
Let me play with this a bit more and report back, it might be we just need the doc to be patched. ISTR this used to work :)
OK, so I think there's potentially a combination of issues here:
(1) will need to be addressed in the upstream OpenAPI doc, by just adding another content-type to the response, which I have done locally. In the meantime, you might be able to swizzle the content-type in a client middleware.

```diff
--- a/openapi.yaml
+++ b/openapi.yaml
@@ -61,6 +61,9 @@ paths:
             application/json:
               schema:
                 $ref: "#/components/schemas/CreateChatCompletionResponse"
+            text/event-stream:
+              schema:
+                $ref: "#/components/schemas/CreateChatCompletionResponse"
           x-oaiMeta:
             name: Create chat completion
```

Once (1) is addressed, you immediately run into (2), where it will happily decode all the events but will get stuck on the last event, `[DONE]`.
I have reproduced this in a mock client now and the workaround is to decode it using `asDecodedServerSentEvents()` and filter out the terminal event before decoding the JSON payloads:

```swift
// Replace this...
let responses = try response.ok.body.text_event_hyphen_stream
    .asDecodedServerSentEventsWithJSONData(of: Components.Schemas.CreateChatCompletionStreamResponse.self)
```

```swift
// ...with this...
let responses = try response.ok.body.text_event_hyphen_stream
    .asDecodedServerSentEvents()
    .filter { $0.data != "[DONE]" }
    .asEncodedServerSentEvents()
    .asDecodedServerSentEventsWithJSONData(of: Components.Schemas.CreateChatCompletionStreamResponse.self)
```

The re-encode/decode step will be unnecessary cost, and you could just encode with a … I'll take a note to see if this …
@czechboy0 and I had actually worked that one out already in #614. I submitted a PR to the OpenAI OpenAPI repo, but haven't heard anything yet. Should've mentioned that in my initial issue description ...
Yes, that's exactly the problem I'm facing.
Brilliant! That works.
I'll go ahead and close the issue though? Or would you prefer to keep it open?
Ah, well. Now I've been on the voyage of discovery too 😅
You're very welcome—glad to be of help 👍
Leave it open for now as a reminder for me. Tomorrow I'll close it and open one with a focussed description to investigate what, if anything, we need to do about the
So I thought it would be fun to have a chat with ChatGPT itself about the API choice to terminate an SSE stream where all the payloads are JSON with a non-JSON value. It acknowledges that this is both unconventional and not very helpful for processing pipelines that are decoding the event stream :) This was after I did my own research and came to the following conclusions:
If there were wider precedent for it, I think we could adapt the helpers in some way. The only thing I can think of currently is to add an optional parameter, e.g. `.asDecodedServerSentEventsWithJSONData(of: ExpectedType.self, streamTerminalData: "[DONE]")`. Better yet, maybe a boolean closure parameter. I'll park this here and see if more data points come up.
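Sketched independently of the runtime helpers, such a parameter could look roughly like this. All names here, including `decodeEventStream`, `isTerminal`, and `Chunk`, are illustrative, not library API:

```swift
import Foundation

// Hypothetical sketch of the proposed parameter: a closure that recognizes
// the terminal payload so it is never handed to the JSON decoder.
struct Chunk: Decodable, Equatable { let id: String }

func decodeEventStream<T: Decodable>(
    _ rawEvents: [String],
    of type: T.Type,
    isTerminal: (String) -> Bool = { $0 == "[DONE]" }
) throws -> [T] {
    let decoder = JSONDecoder()
    var decoded: [T] = []
    for event in rawEvents {
        if isTerminal(event) { break } // stop before the non-JSON sentinel
        decoded.append(try decoder.decode(T.self, from: Data(event.utf8)))
    }
    return decoded
}

let events = [#"{"id":"a"}"#, #"{"id":"b"}"#, "[DONE]"]
let chunks = try! decodeEventStream(events, of: Chunk.self)
```

With a closure rather than a fixed byte sequence, callers can match whatever sentinel their API uses.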
Well, at least we’re not the only ones who think so then … :-)
Very interesting!
Would you mind me giving a PR a go? Or wouldn't you merge it anyway until there's more precedent?
Thank you for doing some more digging on this!
I'm pretty open to that change, so long as it's API-backwards compatible and opt-in, and we're always grateful for folks willing to contribute! 🙏 What do you think @czechboy0? We should probably discuss the shape of the parameter before you put any time into the coding though. There are at least a few options for it, e.g.:
I also like the idea of customizing the terminating byte sequence. I think it should be a closure that takes an
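For illustration, a terminal-data predicate over raw bytes might look like the following; whether the helpers would hand the closure an `ArraySlice<UInt8>` is an assumption here:

```swift
// Illustrative predicate over a raw byte slice: matches the "[DONE]" sentinel
// without attempting any JSON decoding.
let isStreamTerminator: (ArraySlice<UInt8>) -> Bool = { bytes in
    bytes.elementsEqual("[DONE]".utf8)
}

let terminal = ArraySlice("[DONE]".utf8)
let regular = ArraySlice(#"{"id":"a"}"#.utf8)
```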
@czechboy0 wrote
Yeah, that'll do.
My lean, too. I think this is much more opinionated and useful than a filter. Probably want one of these for both.

@paulhdk wrote
Wanna take a stab at it?
Agreed. I do not think we need these for JSON Sequence and JSON Lines, as those require the content to be JSON (unlike SSE). Does anyone disagree?
Agreed.
Sure! I'd love to give it a shot. Thank you both for the design details!
Here we go. Hope these are along the lines of what you had in mind:
### Motivation

As discussed in apple/swift-openapi-generator#622, some APIs, e.g., ChatGPT or Claude, may return a non-JSON byte sequence to terminate a stream of events. If not handled with a workaround (see below), such non-JSON terminating byte sequences cause a decoding error.

### Modifications

This PR adds the ability to customise the terminating byte sequence by providing a closure to `asDecodedServerSentEvents()` as well as `asDecodedServerSentEventsWithJSONData()` that can match incoming data against the terminating byte sequence before it is decoded into JSON, for instance.

### Result

Instead of having to decode and re-encode incoming events to filter out the terminating byte sequence, as seen in apple/swift-openapi-generator#622 (comment), terminating byte sequences can now be cleanly caught by either providing a closure or providing the terminating byte sequence directly when calling `asDecodedServerSentEvents()` and `asDecodedServerSentEventsWithJSONData()`.

### Test Plan

This PR includes unit tests that exercise the new function parameters as part of the existing tests for `asDecodedServerSentEvents()` as well as `asDecodedServerSentEventsWithJSONData()`.

Co-authored-by: Honza Dvorsky <[email protected]>
### Question

Hi!

I'm generating code with the OpenAI OpenAPI specification and am iterating over streamed responses from the `chat/completions` endpoint with `asDecodedServerSentEventsWithJSONData()`.

The problem is that the API returns `ChatCompletionStreamResponse` JSON payloads until the stream is done, which is denoted by a simple `[DONE]` payload. Because `[DONE]` isn't valid JSON, the last chunk of a successful stream will throw a `DecodingError.dataCorrupted()`.

Currently, I'm just handling it with an empty catch block to avoid the crash, but that seems like a hacky workaround. So, I was wondering whether there was a better solution.

The following ideas haven't been successful so far:

- Writing a `ClientMiddleware` to match incoming payloads for the `[DONE]` bytes and toss the payload if they're found.
- A schema that matches `[DONE]` and is connected with the `ChatCompletionStreamResponse` through an `anyOf`.

Do you have any suggestions?
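For reference, the failure is easy to reproduce with Foundation alone; `ChatChunk` below is an illustrative stand-in for the generated response type:

```swift
import Foundation

// Minimal reproduction of the reported failure: the non-JSON "[DONE]"
// sentinel makes JSONDecoder throw DecodingError.dataCorrupted.
struct ChatChunk: Decodable { let id: String }

var caughtDecodingError = false
do {
    _ = try JSONDecoder().decode(ChatChunk.self, from: Data("[DONE]".utf8))
} catch is DecodingError {
    caughtDecodingError = true
}
```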