
Feat: Get chunks of response from HTTP client rather than waiting for all data to arrive before calling callback #2251

ruslan-ilesik opened this issue Feb 6, 2025 · 0 comments

Notice
I am working on a project that requires using the OpenAI API, specifically GPT. There is a specific type of response from them called streams:
https://platform.openai.com/docs/api-reference/chat/create (streaming section).

The point is that the Drogon HTTP client accumulates the entire response until all data has been received, with no way to read chunks of data as they arrive. It would be nice if this were possible, for example via an additional callback.
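To illustrate why chunk-level delivery matters for this use case: OpenAI streaming responses are server-sent events, where each event is a `data:` line terminated by a blank line, and events can be split arbitrarily across network chunks. Below is a minimal sketch of an incremental SSE parser that such a chunk callback could feed; all names here are illustrative, not part of Drogon or the OpenAI SDK.

```cpp
#include <string>
#include <vector>

// Minimal incremental parser for OpenAI-style server-sent events (SSE).
// Network chunks may split an event anywhere, so bytes are buffered and
// complete "data:" payloads are emitted as soon as they are available.
class SseParser {
public:
    // Feed one network chunk; returns the data payloads it completed.
    std::vector<std::string> feed(const std::string &chunk) {
        buffer_ += chunk;
        std::vector<std::string> events;
        size_t pos;
        // SSE events are separated by a blank line ("\n\n").
        while ((pos = buffer_.find("\n\n")) != std::string::npos) {
            std::string event = buffer_.substr(0, pos);
            buffer_.erase(0, pos + 2);
            const std::string prefix = "data: ";
            if (event.compare(0, prefix.size(), prefix) == 0)
                events.push_back(event.substr(prefix.size()));
        }
        return events;
    }

private:
    std::string buffer_;  // bytes received but not yet forming a full event
};
```

With a buffering client this parsing can only start after the whole stream has finished, which defeats the purpose of streaming; with a chunk callback, each `feed` call can run as soon as data arrives on the socket.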

Is your feature request related to a problem? Please describe.
Yes. My problem requires extending the HTTP client so it can deliver chunks of the response as they arrive, not only the complete response once it has all been sent back.

Describe the solution you'd like
A new type of callback that delivers chunks of data as they arrive in the response, rather than accumulating all of it until the request is done.
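One possible shape for such a callback is sketched below. This is purely hypothetical (neither the type alias nor the driver function exists in Drogon); the stand-in `deliverChunks` only shows how a client's receive loop would drive the callback as body data arrives.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Hypothetical shape of the requested API (not part of Drogon today):
// a callback invoked once per received body chunk, with a final flag
// so the caller knows when the response is complete.
using ChunkCallback =
    std::function<void(const char *data, std::size_t len, bool isFinal)>;

// Stand-in for the client's receive loop, used here only to show how
// such a callback would be invoked as data arrives on the socket.
void deliverChunks(const std::vector<std::string> &chunks,
                   const ChunkCallback &onChunk)
{
    for (std::size_t i = 0; i < chunks.size(); ++i)
        onChunk(chunks[i].data(), chunks[i].size(),
                i + 1 == chunks.size());
}
```

The existing whole-response callback could remain unchanged, with the chunk callback offered as an optional extra parameter, so current users are unaffected.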

Describe alternatives you've considered
The alternative I have considered is using a different HTTP library, for example libcurl, but it would be nice to have this feature inside Drogon itself.
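For comparison, libcurl already delivers response bodies incrementally through `CURLOPT_WRITEFUNCTION`: the registered callback is invoked once per received block and must return the number of bytes it handled (`size * nmemb`). The callback below follows that documented signature; in a real program it would be registered with `curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, onWrite)`, but to keep this sketch self-contained it is shown (and can be exercised) without performing an actual transfer.

```cpp
#include <cstddef>
#include <string>

// Write callback matching libcurl's CURLOPT_WRITEFUNCTION contract.
// libcurl calls it once per received block of body data; returning
// anything other than size * nmemb aborts the transfer.
std::size_t onWrite(char *ptr, std::size_t size, std::size_t nmemb,
                    void *userdata)
{
    auto *sink = static_cast<std::string *>(userdata);
    sink->append(ptr, size * nmemb);  // process each chunk as it arrives
    return size * nmemb;              // tell libcurl all bytes were consumed
}
```

An equivalent per-chunk hook in Drogon would make switching libraries unnecessary for streaming use cases like the OpenAI API.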

