Switch to long running tasks for the multiplexer #344
Conversation
We could probably have a single low-res TimerHandle that resets on each loop and has a 10s leeway to re-arm, so we ensure we cancel both read/write tasks within timeout+10s. That way we aren't churning timer handles. EDIT: with the task overhead gone, that is where all the time gets spent.
That test fails locally both before and after this change, so I think its intent is to overload the queue, but the code is too fast now, and my local machine was too fast even before the change. The queue size needs to be patched to be smaller so it can be overrun before it can be processed.
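To illustrate why shrinking the queue makes the overrun reproducible, here is a minimal sketch (generic illustration, not snitun's actual test code): with a small bound, non-blocking puts hit `QueueFull` before the consumer can drain anything, which is the condition the test needs to hit.

```python
import asyncio


def overruns(maxsize: int, packets: int) -> bool:
    """Return True if `packets` non-blocking puts overrun a queue of `maxsize`.

    With the CPU bottleneck fixed, a large queue drains too quickly to ever
    hit QueueFull, so the test must patch the size down to force an overrun.
    """
    queue: asyncio.Queue[int] = asyncio.Queue(maxsize=maxsize)
    for i in range(packets):
        try:
            queue.put_nowait(i)
        except asyncio.QueueFull:
            return True  # queue overran before it could be processed
    return False
```

Patching the queue to e.g. `maxsize=2` guarantees the abort path is exercised regardless of how fast the machine is.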
Maybe a RangedTimeout that does a callback on timeout.
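The idea above might look like the following sketch: a single low-resolution timer whose deadline is moved cheaply on activity, and which only re-arms the underlying `TimerHandle` once per leeway window instead of on every packet. The class name matches the comment, but the method names and structure here are assumptions, not snitun's actual API.

```python
import asyncio
from typing import Callable


class RangedTimeout:
    """Fire `callback` between `timeout` and `timeout + leeway` seconds after
    the last reschedule(), while avoiding TimerHandle churn.

    Hypothetical sketch: reschedule() only updates a float; the TimerHandle
    is re-armed at most once per leeway window. Must be created inside a
    running event loop.
    """

    def __init__(self, timeout: float, callback: Callable[[], None],
                 leeway: float = 10.0) -> None:
        self._timeout = timeout
        self._leeway = leeway
        self._callback = callback
        self._loop = asyncio.get_running_loop()
        self._deadline = self._loop.time() + timeout
        self._handle = self._loop.call_at(self._deadline + leeway, self._check)

    def reschedule(self) -> None:
        # Cheap per-packet path: just move the deadline, no timer churn.
        self._deadline = self._loop.time() + self._timeout

    def _check(self) -> None:
        if self._loop.time() < self._deadline:
            # Activity happened since we armed; re-arm once for the new window.
            self._handle = self._loop.call_at(
                self._deadline + self._leeway, self._check
            )
        else:
            self._callback()

    def cancel(self) -> None:
        self._handle.cancel()
```

The trade-off is precision for throughput: the timeout can fire up to `leeway` seconds late, which matches the "cancel both read/write tasks within timeout+10s" guarantee described above.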
Previously it only waited for the multiplexer to close but the task would still be finishing up. This became obvious after the assertion hit in NabuCasa#340
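The race described above can be shown with a generic sketch (not snitun's actual code): signalling "closed" is not the same as the worker task having finished, so anything that only waits on the close signal can observe the task mid-teardown. Awaiting the task itself is the reliable fix.

```python
import asyncio


async def close_and_wait() -> tuple[list[bool], list[bool]]:
    """Demonstrate that a close event firing does not mean the task is done."""
    closed = asyncio.Event()
    finished: list[bool] = []

    async def worker() -> None:
        await closed.wait()
        await asyncio.sleep(0)  # "finishing up" work after close is signalled
        finished.append(True)

    task = asyncio.create_task(worker())
    closed.set()
    after_close = list(finished)  # still empty: the task is not done yet
    await task                    # awaiting the task guarantees completion
    return after_close, finished
```

This is the class of bug the assertion in NabuCasa#340 surfaced: the multiplexer reported closed while its task was still finishing.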
This is functional, but test_multiplexer_data_channel_abort_full needs to be fixed: it no longer aborts because the performance bottleneck / CPU drain is fixed, so it never gets far enough behind for the failure to happen.
Testing TODO:
implement back pressure ("Implement pause and resume requests for a channel" #353)
download multi gig file
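A common shape for the back-pressure item in the TODO list above is watermark-based pause/resume: request a pause when the pending count crosses a high watermark and a resume once it drains below a low one. This is a hypothetical sketch under assumed names; #353 defines the real protocol.

```python
class ChannelFlowControl:
    """Watermark-based pause/resume for a multiplexer channel (sketch).

    `messages` records the control requests that would be sent to the peer;
    all names here are assumptions for illustration, not snitun's API.
    """

    def __init__(self, high: int = 64, low: int = 16) -> None:
        self._high = high
        self._low = low
        self._pending = 0
        self._paused = False
        self.messages: list[str] = []

    def data_queued(self) -> None:
        self._pending += 1
        if not self._paused and self._pending >= self._high:
            self._paused = True
            self.messages.append("pause")  # ask the peer to stop sending

    def data_drained(self) -> None:
        self._pending -= 1
        if self._paused and self._pending <= self._low:
            self._paused = False
            self.messages.append("resume")  # peer may send again
```

Using two watermarks instead of one avoids flapping: the channel does not oscillate between pause and resume on every packet near the threshold, which matters for the multi-gig download test case.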
Known issues
fix https://github.com/NabuCasa/snitun/pull/344/files#r1950170492
The server SNI listener has the same design, which is also likely to create many tasks on the server side:
snitun/snitun/server/listener_sni.py, line 139 in da7a663
This reduced the number of tasks being created while accessing the UI by ~96%.
RangedTimeout reduced the number of created TimerHandles by ~94% while accessing the UI.
Some considerations
While these changes are designed for the client side, this combination of changes is expected to significantly increase the number of active connections a server can handle at one time.
Instead of creating many small tasks, create two long-running ones for the reader/writer to avoid flooding the event loop with tasks when the connection is generating many packets.
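The long-running-task pattern can be sketched as follows: one task loops over a queue of packets instead of spawning a task per packet, and close() awaits the task so shutdown is complete when it returns. Class and method names here are illustrative assumptions, not snitun's actual implementation.

```python
import asyncio


class ChannelWorker:
    """One long-running reader task draining a queue (sketch).

    Replaces a create_task-per-packet design: the event loop schedules a
    single task once, no matter how many packets arrive.
    """

    def __init__(self) -> None:
        self._queue: asyncio.Queue[bytes | None] = asyncio.Queue()
        self.processed: list[bytes] = []
        self._task: asyncio.Task | None = None

    def start(self) -> None:
        # Must be called from within a running event loop.
        self._task = asyncio.create_task(self._reader())

    async def _reader(self) -> None:
        # Single task loops over packets; None is the shutdown sentinel.
        while (packet := await self._queue.get()) is not None:
            self.processed.append(packet)

    def feed(self, packet: bytes) -> None:
        self._queue.put_nowait(packet)

    async def close(self) -> None:
        self._queue.put_nowait(None)
        if self._task:
            await self._task  # wait for the task to actually finish (cf. #340)
```

Feeding a packet is then a cheap `put_nowait` on the hot path rather than a `create_task` call, which is where the ~96% task reduction reported above comes from.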
This is still a bit of a WIP, but should be deferred until after a release with #303 so they get separate release cycles