
Message batching for performance #270

Open
bemasc opened this issue May 14, 2015 · 0 comments
bemasc commented May 14, 2015

Initial implementation in:
#269

Time-constant-free message batching appears to require adding 'ack' messages, so that we can tell when too many queued messages are outstanding and wait for some of them to clear before adding more. Under very high CPU load, we expect a reduction in the total number of messages sent, because slow acks will cause queued messages to coalesce into batches.
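A minimal sketch of that idea, assuming a simple ack-per-batch protocol (the names `BatchingSender`, `maxOutstanding`, etc. are illustrative, not from the uproxy code):

```typescript
type SendFn = (batch: string[]) => void;

// Ack-windowed batching: at most maxOutstanding unacked batches are in
// flight; messages that arrive while the window is full coalesce into
// one pending batch, which is flushed when an ack frees a slot.
class BatchingSender {
  private pending: string[] = [];  // messages waiting for a free window slot
  private outstanding = 0;         // batches sent but not yet acked

  constructor(private send: SendFn, private maxOutstanding: number) {}

  // Enqueue a message; it goes out immediately if the window allows,
  // otherwise it waits and batches with later messages.
  enqueue(msg: string): void {
    this.pending.push(msg);
    this.flush();
  }

  // Called when the remote side acknowledges one batch. Slow acks mean
  // the pending queue grows, so batches naturally get larger under load.
  onAck(): void {
    this.outstanding--;
    this.flush();
  }

  private flush(): void {
    if (this.pending.length > 0 && this.outstanding < this.maxOutstanding) {
      this.send(this.pending);  // one batch consumes one window slot
      this.pending = [];
      this.outstanding++;
    }
  }
}
```

With `maxOutstanding = 1`, every message after the first waits for an ack, which may explain the throughput loss observed below; a window of 3 keeps the pipe fuller while still bounding the queue.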

Curiously, the initial implementation shows reduced throughput in uproxy-churn when only a single outstanding message is allowed, but allowing up to 3 outstanding messages yields a small improvement (~10%) over baseline. More investigation is warranted to confirm that the implementation is behaving as expected.

@bemasc bemasc self-assigned this May 14, 2015