[#3090] Reduce Command Responses in Redis Connection Process (on_connect) #3268
Open
zeze1004 wants to merge 14 commits into redis:master from zeze1004:connect-using-pipeline
Conversation
zeze1004 force-pushed the connect-using-pipeline branch from 7d4f985 to e365726 on June 6, 2024 15:51
… and above, using MULTI command as a fallback
…e selection, and client caching
… and error handling
…ck if no exceptions occur.
…k where the MULTI command is executed.
…s (lines 461-468).
This update removes the condition check for connection retries, because the logic was refactored to execute the MULTI and EXEC commands within the same try block. This resolves the unintended reconnection attempts and ensures the transactional commands are handled properly.
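A rough sketch of the shape described above, assuming redis-py's low-level Connection helpers (send_command, read_response, disconnect); the helper name and structure are illustrative, not the PR's actual diff:

```python
# Illustrative only: MULTI ... EXEC issued and read inside one try block,
# so a failure anywhere in the transaction is handled in a single place
# instead of feeding the connection's retry/reconnect check.
def run_handshake_transaction(connection, commands):
    try:
        connection.send_command("MULTI")
        for cmd in commands:
            connection.send_command(*cmd)
        connection.send_command("EXEC")

        connection.read_response()            # +OK for MULTI
        for _ in commands:
            connection.read_response()        # +QUEUED for each queued command
        return connection.read_response()     # EXEC: one array of all replies
    except Exception:
        connection.disconnect()               # single failure path, no retry branch
        raise
```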
zeze1004 force-pushed the connect-using-pipeline branch from d7eb12f to c4c21d0 on June 13, 2024 12:25
Pull Request check-list
#3090
Please make sure to review and check all of these items:
NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.
Description of change
In this PR, I experimented with using the MULTI and EXEC commands to batch the multiple Redis commands sent during connection setup (on_connect) into a single request. My goal was to reduce the number of network round trips and improve overall performance by minimizing network latency.
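For illustration, here is the same batching idea expressed with redis-py's public pipeline API (the PR itself targets the lower-level on_connect handshake, so this is only an analogy; the host, port, and client name are placeholders):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# transaction=True wraps the queued commands in MULTI/EXEC, so several
# handshake-style commands travel to the server in a single request and
# come back as one combined reply.
with r.pipeline(transaction=True) as pipe:
    pipe.ping()
    pipe.client_setname("example-conn")
    results = pipe.execute()   # one round trip, one list of replies

print(results)                 # e.g. [True, True]
```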
Observations
After running the integration tests, I noticed that while the number of network requests decreased, the total execution time actually increased. Here’s what I found:
Increased BufferedReader Time
When we send multiple commands in one MULTI/EXEC block, the server responds with all the results in one go. Reading this large combined response takes longer, which increased the time spent in the BufferedReader.
Command Packing Overhead
Packing multiple commands into a single MULTI request requires additional processing to format the data correctly. This added some overhead to the command preparation phase.
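As a rough sketch of where that overhead comes from (a simplified RESP encoder, not redis-py's actual implementation, which also handles binary-safe values, encodings, and buffering):

```python
# Simplified RESP encoding: every command becomes *<argc> followed by a
# $<len>/value pair per argument, and the whole MULTI block is joined into
# one payload before it is written to the socket.
def encode_command(*args):
    parts = [f"*{len(args)}\r\n"]
    for arg in args:
        s = str(arg)
        parts.append(f"${len(s)}\r\n{s}\r\n")
    return "".join(parts).encode()

payload = b"".join([
    encode_command("MULTI"),
    encode_command("CLIENT", "SETNAME", "example-conn"),
    encode_command("SELECT", "0"),
    encode_command("EXEC"),
])
# The extra string building and joining is the packing overhead noted above.
```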
Complex Response Parsing
Parsing the combined response from EXEC also turned out to be more complex and time-consuming. Each individual command’s result had to be handled separately from the large, single response, which added to the total processing time.
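A small sketch of that unpacking step (read_response stands in for the connection's reply reader; the function name and return shape are hypothetical, not redis-py internals):

```python
# Illustrative only: the client first drains the +OK/+QUEUED acknowledgements,
# then receives EXEC's single array and has to re-associate each entry with
# the command that produced it.
def split_exec_reply(read_response, commands):
    read_response()                      # +OK for MULTI
    for _ in commands:
        read_response()                  # +QUEUED per queued command
    replies = read_response()            # EXEC: one array holding every result
    if replies is None:
        raise RuntimeError("transaction was aborted")
    return list(zip(commands, replies))  # map each reply back to its command
```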
Test Result
Here are the integration test results comparing the original logic and the modified logic:
[ Measuring the time for 1000 Redis connections ]
Conclusion
While the idea was to reduce network latency by batching commands, the extra time taken to read and parse the larger combined response offset those gains. In our case, the increased local processing outweighed the benefit of making fewer network requests.
I’d love to get your feedback on this. Do you think there are other optimizations we should consider, or is there something I might have missed? Any insights would be greatly appreciated! cc. @chayim 🙇🏻
Thanks!