Fix memory leak when checking player list setting #261
While investigating #258, I used Valgrind to find a memory leak related to the `PLAYER_LIST` message. Valgrind output after server setup completed and the server then ran for ~30 seconds:
More specifically, when we periodically reload the `playerListSecret` setting, `playerListSecret` would be overwritten before being deleted: OneLife/server/server.cpp, line 13245 in 330d897.
While the main functionality was introduced in #207, this memory leak was introduced shortly after by #212.
To fix this, I refactored the code to read the setting only when we receive a `PLAYER_LIST` message, and to free it soon after. I also removed a reference to the secret that was used to decide how long to wait before killing the socket, which I deemed unnecessary. This implementation is similar to how other secret settings are already fetched (close to where they are used): OneLife/server/server.cpp, lines 10812 to 10814 in 330d897.
I compiled the server before and after my fix and observed Valgrind output in both cases. With the refactor applied, no memory leaks are detected. I have also tested the continued functionality of the `PLAYER_LIST` message with the refactor applied: Dictator was able to continue communicating without any changes or issues.

Note that the current production server is using ~4 GB of swap (detailed in #258), which I suspect is due to this issue. It's possible this is the sole cause of #258, but further monitoring will be required.