chore: coalesce tail entries with spare capacity during reclamation #345
Conversation
Regression Detector (DogStatsD)
Regression Detector Results
Run ID: 2529b8f1-d2fd-4973-a7aa-3121357abd16
Baseline: 7.59.0
Optimization Goals: ✅ No significant changes detected

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | +0.07 | [-0.10, +0.23] | 1 | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | +0.02 | [-0.00, +0.04] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | +0.01 | [-0.04, +0.05] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | |
| ➖ | quality_gates_idle_rss | memory utilization | -1.41 | [-1.54, -1.29] | 1 | |
Bounds Checks: ❌ Failed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ❌ | quality_gates_idle_rss | memory_usage | 0/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a minimal sketch of this rule follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
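Here is that rule as a minimal Rust sketch; the type and function names are hypothetical and not taken from the actual Regression Detector code.

```rust
// Hypothetical sketch of the regression decision rule described above; the
// names and types are illustrative, not the Regression Detector's actual code.
struct ExperimentResult {
    delta_mean_pct: f64, // estimated Δ mean %
    ci_low_pct: f64,     // lower bound of the 90.00% CI on Δ mean %
    ci_high_pct: f64,    // upper bound of the 90.00% CI on Δ mean %
    erratic: bool,       // whether the experiment is marked "erratic"
}

/// A change is flagged as a regression only if the effect size clears the
/// 5.00% tolerance, the confidence interval excludes zero, and the experiment
/// is not marked "erratic".
fn is_regression(result: &ExperimentResult) -> bool {
    let big_enough = result.delta_mean_pct.abs() >= 5.0;
    let ci_excludes_zero = result.ci_low_pct > 0.0 || result.ci_high_pct < 0.0;
    big_enough && ci_excludes_zero && !result.erratic
}
```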
Regression Detector (Saluki)
Regression Detector Results
Run ID: d4ddd958-4595-44d9-8379-19d8af1409c6
Baseline: 091cb09
Optimization Goals: ❌ Significant changes detected

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | quality_gates_idle_rss | memory utilization | +5.75 | [+5.31, +6.20] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +2.90 | [+0.66, +5.14] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | +2.37 | [+2.00, +2.73] | 1 | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | +1.11 | [+1.02, +1.19] | 1 | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining_no_allocs | ingress throughput | +0.03 | [-0.02, +0.08] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | +0.00 | [-0.05, +0.06] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | -0.00 | [-0.05, +0.04] | 1 | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | -0.01 | [-0.04, +0.02] | 1 | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining | ingress throughput | -0.01 | [-0.08, +0.06] | 1 | |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_idle_rss | memory_usage | 10/10 | |
Regression Detector Links
Experiment Result Links
Force-pushed from 2b1fba4 to 9b029a2
Context
In `GenericMapInterner` (and `FixedSizeInterner`, as well), we support the reclamation of interned strings which are no longer referenced. This is achieved by storing markers -- `ReclaimedEntry` -- that denote the holes in the backing buffer where strings can be stored, before needing to fall back to simply writing into the "spare capacity" of the backing buffer.

Checking for reclaimed entries that can fit a string is, naturally, a little slower than just doing the equivalent of bumping a pointer. Storing reclaimed entries also requires allocations for the container that holds them. Overall, it's useful to avoid storing reclaimed entries when possible. One such case is when we're about to reclaim a "tail" entry that is adjacent to the spare capacity in the buffer.
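For illustration, here is a minimal Rust sketch of the bookkeeping described above. The `ReclaimedEntry` name comes from this PR, but the fields, the `InternerState` type, and `find_slot` are hypothetical and are not the actual `GenericMapInterner` internals.

```rust
// Hypothetical sketch of the reclamation bookkeeping described above; apart
// from the ReclaimedEntry name, all fields and methods here are illustrative.
struct ReclaimedEntry {
    offset: usize, // where the hole starts in the backing buffer
    len: usize,    // how many bytes the hole spans
}

struct InternerState {
    buf: Box<[u8]>,                 // fixed-size backing buffer
    len: usize,                     // everything past `len` is spare capacity
    reclaimed: Vec<ReclaimedEntry>, // holes left behind by dropped strings
}

impl InternerState {
    /// Prefer filling a reclaimed hole that can fit `needed` bytes; otherwise
    /// fall back to bumping `len` into the spare capacity at the end of the
    /// buffer. (Splitting off the unused remainder of a hole is omitted.)
    fn find_slot(&mut self, needed: usize) -> Option<usize> {
        if let Some(idx) = self.reclaimed.iter().position(|e| e.len >= needed) {
            return Some(self.reclaimed.swap_remove(idx).offset);
        }
        if self.buf.len() - self.len >= needed {
            let offset = self.len;
            self.len += needed;
            return Some(offset);
        }
        None // no reclaimed hole fits and the spare capacity is exhausted
    }
}
```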
Imagine a backing buffer split into three equal parts, where parts one and two are in use, and part three is free (spare capacity). When part two is reclaimed, we could add a reclaimed entry for it... or we could simply coalesce it with part three and adjust our state so parts two and three are merged together: you could imagine this as simply adjusting the "offset" field used to indicate the point in the buffer where spare capacity starts.
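As a concrete walk-through of that three-part scenario, with hypothetical numbers (a 3072-byte buffer split into 1024-byte parts):

```rust
fn main() {
    // Hypothetical numbers: a 3072-byte buffer where parts one (0..1024) and
    // two (1024..2048) are in use and part three (2048..3072) is spare
    // capacity, so spare capacity starts at offset 2048.
    let buffer_capacity: usize = 3072;
    let mut spare_capacity_start: usize = 2048;

    // Before coalescing, only part three (1024 bytes) is spare capacity.
    assert_eq!(buffer_capacity - spare_capacity_start, 1024);

    // Part two is dropped. It ends exactly where the spare capacity begins,
    // so instead of recording a reclaimed entry for bytes 1024..2048, we
    // coalesce it by moving the start of the spare capacity back:
    spare_capacity_start = 1024;

    // Parts two and three now form one contiguous 2048-byte run of spare
    // capacity, and no reclaimed entry had to be stored at all.
    assert_eq!(buffer_capacity - spare_capacity_start, 2048);
}
```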
This reduces the size of the reclaimed entries list, which helps keep it from growing increasingly large over time, and it also lets us fold reclaimed entries into the spare capacity: today we merge adjacent reclaimed entries with one another, but we don't merge them with the spare capacity when possible, which means we're not maximizing how much contiguous available capacity we have.
Solution
This PR implements support for coalescing tail entries with adjacent spare capacity. The code to do this is simple, since we only need to check if a merged reclaimed entry is adjacent to the spare capacity and, if so, remove the reclaimed entry and adjust `self.len`.

Most of the code changed was related to the tests themselves, as they were written around the invariant that every dropped string generates a reclaimed entry which would be available the next time an intern operation occurred.
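A minimal sketch of that check, reusing the hypothetical `InternerState` and `ReclaimedEntry` types from the Context sketch above (this is not the actual diff in this PR):

```rust
impl InternerState {
    /// Called once a dropped string has been merged with any adjacent
    /// reclaimed entries. If the merged entry ends exactly where the spare
    /// capacity begins, fold it into the spare capacity by pulling `len`
    /// back; otherwise, store it as a reclaimed entry as before.
    fn reclaim(&mut self, merged: ReclaimedEntry) {
        if merged.offset + merged.len == self.len {
            // Tail entry: everything from `merged.offset` onward is now
            // spare capacity, so no reclaimed entry needs to be stored.
            self.len = merged.offset;
        } else {
            self.reclaimed.push(merged);
        }
    }
}
```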
Notes
It's very unclear whether this will have any meaningful performance benefit or show up in benchmarks. I'm doing it because it's 1) possible and 2) the optimal thing to do.