[APR-253] enhancement: track bounds expressions and generate basic sizing guide #422
Conversation
Regression Detector (DogStatsD)

Regression Detector Results

Run ID: b6eaf5fa-3b72-43f8-a45e-65f1a161e2fb
Baseline: 7.63.0-rc.2
Optimization Goals: ✅ No significant changes detected

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gates_idle_rss | memory utilization | +2.85 | [+2.74, +2.95] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | +0.22 | [+0.07, +0.38] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | +0.00 | [-0.04, +0.04] | 1 | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts_dualship | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_40mb_12k_contexts_40_senders | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | -0.54 | [-0.68, -0.41] | 1 | |
Bounds Checks: ❌ Failed

| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ❌ | quality_gates_idle_rss | memory_usage | 0/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
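The three criteria above can be sketched as a small predicate. This is a hypothetical illustration (the struct and function names are invented, not the actual detector code):

```rust
/// Hypothetical sketch of the regression decision described above.
/// One experiment's result: estimated mean delta (%) and its 90% CI bounds.
struct ExperimentResult {
    delta_mean_pct: f64,
    ci_low_pct: f64,
    ci_high_pct: f64,
    erratic: bool,
}

const EFFECT_SIZE_TOLERANCE_PCT: f64 = 5.0;

/// A change is flagged as a regression only if all three criteria hold:
/// the effect size meets the tolerance, the CI excludes zero, and the
/// experiment is not marked "erratic".
fn is_regression(r: &ExperimentResult) -> bool {
    let big_enough = r.delta_mean_pct.abs() >= EFFECT_SIZE_TOLERANCE_PCT;
    let ci_excludes_zero = r.ci_low_pct > 0.0 || r.ci_high_pct < 0.0;
    big_enough && ci_excludes_zero && !r.erratic
}

fn main() {
    // quality_gates_idle_rss from the table above: +2.85 [+2.74, +2.95].
    // The CI excludes zero, but the effect is below the 5% tolerance,
    // so it is not flagged as a regression.
    let r = ExperimentResult {
        delta_mean_pct: 2.85,
        ci_low_pct: 2.74,
        ci_high_pct: 2.95,
        erratic: false,
    };
    println!("{}", is_regression(&r)); // false
}
```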
```rust
.with_expr(UsageExpr::product(
    "buffers",
    UsageExpr::config("dogstatsd_buffer_count", self.buffer_count),
    UsageExpr::config("dogstatsd_buffer_size", get_adjusted_buffer_size(self.buffer_size)),
))
```
Not a criticism of what you have, but this sort of example usage really drives home that it feels like we may eventually want to do something to spruce this up and make it more foolproof.
Like I don't want to just proc macro-ify everything for the sake of writing proc macros, but this is definitely the sort of thing that could lend itself to using macros to help drive automatically, perhaps.
Random thought just to get it down on paper: maybe a proc macro/declarative helper macro pair that just generates methods based on a configuration type, where then instead of what is in place here, it would let you do something like:
```rust
#[derive(Computed)]
pub struct DogStatsDConfiguration {
    ...
}

// Calculating the bounds further down:
builder.minimum().with_expr(computed! { "buffers" => config(buffer_count) * config(buffer_size) });
```
which would desugar to something like:
```rust
impl DogStatsDConfiguration {
    fn _computed_buffer_count(config: &DogStatsDConfiguration) -> UsageExpr {
        UsageExpr::config("dogstatsd_buffer_count", config.buffer_count)
    }

    fn _computed_buffer_size(config: &DogStatsDConfiguration) -> UsageExpr {
        UsageExpr::config("dogstatsd_buffer_size", config.buffer_size)
    }
}

...

builder.minimum().with_computed("buffers", UsageExpr::product(
    "buffers",
    DogStatsDConfiguration::_computed_buffer_count(self),
    DogStatsDConfiguration::_computed_buffer_size(self),
));
```
Like obviously it's not perfect, and a little kludgy under the hood, but if it could help developers maybe... then I dunno, maybe worth some pain? Anyways, just scribbling out the idea. :P
Yeah, I definitely agree it could be nice to make this more seamless and that's clashing with my natural inclination to avoid proc macros 😅
I do think configuration values are the lowest hanging fruit for something like this, since we very much want to match exactly the actual configuration key, and that's already part of things like `serde(rename = "...")`, etc. I'm hesitant to go full `configurable_component` with it, but something in that direction would very likely make sense.
Yeah, I agree. It would have to be a delicate balance of extracting the information that's already there (like the user-visible configuration setting name via `#[serde(rename = "...")]`) vs requiring additional input beyond the initial `#[derive(...)]` to properly drive the calculations.
Overall, I like this approach a lot, and I think it lays a lot of the groundwork for further investment/improvements in UX down the road. 👍🏻
Signed-off-by: Luke Steensen <[email protected]>
Regression Detector Links

Experiment Result Links
The larger change here is to the way that we track memory bounds expressions. Rather than simply adding everything up immediately, we now keep track of the structure of the expressions themselves, noting which are config values, struct sizes, etc. This allows us to export those calculations to build tools related to the memory bounds.
The second part is less invasive: it adds a flag that, when set, has ADP write out an HTML file with the memory bounds calculations embedded as JSON data, along with some functionality to evaluate those expressions against varying config values. It also includes some very basic heuristics to generate config changes from a workload spec (currently just metric cardinality).
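The expression-tracking idea can be sketched as a small expression tree. This is a hypothetical illustration of the approach, not the PR's actual `UsageExpr` type (the variant names and shapes here are invented):

```rust
/// Hypothetical sketch of tracking memory-bounds expressions as a tree,
/// noting which terms are config values, struct sizes, etc., instead of
/// eagerly summing everything into a single number.
#[derive(Debug)]
enum Expr {
    /// A named configuration value, e.g. dogstatsd_buffer_count.
    Config { key: &'static str, value: usize },
    /// A fixed struct size, in bytes.
    StructSize { name: &'static str, bytes: usize },
    /// The product of two subexpressions.
    Product(Box<Expr>, Box<Expr>),
    /// The sum of two subexpressions.
    Sum(Box<Expr>, Box<Expr>),
}

impl Expr {
    /// Deferring evaluation lets the same tree be re-run with different
    /// config values, or serialized (e.g. as JSON) for sizing tools.
    fn evaluate(&self) -> usize {
        match self {
            Expr::Config { value, .. } => *value,
            Expr::StructSize { bytes, .. } => *bytes,
            Expr::Product(a, b) => a.evaluate() * b.evaluate(),
            Expr::Sum(a, b) => a.evaluate() + b.evaluate(),
        }
    }
}

fn main() {
    // buffers = buffer_count * buffer_size
    let buffers = Expr::Product(
        Box::new(Expr::Config { key: "dogstatsd_buffer_count", value: 128 }),
        Box::new(Expr::Config { key: "dogstatsd_buffer_size", value: 8192 }),
    );
    println!("{}", buffers.evaluate()); // 128 * 8192 = 1048576
}
```

The structural representation is what makes the HTML/JSON export possible: the tree can be walked to show which config keys contribute to which bound, rather than reporting only a final byte count.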