What type of bug is this?
Unexpected error
What subsystems and features are affected?
Command processing, Continuous aggregate, Query executor
What happened?
My project requires me to aggregate HTTP response times into graphs at various resolutions. To do so, I have hierarchical continuous aggregates using averages and maximums, and I am currently trying to implement percentile aggregates (p95 and p99) with TDigest, since it seems like the most applicable algorithm for my use case.
I found that a series of aggregates using rollup() on the child aggregates fits my needs, but I cannot get it to work for one simple reason: it seems that if rollup() receives only NULL values (as long as there is more than one), a fatal error is thrown:
ERROR: called `Option::unwrap()` on a `None` value
The biggest issue is that even if I wrap the function call in a CASE WHEN column IS NOT NULL expression, rollup() is still evaluated and still fails.
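Note that in SQL an aggregate placed inside a CASE expression is still computed over the whole group; the CASE only decides whether the already-computed result is returned, so it cannot prevent rollup() from running. Purely as an illustrative, untested sketch (it assumes the toolkit's tdigest type and rollup() aggregate, and is not taken from my actual schema), this standalone query mimics a group that contains only NULL digests:
-- Hypothetical minimal illustration of the failing pattern:
-- rollup() over a group made up exclusively of NULL tdigest values.
SELECT rollup(digest)
FROM (VALUES (NULL::tdigest), (NULL::tdigest)) AS t(digest);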
This issue is here both to report a potential problem and to ask for assistance with my use case.
I know the snippet I included might be very convoluted; sorry in advance, and I am happy to provide any explanations you need.
TimescaleDB version affected
2.18
PostgreSQL version used
17.2
What operating system did you use?
Ubuntu 17.2.1 x64
What installation method did you use?
Docker
What platform did you run on?
On prem/Self-hosted
Relevant log output and stack trace
psql:/docker-entrypoint-initdb.d/init.sql:887: ERROR: called `Option::unwrap()` on a `None` value
CONTEXT: SQL statement "INSERT INTO _timescaledb_internal._materialized_hypertable_30 SELECT * FROM _timescaledb_internal._partial_view_30 AS I WHERE I.ts_start >= $1 AND I.ts_start < $2 ;"
2025-03-12 16:12:33.551 UTC [76] ERROR: called `Option::unwrap()` on a `None` value
2025-03-12 16:12:33.551 UTC [76] CONTEXT: SQL statement "INSERT INTO _timescaledb_internal._materialized_hypertable_30 SELECT * FROM _timescaledb_internal._partial_view_30 AS I WHERE I.ts_start >= $1 AND I.ts_start < $2 ;"
2025-03-12 16:12:33.551 UTC [76] STATEMENT: CREATE MATERIALIZED VIEW checks_tdigest_15m WITH (timescaledb.continuous) AS SELECT time_bucket('15 minutes', ts_start) AS ts_start, check_id, SUM(num_checks) AS num_checks, CASE WHEN bool_or(response_time IS NULL) THEN NULL ELSE rollup(response_time) END AS response_time, CASE WHEN bool_or(ttfb IS NULL) THEN NULL ELSE rollup(ttfb) END AS ttfb, CASE WHEN bool_or(dns IS NULL) THEN NULL ELSE rollup(dns) END AS dns, MAX(max_response_time) AS max_response_time, MAX(max_ttfb) AS max_ttfb, MAX(max_dns) AS max_dns, MIN(min_response_time) AS min_response_time, MIN(min_ttfb) AS min_ttfb, MIN(min_dns) AS min_dns, SUM(downtime)::FLOAT AS downtime, SUM(incidents)::INTEGER AS incidents, last(status, ts_start) AS status FROM checks_tdigest_5m GROUP BY 1, check_id;
How can we reproduce the bug?
-- Here is a tweaked, barebones version of my init.sql file for my mock database. Since the values are random, I can't pinpoint exactly which rows make it fail; if it works on the first try, you might have to reset the container and try again.
-- Enable TimescaleDB extension
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS timescaledb_toolkit;
-- Create the 'checks' table
CREATE TABLE checks (
timestamp TIMESTAMPTZ NOT NULL,
check_id UUID NOT NULL,
response_time INT DEFAULT NULL,
ttfb INT DEFAULT NULL,
dns INT DEFAULT NULL
);
-- Convert 'checks' table into a hypertable
SELECT create_hypertable('checks', 'timestamp', if_not_exists => TRUE);
-- Insert mock data into the 'checks' table
DO $$
DECLARE
uuid_list UUID[];
base_time TIMESTAMPTZ := '2024-12-31 23:59:00';
i INT;
j INT;
random_value FLOAT;
BEGIN
-- Generate a list of UUIDs for checks
SELECT ARRAY_AGG(gen_random_uuid()) INTO uuid_list
FROM generate_series(1, 5);
FOR i IN 1..5 LOOP
-- Populate mock data with a 1 in 2 chance for an error result
FOR j IN 1..10000 LOOP
-- Generate a random value once per iteration
random_value := random();
INSERT INTO checks (timestamp, check_id, response_time, ttfb, dns)
VALUES (
base_time + (j * interval '1 minutes'),
uuid_list[i],
CASE
WHEN random_value < 0.5 THEN NULL
ELSE (random() * 100)
END,
CASE
WHEN random_value < 0.5 THEN NULL
ELSE (random() * 40)
END,
CASE
WHEN random_value < 0.5 THEN NULL
ELSE (random() * 20)
END
);
END LOOP;
END LOOP;
END $$;
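-- A quick sanity check I would add here (illustrative only, not part of my
-- original init.sql) to confirm that roughly half of the inserted rows carry
-- NULL metrics:
SELECT count(*) AS total_rows,
       count(*) FILTER (WHERE response_time IS NULL) AS null_response_time
FROM checks;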
-- tdigest
-- Create a 1-minute tdigest continuous aggregate
CREATE MATERIALIZED VIEW checks_tdigest_1m
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 minute', timestamp) AS ts_start,
check_id,
tdigest(100, response_time) AS response_time,
tdigest(100, ttfb) AS ttfb,
tdigest(100, dns) AS dns
FROM checks
GROUP BY 1, check_id;
SELECT add_continuous_aggregate_policy('checks_tdigest_1m',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute');
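-- Illustrative diagnostic (not part of my original script): list the 1-minute
-- buckets whose digest column appears to end up NULL, i.e. the values that the
-- 5-minute rollup() later receives.
SELECT ts_start, check_id
FROM checks_tdigest_1m
WHERE response_time IS NULL
ORDER BY ts_start
LIMIT 20;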
-- Create a 5-minute tdigest continuous aggregate
-- This one includes checks for whether the fields are NULL, but the rollups are still evaluated regardless
CREATE MATERIALIZED VIEW checks_tdigest_5m
WITH (timescaledb.continuous) AS
SELECT time_bucket('5 minutes', ts_start) AS ts_start,
check_id,
CASE WHEN bool_or(response_time IS NOT NULL) THEN rollup(response_time) ELSE NULL END AS response_time,
CASE WHEN bool_or(ttfb IS NOT NULL) THEN rollup(ttfb) ELSE NULL END AS ttfb,
CASE WHEN bool_or(dns IS NOT NULL) THEN rollup(dns) ELSE NULL END AS dns
FROM checks_tdigest_1m
GROUP BY 1, check_id;
SELECT add_continuous_aggregate_policy('checks_tdigest_5m',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '5 minutes',
schedule_interval => INTERVAL '1 minutes');
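-- Possible workaround I considered (left commented out because it is untested,
-- and it assumes a FILTER clause is accepted inside a continuous aggregate on
-- this TimescaleDB version): skip NULL digests explicitly so rollup() never
-- receives them. The view name below is only a placeholder.
-- CREATE MATERIALIZED VIEW checks_tdigest_5m_filtered
-- WITH (timescaledb.continuous) AS
-- SELECT time_bucket('5 minutes', ts_start) AS ts_start,
--        check_id,
--        rollup(response_time) FILTER (WHERE response_time IS NOT NULL) AS response_time,
--        rollup(ttfb) FILTER (WHERE ttfb IS NOT NULL) AS ttfb,
--        rollup(dns) FILTER (WHERE dns IS NOT NULL) AS dns
-- FROM checks_tdigest_1m
-- GROUP BY 1, check_id;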
-- Create a 15-minute tdigest continuous aggregate
CREATE MATERIALIZED VIEW checks_tdigest_15m
WITH (timescaledb.continuous) AS
SELECT time_bucket('15 minutes', ts_start) AS ts_start,
check_id,
rollup(response_time) AS response_time,
rollup(ttfb) AS ttfb,
rollup(dns) AS dns
FROM checks_tdigest_5m
GROUP BY 1, check_id;
SELECT add_continuous_aggregate_policy('checks_tdigest_15m',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '15 minutes',
schedule_interval => INTERVAL '5 minutes');
-- Create a 30-minute tdigest continuous aggregate
CREATE MATERIALIZED VIEW checks_tdigest_30m
WITH (timescaledb.continuous) AS
SELECT time_bucket('30 minutes', ts_start) AS ts_start,
check_id,
rollup(response_time) AS response_time,
rollup(ttfb) AS ttfb,
rollup(dns) AS dns
FROM checks_tdigest_15m
GROUP BY 1, check_id;
SELECT add_continuous_aggregate_policy('checks_tdigest_30m',
start_offset => INTERVAL '1 day',
end_offset => INTERVAL '30 minutes',
schedule_interval => INTERVAL '5 minutes');
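-- For context, something along these lines is how I intend to read p95/p99
-- back out of the coarser aggregates (illustrative query, not part of my
-- original script; approx_percentile is the toolkit accessor for tdigest):
SELECT ts_start,
       check_id,
       approx_percentile(0.95, response_time) AS response_time_p95,
       approx_percentile(0.99, response_time) AS response_time_p99
FROM checks_tdigest_30m
WHERE response_time IS NOT NULL
ORDER BY ts_start;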