LIMIT is not pushed down #233
I'll note that on other tables in this database the CTID scan is used, but it seems like it still copies everything over, maybe just in parallel.
If I enable filter pushdown and push down a filter that can hit an index and returns no results, the query is fast. But I would have thought the scanner should be able to push down a LIMIT when there are no aggregations.
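For context, filter pushdown in the Postgres scanner is toggled with a setting; the name below (pg_experimental_filter_pushdown) is recalled from the extension's documentation and may differ between versions, and the id column is purely hypothetical:

-- assumption: setting name as documented for the DuckDB Postgres extension
SET pg_experimental_filter_pushdown = true;
-- with pushdown enabled, a filter that can use an index on the Postgres side returns quickly
SELECT * FROM pg.very_large_table WHERE id = 42;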
Thanks for reporting! This is currently an expected restriction: LIMIT is not pushed down into Postgres yet. As a workaround you can use select * from postgres_query('pg', 'select * from very_large_table limit 1');
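A minimal sketch of the two forms side by side, assuming the Postgres database is attached as pg (both identifiers are taken from the report):

-- LIMIT stays on the DuckDB side; the whole table is still transferred
SELECT * FROM pg.very_large_table LIMIT 1;
-- workaround: hand the full query text to Postgres via postgres_query
SELECT * FROM postgres_query('pg', 'SELECT * FROM very_large_table LIMIT 1');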
Thanks for clarifying, that makes sense 😄. I guess I'll have to do something like that for now. Feel free to close this issue if there's no plan to implement this, or leave it open to track it if you think it's a valid feature request.
What happens?
If I do
SELECT * FROM pg.very_large_table LIMIT 1;
the query issued does not include a LIMIT, and thus the entire table is copied.
To Reproduce
I don't see any pagination happening either.
My network usage goes through the roof as well (indicating lots of data is being copied).
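A hedged reproduction sketch; the connection string is a placeholder, while pg and very_large_table come from the report above:

-- assumed setup: install and load the Postgres scanner, then attach the database as pg
INSTALL postgres;
LOAD postgres;
ATTACH 'host=localhost dbname=mydb user=postgres' AS pg (TYPE POSTGRES);
-- the LIMIT is applied by DuckDB after the scan, so the whole table is transferred
SELECT * FROM pg.very_large_table LIMIT 1;
-- EXPLAIN shows the LIMIT operator sitting above the Postgres scan rather than inside the remote query
EXPLAIN SELECT * FROM pg.very_large_table LIMIT 1;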
OS: macOS
PostgreSQL Version: 15 (Timescale)
DuckDB Version: v0.10.4-dev124 cf5b770ccb
DuckDB Client: CLI
Full Name: Adrian Garcia Badaracco
Affiliation: Pydantic
Have you tried this on the latest main branch?
Have you tried the steps to reproduce? Do they include all relevant data and configuration? Does the issue you report still appear there?