Optimizing a SELECT count query when the number of matching rows exceeds a threshold

I have a PostgreSQL table with several million rows. To simplify the problem, let's assume the table has only two columns: ID (the primary key) and TYPE. There is an additional index on the TYPE column.
I run a query that counts the rows for a given type:

SELECT count(id) FROM table WHERE type = 'XX';

The query works, but because the table is large it consumes significant resources and takes a long time. My actual use case does not need the exact count when it is greater than 50. Is there a way to optimize this query in PostgreSQL so that it returns an exact value from 0 to 50, and skips counting once it exceeds that?

Or is the only way to improve performance to store an aggregate count in another table?


Solution:

You can get high performance by adding the index:

create index ix1 on table (type, id);

Then the query can take the form:

select count(*)
from (
  select id from table where type = 'XX' limit 51
) x;

In this query the index scan is limited to at most 51 index entries. The count, done in the outer query, returns a value between 0 and 51. If it is 51, there are more than 50 ids for that type.
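As a runnable sketch of the capped-count technique, the snippet below uses Python's built-in sqlite3 module as a stand-in for PostgreSQL (the SQL has the same shape in both); the table name `t`, the helper `capped_count`, and the sample data are assumptions for the demo, not part of the original question:

```python
import sqlite3

# Illustrative sketch only: SQLite stands in for PostgreSQL here,
# and the table name "t" and sample type values are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, type TEXT)")
conn.execute("CREATE INDEX ix1 ON t (type, id)")
conn.executemany(
    "INSERT INTO t (type) VALUES (?)",
    [("XX",)] * 200 + [("YY",)] * 10,
)

def capped_count(type_value, cap=50):
    """Count rows of the given type, scanning at most cap + 1 rows."""
    (n,) = conn.execute(
        "SELECT count(*) FROM (SELECT id FROM t WHERE type = ? LIMIT ?) x",
        (type_value, cap + 1),
    ).fetchone()
    # A result of 0..cap is an exact count; cap + 1 means "more than cap".
    return n

print(capped_count("XX"))  # 51 -> more than 50 matching rows
print(capped_count("YY"))  # 10 -> exact count
```

The key point is the inner `LIMIT`: the database stops reading index entries as soon as it has seen `cap + 1` matches, so the cost is bounded regardless of how many rows actually match.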
