Optimizing a SELECT count query when the number of matching rows exceeds a threshold


I am running a PostgreSQL table with multiple millions of rows. To simplify the problem, let's assume the table has only two columns: ID (primary key) and TYPE. There is an additional index on the TYPE column.
I run a query that counts the rows of a given type:

SELECT count(id) FROM table WHERE type = 'XX';

The query works fine, but because the table is big it consumes resources and already takes significant time. My actual use case does not need the exact count if it is greater than 50. Is there a way to optimize this query in PostgreSQL so that it returns values from 0 to 50, and skips counting once the count is greater than that?

Or is the only way to improve performance to store an aggregate count in another table?
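To be concrete, the counter-table approach I have in mind would look roughly like the sketch below (names are illustrative; items stands in for my real table). I am aware the trigger would add write overhead and contention on frequently updated types.

CREATE TABLE type_counts (
    type text PRIMARY KEY,
    cnt  bigint NOT NULL DEFAULT 0
);

-- Keep the counter in step with inserts and deletes on the base table.
CREATE OR REPLACE FUNCTION bump_type_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO type_counts (type, cnt) VALUES (NEW.type, 1)
        ON CONFLICT (type) DO UPDATE SET cnt = type_counts.cnt + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE type_counts SET cnt = cnt - 1 WHERE type = OLD.type;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_bump_type_count
AFTER INSERT OR DELETE ON items
FOR EACH ROW EXECUTE FUNCTION bump_type_count();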

> Solution:

You can get high performance by adding the index:

create index ix1 on table (type, id);
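For a self-contained test, a minimal version of the setup could look like the sketch below. The name items is a stand-in for your actual table, since table itself is a reserved word and is only used as a placeholder here:

-- Minimal reproduction of the table described in the question.
CREATE TABLE items (
    id   bigserial PRIMARY KEY,
    type text NOT NULL
);

-- Composite index: both columns the capped query touches are in the
-- index, so the query can be answered with an index-only scan.
CREATE INDEX ix1 ON items (type, id);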

Then the query can take the form:

select count(*)
from (
  select id from table where type = 'XX' limit 51
) x;

In this query the index scan is limited to 51 index entries. The count, done in the outer query, will return values between 0 and 51. If it is 51, that means there are more than 50 ids for that type.
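If the application only needs to know whether the threshold was crossed, the same pattern collapses into a boolean. This sketch reuses the placeholder table name from above:

-- True when more than 50 rows match; still reads at most 51 index entries.
SELECT count(*) > 50 AS over_threshold
FROM (
    SELECT id FROM items WHERE type = 'XX' LIMIT 51
) x;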
