Scalar Subquery Optimization

Author : 모델광    Date : 2022-01-15 (Sat) 20:15    Views : 1434

When you are on a project that migrates to a different DBMS, you should pay close attention to how queries are executed. The following is one of the patterns that puzzled me after migrating from Oracle to PostgreSQL.

In Oracle the resulting data showed up on the screen very quickly, but in PostgreSQL I had to stare at an hourglass.

Here are the scripts to model the pattern.


CREATE TABLE ORDERS_DETAIL(
ORD_LINE_NO BIGINT NOT NULL
,ORD_NO BIGINT NOT NULL
,PROD_ID VARCHAR(10) NOT NULL
,COMMENT VARCHAR(100)
,ORD_AMT BIGINT);

ALTER TABLE ORDERS_DETAIL ADD CONSTRAINT PK_ORDERS_DETAIL PRIMARY KEY(ORD_LINE_NO);
CREATE INDEX ORDERS_DETAIL_X01 ON ORDERS_DETAIL(ORD_NO, PROD_ID);

INSERT INTO ORDERS_DETAIL
SELECT i AS ORD_LINE_NO
     , MOD(i,1000000) AS ORD_NO
     , 'PP'||MOD(i,5) AS PROD_ID
     , LPAD('X',10,'Y') AS COMMENT
     , CASE WHEN i < 1000 THEN i*100 ELSE i END AS ORD_AMT
  FROM generate_series(1,10000000) a(i);

CREATE TABLE PROD (PROD_ID VARCHAR(10) NOT NULL,PROD_NM VARCHAR(100) NOT NULL);
ALTER TABLE PROD ADD CONSTRAINT PK_PROD PRIMARY KEY(PROD_ID);

INSERT INTO PROD    
 SELECT PROD_ID, MAX(ORD_NO)||'TEST_NAME'        
  FROM ORDERS_DETAIL
 GROUP BY PROD_ID;
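
Before running the test query, it may help to refresh the optimizer statistics so that the plans below are based on up-to-date row counts. This step is not shown in the original scripts, so treat it as an assumption:

-- Not part of the original scripts: gather fresh statistics on both tables.
VACUUM ANALYZE ORDERS_DETAIL;
VACUUM ANALYZE PROD;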


The following is the SQL statement as it was written in Oracle.


SELECT A.ORD_NO
      ,(SELECT PROD_NM
          FROM PROD
         WHERE PROD_ID = A.PROD_ID) AS PROD_NM
  FROM ORDERS_DETAIL A
 WHERE A.ORD_NO BETWEEN 1 AND 100000;


The larger table, ORDERS_DETAIL, is used in the main query block, while the smaller table, PROD, is in the scalar subquery.


Below is the execution plan we get when we execute the query in PostgreSQL.
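
The plan shows actual times and buffer counts but no cost estimates, so it was presumably produced with a command like the one below (an assumption; the original post does not show the command used):

EXPLAIN (ANALYZE, BUFFERS, COSTS OFF)
SELECT A.ORD_NO
      ,(SELECT PROD_NM
          FROM PROD
         WHERE PROD_ID = A.PROD_ID) AS PROD_NM
  FROM ORDERS_DETAIL A
 WHERE A.ORD_NO BETWEEN 1 AND 100000;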


 Bitmap Heap Scan on orders_detail a (actual time=22.852..969.817 rows=1000000 loops=1)
   Recheck Cond: ((ord_no >= 1) AND (ord_no <= 100000))
   Heap Blocks: exact=8344
   Buffers: shared hit=1000003 read=9493
   ->  Bitmap Index Scan on orders_detail_x01 (actual time=21.998..21.999 rows=1000000 loops=1)
         Index Cond: ((ord_no >= 1) AND (ord_no <= 100000))
         Buffers: shared hit=3 read=1149
   SubPlan 1
     ->  Seq Scan on prod (actual time=0.000..0.001 rows=1 loops=1000000)
           Filter: ((prod_id)::text = (a.prod_id)::text)
           Rows Removed by Filter: 4
           Buffers: shared hit=1000000
 Planning:
   Buffers: shared hit=4
 Planning Time: 0.173 ms
 Execution Time: 998.220 ms


The following is how to read the plan.

1. Bitmap Heap Scan calls Bitmap Index Scan.

2. Bitmap Index Scan scans ORDERS_DETAIL_X01 for ord_no between 1 and 100000 and builds a bitmap of the 1000000 matching row locations (CTIDs).

3. Bitmap Heap Scan fetches the heap rows indicated by the bitmap and rechecks the ord_no condition.

4. For its first output row, Bitmap Heap Scan calls SubPlan 1, which runs a Seq Scan on the prod table.

5. The Seq Scan probes the prod table to find a match on prod_id.

6. When the Seq Scan finds a match, it passes the row up to SubPlan 1.

7. SubPlan 1 passes PROD_NM up to Bitmap Heap Scan.

8. Bitmap Heap Scan calls SubPlan 1 again for its second row.

9. Steps 5 to 7 are repeated until Bitmap Heap Scan exhausts all 1000000 rows, which means that SubPlan 1 is executed 1000000 times (the quick check below confirms these row counts).
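
Here is a quick sanity check of the loop counts in the plan (a sketch, assuming the tables were loaded with the scripts above):

-- Should return 1000000, matching loops=1000000 of the Seq Scan on prod,
-- i.e. the number of times SubPlan 1 was executed.
SELECT COUNT(*) FROM ORDERS_DETAIL WHERE ORD_NO BETWEEN 1 AND 100000;

-- Should return 5, matching rows=1 returned plus "Rows Removed by Filter: 4"
-- for each execution of the Seq Scan on prod.
SELECT COUNT(*) FROM PROD;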


In the above plan, we cannot see any join operation being performed. Therefore, it comes as no surprise that both query blocks, the main query and the scalar subquery, are optimized independently.

Prod.prod_id is probed for every single row retrieved via ORDERS_DETAIL_X01. That is why the number of rows returned by the main query is the main contributor to the total cost. The query performs poorly when the main query retrieves many rows - this query indeed executed 1000003 + 9493 block I/Os, and 1000000 of those buffer hits come from the one million executions of the Seq Scan on prod.

Furthermore, any other correlated scalar subquery of this shape suffers from the same problem.

Even though Oracle produces nearly the same execution plan, it is very fast - to be precise, it looks fast. The block I/Os and execution time in Oracle are almost the same as those in PostgreSQL. But Oracle sends the intermediate result set to the screen as soon as it retrieves the first 15 rows, which makes the end user feel that it is super fast. PostgreSQL doesn't have such a mechanism: it retrieves all 1000000 rows before presenting the result set to the client.
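
As an aside (this is not part of the original post), if what matters is showing the first rows quickly, a manual workaround in PostgreSQL is an explicit cursor; psql can do something similar automatically when its FETCH_COUNT variable is set. A minimal sketch:

BEGIN;
-- Declare a cursor over the problematic query; rows are produced on demand.
DECLARE first_rows CURSOR FOR
SELECT A.ORD_NO
      ,(SELECT PROD_NM FROM PROD WHERE PROD_ID = A.PROD_ID) AS PROD_NM
  FROM ORDERS_DETAIL A
 WHERE A.ORD_NO BETWEEN 1 AND 100000;
-- Fetch only the first 15 rows; SubPlan 1 runs roughly 15 times instead of 1000000.
FETCH 15 FROM first_rows;
CLOSE first_rows;
COMMIT;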


In conclusion, problematic correlated subqueries in PostgreSQL have a specific shape with the following characteristics:

* a SubPlan without a join,

* a filter predicate on the SubPlan node,

* a high cost driven by the high cardinality of the result set in the main query.


I highly recommend memorizing this pattern, because it can help you immediately identify a problematic correlated scalar subquery even in larger execution plans where the subquery is buried under many layers of cascaded views.


So, is there a better way to run the same query in PostgreSQL?


The answer is yes; however, we have to manually rewrite the query to use a join.

For example, the problematic query can be revised as follows:


SELECT A.ORD_NO, B.PROD_NM
  FROM ORDERS_DETAIL A LEFT JOIN PROD B
    ON A.PROD_ID = B.PROD_ID
 WHERE A.ORD_NO BETWEEN 1 AND 100000;


We got rid of the subquery. To get the same result, we used a left join rather than an inner join, because the scalar subquery returns NULL when there is no matching row in PROD, and a left join preserves that behavior. Note that the rewrite is equivalent only because PROD_ID is the primary key of PROD; a scalar subquery would raise an error if more than one row matched, whereas a join would simply return duplicate rows.
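
Before looking at the new plan, we can sanity-check that the rewrite returns the same result set as the scalar subquery version (a quick check, not shown in the original post):

-- Should return zero rows if the left-join rewrite produces exactly the same
-- result set as the scalar-subquery version (repeat with the two queries
-- swapped for a complete two-way check).
SELECT A.ORD_NO,
       (SELECT PROD_NM FROM PROD WHERE PROD_ID = A.PROD_ID) AS PROD_NM
  FROM ORDERS_DETAIL A
 WHERE A.ORD_NO BETWEEN 1 AND 100000
EXCEPT ALL
SELECT A.ORD_NO, B.PROD_NM
  FROM ORDERS_DETAIL A LEFT JOIN PROD B
    ON A.PROD_ID = B.PROD_ID
 WHERE A.ORD_NO BETWEEN 1 AND 100000;

The execution plan of the rewritten query is shown below.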


 Hash Left Join (actual time=22.215..251.179 rows=1000000 loops=1)
   Hash Cond: ((a.prod_id)::text = (b.prod_id)::text)
   Buffers: shared hit=9497
   ->  Bitmap Heap Scan on orders_detail a (actual time=22.192..96.225 rows=1000000 loops=1)
         Recheck Cond: ((ord_no >= 1) AND (ord_no <= 100000))
         Heap Blocks: exact=8344
         Buffers: shared hit=9496
         ->  Bitmap Index Scan on orders_detail_x01 (actual time=21.302..21.302 rows=1000000 loops=1)
               Index Cond: ((ord_no >= 1) AND (ord_no <= 100000))
               Buffers: shared hit=1152
   ->  Hash (actual time=0.014..0.014 rows=5 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         Buffers: shared hit=1
         ->  Seq Scan on prod b (actual time=0.009..0.010 rows=5 loops=1)
               Buffers: shared hit=1
 Planning:
   Buffers: shared hit=4
 Planning Time: 0.180 ms
 Execution Time: 278.516 ms


As you can see, the plan has a significantly lower number of block I/Os: 9497 (9496 from scanning ORDERS_DETAIL plus 1 from scanning PROD). The total block I/O now consists of reading both tables once and combining them with a hash join, instead of probing PROD once per row of the main query.

We have seen so far how the query can be rewritten to execute more efficiently.


Quite often, we try to just make it work first and optimize later, which is not productive. Writing a query without having any idea of how long it will take to run creates a problem that could have been avoided altogether by writing it the right way from the start.


Wrap Up

* Unfortunately, PostgreSQL doesn't optimally handle queries with correlated scalar subqueries in the SELECT clause.

  (For your reference, the latest MSSQL handles them optimally, which means that its optimizer transforms the query into a left join.)

* Suboptimal correlated scalar subqueries have a specific shape that can be easily spotted in a convoluted execution plan.

* Transforming a correlated scalar subquery into a query using a left join can improve performance.


   

postgresdba.com