Phobia,
Unfortunately, one of Oracle's weaknesses is stepping through a result set in pieces.
The model you can use to avoid processing an INSERT for each row is to do a single INSERT by SELECT-ing the next "n" rows (where "n" is your limit for rows to process during a single COMMIT cycle).
The problem here, of course, is that to INSERT the NEXT "n" rows, the source-query result set must still include (but not INSERT) all of the previous rows that you have already INSERTed.
To illustrate, let's presume that your COMMIT limit is 10,000 rows per INSERT. This is how the INSERT logic would work:
Code:
INSERT INTO <new table>
SELECT <column-list>
  FROM (SELECT rownum rn, x.*
          FROM (SELECT <your 4-table join here>
                 ORDER BY <columns>) x)
 WHERE rn BETWEEN <y> AND <y + 9999>;
COMMIT;
...where "y" is the number of the earliest row that has not yet been inserted.
Although you will do a single INSERT and COMMIT per 10,000 rows, you will SELECT this many rows in total over the course of the whole INSERT process:
[tt]
INSERT 1: SELECT rows 1-10,000, INSERT rows 1-10,000
INSERT 2: SELECT rows 1-20,000, INSERT rows 10,001-20,000
INSERT 3: SELECT rows 1-30,000, INSERT rows 20,001-30,000
...
INSERT last: SELECT rows 1-150M , INSERT rows 149,990,001-150,000,000[/tt]
So, if you INSERT 150M rows in 10K-row chunks, you will actually be reading/SELECTing 10,000 x (1 + 2 + ... + 15,000) = 1,125,075,000,000 rows (1 trillion, 125 billion, 75 million rows).
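If you want to double-check that figure, the series arithmetic can be run as a quick query against DUAL (plain SQL, nothing else assumed):
Code:
SELECT 10000 * (15000 * (15000 + 1)) / 2 AS total_rows_selected
  FROM dual;
-- returns 1125075000000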
I don't know for sure how the CPU cycles for SELECTing 1.1 trillion rows + 15 thousand INSERTs compare to 150 million single-row INSERTs, but I'll bet there is not much savings (if any) from doing the INSERTs in chunks versus 1 INSERT per row.
Let us know your thoughts.
[santa] [santa] [santa]
Mufasa
(aka Dave of Sandy, Utah, USA)
[I provide low-cost, remote Database Administration services:
www.dasages.com]