
Optimal number of rows to insert before commit

Status
Not open for further replies.

gunjan14 (Programmer), Sep 18, 2003
Hi,

How do we find the optimal number of rows to insert into a table before doing a COMMIT, so as to avoid rollback segment errors?
We are inserting into a table by selecting from another table. We cannot commit after every single row. But if we commit only once, after all the records have been inserted into the new table, we get a rollback segment error.
:)

Gunjan
 
What kind of error do you get? It may be caused by the huge SELECT, not the huge INSERT.

Regards, Dima
 
When most databases were on simple drives, a commit was an expensive operation you tried to avoid, so it was worthwhile to find the maximum number of records you could insert before a commit.

If you are using a hardware RAID to hold your Oracle data, there is almost always an NVRAM cache on the RAID device. In this scenario you want to commit in small chunks that fit in the cache, so you never need to wait on a physical drive. Most caches are 4 to 64 MB in size, so committing about every 4 MB would be faster than committing just before you ran out of rollback segments.

If you are still using simple disks, never mind.
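To make the chunked-commit idea concrete, here is a minimal sketch in Python, using the standard-library sqlite3 module as a stand-in for an Oracle connection (the thread is about Oracle; sqlite3 is used only so the example is self-contained). The 4 MB cache budget follows the suggestion above; the 200-byte average row size and the table layout are assumptions.

```python
import sqlite3

# Size each batch so it fits a (hypothetical) 4 MB controller cache,
# then commit after every batch instead of per-row or once at the end.
CACHE_BYTES = 4 * 1024 * 1024   # assumed NVRAM cache budget
AVG_ROW_BYTES = 200             # assumed average row size
BATCH_ROWS = CACHE_BYTES // AVG_ROW_BYTES  # ~21,000 rows per commit

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, val TEXT)")
conn.execute("CREATE TABLE dst (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(i, "x" * 50) for i in range(50_000)])
conn.commit()

# Copy src -> dst in cache-sized chunks, committing once per chunk.
rows = conn.execute("SELECT id, val FROM src").fetchall()
for start in range(0, len(rows), BATCH_ROWS):
    conn.executemany("INSERT INTO dst VALUES (?, ?)",
                     rows[start:start + BATCH_ROWS])
    conn.commit()

copied = conn.execute("SELECT count(*) FROM dst").fetchone()[0]
print(copied)  # 50000
```

The same batching shape carries over to PL/SQL (a loop that inserts a range of rows, then commits), as sketched later in this thread.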

I tried to remain child-like; all I achieved was childish.
 
I am currently writing the procedure to do this. My procedure looks something like this:
create or replace procedure some_trans
(
  table_name     IN varchar2,   -- parameters cannot carry a length constraint
  view_name      IN varchar2,
  specified_rows IN integer
)
as
  x_no       number;
  time_step  number;
  time_start date;
  max_time   date;
  time_end   date;
BEGIN
...
...
...

-- largest number of rows falling in any single day
EXECUTE IMMEDIATE
  'SELECT max(row_num)
     FROM (SELECT count(*) AS row_num
             FROM ' || view_name || '
            GROUP BY trunc(period_start_time, ''DD''))'
  INTO x_no;

IF x_no < specified_rows THEN
  time_step := floor(specified_rows / x_no);
ELSE
  time_step := 1;
END IF;

EXECUTE IMMEDIATE
  'SELECT min(period_start_time), max(period_start_time)
     FROM ' || view_name
  INTO time_start, max_time;

WHILE time_start <= max_time LOOP
  time_end := time_start + time_step;
  set transaction ....
  EXECUTE IMMEDIATE
    'INSERT INTO ' || table_name || ' (col1, ..., coln)
       SELECT col1, ..., coln
         FROM ' || view_name || '
        WHERE period_start_time >= :t1
          AND period_start_time < :t2'
    USING time_start, time_end;
  COMMIT;
  time_start := time_end;
END LOOP;

All the queries are dynamic, as the table_name and view_name are passed in from a wrapper script (in Perl). The columns (col1, ..., coln) are also attached at run time. In some of the tables the number of columns can go up to 300, while other tables have only 10 columns. Now the problem is how to find 'specified_rows' effectively?
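One way to pick specified_rows without guessing is to derive it from a per-commit byte budget and the table's average row size. In Oracle you could read avg_row_len from user_tables after gathering statistics with DBMS_STATS; here is a sketch of just the arithmetic (the 4 MB default budget mirrors the RAID-cache suggestion earlier in the thread, and the row sizes below are made-up examples).

```python
def rows_per_commit(avg_row_bytes: int,
                    budget_bytes: int = 4 * 1024 * 1024,
                    floor: int = 1000) -> int:
    """Rows to insert per commit so each batch stays within budget_bytes.

    avg_row_bytes could come from Oracle's user_tables.avg_row_len after
    DBMS_STATS has been run; the 4 MB default follows the RAID-cache
    suggestion earlier in the thread. The floor keeps very wide tables
    from degenerating into near-row-by-row commits.
    """
    return max(budget_bytes // avg_row_bytes, floor)

# A wide ~300-column table vs. a narrow ~10-column one (sizes are assumptions):
print(rows_per_commit(3000))  # 1398
print(rows_per_commit(100))   # 41943
```

This way the wrapper script can compute specified_rows per table instead of hard-coding one value for tables whose widths differ by a factor of 30.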
 
Hi,

For Oracle, this might not be the way to go.

Oracle's Tom Kyte recommends that we COMMIT only at the end of our logical transactions, and size our rollback segments to cope with our largest transactions.

Try a search for ORA-01555 ("snapshot too old").
 