BarbaraFuhrmann
Programmer
Hi,
I created a procedure that suddenly changed its runtime behaviour. It looks like the following:
create or replace procedure DS3_1000 (P_SEQ number, P_STAMM_ID number, P_FLAG number)
is
begin
-- fkt_1005 deletes all records in the tables starting with T_ that belong to the parameter P_SEQ
fkt_1005(P_SEQ, P_FLAG);
insert into T_STAMM (select P_SEQ, I_STAMM.* from I_STAMM where i_stamm_id = P_STAMM_ID);
insert into T_ICD (select P_SEQ, I_ICD.* from I_ICD where i_stamm_id = P_STAMM_ID);
insert into T_OPS (select P_SEQ, I_OPS.* from I_OPS where i_stamm_id = P_STAMM_ID);
insert into T_DRG (select P_SEQ, I_DRG.* from I_DRG where i_stamm_id = P_STAMM_ID);
INSERT INTO T_DS3_LS SELECT * FROM V_EXAMPLE_1 WHERE SEQ = P_SEQ;
-- in the real procedure there are 200 of those insert statements
INSERT INTO T_DS3_LS SELECT * FROM V_EXAMPLE_2 WHERE SEQ = P_SEQ;
EXCEPTION
when others then
dbms_output.put_line( 'DS3_1000 - Fehler ist aufgetreten: ' || sqlerrm);
rollback;
return;
end;
Important to know:
Tables starting with I_ contain the individual data of patients (start and end date of the hospital stay, all diagnoses, all procedures, ...). I_STAMM_ID is the unique key for a hospitalisation. At the moment we store data for roughly 1,000,000 hospitalisations in our database.
Tables starting with T_ have the same structure as those starting with I_ but are normally empty. We insert the data of one hospitalisation into them for the subsequent processing, then delete it again. We do it this way because our previous approach - working directly on the data in the I_-tables - was so slow that we had to redesign the process to bring the runtime down to a tolerable level.
Views start with V_ and read the data in the T_-tables. The procedure calls roughly 200 of them. They perform a large number of tests checking whether the data of the hospitalisation is coded correctly according to the German coding guidelines for hospitals.
The programs calling the procedure take the results from the table T_DS3_LS and then continue their particular process.
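For illustration, the calling side described above might look roughly like this (a sketch only; the sequence value, the P_STAMM_ID/P_FLAG values, and the caller's loop body are assumptions, not our real code):

```sql
declare
  v_seq number := 4711;  -- hypothetical sequence value for this run
begin
  -- stage the hospitalisation into the T_-tables and run the ~200 view-based checks
  DS3_1000(P_SEQ => v_seq, P_STAMM_ID => 12345, P_FLAG => 0);

  -- the caller then reads the check results back out of T_DS3_LS
  for rec in (select * from T_DS3_LS where SEQ = v_seq) loop
    null;  -- continue the particular process with each result row
  end loop;
end;
```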
So much for the preamble; now to the question:
The procedure ran with tolerable performance until about 4 weeks ago. I cannot say whether the performance was optimal, but for us it was okay. Then it started to take more and more runtime; runtime has increased by 500 % and more.
I tested every single one of the V_-views, and each one taken on its own is correct (no Cartesian product, etc.).
So I am now looking for a way to bring this processing back to its former runtime. Can someone give me a hint on the best way to design it? Does anyone know whether views perform worse than statements coded directly into the procedure? Or is there another way to go?
Thank you!
Barbara