
Does the use of Views decrease performance?


BarbaraFuhrmann

Programmer
Aug 20, 2002
118
DE
Hi,

I created a procedure that suddenly changed its runtime behaviour. It looks like the following:

create or replace procedure DS3_1000 (P_SEQ number, P_STAMM_ID number, P_FLAG number)
is
begin
  -- fkt_1005 deletes all records in the tables starting with T_ that belong to the parameter P_SEQ
  fkt_1005(P_SEQ, P_FLAG);

  -- copy the data of this one hospitalisation from the I_ tables into the T_ work tables
  insert into T_STAMM select P_SEQ, I_STAMM.* from I_STAMM where i_stamm_id = P_STAMM_ID;

  insert into T_ICD select P_SEQ, I_ICD.* from I_ICD where i_stamm_id = P_STAMM_ID;

  insert into T_OPS select P_SEQ, I_OPS.* from I_OPS where i_stamm_id = P_STAMM_ID;

  insert into T_DRG select P_SEQ, I_DRG.* from I_DRG where i_stamm_id = P_STAMM_ID;

  -- each V_ view checks coding rules and writes its findings for this SEQ
  insert into T_DS3_LS select * from V_EXAMPLE_1 where SEQ = P_SEQ;
  -- in the real procedure there are 200 of those insert statements
  insert into T_DS3_LS select * from V_EXAMPLE_2 where SEQ = P_SEQ;

exception
  when others then
    -- "Fehler ist aufgetreten" = "an error has occurred"
    dbms_output.put_line('DS3_1000 - Fehler ist aufgetreten: ' || sqlerrm);
    rollback;
    return;
end;

Important to know:

Tables starting with I_ contain the individual data of patients (start and end date of the hospital stay, all diagnoses, all procedures, ...). I_STAMM_ID is the unique key for a hospitalisation. At the moment we store data for roughly 1,000,000 hospitalisations in our database.

Tables starting with T_ have the same structure as those starting with I_ but are normally empty (no data). We insert the data of one hospitalisation into them for the subsequent processing, then delete it again. We do this because the way we did it before - working directly with the data in the I_-tables - was so slow that we had to redesign it to bring the runtime down to a tolerable level.

Views start with V_ and use the data in the T_-tables. In the procedure we call roughly 200 of them. They perform a lot of tests on whether the data of the hospitalisation is coded correctly according to the German coding guidelines for hospitals.

The programs calling the procedure take the results out of the table T_DS3_LS and then continue their particular process.

So much for the preamble; now to the question:

The procedure ran with tolerable performance until about 4 weeks ago. I cannot say whether the performance was optimal, but for us it was okay. 4 weeks ago it started to take more and more runtime. The runtime has increased by 500 % and more.
I tested every single one of the V_-views, but each one taken on its own is correct (no cartesian product, etc.).

So I am now looking for a way to bring this processing back to its former runtime. Can someone give me a hint as to the best way to design it? Does someone know whether views perform worse than statements coded directly into the procedure? Or is there another way to go?

Thank you!
Barbara
 
You should probably consider running it with SQL*Trace turned on to find out which statements are taking the longest. Obvious things that come to mind are:

a) Is the amount of data in the t_* tables increasing substantially over time?

b) Why do you need to issue a delete command against an ID which you are generating from a sequence? If the sequence is incrementing all the time, the number should never be reused.

c) Is there an index on the column which matches P_SEQ on all the tables you are deleting from?

d) Do you clear down the i* tables each time the job runs? How do you do this? Do you use a delete or do you truncate?
 
Hi Dagon,

thanks for your reply. Here is what I can say:

a) no, the amount can't increase because we delete the data from the tables before we insert new data

b) I think that is because I'm a chicken and wanted to be doubly sure that there is no data with that sequence number. You're right, I should remove that delete command. But do you think it really affects the runtime?

c) no, because there is so little data in those tables that the optimizer wouldn't use an index:
T_STAMM: 1 record
T_DRG: 1 record
T_ICD: between 1 and a maximum of 100 records, average is 6
T_OPS: between 1 and a maximum of 100 records, average is 4

d) what does "clear down the I* tables" mean? We do not delete from those tables because we collect the hospitalisation data in them. As I posted before, the current amount is 1,000,000 records in I_STAMM. That is why we created the solution with the T_-tables, to get better performance.

I have never worked with SQL*Trace before. What do I have to do? In addition, I am pretty sure that the reason cannot lie in the coding of one of the V_-views, because the procedure performed as it always had up to a certain point in time and then - without any change to the code, tables or indices - the runtime increased more and more every time we started the job.

Barbara
 
I think I got your temporary and permanent tables the wrong way round. I thought you were going temp to perm rather than the other way round.

I would suggest you truncate the T* tables rather than delete from them. This will be faster and will avoid any sort of fragmentation that could result from repeatedly inserting and then deleting data from them.

Also, are there indexes on the i_stamm_id columns of the I* tables?
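
If you want a quick way to check, a query against the data dictionary along these lines should show it (USER_IND_COLUMNS is a standard dictionary view; switch to the ALL_/DBA_ variants if the tables belong to another schema):

select table_name, index_name, column_name, column_position
from   user_ind_columns
where  column_name = 'I_STAMM_ID'
order  by table_name, index_name, column_position;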

To turn trace on, you need to check/set the value of the init.ora parameter timed_statistics to true and then run the command "alter session set sql_trace = true" before running your procedure.

A trace file should then be produced in the directory specified by the user_dump_dest parameter of the init.ora file. This should show you which statements are slow.
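
Roughly, a trace session could look like this (the parameter values in the example call and the trace file name are placeholders; the tkprof options shown are just one common choice):

alter session set sql_trace = true;   -- assumes timed_statistics is already TRUE (set in the init.ora as described above)

begin
  DS3_1000(4711, 12345, 0);   -- placeholder parameter values
end;
/

alter session set sql_trace = false;

-- find the directory the raw trace file was written to
select value from v$parameter where name = 'user_dump_dest';

-- then format the raw trace file on the server, for example:
-- tkprof <your_trace_file>.trc ds3_1000_trace.txt sys=no sort=exeela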
 
Hi Dagon,

thank you for the fast reply.

I will try SQL*Trace, but I think I have to wait for a moment when I can stop and start the database, because I cannot change the parameter in the init.ora at runtime.

I'll come back to it when I have had the chance, but that will be next week.

I think I can't truncate the T_-tables because there may be two processes using the procedure at the same time (but with different I_STAMM_ID and SEQ).

There is an index on the I_STAMM_ID in the I_-tables.

Barbara
 
If the table is being used by more than one process simultaneously, that too could be a source of performance problems because of the need to lock it. Why not use a global temporary table instead? Each user would then see only their own view of the data, and it would be cleared down automatically when the user logged out or committed.
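
Purely as an illustration (the column list here is invented, not your real T_ICD definition), a global temporary table is created once and each session then works only with its own rows:

create global temporary table T_ICD_GTT (
  SEQ        number,
  I_STAMM_ID number,
  DIAG_NR    number,
  DIAG_ART   varchar2(10),
  DIAG_V     varchar2(10)
) on commit preserve rows;   -- rows survive commits and vanish at end of session;
                             -- use "on commit delete rows" to clear them at each commit instead

Two sessions calling the procedure at the same time would then never see or lock each other's rows, and no explicit delete or truncate step would be needed.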
 
Hi Dagon,

I can rule out interference with other processes insofar as the process most important for our production runs through the night. At night it is the only process running on the whole database (I can be sure of that because only 3 people work on the database), and it shows the same behaviour as well.

The solution does not use real temporary tables because I did not understand how they work when I started creating our software. So I omitted temporary tables and found other ways to reach the goal I was aiming for.
Do you think that a real temporary table would bring a performance benefit?

Barbara
 
Providing there is no locking, I think the difference in performance would be fairly minimal. It might be worth truncating the temporary tables before running the job just to exclude the possibility that they are really badly fragmented.

Also, have you analysed all the tables (both i* and t*) to make sure the statistical information is up to date?
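
A minimal sketch of both suggestions, assuming DBMS_STATS is available on your release (on older releases, ANALYZE TABLE ... COMPUTE STATISTICS does the same job), using T_ICD as an example:

-- clear the work table rather than deleting from it
truncate table T_ICD;

-- refresh the optimizer statistics for the table
begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'T_ICD');
end;
/

The same would apply to the other T_* tables and to the I_* tables.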
 
Hi Dagon,

the entire database is analyzed twice a month. But when the problems started I analyzed them again.

Yours,
Barbara
 
What exactly do the v_* views consist of? Are they straightforward selects from the t_* tables or do they do more complex stuff?
 
Dagon,

The "v$*" views are as the "v" implies..."Virtual". They are not static components of the database's data dictionary; the Oracle instance creates and populates the "v$*" views (depending upon which view it is) at instance "STARTUP" at "NOMOUNT", at "MOUNT", and at "OPEN". They all disappear at instance "SHUTDOWN" (i.e., none of their data survive to the next startup). This means that you can query some of the "V$*" views even before Oracle mounts the database.

The data present in these views are often redundant with the static data dictionary views "DBA_*", "ALL_*", "USER_*", et cetera, but the non-redundant values are typically instance-related entries such as you would see in "V$INSTANCE" et al.

Does this resolve your question(s)?

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
 
Hi Dagon,

the V_-views I create and previously wrote about are not the same as the V$-views that Oracle ships with and that Mufasa refers to.
My V_-views select from the T_-tables plus one or two tables that contain catalogue data (e.g. the catalogue of all diagnoses that can be coded). Here is an example:

CREATE VIEW V_DS3_AKK05_V1
AS
SELECT DISTINCT
T_STAMM.SEQ AS SEQ,
T_STAMM.I_STAMM_ID AS I_STAMM_ID,
'AKK.05' AS LEITSATZ,
S_LEITS.VERS AS VERS,
1 AS SUBSELECT,
T_ICD_1.DIAG_ART||' '||T_ICD_1.DIAG_V||' und ND '||T_ICD_2.DIAG_V
AS ERKLAER
FROM T_STAMM, T_ICD T_ICD_1, T_ICD T_ICD_2, S_LEITS
WHERE T_STAMM.I_STAMM_ID = T_ICD_1.I_STAMM_ID
AND T_STAMM.SEQ = T_ICD_1.SEQ
AND T_STAMM.I_STAMM_ID = T_ICD_2.I_STAMM_ID
AND T_STAMM.SEQ = T_ICD_2.SEQ
AND S_LEITS.LEITS_KEY = 'AKK.05'
AND T_STAMM.ENTL_KH BETWEEN S_LEITS.GUELT_AB and S_LEITS.GUELT_BIS
AND S_LEITS.VERS = 1
AND T_ICD_1.DIAG_NR < T_ICD_2.DIAG_NR
AND ( T_ICD_1.DIAG_V = 'I70.24' AND T_ICD_2.DIAG_V = 'R02'
OR T_ICD_1.DIAG_V = 'R02' AND T_ICD_2.DIAG_V = 'I70.24'
OR T_ICD_1.DIAG_V = 'I83.0' AND T_ICD_2.DIAG_V = 'L97'
OR T_ICD_1.DIAG_V = 'L97' AND T_ICD_2.DIAG_V = 'I83.0'
OR T_ICD_1.DIAG_V = 'I83.2' AND T_ICD_2.DIAG_V = 'L97'
OR T_ICD_1.DIAG_V = 'L97' AND T_ICD_2.DIAG_V = 'I83.2' )
;

As you may remember, we run roughly 200 of those V_-views to test the data. Of course they differ in complexity, but most are as simple as the one above.

I ran a trace yesterday, and it showed that the V_-views use most of the runtime. What I still do not understand is the change in runtime behaviour. Why was it okay until the middle of June, and why does the runtime now keep growing and growing? Even old versions of the procedure show this new behaviour.

Yours,
Barbara
 
Yes, I know what the v$* views are and I wasn't referring to those.

The view looks fairly complex and could be the cause of problems. Try running some selects against the views separately and see what performance they give.

You say all the T* tables are fairly small. What about S_LEITS?
 
S_LEITS actually has 283 entries. It contains catalogue data (text, origin, period of validity) for each test we perform with the V_-views.

Do you think this one is complex? To me it is one of the simple ones. What I did was tune the views using EXPLAIN PLAN. It shows that the T_-tables are accessed by full table scans, which is okay because they are so small. S_LEITS is accessed via an index, so I think this is okay, too.
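
For reference, this is roughly how the check of a single view looks (the SEQ value is a placeholder; a PLAN_TABLE is assumed to exist, and DBMS_XPLAN requires 9iR2 or later - on older versions you query PLAN_TABLE directly):

explain plan for
  select * from V_DS3_AKK05_V1 where SEQ = 4711;

select * from table(dbms_xplan.display);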

What I still do not understand is the change in runtime behaviour. Why was it okay until the middle of June, and why does the runtime now keep growing and growing? Even old versions of the procedure show this new behaviour.


 
The only way I think you are going to get to the bottom of this is to run SQL*Trace against the stored procedure and find out which statement(s) are causing the problem.
 