Hi,
I have a process that updates a table with three triggers on it.
Two of the triggers are row level before update/insert. One is statement level.
The two row-level triggers have guard code that checks whether the only column being updated is "sent_flag" and the new value is "Y". If that is the case the trigger exits immediately, so that this particular update does not cause the rest of the trigger code to fire. It is designed to speed up this update process. I have checked that the guards behave as expected, and they do.
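The guard is along these lines (a simplified sketch only; the real trigger differs, and mytable / other_col are placeholder names standing in for our table and its remaining columns):

CREATE OR REPLACE TRIGGER trg_mytable_guard
BEFORE INSERT OR UPDATE ON mytable   -- mytable / other_col are placeholders
FOR EACH ROW
BEGIN
  -- Exit early when the only column being set is sent_flag and the value is 'Y';
  -- the NOT UPDATING(...) test is repeated for each remaining column.
  IF UPDATING('SENT_FLAG')
     AND NOT UPDATING('OTHER_COL')
     AND :NEW.sent_flag = 'Y'
  THEN
    RETURN;
  END IF;

  -- ... rest of the existing trigger logic ...
  NULL;
END;
/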
The problem is that the external team performing this update process against our table is reporting an update rate of only about 6 rows per second. Our table has 0.5 million rows, and a composite primary key is used to locate each row whose "sent_flag" is updated.
The explain plan for their updates shows this PK is being used.
I have checked plain SELECTs using the same syntax, and they respond in about 150 ms, which seems reasonable.
I have set up a test environment to duplicate the problem and found that updating 10 rows on our table with the triggers enabled takes about 0.5 seconds; with the triggers disabled it takes either 0 or 0.1 seconds.
It would appear that even just the check for this "sent_flag"-only update inside the triggers slows the process down by a factor of at least 5, and possibly closer to 50.
We have contemplated a number of possible solutions:
1) Disable all triggers on the table prior to this load (a one-statement change; see the sketches after this list). However, this is a production DB and other processes may need these triggers to fire (though this is unlikely). We could possibly lock the table before the load, but then any other process needing it would be locked out and fail (again unlikely, but possible).
2) Run multiple processes in parallel to update the table.
3) Prevent the trigger from firing at all by some other means, such as the trigger's WHEN clause rather than coding the check in the trigger body (see the sketch after this list). I am not sure whether this would make any difference, since the trigger would presumably still fire to some degree until the WHEN clause stops it.
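Rough sketches of options 1 and 3, again using the placeholder names mytable / other_col rather than the real ones.

Option 1 is a single statement either side of the load:

ALTER TABLE mytable DISABLE ALL TRIGGERS;
-- ... run the sent_flag load ...
ALTER TABLE mytable ENABLE ALL TRIGGERS;

For option 3, as far as I know the UPDATING('COLUMN') predicate cannot be used in a trigger WHEN clause, so the "only sent_flag changed" test would have to be written as old/new column comparisons (UPDATE case only shown; the INSERT side is omitted for brevity):

-- Skip the body when sent_flag is being set to 'Y' and nothing else changes;
-- the old/new comparison would be repeated for each remaining column.
CREATE OR REPLACE TRIGGER trg_mytable_bur
BEFORE UPDATE ON mytable
FOR EACH ROW
WHEN ( NOT ( NEW.sent_flag = 'Y'
             AND ( NEW.other_col = OLD.other_col
                   OR (NEW.other_col IS NULL AND OLD.other_col IS NULL) ) ) )
BEGIN
  -- existing trigger logic runs only when the WHEN condition is true
  NULL;
END;
/

My understanding is that the WHEN condition is evaluated before the PL/SQL body is invoked, so rows that fail it avoid the per-row context switch entirely, but I have not measured whether that actually closes the gap.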
Open to other suggestions from anyone.
A further question: if a row-level trigger fires, does its code remain cached in memory, or does it get reloaded each time? My opinion is that, like packages, it would be cached, though some are saying otherwise.
If anyone has any other suggestions, I would be most appreciative. This is becoming an urgent issue for our production run.
cheers
simmo