Performance issues
(OP)
The load jumps and the system bogs down (becomes unusable) on our campus application when the registrar department logs on. They are five of our 30 users. Their functions are intensive, but the system used to still fly (until two weeks ago). We've reindexed, and the logs are backed up continuously as Informix has suggested, but the problem persists. Anybody have ideas on this? Thanks for your time.
Jim
RE: Performance issues
Any chance they are running reports that they were not running before? We have 200 users, and the database is tuned for OLTP. If just one user runs a report, it can cause the checkpoints to get so long that the system is virtually unusable.
I've been told that it is impossible to tune an Informix instance so it runs well for both OLTP and DSS, so if this is your problem you'll have to talk to your users about running reports in batch at night.
If you are POSITIVE that nothing has changed in two weeks and everyone is doing exactly what they have been doing then perhaps your server needs to be rebooted. Are you running on a Unix box? Rebooting can help clean up shared memory. There are other ways to clean up shared memory but rebooting is the easiest and it also gets rid of many other 'leftovers'.
Finally - check the number of extents in the tables being used by the registrar department. If the tables were not correctly created you could have a situation where one or more tables have too many extents. This causes lots of I/O and performance problems. Under ideal circumstances a table should not have more than 8 extents.
Hope this helps a little.
Jane
RE: Performance issues
The extents info is most helpful, and we're exploring that issue. I assume the number of extents can grow under certain conditions. We just started verifying with oncheck, and I assume that's going to help. This area is new to me; that's why I'm asking until I get time to read the details on extents and oncheck.
Thanks again for the pointer.
Jim
RE: Performance issues
It's not very easy to check for number of extents. This is the way I do it:
onstat -T > somefile (will list all tables in your instance)
The last column on the right is the number of extents.
To figure out which table a row refers to you have to take the value listed under 'tblnum', convert it to decimal (it's in hex) and then select the information for that table from systables:
select tabname from systables where partnum = nnnnnnn
(where nnnnnnn is the decimal value that you figured out above)
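As a quick worked example of the hex-to-decimal step (the tblnum value here is made up), the conversion can be done right in the shell:

```shell
# Suppose onstat -T shows a tblnum of 1000a5 (a hypothetical value).
# printf evaluates the 0x-prefixed literal and prints it in decimal:
printf '%d\n' 0x1000a5
```

This prints 1048741, which is the decimal value to use as partnum in the systables query.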
If your table has too many extents you will need to rebuild it. There are a couple of ways you can do that. Let me know if that's the case and I'll tell you more.
Good luck.
Jane
RE: Performance issues
When the system has problems, have you looked at the Informix log? I don't know about your system, but on ours it's in $INFORMIX/online.log (where 'online.log' is whatever you called it in your configuration file).
RE: Performance issues
I've run into the problem of tables having too many extents. How can I rebuild these tables with an acceptable number of extents?
Btw, how did you arrive at the guideline that 8 extents per table is reasonable? I'm running IUS 9.14 on Irix 6.5.
RE: Performance issues
There are two ways to rebuild a table that I know of:
1 - Unload the data (with the SQL 'unload' command), rename the table (for a backup), and recreate it with the correct extent sizes and different index names. Load the data into the new, empty table. When the load is done and everything looks OK, drop the backup table you renamed above.
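As a rough sketch of method 1 (the table name, columns, file name, and extent sizes below are all made up; run it from dbaccess):

```sql
-- 1. Unload the data to a flat file
unload to 'grades.unl' select * from grades;

-- 2. Keep the original around as a backup
rename table grades to grades_bak;

-- 3. Recreate with sensible extent sizes (values are examples only);
--    any indexes need names different from those on grades_bak
create table grades (
    stud_id  integer,
    score    smallint
) extent size 32000 next size 8000;

-- 4. Reload and, once everything checks out, drop the backup
load from 'grades.unl' insert into grades;
drop table grades_bak;
```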
or (and this is what I do)
2 -
a - Figure out what the "next" extent size should be.
b - Change the next extent size with the following SQL statement:
alter table tablename modify next size nnnnn;
(Where nnnnn is what you want the next extent size to be.)
This command runs instantly.
c - Do an 'onunload' of the table to disk.
d - Rename the table (for a backup).
e - Do an 'onload' from the file you just unloaded to disk. You will have to rename any indexes in the 'onload' statement.
f - Grant the correct permissions on the table, since it won't have any unless you do.
g - Check to be sure it looks OK, and then drop the backup you created in step 'd' above.
Even though this second method is more complicated, it will run much faster than the first. If you don't have room in your database for two full copies of the table (the backup and the new one), then be sure to do an 'onunload' of the table to disk or tape just for safety's sake.
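For what it's worth, the onunload/onload portion of method 2 looks roughly like this (database, table, file, and index names are made up; check the utilities' documentation for the exact options on your version):

```shell
# c. Dump the table's pages to a disk file (-t names the target
#    instead of the default tape device)
onunload -t /tmp/grades.unl campusdb:grades

# d. (rename the table to grades_bak from dbaccess)

# e. Reload; -i renames an index so it doesn't clash with the one
#    still attached to the backup table
onload -t /tmp/grades.unl -i idx_grades idx_grades2 campusdb:grades
```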
RE: Performance issues
Curiously, on checking extents, I found that some of the tables listed in the onstat -T output were not in systables. Also, at least a dozen tables had over 50 extents. I haven't done any rebuilds yet. Any further info will be appreciated, and I will post a reply with the results of the rebuilds in the future. Thank you for all your help.
Jim
RE: Performance issues
Many of the tables that will show up are temporary tables created by Informix. Don't ask me what they're for. I asked Informix once and got one of those garbled responses that made me think they didn't know either. They tend to come and go depending upon system usage.
For the tables with more than 50 extents - you really need to rebuild these. When you do you might want to rebuild them so that the "next size" contains all the current data. Then after the rebuild change the "next size" to something more reasonable - something that will last you at least a year or two without going over the 8 or 10 mark.
If you have lots of fragmentation in your database and your extent sizes need to be very large, it's possible you may not be able to grab an extent as big as you define. Not much you can do about that except export and import the whole damned instance. Usually much more trouble than it's worth.
One warning - do NOT do anything to the system tables (systables, sysindexes, etc.), regardless of their extents.
RE: Performance issues
It's great to get advice from someone who's been there. I especially like the details and the warning you've provided. You have been most helpful. Have a super day.
Jim
RE: Performance issues
Dear Jim,
If you have many tables to defragment in your database, it is difficult to unload and load all the tables with SQL, creating tables, backing up, etc. I think an easier way to defragment all the tables in your database is to take an onunload of the whole database and load it again using the onload command. Before dropping your database and reloading it with onload, you need to do a zero-level archive or another backup method for safety.
After loading, you can alter the big tables' next extent size to meet your requirements.
G.R.P.
RE: Performance issues
select tabname Tabla, count(*) Fragmentos, sum(size) Tamaño
from sysextents
where dbsname = 'dbname'
group by tabname
having count(*) > 7
order by 2 desc, 1;
Or, for a particular table in a database, you can use:
oncheck -pt database:table
To defragment a particular table, you can change the next extent size and then create a clustered index; this process rebuilds the table and eliminates some extents. Be careful with long transactions. Ideally, though, you should change the first extent size and the next extent size and then unload/reload the table.
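The clustered-index approach can be sketched like this (table, column, and index names are made up):

```sql
-- Grow the next extent so the rebuilt table lands in few extents
alter table grades modify next size 8000;

-- Creating a clustered index physically rewrites the table in index
-- order, which coalesces many of the old extents
create cluster index idx_grades on grades(stud_id);
```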
Manuel Gimbert
Database Administrator
Hospital Reina Sofía
Córdoba
Spain