
Tough performance issue

Status
Not open for further replies.

svar

Programmer
Aug 12, 2001
349
GR
I have a lot of tables, and not all of them are filled equally.
Inserts into tables that have a lot of rows (see the counts below) take a long time (about 0.06 sec in mysql, 0.09-0.1 sec through DBI). For example:

mysql> INSERT INTO T1 VALUES ('3CCF571C1A881188010403000002','072','7269','','','','','','',3103,1,24,'2002-09-01','05:46:52',0.99,13.7560,24.0480,0.0710,190.839,16,0,0,1,0,0,0,0,0,0,0,0,0,3,4,0,0,0,0,0,0,10,0,'1',1,'0',0,1,0,'200205010250_ampsc1','AMP_THES',4,'-12015927269-1-95-3855206',1,0);
Query OK, 1 row affected (0.06 sec)

Inserts into tables like T2, which have few rows, take much less time:
mysql> INSERT INTO T2 VALUES ('3CCF571C1A881188010403000002','072','7269','','','','','','',3103,1,24,'2002-09-01','05:46:52',0.99,13.7560,24.0480,0.0710,190.839,16,0,0,1,0,0,0,0,0,0,0,0,0,3,4,0,0,0,0,0,0,10,0,'1',1,'0',0,1,0,'200205010250_ampsc1','AMP_THES',4,'-12015927269-1-95-3855206',1,0);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT COUNT(*) FROM T2;
+----------+
| COUNT(*) |
+----------+
|      509 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM T1;
+----------+
| COUNT(*) |
+----------+
|   782910 |
+----------+
1 row in set (7.43 sec)

Of course DELETEs behave similarly (actually much worse with a WHERE clause):
mysql> DELETE FROM T1 WHERE date ='2002-09-01';
Query OK, 1 row affected (40.19 sec)

mysql> DELETE FROM T2 WHERE date ='2002-09-01';
Query OK, 1 row affected (0.24 sec)
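One thing worth checking for the DELETE case: the table definition further down declares no index on `date`, so a `DELETE ... WHERE date = ...` has to scan all ~780,000 rows. A sketch of the fix (the index name `idx_date` is my own invention), which should make the WHERE lookup cheap at the cost of a little extra work per INSERT:

```sql
-- Hypothetical: add a secondary index so the WHERE clause on `date`
-- no longer forces a full table scan of T1.
ALTER TABLE T1 ADD INDEX idx_date (date);
```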


I presume the problem is that the table definitions include a couple of unique constraints, and that the INSERT time difference comes from MySQL checking the table's unique indexes for a violation on every insert, which gets slower as the table grows:

CREATE TABLE T1 (
corrno varchar(43) default NULL,
anum varchar(30) default NULL,
bnum varchar(30) default NULL,
chnum varchar(30) default NULL,
ocnum varchar(30) default NULL,
redirectnum varchar(30) default NULL,
sccpanum varchar(30) default NULL,
sccpbnum varchar(30) default NULL,
truebilnum varchar(30) default NULL,
opc int(5) default NULL,
dpc int(5) default NULL,
cic int(6) default NULL,
date date default NULL,
start_time time default NULL,
value decimal(30,2) default NULL,
iamtoacm decimal(7,4) default NULL,
iamtoans decimal(7,4) default NULL,
reltorlc decimal(7,4) default NULL,
duration decimal(12,3) default NULL,
rcv int(3) default NULL,
rcvabn int(1) default '0',
location int(3) default NULL,
reldir int(1) default NULL,
servicecode int(3) default NULL,
transmedused int(3) default NULL,
tcaprobspec int(3) default NULL,
rsptype800 int(3) default NULL,
ccquerytype int(5) default NULL,
ccrsptypelidb int(3) default NULL,
cccsdi int(3) default NULL,
ccpsdi int(3) default NULL,
ccsan int(3) default NULL,
nati_calling int(2) default NULL,
nati_called int(2) default NULL,
nati_chrg int(2) default NULL,
nati_ocn int(2) default NULL,
nati_redir int(2) default NULL,
nati_sccp_calling int(2) default NULL,
nati_sccp_called int(2) default NULL,
nati_truebilnum int(2) default NULL,
calling_pty_cat int(2) default NULL,
called_pty_cat int(1) default NULL,
isdn_user_part_indicator char(1) default NULL,
isdn_user_part_pref int(3) default NULL,
isdn_access_indic char(1) default NULL,
interwrk int(1) default NULL,
echo int(1) default NULL,
smslength int(3) default '0',
file varchar(30) default NULL,
link varchar(8) default NULL,
fileind int(11) default NULL,
unicor varchar(100) NOT NULL default '',
stp int(1) default '0',
alarm int(1) default '0',
UNIQUE KEY unicor (unicor),
UNIQUE KEY corrno (corrno)
) TYPE=InnoDB;
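One way to test that presumption (a sketch; `T1_test` is a hypothetical scratch table): rebuild the table without the two UNIQUE KEY lines, load the same rows, and time the same statement against both.

```sql
-- Hypothetical experiment: create T1_test with the CREATE TABLE above,
-- minus the two UNIQUE KEY lines, then copy the existing data into it.
INSERT INTO T1_test SELECT * FROM T1;
-- Now re-run the 0.06-sec INSERT from the top of the post against
-- T1_test; if it is much faster there, the unique-index checks are
-- indeed where the time goes.
```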


The question is what can be done about this.
I guess getting rid of the unique constraints would help, but then I would get unwanted duplicates. I should add that I currently need to do about 100,000 inserts per 10 minutes, and the system is already too slow for this, while it used to be very fast. So I am looking for what can be done to avoid performance degradation as the tables fill.
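Since the tables are InnoDB and the load is on the order of 100,000 inserts per 10 minutes, one change that often helps with bulk loads (a sketch, assuming the client runs with autocommit on, which is the default) is grouping rows into explicit transactions, so InnoDB flushes its log once per batch instead of once per row:

```sql
-- Sketch: batch many inserts into one InnoDB transaction.
SET AUTOCOMMIT=0;
INSERT INTO T1 VALUES (...);   -- repeat for a few hundred rows
INSERT INTO T1 VALUES (...);
COMMIT;                        -- one log flush for the whole batch
SET AUTOCOMMIT=1;
```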
The server is a dual 1000 MHz PIII machine with 2GB RAM running Linux 2.4.18, but the inserts are done from a client (a 1000 MHz PIII with 1GB RAM, also running Linux). Below is /etc/my.cnf:
# Example mysql config file for very large systems.
#
# This is for large system with memory of 1G-2G where the system runs
# mainly MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/lib/mysql) or
# ~/.my.cnf to set user-specific options.
#
# One can in this file use all long options that the program supports.
# If you want to know which options a program supports, run the program
# with the --help option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /var/lib/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
set-variable = key_buffer_size=500M
set-variable = key_buffer=100M
set-variable = max_allowed_packet=1M
set-variable = table_cache=512
set-variable = max_connections=10
set-variable = tmp_table_size=400M
set-variable = sort_buffer=2M
set-variable = record_buffer=2M
set-variable = thread_cache=8
set-variable = thread_concurrency=4 # Try number of CPUs*2
#set-variable = myisam_sort_buffer_size=64M
set-variable = myisam_sort_buffer_size=4M
innodb_data_home_dir=
#and then use absolute file paths
#innodb_data_file_path=ibdata1:2000M;ibdata2:2000M;ibdata3:2000M;ibdata4:2000M;ibdata5:2000M;ibdata6:2000M;ibdata7:2000M;ibdata8:2000M
innodb_data_file_path=/var/lib/mysql/ibdata1:2000M;/var/lib/mysql/ibdata2:2000M;/var/lib/mysql/ibdata3:2000M;/var/lib/mysql/ibdata4:2000M;/var/lib/mysql/ibdata5:2000M;/var/lib/mysql/ibdata6:2000M;/var/lib/mysql/ibdata7:2000M;/var/lib/mysql/ibdata8:2000M;/usr/ibdata/ibdata9:2000M;/usr/ibdata/ibdata10:2000M;/usr/ibdata/ibdata11:2000M;/usr/ibdata/ibdata12:2000M;/ibdata/ibdata13:2000M;/ibdata/ibdata14:2000M;/ibdata/ibdata15:2000M;/var/lib/mysql/ibdata16:2000M;/var/lib/mysql/ibdata17:2000M;/var/lib/mysql/ibdata18:2000M;/var/lib/mysql/ibdata19:2000M;/var/lib/mysql/ibdata20:2000M;/var/lib/mysql/ibdata21:2000M
#;/usr/ibdata/ibdata9:2000M;/usr/ibdata/ibdata10:2000M
## Comment next line if you do not need recovery (the hostname-bin.xxx files)
#log-bin
##
server-id = 1
#set-variable = open-files-limit=8192 #ulimit is 1024, hard 8192
set-variable = innodb_buffer_pool_size=1100M
# so that innodb_buffer_pool_size+key_buffer+max_connections*
# (sort_buffer+record_buffer+2M)=1100+100+10*(2+2+2)=1260<2000M=RAM
set-variable = innodb_additional_mem_pool_size=700M
set-variable = innodb_log_file_size=1000M
set-variable = innodb_log_buffer_size=20M
# helps for large transactions
# Uncomment the following if you are using BDB tables
#set-variable = bdb_cache_size=384M
#set-variable = bdb_max_lock=100000

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

[mysqldump]
quick
set-variable = max_allowed_packet=256M

[mysql]
no-auto-rehash
#safe-updates # Remove the comment character if you are not familiar with SQL

[isamchk]
set-variable = key_buffer=256M
set-variable = sort_buffer=256M
set-variable = read_buffer=2M
set-variable = write_buffer=2M

[myisamchk]
#set-variable = key_buffer=256M
#set-variable = sort_buffer=256M
#set-variable = read_buffer=2M
#set-variable = write_buffer=2M
set-variable = key_buffer=25M
set-variable = sort_buffer=25M
set-variable = read_buffer=2M
set-variable = write_buffer=2M

[mysqlhotcopy]
interactive-timeout

Any suggestions?
Thanks, svar
 
I have no real performance problems with tables of roughly 40M recs, so...

Have you tried the "OPTIMIZE TABLE mytab;" command? Putting holes in an ISAM table with DELETEs *will* slow performance if you don't optimize every so often.

Be warned though: do the optimize when there is some free time, because it's going to lock the table.
 
