Here are some major areas that affect data insert performance:
Batch vs Transaction Processing
If no other processes need access to the data during the load, LOCK the table for the insert process. An exclusive table lock reduces DB2's lock-management overhead, since it no longer has to acquire and track row-level locks for each insert.
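A minimal sketch of this, assuming a hypothetical table APP.SALES (DB2 LUW syntax; the lock is held until the unit of work ends):

```sql
-- Take an exclusive lock so DB2 does not manage row locks during the load.
LOCK TABLE APP.SALES IN EXCLUSIVE MODE;
-- ... perform the bulk INSERTs here ...
COMMIT;  -- ends the unit of work and releases the table lock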
Keep volatile tables in their own tablespaces. Again, DB2 has less lock management to contend with at the tablespace level.
Whenever possible, use the DB2 LOAD utility instead of SQL INSERTs. LOAD writes formatted pages directly and can exploit parallelism where the hardware allows it.
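As a hedged sketch, a LOAD of a delimited file might look like the following, run from the DB2 command line (file name, table name, and SAVECOUNT value are hypothetical and should be tuned for your environment):

```sql
-- Load a delimited file; SAVECOUNT establishes periodic consistency points.
LOAD FROM sales.del OF DEL SAVECOUNT 50000 INSERT INTO APP.SALES;
```

LOAD also accepts options such as CPU_PARALLELISM to control how many processes parse and format the data in parallel.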
Triggers and Constraints
If there are data-validation triggers or constraints on the table, they fire for every row inserted. Disabling them speeds up the load, but requires that the incoming data be CLEAN; you may have to write redundant validation code elsewhere to guarantee data integrity.
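One common pattern for constraints (DB2 LUW syntax; the table name is hypothetical) is to suspend checking for the load and re-enable it afterward:

```sql
-- Suspend check and foreign-key constraint enforcement for the load.
SET INTEGRITY FOR APP.SALES OFF;
-- ... bulk insert the (pre-validated) data ...
-- Re-enable enforcement; DB2 verifies the new rows at this point.
SET INTEGRITY FOR APP.SALES IMMEDIATE CHECKED;
```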
Tablespaces
If speed is an issue, DO NOT USE SMS TABLESPACES. With SMS, DB2 incurs overhead checking whether the file system needs to allocate new space before each block insert.
Use DMS tablespaces with multiple (3 or more) same-sized containers, ideally each on a different controller, or at least on different disks, so DB2 can stripe writes across them.
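A sketch of such a tablespace, with three same-sized file containers on separate disks (tablespace name, paths, and page counts are all hypothetical):

```sql
-- DMS tablespace with three equal containers; DB2 stripes extents
-- across all three, spreading insert I/O over the disks.
CREATE TABLESPACE INS_TS
  MANAGED BY DATABASE
  USING (FILE '/disk1/db2/ins_ts.c1' 100000,
         FILE '/disk2/db2/ins_ts.c2' 100000,
         FILE '/disk3/db2/ins_ts.c3' 100000);
```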
Try to avoid RAID 5 or RAID-S for heavily inserted/updated tables, tablespaces, and log files; the parity overhead of RAID 5/S makes writes slower.
Commit Rate
Commit rates are important, and tuning them requires some testing. Concurrent processes, bufferpool sizes, and log sizes are all factors. Don't insert 1.2 million rows and then commit once; a unit of work that large can fill the transaction logs. Whether a commit every 5K, 10K, 20K, or 100K rows works best will depend on your system resources.
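For file-based loads, the IMPORT utility can handle the interval commits for you. A hedged example (file name, table name, and the COMMITCOUNT value are hypothetical; tune the count by testing):

```sql
-- Commit every 20,000 rows instead of one huge unit of work,
-- keeping log usage and lock-list growth bounded.
IMPORT FROM sales.del OF DEL COMMITCOUNT 20000 INSERT INTO APP.SALES;
```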