It all depends on what you mean by "complex report". If it is the SQL to the database that is slow in retrieving your data (which you can check with "view data"), it could be (literally: COULD be) improved with a better-fitting universe, or by adding optimiser hints to the SQL (Oracle). However, if one or more conditions addresses a non-indexed field, performance can be quite slow or even non-existent. (A great tool is TOAD for Oracle, which gives you an explain plan facility to look at the path the SQL takes.)
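As a rough sketch of what that looks like outside TOAD: in newer Oracle versions you can get the same explain plan from SQL*Plus with `DBMS_XPLAN`, and an optimiser hint is just a specially formatted comment. The table, column, and index names below are made up for illustration; your report's SQL will differ.

```sql
-- Hypothetical example: inspect the execution path of a slow report query.
EXPLAIN PLAN FOR
SELECT /*+ INDEX(s sales_order_date_idx) */   -- optimiser hint: suggest an index
       s.customer_id, SUM(s.amount)
FROM   sales s
WHERE  s.order_date >= DATE '2023-01-01'      -- slow if order_date is not indexed
GROUP  BY s.customer_id;

-- Show the plan Oracle chose (full table scan vs. index range scan, etc.)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the plan shows a full table scan on a large table where you expected an index, that is usually the first place to look.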
Sometimes, adding aggregate tables (data warehousing) can (greatly) improve report performance, because the aggregation has already been performed at database level, giving you faster access to the data.
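The idea of an aggregate table, sketched with the same hypothetical names as above: pre-compute the sums once at the database, so the report queries a small summary table instead of scanning the detail rows every refresh.

```sql
-- Hypothetical aggregate table: daily totals pre-computed at database level.
CREATE TABLE sales_daily_agg AS
SELECT order_date,
       customer_id,
       SUM(amount) AS total_amount,
       COUNT(*)    AS order_count
FROM   sales
GROUP  BY order_date, customer_id;

-- A report hitting the aggregate scans far fewer rows than the detail table.
SELECT order_date, SUM(total_amount)
FROM   sales_daily_agg
GROUP  BY order_date;
```

The aggregate must of course be refreshed on a schedule, and the universe pointed at it (or set up with aggregate awareness) so reports pick it up automatically.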
However, there is a second level to report performance, starting immediately after the query completes. At client level, calculations and formatting have to be performed. Here is where client CPU really matters. We did some elaborate performance tests (actually to justify new PCs!) years ago, which showed that for a very complex report, upgrading from 133 to 350 MHz gave a 5-fold performance improvement. What certainly matters are calculations with 'nested' variables (variables which use other variables, and so on) or applying lots of filters in reports consisting of multiple tables.
And even then, some complex reports took ages to format, while other complex ones were quite fast. This is an area where documentation seems non-existent. T. Blom
Information analyst
Shimano Europe
tbl@shimano-eu.com
One check would be to refresh the report and look at the refresh time of each of your data providers, to see whether it is the data retrieval or the calculations done on the report side that is slowing down your report. This will tell you whether the efficiency improvements need to be made in the universe or in the report.