
Poor Query Performance w/backend Database


Trevil

Programmer
Jun 19, 2003
I have recently taken over an application deployed in 70+ countries that has a history of poor performance.

The facts are: Access 2000; Win-2000; frontend sits on user workstation; backend resides on server; typically between 1 and 5 users; 60+ tables; transaction data tables have about 20,000 records; after user log-in, screen previously would take 40 seconds to load (I now have it down to about 7 seconds).

I have made tons of changes that have helped, but I think I have stumbled across an issue that I need help with. In recent testing I have been watching the network traffic on my test workstation and have seen LOTS of activity. To analyze further, I grabbed a tool that shows upload/download statistics and started playing with the SQL in a query. What I have found is that if I code the SQL to return all fields from a table or return only one field, the bytes transferred are the same (~4MB). Selection criteria make no difference either.
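
For illustration, a minimal DAO sketch of this kind of test (table and field names are hypothetical; the byte counts come from the external monitor, not from the code):

Code:
Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb

' Variant 1: all fields
Set rs = db.OpenRecordset("SELECT * FROM tblTrans;", dbOpenSnapshot)
rs.MoveLast   ' force Jet to actually fetch every row
rs.Close

' Variant 2: a single field - the observed transfer is the same ~4MB
Set rs = db.OpenRecordset("SELECT TransID FROM tblTrans;", dbOpenSnapshot)
rs.MoveLast
rs.Close

db.Close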

Is there a way to reduce the network traffic in a simple query where the table(s) must reside on the network? I have already moved 40+ "semi-static" tables into the frontend, which was a big help (37 seconds went to 20 seconds).
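
For reference, moving a semi-static table from the backend into the frontend can be scripted with a one-off import like this (the path and table name are hypothetical):

Code:
' Import a local copy of a semi-static lookup table from the backend;
' delete (or rename) the old linked table in the frontend first,
' otherwise the import arrives as tblCountryCodes1.
DoCmd.TransferDatabase acImport, "Microsoft Access", _
    "\\server\share\backend.mdb", acTable, "tblCountryCodes", "tblCountryCodes"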

SQL Server is not an option for at least 4 months.

Thank You!
 
if I code the SQL to return all fields from a table or return only one field, the bytes transferred are the same
Even if the SQL is in a saved query on the BackEnd?

Hope This Helps, PH.
Want to get great answers to your Tek-Tips questions? Have a look at FAQ219-2884 or FAQ222-2244
 
Hmmm, I'm running a test and it appears to work. I need to run another test (but first I gotta get my car out of the garage before they lock it). Will retest in about 2 hours when I get home.

Thank You Very Much!!
 
PHV

Where the SQL is stored won't make any difference. The SQL can only run on the client and the data must be sucked from the server.

Trevil

Really, the only way you can cut down network traffic is to use indexes. Without an index, Access is forced to bring across all the data. As a general rule, if you have half an inclination to stick an index on a field, then do it.
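
For example, a minimal sketch of adding an index with Jet DDL (path, table and field names are hypothetical; Jet DDL can't be executed against linked tables, so open the backend directly):

Code:
Dim dbBack As DAO.Database

' Open the backend MDB directly and create the index there
Set dbBack = OpenDatabase("\\server\share\backend.mdb")
dbBack.Execute "CREATE INDEX idxCountryCode ON tblTrans (CountryCode);"
dbBack.Close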

The principal drawback with indexes is that you may get locking if users are simultaneously trying to update records on the same index page, although I'm not sure whether row-level locking now overcomes that.

Jet is as efficient as any other relational database, but equally esoteric when it comes to tuning. If you can't be bothered, then change to MSDE, which should work OK for 5 users. It will probably sort out your problem without any specific need to tune.



 
Thank you for your input. I've done a little more testing and here are the results.
1. Open recordset, returning all fields, all rows (13,166): download = 445KB
2. Open recordset, returning only one field, all rows = 0KB (Cache?)
3. Open recordset, returning one field, one row using index = 12KB
I need to leave for most of the day, but will continue testing different combinations this weekend.

I appreciate the assistance from both of you!!
 
I doubt the number of fields makes any difference. Normally Jet stores complete records on a page, and the unit of access is a page (although Access might bring across bigger chunks, it won't bring smaller ones). Therefore you will usually get all the fields dragged across even if you discard most of them. The exception to this is certain large objects like memo and OLE fields.
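
For scale: Jet 4.0 (Access 2000) uses 4KB data pages, so the 445KB in test 1 corresponds to roughly 110 pages coming across the wire; since records are stored whole on those pages, trimming the field list doesn't reduce the page count, which fits what you measured.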

You might find that if you asked for 100 records using the index it would still only be 12KB. This is quite a small amount of data, maybe just a couple of index pages plus your data page.

 
Although I continue to test using different combinations, the preliminary results show I should use a query located in the backend (plus use an index!). In all tests, I’m using one table that has 36 fields and 13,166 records.
With a query stored in the backend, returning one row using an index results in 12KB being downloaded. The same query, stored in the frontend, results in 179KB being downloaded. I still need to try and clean up several “ugly” queries that combine 30+ tables, but at least I have a clue now.
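
In case it helps anyone else following this thread, a minimal DAO sketch of running a query saved in the backend (the path and names are hypothetical):

Code:
Dim dbBack As DAO.Database
Dim rs As DAO.Recordset

' Open the backend directly and run its saved query
Set dbBack = OpenDatabase("\\server\share\backend.mdb")
Set rs = dbBack.QueryDefs("qryOneRowByIndex").OpenRecordset(dbOpenSnapshot)

' ... work with rs ...

rs.Close
dbBack.Close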

I appreciate the help you have provided. Thank You!
 
My theory - and I have no hard evidence to corroborate it - is that simple queries will be reliably optimised by Access using, e.g., Rushmore index manipulation. If you add a sub-query you are more likely to get area-sweeps (i.e. all the data being retrieved from a table) than if you write equivalent joins. And if you start adding lots of SQL, functions, selections on groupings and calculated fields, you risk the optimiser breaking the thing up and thus creating a lot of I/O. So if you have an issue with I/O across the network, arrange things so that your SQL is simple, e.g. do complex calculations in a report. (A report's logic runs on the client, which is not I/O- or CPU-bound, i.e. there is huge spare power on the average workstation.)
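
To make that concrete, a sketch of the kind of rewrite meant here (table and field names are hypothetical):

Code:
Dim strSQL As String

' Sub-query form: more likely to trigger an area-sweep of tblTrans
strSQL = "SELECT t.TransID FROM tblTrans AS t " & _
         "WHERE t.CountryCode IN " & _
         "(SELECT CountryCode FROM tblCountries WHERE Region = 'EU');"

' Equivalent join: gives Rushmore a better shot at using the indexes
strSQL = "SELECT t.TransID FROM tblTrans AS t " & _
         "INNER JOIN tblCountries AS c ON t.CountryCode = c.CountryCode " & _
         "WHERE c.Region = 'EU';"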

 