
Network performance issue Access 2000

Status
Not open for further replies.

bhoran

MIS
Nov 10, 2003
272
US
I have a few posts about this issue, but I don't seem to be able to get it sorted. I have an update query that runs in about 40 seconds on the local machine, but over the network I have killed it after 2 hours.

I have tried many things, including turning Name AutoCorrect off, adding and deleting indexes, changing record locking, and setting up a new database on my local machine that links to the table in the DB on the network and performs the update locally, all with no luck.

Currently it is updating from around 46,000 records (all of them) in IBMUploadTemp, and 57,000 exist in GLIBMPlan.

Here is my query; any ideas on how to speed it up?

UPDATE GLIBMPlan INNER JOIN IBMUploadTemp
    ON (GLIBMPlan.FY = IBMUploadTemp.Period_Year)
    AND (GLIBMPlan.PERIOD = IBMUploadTemp.Period_Num)
    AND (GLIBMPlan.Channel = IBMUploadTemp.Channel)
    AND (GLIBMPlan.ACCOUNT = IBMUploadTemp.Account)
    AND (GLIBMPlan.STATE = IBMUploadTemp.State)
    AND (GLIBMPlan.PGCODE = IBMUploadTemp.PGCode)
    AND (GLIBMPlan.Brand = IBMUploadTemp.Brand)
    AND (GLIBMPlan.ITEM = IBMUploadTemp.Item)
SET GLIBMPlan.Cases = [IBMUploadTemp]![SumOfCases],
    GLIBMPlan.AMT = [IBMUploadTemp]![AMT];

I have built the table IBMUploadTemp in order to facilitate the update.

I also have an append query that I use to append records that do not exist in GLIBMPlan. I haven't tested it yet, but it may have similar problems (code follows):

INSERT INTO GLIBMPlan ( ITEM, DESCR, Brand, PGCODE, PROMGRP, PACKSIZE, STATE, Cases, LD, FY, PeriodName, PERIOD, ACCOUNT, Channel, AMT )
SELECT AppendIBMUploadSelectUnion.Item, AppendIBMUploadSelectUnion.Descr, AppendIBMUploadSelectUnion.Brand, AppendIBMUploadSelectUnion.PGCode, AppendIBMUploadSelectUnion.PromoGroup, AppendIBMUploadSelectUnion.Packsize, AppendIBMUploadSelectUnion.State, Sum(AppendIBMUploadSelectUnion.SumOfCases) AS SumOfSumOfCases, AppendIBMUploadSelectUnion.LD, AppendIBMUploadSelectUnion.Period_Year, AppendIBMUploadSelectUnion.Period_Name, AppendIBMUploadSelectUnion.Period_Num, AppendIBMUploadSelectUnion.Account, AppendIBMUploadSelectUnion.Channel, Sum(AppendIBMUploadSelectUnion.AMT) AS SumOfAMT
FROM AppendIBMUploadSelectUnion LEFT JOIN GLIBMPlan ON (AppendIBMUploadSelectUnion.Channel = GLIBMPlan.Channel) AND (AppendIBMUploadSelectUnion.Period_Year = GLIBMPlan.FY) AND (AppendIBMUploadSelectUnion.Period_Num = GLIBMPlan.PERIOD) AND (AppendIBMUploadSelectUnion.Account = GLIBMPlan.ACCOUNT) AND (AppendIBMUploadSelectUnion.State = GLIBMPlan.STATE) AND (AppendIBMUploadSelectUnion.PGCode = GLIBMPlan.PGCODE) AND (AppendIBMUploadSelectUnion.Brand = GLIBMPlan.Brand) AND (AppendIBMUploadSelectUnion.Item = GLIBMPlan.ITEM)
GROUP BY AppendIBMUploadSelectUnion.Item, AppendIBMUploadSelectUnion.Descr, AppendIBMUploadSelectUnion.Brand, AppendIBMUploadSelectUnion.PGCode, AppendIBMUploadSelectUnion.PromoGroup, AppendIBMUploadSelectUnion.Packsize, AppendIBMUploadSelectUnion.State, AppendIBMUploadSelectUnion.LD, AppendIBMUploadSelectUnion.Period_Year, AppendIBMUploadSelectUnion.Period_Name, AppendIBMUploadSelectUnion.Period_Num, AppendIBMUploadSelectUnion.Account, AppendIBMUploadSelectUnion.Channel, GLIBMPlan.Channel, GLIBMPlan.ITEM, GLIBMPlan.Brand, GLIBMPlan.PGCODE, GLIBMPlan.ACCOUNT, GLIBMPlan.STATE, GLIBMPlan.FY, GLIBMPlan.PERIOD
HAVING (((GLIBMPlan.Channel) Is Null) AND ((GLIBMPlan.ITEM) Is Null) AND ((GLIBMPlan.Brand) Is Null) AND ((GLIBMPlan.PGCODE) Is Null) AND ((GLIBMPlan.ACCOUNT) Is Null) AND ((GLIBMPlan.STATE) Is Null) AND ((GLIBMPlan.FY) Is Null) AND ((GLIBMPlan.PERIOD) Is Null));

thanks in advance
 
No, that is the unique key string. The only way around it, I imagine, is to concatenate all the fields into one field and do the join on that field.

As long as I trim every field before concatenation it should work.

Can you see anything wrong with that approach?
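Roughly, the pre-pass I'm imagining would look like this (the new Text field name ID and the "|" separator are just illustrative; the separator stops adjacent values running together):

```sql
-- Illustrative pre-pass: build the single join key on each table,
-- trimming every field first. Repeat the same update for GLIBMPlan
-- using its own column names, then index ID on both tables.
UPDATE IBMUploadTemp
SET ID = Trim([Period_Year]) & "|" & Trim([Period_Num]) & "|"
       & Trim([Channel]) & "|" & Trim([Account]) & "|"
       & Trim([State]) & "|" & Trim([PGCode]) & "|"
       & Trim([Brand]) & "|" & Trim([Item]);
```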
 
Hi

No, not really. I have no 'real' reason to think this is the problem, except that the timings you gave for updating that many records are abnormally high, and the immediate point that struck me about the SQL was the complex join.

Another point that may help is to set the query up as a saved query (i.e. a QueryDef). I think (but am not sure) that saved queries run faster than SQL built on the fly, due to action by the optimiser.

Regards

Ken Reay
Freelance Solutions Developer
Boldon Information Systems Ltd
Website needs upgrading, but for now - UK
 
They are both saved queries kicked off within a macro.

I will try adding the concatenated field and see if that makes a difference.
 
Thanks Foolio, I have all of the joined fields indexed.

I have now concatenated all those fields into a field named ID, so the join is now only between the two new ID fields, and performance is still very slow: 30 minutes so far and counting.


This is really doing my head in.

If I add a field, update it with =Now(), and let the query run this evening, will the Now() value reflect the time the last record was written, or the time the update was kicked off? And do you think this timestamp will significantly affect the performance of the query?

Can anyone think of another way for me to tell when the query finished if I leave it to run overnight? (I haven't been patient enough to see this query finish over the network yet.)
 
Set up a macro that 1. runs the query, then 2. runs some other query or function that records the time.

Functions like Now() are evaluated once per query, at least for simple queries (I don't know what would happen if you managed to call Now() inside a subquery). Custom-built functions, on the other hand, are called for every row. I once used Now() as the datestamp for an import process, and it assigned the same value to every line imported in that query.
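To illustrate the once-per-query vs. per-row difference with a hypothetical wrapper (not anything from your database), a user-defined function in a standard module forces a fresh call for every row:

```vba
' Hypothetical module function: unlike a bare Now() in a query,
' a user-defined function like this is re-evaluated for every row
' Jet processes, so each row can get a different timestamp.
Public Function RowStamp() As Date
    RowStamp = Now()
End Function
```

So an update of `SET Stamp = Now()` gives every row the same value, while `SET Stamp = RowStamp()` does not (and is slower for exactly that reason).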

Sorry, I have nothing else to add; it looks like you've tried just about everything I can think of.


Pete

 
Your problem seems less to do with the database and more with the network. There shouldn't be that big a discrepancy between running it locally and running it over the network. Dumb question, but did you also compact and repair the database on the network? Another idea: keep one field pair in the inner join and move the rest into the WHERE clause. Having all of those join conditions evaluated is very network-intensive.
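Applied to your update query, that restructuring might look like this (an untested sketch; the result set is the same, I just don't know whether Jet's optimiser treats it differently):

```sql
-- Sketch: one pair in the join, the rest as WHERE conditions.
UPDATE GLIBMPlan INNER JOIN IBMUploadTemp
    ON GLIBMPlan.ITEM = IBMUploadTemp.Item
SET GLIBMPlan.Cases = [IBMUploadTemp]![SumOfCases],
    GLIBMPlan.AMT = [IBMUploadTemp]![AMT]
WHERE GLIBMPlan.FY = IBMUploadTemp.Period_Year
  AND GLIBMPlan.PERIOD = IBMUploadTemp.Period_Num
  AND GLIBMPlan.Channel = IBMUploadTemp.Channel
  AND GLIBMPlan.ACCOUNT = IBMUploadTemp.Account
  AND GLIBMPlan.STATE = IBMUploadTemp.State
  AND GLIBMPlan.PGCODE = IBMUploadTemp.PGCode
  AND GLIBMPlan.Brand = IBMUploadTemp.Brand;
```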

As for timing your query, I use this procedure for timing modules. If you run the query from a command button, you could do this:

Private Sub cmdButton_Click()

    'Error handler
    On Error GoTo cmdButton_Click_Err

    Dim sngBegin As Single
    Dim sngEnd As Single

    'Timer returns seconds elapsed since midnight
    sngBegin = Timer
    DoCmd.OpenQuery "yourquery"

    'All is well, query complete
    sngEnd = Timer

    'Alert the user of time taken for process to run
    MsgBox Format(sngEnd - sngBegin, "000000.00") & " seconds to run."

cmdButton_Click_Exit:
    Exit Sub

cmdButton_Click_Err:
    MsgBox Err.Number & Chr(13) & Err.Description
    Resume cmdButton_Click_Exit

End Sub
 
Thanks foolio12 and Omega36. I have used that code; I kicked it off last night at 4:44 pm, it is now 10:52 am the next day, and the process is still running!
 
Omega36,

I used your code and got an answer of -008576.64. If I translate it out, that comes to around -143 minutes, or -2 hrs 23 mins. From the formula it looks like it was 2 hrs 23 mins short of 24 hrs. Am I reading that correctly, in that the process took about 21 hours and 37 minutes?
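That reading makes sense: Timer counts seconds since midnight, so it wraps once per day and goes negative when a run crosses midnight. A guard like this in the timing code would report the true elapsed time, assuming the run takes less than 24 hours:

```vba
' Compensate for Timer wrapping at midnight (86,400 seconds per day).
' Only valid when the whole run is shorter than 24 hours.
' e.g. -8576.64 becomes 86400 - 8576.64 = 77823.36 s, about 21 h 37 min.
Dim sngElapsed As Single
sngElapsed = sngEnd - sngBegin
If sngElapsed < 0 Then sngElapsed = sngElapsed + 86400
```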

Cheers
 
In this case, since the network seems to be very slow, can you import the table to your desktop and do the update there? Then delete the table on the server and export the updated table back. You might need to make sure nobody else is in the database on the server while you are doing this, so their updates don't get overwritten.
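A rough sketch of that import/update/export cycle in VBA (the path, query name, and table names are placeholders; DoCmd.TransferDatabase is the standard call for moving tables between MDB files):

```vba
' Pull the table down, run the update locally, then push it back.
' Path and object names below are placeholders, not your real ones.
DoCmd.TransferDatabase acImport, "Microsoft Access", _
    "\\server\share\backend.mdb", acTable, "GLIBMPlan", "GLIBMPlan"

DoCmd.OpenQuery "yourupdatequery"   ' the update now runs locally

' Delete the server copy first (as above), or Access will export
' to a new name like GLIBMPlan1 instead of replacing the table.
DoCmd.TransferDatabase acExport, "Microsoft Access", _
    "\\server\share\backend.mdb", acTable, "GLIBMPlan", "GLIBMPlan"
```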
 
It's worth a crack, I will look into it. As you say, my main concern is other users being in that table at the time, so I would have to kick all users out before the process starts.
 
I have now set up an FE database on the administrator's machine which imports the tables and re-exports them after the updates have occurred. This has reduced processing time from 21 hours to 35 minutes.

Thanks for all your help!

I just have one question regarding this now. The new FE database that I set up allows entry using the default workgroup, even though all the security from the original DB is intact within the copy.

To get over this do I need to create a new database and import all the objects?

That being the case, how can I use security to stop someone from copying the database, opening it on their machine, and then copying it back to the server, overwriting the original?

I already have security set up so that if a user tries to open it on the server (without using the correct shortcut and workgroup file) it won't open with the default workgroup.

Cheers
Brad
 