Griff has a good point worth checking, but I've never heard of a USE taking that long.
Obviously, the first assumption everybody would make is that you're misjudging this, and it's not the USE that takes long; the time could come from a query, from scanning the file, or from whatever reads you do. Let's estimate that: a SELECT * NOFILTER reading the full 60 MB in 20 seconds would mean a network bandwidth of only 24 Mbit/s (3 MB per second = 24 Mbit/s). I hope your LAN is today's standard 1 Gbit/s = 1000 Mbit/s. Besides that, this would not vary much. Of course, more users slow down a LAN, but the maximum number of users will be reached before noon, so this does not degrade over the whole day.
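If you want a quick number for that, here's a minimal sketch of such a measurement; the table name is just a placeholder, and the calculation only counts the DBF part, not the FPT:

* Minimal sketch: time a full read of one table and estimate the transfer rate
lnSeconds = SECONDS()
SELECT * FROM yourtable INTO CURSOR crFullRead NOFILTER
lnSeconds = MAX(SECONDS() - lnSeconds, 0.01)
lnBytes = RECCOUNT("crFullRead") * RECSIZE("crFullRead")
? "Seconds:", lnSeconds
? "MB read (DBF part only):", lnBytes / (1024 * 1024)
? "Approx. Mbit/s:", (lnBytes * 8) / (lnSeconds * 1000000)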
You could make the final check on that by measuring the times of USE with SET COVERAGE. Log to a local file for each user and let the startup code of the application copy the previous log to a share, or do it when the application ends. You can limit this to just logging the USE lines, and then you'll see. How cumbersome that is to establish depends on how OOP this code is: there might be a good place that's not directly before and after the individual USE commands (those would be very individual places in code), namely starting the logging in a base form LOAD and ending it again in a base form INIT, ACTIVATE or SHOW, to catch the phase of opening tables in the DE or wherever you do the USE. By the way, a DE opening the tables would not contribute to the coverage log, as the log only records executed code lines; the DE with its cursor objects will iterate them and open the tables, but that's native base behavior and produces no lines in the coverage log. So it's good you code USE commands yourself, as that makes the timing visible and measurable.
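A minimal sketch of that, assuming a base form class all your forms inherit from; the log file name and location are just placeholders:

* In the base form class (names are placeholders)
PROCEDURE Load
   * Start logging to a local per-user file before any USE of this form runs
   LOCAL lcLog
   lcLog = ADDBS(SYS(2023)) + "usetiming.log"
   SET COVERAGE TO (lcLog) ADDITIVE
ENDPROC

PROCEDURE Init
   * Stop logging once the tables are open
   SET COVERAGE TO
ENDPROC

Keep in mind that subclasses overriding Load or Init need a DODEFAULT() call so this code still runs. If memory serves, the first column of each coverage log record is the execution time of the logged code line, followed by class, procedure, line number, source file and call stack level, so summing that first column per form start gives you the time spent opening tables there.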
Thinking about what could take long in the USE itself, which does not do much more than an FOPEN operation establishing a file handle, the only mechanism worth noticing is table validation; SET TABLEVALIDATE tells more about that. It was mainly introduced to reduce or prevent the most frequent reason for DBFs breaking: a wrong record count saved in the header. The behavior can now be controlled quite precisely; you can decide to check the record count at opening, at closing, or both. A check does a header lock, and that would be the only operation that could take longer. There is a big suspicion, though, that this is not responsible for the 30 seconds it takes. VFP only tries to lock once, not repeatedly, and this does not change with the REPROCESS setting. If VFP can't lock the header, it does a record count check by verifying that the file length is consistent with the count in the header, but that also won't take long, as it only means determining the file size; in low-level terms this might come from file system information or from an FSEEK to the end of the file, both short operations.
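To illustrate what that fallback check amounts to, here's a rough sketch of the arithmetic in VFP terms; this is not VFP's internal code, the table name is a placeholder, and the extra byte for the end-of-file marker may or may not be present in a given DBF:

* Rough sketch: expected DBF file size from header values vs. actual file size
USE yourtable SHARED
lnExpected = HEADER() + RECCOUNT() * RECSIZE() + 1   && header + records + EOF marker byte
lnHandle = FOPEN(DBF())                              && low-level read-only open, just to get the size
lnActual = FSEEK(lnHandle, 0, 2)                     && seeking to the end returns the file length
FCLOSE(lnHandle)
? lnExpected, lnActual
? IIF(lnActual = lnExpected OR lnActual = lnExpected - 1, "consistent", "record count / file size mismatch")

Determining a file size this way is a matter of milliseconds, which is why I don't see the fallback check explaining 30 seconds either.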
So even when validation at the opening of a DBF is turned on, the single lock attempt, and the fallback check otherwise, shouldn't take long, no matter whether oplocks play a role or not. Oplocks are not really locks; they are a caching mechanism that acts as if the file were exclusive, without real exclusive mode, and any other client, e.g. the one for which USE takes long, would break this "exclusive" mode. I bet this will still mean the header lock fails: even though an oplock isn't exclusive, the client holding the oplock acts as if it had exclusive access, and it needs to take notice and write out any changes, i.e. break the oplock, before a real lock of any kind can be granted. Otherwise you'd get inconsistent behavior: the client that has had the table open longer, with the oplock, still thinks it can act on it as if it had exclusive access, while the new client already thinks it has a lock, a real lock; and a header lock is a file lock, not just a (soft) record lock. Nevertheless, VFP will only try once, and if it doesn't get the lock immediately it tells the system "never mind".
So again, in short: I don't think table validation would explain a long USE time, as even a header check configured to happen at USE would not reprocess until it gets a lock; it would just try once and then do the alternative check. I don't see how that could take long.
But: for the purpose of ruling table validation out, you could set it to 0 for a week or so. You don't lose much; TABLEVALIDATE was only introduced with VFP8, and we lived without it in VFP7 and earlier, too. That's no reason to keep it off for good, but if you find this to be the problem, there'd finally be a chance to figure out what's slowing things down so much. Especially in conjunction with COVERAGE logging you'd find out a lot about that. Instead of turning your application into a lab for this, you can of course also experiment with separate test code. Such measuring runs could be made overnight, for example, "emulating" usage of the files from multiple clients; but it's also not a big deal to do it under the full real conditions of application, LAN and users, you can simply generate many more measurements and test data with a separate code experiment.
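That experiment is a one-liner in the startup code of the application; as far as I remember, 3 is the product default you'd restore afterwards, and SET("TABLEVALIDATE") gives you the current value:

* In the main/startup program, for the test period only
SET TABLEVALIDATE TO 0     && skip record count validation (no header lock attempt at USE)
? SET("TABLEVALIDATE")     && verify the setting
* After the experiment, go back to the default behavior:
* SET TABLEVALIDATE TO 3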
If there is the slightest chance this still simply comes from reading in whole DBFs (and FPTs), you will just need to change your ways, and COVERAGE will tell you. Even if you only log form starts, including the USEs but excluding the timings of queries or other read/write access, you'll be able to identify or rule out the USEs. And by setting TABLEVALIDATE to 0 only for some forms and not others, you can also see how that influences the USE time.
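A sketch of that per-form comparison, under the assumption that the setting takes effect for the USE commands run by that form (e.g. in its own private data session); verify that scoping for your VFP version:

* In the Load of the test forms only, before any USE runs
PROCEDURE Load
   SET TABLEVALIDATE TO 0   && assumption: affects the USEs of this form
   DODEFAULT()
ENDPROC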
Bye, Olaf.
Olaf Doschke Software Engineering