VFP 9.0 :: Application Performance Network Environment

(OP)
Hi,

I have an application in a network environment, where the users have Windows 10.

Since Windows 10 arrived, they have been complaining about performance...

For example, a table of 60 MB can take 20 seconds just to open...
At the moment I only open tables when I need to execute an operation (read, delete, add or update data), and then I close the table again.
I open tables with the USE command.

The application works reasonably well at the beginning of the day, but as the day goes on it gets slower and slower, until the user has to disconnect and reconnect.

Can anyone help me?

I don't know what to do anymore...

Thanks,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

Sounds a bit oplock-ish.

Do you know much about that?

SMB3 on Windows 10. What kind of server do you have?

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Griff has a good point worth checking, but I have never heard of a USE taking that long.

Obviously, at first everybody would assume you're misjudging this and that it's not the USE itself that takes long; the time could come from a query, from scanning the file, or from whatever reads you do. Let's estimate that: a SELECT * NOFILTER reading the full 60 MB in 20 seconds would mean a network bandwidth of only 24 Mbit/s (3 MB per second = 24 Mbit/s). I hope your LAN is today's standard 1 Gbit = 1000 Mbit. Besides, this would not vary much over the day: of course more users slow down a LAN, but the maximum number of users will be reached before noon, so the LAN does not degrade over the whole day.

You could make the final check by measuring the time of the USE commands with SET COVERAGE. Log to a local file for each user, and let the start code of the application copy the previous log to a share, or do it when the application ends. You can limit this to logging just the USE lines, and then you'll see. How cumbersome this is to establish depends on how OOP the code is; there might be a better place than directly before and after each individual USE, as those would be very individual places in code. Instead, start logging in a base form's Load and end it again in the base form's Init, Activate or Show, to catch the phase of opening tables in the DE or wherever you do the USE. By the way, a DE opening the tables would not contribute to the coverage log: the log only records code lines, and the DE with its cursor objects iterates them and opens the tables as native base behavior, which produces no lines in coverage. So it's good that you code USE commands, because that makes the timing visible and measurable.
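
A minimal sketch of that base-form idea (class name and log path are just examples, not from the original application):

CODE

* base form class: coverage runs only from Load to Init, so the log
* mostly catches the USE commands that open the form's tables
DEFINE CLASS frmLoggedBase AS Form
   PROCEDURE Load
      SET COVERAGE TO C:\logs\usetimes.log ADDITIVE
   ENDPROC
   PROCEDURE Init
      SET COVERAGE TO   && stop logging once the tables are open
   ENDPROC
ENDDEFINE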

Thinking about what could take long in the USE itself, which does little more than an FOPEN operation establishing a file handle, the only mechanism worth noticing is table validation; see SET TABLEVALIDATE. This was mainly introduced to reduce or prevent the most frequent cause of broken DBFs: a wrong record count saved in the header. The behavior can be controlled quite precisely - you can decide to check the reccount at opening, at closing, or both. A check does a header lock, and that would be the only operation that could take longer. There is a big suspicion, though, that this is not responsible for the 20 seconds you see. VFP only tries to lock once, not repeatedly, and this does not change with the REPROCESS setting. If VFP can't lock the header, it validates the reccount by checking that the file length is consistent with the count in the header, but that also won't take long, as it only means determining the file size; in low-level terms that comes either from file system information or from an FSEEK to the end of the file, and both are short operations.
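
To see whether validation contributes at all, a rough timing sketch (table path assumed; 3 is the default setting, checking at open and close):

CODE

* time the same USE with table validation off and fully on
SET TABLEVALIDATE TO 0
lnStart = SECONDS()
USE 'X:\myApp\DataBase\myTable.dbf' ALIAS tableX AGAIN SHARED IN 0
? "TABLEVALIDATE 0:", SECONDS() - lnStart, "seconds"
USE IN SELECT("tableX")

SET TABLEVALIDATE TO 3
lnStart = SECONDS()
USE 'X:\myApp\DataBase\myTable.dbf' ALIAS tableX AGAIN SHARED IN 0
? "TABLEVALIDATE 3:", SECONDS() - lnStart, "seconds"
USE IN SELECT("tableX")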

So even with validation at opening turned on, the single lock try, or the alternative check without it, shouldn't take long, no matter whether oplocks play a role or not. Oplocks are not really locks; they are a caching mechanism that acts as if the file were exclusive, without real exclusive mode. Any other client, e.g. the one for which USE takes long, would break this "exclusive" mode. I bet this will still mean the header lock fails: even though an oplock isn't exclusive, the client holding the oplock acts as if it had exclusive access, and it needs to take notice and write out any changes - break the oplock - before a real lock of any kind can be granted. Otherwise the client that has had the table open for longer, still thinking it can act as if it had exclusive access, would clash with the new client that already thinks it holds a lock, a real lock - and a header lock is a file lock, not just a (soft) record lock. Nevertheless, VFP tries only once, and when it doesn't get the lock it immediately tells the system "never mind".

So again, in short: I don't think table validation explains a long USE time, as even a header check configured to happen at USE would not reprocess until it gets a lock; it just tries once and then does the alternative check. I don't see how that could take long.

But: for the purpose of ruling table validation out, you could set it to 0 for a week or so. You don't lose much; TABLEVALIDATE was only introduced with VFP8, and we lived without it in VFP7 and earlier, too. That's no reason to keep it off for good, but if you find it to be the problem, there'd be a chance to finally figure out what is slowing this down so much. Especially in conjunction with COVERAGE logging you'd find out a lot about it. Instead of letting your application become a lab, you can of course also experiment with test code. Such measuring runs could be made overnight, for example, "emulating" usage of the files from multiple clients; but it's also not a big deal to measure under the full real conditions of application, LAN and users - a separate code experiment just lets you generate many more measurements and test data.

If there is the slightest chance this is still simply due to reading in whole DBFs (and FPTs), you will just need to change your ways, and COVERAGE will tell: even if you only log form starts, including the USEs but excluding the timings of queries and other read/write access, you'll be able to identify or rule out the USEs. And by setting TABLEVALIDATE to 0 only for some forms and not others, you can also see how that influences the USE time.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Hi Griff,

I know practically nothing about oplocks, or about SMB3 on Windows 10...

I have this problem at some clients. And these clients use, as a server:
  • Windows Server Standard 2007
  • Windows Server Standard 2007
  • Windows 10 Pro (Not Server)
I already had a lot of problems with Windows 10 update 1803... Data loss, problems with the database... After some time, I resolved them with:
  • Set-SmbClientConfiguration -DirectoryCacheLifetime 0
  • Set-SmbClientConfiguration -FileInfoCacheLifetime 0
The app in question has more than 300 customers and 600 users... The only thing they all have in common is Windows 10.
But the performance problems have never disappeared, and they have been intensifying.
What users say is that the application starts out well, and over time it gets slower...

The code may not be perfect, but I don't believe the problem is there, because it all started with Windows 10... And since then I have made many changes, some of which I don't even know whether they were good or bad... just attempts.

Any ideas?
Thanks

RE: VFP 9.0 :: Application Performance Network Environment

I have seen USE, or any file-opening operation, stall when Windows lets the 'owner' of a file open it first and does some kind of oplock thing, and won't let anyone else get to it - sometimes until the owner lets the file go.

I've not seen this behaviour in a few years - back then the server would probably have been a Windows NT Server, from around 2000.

If the OP is talking about a modern server, maybe running on a VM, you can still get problems though.

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Ok,

This can happen with one client or another, but for most clients we are talking about Windows Server 2012.
I also have clients where the database sits on a Windows 10 Pro PC, and the other machines access it through a mapped network drive.

You started by talking about SMB3.
What can I do to check whether the problem lies there or not?

Some aspects of the code:
  • SET REFRESH TO 0, -1
  • SET TABLEVALIDATE TO 2
  • I use a lot of macros
  • I don't use Requery
  • I don't use the forms' Data Environment
  • I don't use Buffer Mode
  • Access to the database is done through mapped network drives.
  • Table opening example: USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0
For network environments, is there any aspect that should be taken into account when programming? Or any configuration I should use?

RE: VFP 9.0 :: Application Performance Network Environment

Well, I can see you have already had to look at SMB then.

Server 2007 is not the latest tech and is getting a bit hard to support. I would imagine there is a chance you are running fairly small hard drives; perhaps they are close to capacity or starting to fail?

What is the nature of the app?

Is the data significantly large? Have you got a well-indexed design?

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
The application is an ERP, my clients' main management software.

It manages everything from purchases and sales to inventory and production.

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
I have databases of 1.2 GB... Yes, that is significantly large.

I use a lot of indexes, and Rushmore says it's OK... I use a lot of SEEKs, not SELECTs...

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

I presume you have a pack and reindex facility...

So we are down to Windows 10 not really liking Server 2007. Are you sure it's 2007? I know there was a 2000, then I thought it was 2003 and 2008?

Could you set up a similar installation at your own office and write a bit of test software to run on a couple of Windows 10 workstations, opening a table and closing it - leaving it running for a while to see if it slows down?
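
Something along these lines, just as a sketch (table path and log folder are assumptions):

CODE

* open and close the same table in a loop, logging how long each USE takes;
* a slowdown over the hours then shows up in the local log
LOCAL lnRun, lnStart, lnElapsed
FOR lnRun = 1 TO 10000
   lnStart = SECONDS()
   USE 'X:\myApp\DataBase\myTable.dbf' ALIAS tableX AGAIN SHARED IN 0
   lnElapsed = SECONDS() - lnStart
   USE IN SELECT("tableX")
   STRTOFILE(TTOC(DATETIME()) + "," + TRANSFORM(lnElapsed) + CHR(13) + CHR(10), ;
      "C:\logs\usetimes.csv", 1)
   INKEY(5)   && pause a few seconds so other test clients get at the file
ENDFOR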

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
I will try.

What recommendations do you have for client-server environments?

If you have a document entry form, where the lines are recorded as they are inserted, do you recommend opening the lines table every time, or opening it only once?

At the moment, when opening a document I access the table and extract its data into a cursor.
Then, whenever I need to record new lines, delete or change something, I open the table again, make the change and close it right away.

Could this be the problem?
The frequent opening/closing of files?

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

It worked before you had Windows 10?

It's not my approach; I open tables in my form's data environment and keep them open all day, unless I close the form and open a different one.

There is no reason not to do it your way though - it should be safer, perhaps not as quick.

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Could your workstations be disconnecting from the share after a timeout of some sort? If you open a table, download a subset and then close it... could Windows 10 be releasing the share after ten minutes or so, and having to reconnect/authenticate when you go to use the file again?

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Often opening and closing files gives clients an opportunity to get an oplock.

Oplocks can be turned off in Server 2012 (I don't know where you see 2007 mentioned, Griff). The other way to prevent oplocks from happening is multiple users having files open, so your way of working with DBFs indeed makes them more frequent. By the way, it's not a good strategy to "turn off" oplocks by setting up one client that always has all DBFs open. It would take two extra clients, as a single one would hold oplocks on all files, which any secondary client would then need to break. So, not a good idea. And oplocks can still be controlled in 2012, too.

I'd still like to see the timings. Whatever else you do, could you set coverage for a client and extract the time a USE takes? It's not that straightforward, I know, but I'd really like to see how much there is to it.

So for example do

CODE

SET COVERAGE TO C:\logs\usetablex.log ADDITIVE
USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0
SET COVERAGE TO 

Or, as suggested, start logging in Load and end it when your code USEing tables has run, i.e. in Init, Activate or Show.

You then don't need the Coverage Profiler either; a simple APPEND FROM CSV is good enough to look at the log data. If you put the coverage commands right before and after the USE, you will only have two lines in the coverage file per USE (unfortunately the SET COVERAGE TO that turns logging off also lands in the log), so every odd line of that log is an interesting time. You could also use SECONDS() and more, but the measurement coverage makes is quite precise, and you need less code than with anything else, too.

Reading a log into a cursor is simple:

CODE

Create Cursor curCoverage (bExecTime B, cClass C(128), cMethod C(128), iLine I, cFile C(254), iStacklevel I)
Set Point To '.'
Append From C:\logs\usetablex.log Type CSV

And then you could simply index bExecTime descending and look for the single lines (like a USE would be) that take longest to run. Of course, it doesn't tell you what in detail runs long, but I'd analyse what times you get. I guess often enough it will be short - that's for the clients which get the oplock - and long for the other clients waiting for the oplock to break. If you can also synchronize the timings of the logs (for example by file creation/last-update time), then you could see consistently which client having a short USE time causes which other client to have a long one, and whether there is a preference, a priority in the network, etc.
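
For example, with the cursor from above (just a sketch):

CODE

* slowest single lines first
SELECT curCoverage
INDEX ON bExecTime TAG exectime DESCENDING
GO TOP

* or pull the ten slowest lines into their own cursor
SELECT TOP 10 * FROM curCoverage ;
   ORDER BY bExecTime DESC ;
   INTO CURSOR curSlowest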

It would also be easy enough to extract a list of (cFile, cClass, cMethod, iLine) locations where execution time is long. The type of code determines what is filled: a PRG obviously is not a method of a class (unless you run a line of a class definition within a PRG), but this tuple is a unique location in your code. The extreme execution times might also turn out to be something else entirely, if you log more than the USE commands. USE might not be your only bottleneck, and time might also be spent diffusely, just by the sheer quantity of iterations, for example. Avoiding queries is no guarantee of running faster; queries can be better optimizable than your manual code.
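
A sketch of that extraction, from the same cursor:

CODE

* total execution time and hit count per unique code location
SELECT cFile, cClass, cMethod, iLine, ;
      SUM(bExecTime) AS nTotalSecs, COUNT(*) AS nHits ;
   FROM curCoverage ;
   GROUP BY cFile, cClass, cMethod, iLine ;
   ORDER BY 5 DESC ;
   INTO CURSOR curHotspots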

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Initially I had this approach, and at the time I didn't have these performance problems.

But in mid-2018, due to the heaps of problems caused by Windows 10 update 1803, I changed heaps of things, until I realized the fix was:
  • Set-SmbClientConfiguration -DirectoryCacheLifetime 0
  • Set-SmbClientConfiguration -FileInfoCacheLifetime 0
At that time I sometimes had the problem that a table, although it was in the data environment, didn't open, and then an error was raised.
How do you place the tables in the data environment? Is it by code?

Do you use BufferMode on the tables, which then requires TABLEUPDATE()?

No, the workstations can't be disconnecting from the share after a timeout, because I open the EXE from that same network drive precisely to test that.

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

(OP)

What about the NoDataOnLoad option?
Do you use it?

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

NoDataOnLoad only makes sense for views; you don't load data with a USE of a DBF.
The NoDataOnLoad property of objects in the data environment is also disregarded for DBFs and only plays a role with views.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

Quote (Fernando Pereira)


I have this problem at some clients. And these clients use, as a server:
Windows Server Standard 2007
Windows Server Standard 2007
Windows 10 Pro (Not Server)

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Fernando,

I see your questions go to Griff, but opening tables via a DE is not what gives him fewer oplock problems; it's rather keeping all tables open. Once three or four users are in a system, and it's just a bunch of DBFs anyway, nobody will have an oplock: all DBFs are opened shared, and that has much more to do with keeping them open than with opening them through a data environment.

You can do the same with USE. Oplocks only have a chance to cause trouble when they can be established, which means when nobody else has the file open. And even then, the first one has no trouble.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

Okay Griff,

I see it now. With multiple customers in several network situations the problem gets harder to manage, of course. But actually a 2007 server would make turning oplocks off easier, wouldn't it?
As said, the last Microsoft technical article I posted described turning oplocks off for the systems still supporting that, and that ended with 2012; 2007 is earlier, so it should work out, too.

https://docs.microsoft.com/en-US/windows-server/st...

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Ok Olaf,

But on the other hand, if the application is always opening and closing tables, can that also cause a performance problem?

I was also not clear, sorry. In the same form I can open and close a table many times, depending on the user's operations.
Wouldn't it be better to open it once, and only close it when the form is closed?

Could part of the problem be here, in this approach?

Best Regards,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

Yes, that's what I already said here:

Quote (myself)

Often opening and closing files gives clients an opportunity to get an oplock.
And so users will get them. And they are not even the ones seeing a problem; it's always the next one.


But these problems fade out if two or more users always have a file open; from the third one on, nobody sees a problem anymore. The second one has already broken the oplock of the first user - he had the long wait - but then this phase is over, and from then on all users work shared, without oplocks and without the handling and timing problems of breaking one.

This has little to do with connections. Whether a mapped drive or UNC path resolution takes time to connect you again is a separate issue, about file access, authentication and permissions. Anyway, yes, keeping files open is helpful; read back, I told you the conditions. A single user of a file has no problem, and any group of 3 or more constantly holding a file handle also removes the oplock problem. But when you tidy up right away, you often have the scenario of a second user following a first user, especially when they are on the phone with each other looking at the same data, which I guess happens in many a work process. I know users.

Griff also has no real control over that 3-user condition just by keeping files open, but that way files are more likely to be open by many. And when oplocks can't occur, they can't cause problems, of course.

But then, it's not the DE that keeps files open; it's simply not closing them. So you can stay with your USEs, just wait with the closing until the form ends. Closing the form will also close tables you didn't open through a DE, so that's no reason for one either. Granted, forms without a private data session will not close tables when they close: the data session that existed before they started still exists afterwards, so tables are kept open, unless you configure the DE to close its tables then. But that's no good reason at all to do everything in the default data session 1. Also notice: data session and DE are two different things.

A DE closes the tables it opened when the form closes. And as far as I understand Griff, he doesn't prevent that either; it's just that while users work in one form for a long time, all the tables used within stay open the whole time.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

I don't think there was a Server 2007... The sequence runs NT Server, 2000, 2003, 2008, 2012, 2016 and 2019.

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Well, tell that to Fernando.

Quote (Fernando Pereira)

I have this problem at some clients. And these clients use, as a server:
Windows Server Standard 2007
Windows Server Standard 2007
Windows 10 Pro (Not Server)

Maybe 2008 was meant. Anyway, the article https://docs.microsoft.com/en-US/windows-server/st... has a lot of sections; just scroll through it to find the advice for a certain server, and for the clients, too.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

Just one more note about this: it's also not rare that all the discussion about SMB and oplocks turns out moot and the problems lie elsewhere.

Coverage logging really is a fine way to identify bottlenecks. It logs everything that causes execution time, with very few exceptions, and especially since you USE tables by command, you will see how long these really take - which would not show up if a DE were used. So this is your best chance to find out more.

Let me also point to a thread giving an idea of how to get extra information into the log, by making use of the fact that it logs the names of procedures: thread184-1775724: hints in the coverage log
You can even log the current time, if you create a routine called "thetimeis"+TTOC(DATETIME(),1), for example, all within the string of a script you then start with EXECSCRIPT.
So you can even get the current time into the log, followed by a normal coverage log line with the exectime of the USE command.
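
A minimal sketch of that idea (the "thetimeis" name prefix is just an example):

CODE

* write a time marker into the coverage log: EXECSCRIPT compiles and runs
* a throwaway procedure whose name carries the current time, and coverage
* logs that name. TTOC(DATETIME(),1) yields digits only, so the name is valid.
LOCAL lcStamp, lcScript
lcStamp = TTOC(DATETIME(), 1)
TEXT TO lcScript TEXTMERGE NOSHOW
DO thetimeis<<lcStamp>>
PROCEDURE thetimeis<<lcStamp>>
ENDPROC
ENDTEXT
EXECSCRIPT(m.lcScript)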

And as already suggested, you can find long-running single lines as easily as sorting by exectime descending.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

You could try this from the command line

CODE

Reg add hklm\System\CurrentControlSet\Services\Lanmanworkstation\Parameters /v DirectoryCacheLifetime /t REG_DWORD /d 0 

Based on info from here:

https://www.zubairalexander.com/blog/how-to-fix-sl...

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

Which actually is what this does:

Quote (Fernando Pereira)

set-smbclientconfiguration -DirectoryCacheLifetime 0

https://docs.microsoft.com/en-US/windows-server/st... really is comprehensive about what helps in which situation, once you know it's oplocks. But I've also seen it happen so often that the discussion just locks onto this one topic, as if it were the only cause.

And then you never hear back. And never know whether it helped or not. And the next discussion starts from the same level again.
Get information. Measure. I know the deal is to solve a problem, not just analyze it, but you never know how much it helps and accelerates things until someone uses the chance for further investigation.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

Of course, Olaf, you are right!

Regards

Griff
Keep Smileing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

RE: VFP 9.0 :: Application Performance Network Environment

It's okay, Griff, everyone has seen me overlook things, too.

I also understand that a problem can cause panic mode, but you see, it just takes one or two lines of code to get more log information, and that helps both with a healthy and with a problematic application, no matter whether the problem comes from the hardware or LAN settings or is inherent. Measuring times and tracing what happens is quite usual, also for having a long-term record of the norm and its fluctuations. And indeed, coverage can also be run at runtime, not just within the IDE.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
You are absolutely right.

I already implemented some logging, and it is the USE command that takes the seconds.
I will analyse it in more detail.

And I will also test only closing tables when closing forms.

Thanks,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

Okay, I bet COVERAGE will be more precise, but nonetheless. I also see you USE ... IN 0, so we really are talking about the opening step. Perhaps TABLEVALIDATE 2 is not the best option, as it is not just about a check at opening or closing. Try what you gain by turning it off completely with SET TABLEVALIDATE TO 0. Yes, I mean that. You risk a reccount error, but build in a simple DBF fix: open the broken table with TABLEVALIDATE at 0, then APPEND BLANK and DELETE; that fixes the reccount again - the least problem to worry about.
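
That fix, as a small sketch (table path assumed):

CODE

* repair a DBF whose header record count is wrong: APPEND BLANK forces
* VFP to rewrite the count in the header, DELETE marks the dummy record
* for removal by a later PACK
SET TABLEVALIDATE TO 0
USE X:\myApp\DataBase\myTable.dbf EXCLUSIVE
APPEND BLANK
DELETE
USE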

And then, do you log the current time, too? Can you see concurrency, i.e. who else last opened a DBF before someone had a long USE time? Do you also log when someone closes a table? Do you have a count of how many users have a table open? None of this comes automatically with coverage logging, but those logs, with the help of a few current-time markers, would let you see all that, as precisely as the system clocks are synced with an internet time server.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

One idea I got while going for a walk: did you compare USE with any other file-opening operation, especially FOPEN in read mode, and for comparison also in read-write mode? Any file, not a DBF, in the same share - just to see what portion of the time is spent on the network connection, permission checks and getting the handle. VFP does not do much more than that. Another operation whose time consumption would be of interest is FSEEK(handle, 0, 2) on the opened file.
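
For example, something like this (the test file name is an assumption; any file on the share will do):

CODE

* compare plain low-level open/seek times against USE on the same share
LOCAL lnStart, lnHandle
lnStart = SECONDS()
lnHandle = FOPEN("X:\myApp\DataBase\anyfile.txt", 0)   && 0 = read-only
? "FOPEN read-only:", SECONDS() - lnStart, "seconds"
IF lnHandle > 0
   lnStart = SECONDS()
   ? "File size:", FSEEK(lnHandle, 0, 2)   && seeking to EOF returns the size
   ? "FSEEK to EOF:", SECONDS() - lnStart, "seconds"
   FCLOSE(lnHandle)
ENDIF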

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Hi,

I applied only two changes, to see how it went today.

These being:
  1. Opening a table when necessary, and only closing it when closing the form.
  2. Opening tables with "USE Database!tableX AGAIN SHARED IN 0" instead of "USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0"
The first change should only be felt after the first use of each table.
The second, whenever a table is opened.

The table that used to take 20 seconds to open became instantaneous... and that happened right after applying point 2.
As for the first point, I waited to validate the application's behavior over the day, that is, whether it was getting heavier - and that also didn't happen.

Conclusions:
  • Opening a table only when necessary, leaving it open and only closing it at the end seems to be better.
    But here I have to be careful with the Windows cache, so that records are not lost.
  • Opening the table through the database is better than using the DBF file path.
    Is it because Windows does not manage it the same way?
In your opinion, what conclusions can be drawn from this experience?

Thanks,
Fernando Pereira

RE: VFP 9.0 :: Application Performance Network Environment

First of all, I don't think you can draw definite conclusions after one day of changes. Even so, congratulations on getting the USE of that table to instantaneous - but, well, you could also just have been lucky. One thing is okay though, we have discussed and confirmed it: keeping tables open means that, more often, two or more clients have a file in use and none of them holds an oplock.

But once more: oplocks themselves don't take time, breaking oplocks takes time. And you can never avoid someone getting an oplock; one user will always be the first one using a file. And if you change your ways and start your app as the first user of the day, anticipating the 20-second wait, you won't see a 20-second wait - because you're not the one who has the problem then.

It's the second user - not the second of the day, but the one who gets the second file handle on a file. If the first user of a DBF closes it before a second user opens it, that second user also has no problem. As you had this problem regularly, though, the second-user situation must occur quite often. You will still always have a second user, and that one will have a long USE time, unless the first use was so recent that breaking the oplock isn't much effort.

And what is breaking an oplock, and what does VFP have to do with it? Well, nothing. The VFP runtime was as it is before that mechanism was introduced, or rather changed to the behavior that is bad for VFP, and also for Access and other software. As oplocks are not real locks, a second user never gets access rejected: a second user demanding shared use of a file gets it, but before he does, the OS breaks the oplock, and the first client isn't asked whether he likes that or not; the system just does it. Whatever goes wrong at the level of OS clients being told to flush their caches, maybe VFP's own caching is involved here. It shouldn't be, as VFP has no idea it holds an oplock - you can't detect that you have one, and VFP only acts differently on files when it knows it has real exclusive access.

So, I don't know. I'd always expect someone to get one long USE; that only never happens when there is never concurrent use of a file. But what's better about going through this phase of one user paying the "penalty" is that once a third or fourth user opens the file, and the number of handles never drops back to 1, no oplocks will be granted anymore.

There is one much simpler solution, though, and that is preventing oplocks from happening at all. Did you even look at the link I have now given, I think, three times already? https://docs.microsoft.com/en-US/windows-server/st...

Now, regarding your changes: USE database!longtablename ultimately opens the same file, and work area and alias name have no influence whatsoever on oplocks. The only thing that changes is that you touch the DBC once more. But there is another reason this makes almost no difference for the DBFs of a DBC: if you open a DBF by its file name, without a reference like database!longtablename, you still open the DBC too, or at least verify the consistency of the backlink to its DBC that is stored in the DBF header. Did you ever notice? If you close all tables and databases and just USE a DBF belonging to a DBC, the DBC is opened automatically as well. So you have a flurry of side actions there anyway, no matter how you do it.
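
You can see that side effect in a quick check (a sketch; the DBC name "Database" is inferred from your USE Database!tableX example):

CODE

* demonstrate the backlink side effect: USEing a member table
* opens its DBC automatically
CLOSE TABLES ALL
CLOSE DATABASES ALL
USE X:\myApp\DataBase\myTable.dbf SHARED
? DBUSED("Database")   && .T. - the backlink opened the DBC, too
USE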

And once a DBC is in use, chances are high it's completely read into memory, as a DBC isn't very large, unless you put a lot of stored procedures into it. Then this flurry of side actions also becomes inexpensive. Besides, all users use the DBC file and will surely only rarely close it.

So, all in all, I only expect change 1 to make a difference. And even that only plays a role if, despite all settings to turn them off, oplocks are still granted.

Again, 2008 Server has a solution for that, so you could almost spare yourself the changes. Still, I think there is another good aspect to the rather long-term use of files: I never had good experiences with the tidiness of closing everything right away; you always need to build it all up from scratch again.

Regarding saving into the DBF file: with buffering you can see the value stored in the DBF with CURVAL(), from any client, whether you have buffered changes yourself or not. This also gives you the chance to predict a write conflict: your buffered value differs from both CURVAL() and OLDVAL(), so TABLEUPDATE() wants to change the DBF, but detects that while you changed the value from OLDVAL() to your work area value, CURVAL() changed as well. That's not a technical problem; the way conflicts are meant is: look, pay attention, someone else already changed this value, and differently from what you want. You can always choose the force option and disregard that, making the last change win.

Effectively you have that situation without buffering anyway: all changes go straight to the DBF, so whoever saves last saves his change.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

Now some more practical advice on how buffering is, in general, not bad for keeping control over what is saved to the DBF.

You hinted at closing DBFs also to ensure even caches get written. On the one hand, you never have full control over what the OS and hardware do with their caching on top of VFP's, which again sits on top of, and separate from, buffering; so you could still be the victim of a false assumption while not buffering. Data is only saved back to the controlsource when a control loses focus: you don't write every single keystroke you make in a textbox or editbox, or every choice made in a combobox. VFP writes only after Valid says OK, and you can't trigger that by calling Valid; focus has to change. Mike can sing you a song about what that means for a save button on a toolbar. It's general VFP base knowledge that a toolbar save button has to ensure a focus change away from form.ActiveControl, usually using the trick of setting focus to what already has focus, as that also fires the whole chain of events.
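
That trick, as a sketch (for a toolbar save button's Click):

CODE

* make the active control pass through its focus events, so Valid
* fires and the controlsource is written, before saving
IF TYPE("_Screen.ActiveForm.ActiveControl") = "O"
   _Screen.ActiveForm.ActiveControl.SetFocus()
ENDIF
* ...then save, e.g. with TABLEUPDATE()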

On the other hand, you can verify what's in the file. There are two things that read from the DBF: SQL SELECT and CURVAL(). CURVAL() only works with buffering, but let me show how that can be your verification of a save, and how buffering isn't a problem with write conflicts. If you work without buffering, you accept the "last change wins" strategy of shared data access, and that's easy to keep up with the force option of TABLEUPDATE().

So here we go: all the essential code that would usually be spread over several parts of a form:

CODE

* sample data preparation
* only need to run this once.
* can run multiple times anyway
Close Tables All
Close Databases All
Cd Getenv("TEMP")
Erase oldvalsample.*

Create Database oldvalsample.Dbc
Create Table oldvalsample.Dbf (iid Int Autoinc, cData c(15) Default "blank")
Append Blank
Use
Close Databases All
* only need to run this once.
* ------

* clean start
Close Tables All
Close Databases All
Clear
Cd Getenv("TEMP")
Set Exclusive Off
Open Database oldvalsample.Dbc

* preconditions:
Set Multilocks On

* init/load
Use oldvalsample.Dbf In 0
Select oldvalsample
CursorSetProp("Buffering",5,"oldvalsample")

* Now working on data (anywhere and anytime within form lifetime)
* Update yourtable...
* Insert into yourtable (...)
* Replace
* Append Blank, whatever, here let's just do
Select oldvalsample
? 'initial'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show blank, blank, blank, all initial values

Replace cData With "new"
? 'after replace'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show new, blank, blank, you only changed the workarea buffer until now

* save button
If Not Tableupdate(2,.T.,"oldvalsample")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show all the same new, so notice, after tableupdate oldval is updated,
   * the new saved value becomes the old value which later is the basis of judging conflicts
   * (if you don't force your changes)
EndIf
* change user 1's buffer in preparation of the demo about workarea, buffer, curval, oldval
replace cData with "buffer"

* now demonstrating when oldval and curval could differ and
* proving curval doesn't mean your current (buffer) value
* but current dbf value
* this happening from another user:
? 'user2'
Use oldvalsample.Dbf In 0 Again Alias curvaldemo
Select curvaldemo
CursorSetProp("Buffering",5,"curvaldemo")
? 'initial'
? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
Replace cData With "change" In curvaldemo
? 'after replace'
? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
* change, new, new
* save button
If Not Tableupdate(2,.T.,"curvaldemo")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
   * will show change, change, change
   ? 'What user 1 sees at this time:'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show new, new, change
Endif

* back to initial user
? 'user 1'
Select oldvalsample
Replace cData With "last change"
* seeing the conflict: your change from oldval "new" to "last change" collides with curval "change" from the other user
? 'after replace'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show last change, new, change

* nevertheless, last change wins:
* save button
If Not Tableupdate(2,.T.,"oldvalsample")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show last change, last change, last change
   * last change wins.
Endif 

Essential things to take from this sample:
1. CURVAL() only changes with TABLEUPDATE(); it reflects the currently stored value of a field, so it can be your validation of a save.
And this is the good news: you can test and convince yourself whether TABLEUPDATE() works as it should in a network with its configuration and behavior. I have seen things not work as expected, and there's quite a lot of variation in what is better to use or not use. Once you see CURVAL() confirming that it reads back what you wrote, you don't need to keep this as a permanent validation step (you could, again in the interest of logging and spotting deviations from the norm). But you see that buffering and TABLEUPDATE() give you a means to save, with a definite point in time for committing changes, that does not require closing a DBF.
2. TABLEUPDATE() with the force option knows no conflict; this comes closest to no buffering in avoiding the need for any conflict checks - changes come as they go. You may experiment with .F. for no force.
3. The work area (buffered) value, OLDVAL() and CURVAL() can be three different values.
4. OLDVAL() updates once you TABLEUPDATE() and flush your buffer; it doesn't stay at whatever the value was when you first USEd the table.

And if you remove the REPLACE cData WITH 'buffer', you also see that user 1 sees the change of user 2. This just shows that buffers don't decouple you from other users' changes; they only do so after you store something in them - they start empty, not filled with all the old values. When you have nothing buffered for a record, the DBF is read and a work area field equals CURVAL(), too, just like working unbuffered.

So there is no need to fear totally different behavior from buffering: you're not isolating yourself from other users' changes, you're only isolating the others from your changes, until you commit them.

Andy Kramek once wrote a lengthy article about why you never need row buffering. The essence is that you can also do single-row table updates yourself in table-buffered mode, and you have better control over when they happen. You can really struggle with events like a grid's BeforeRowColChange and AfterRowColChange; taking it into your own hands, the worst case is a .F. from TABLEUPDATE(), not an error triggered by some code that changes the record, or even just a grid click, which makes the source of such an error extremely diffuse.

The only conflicts you can get with the force option of TABLEUPDATE() are about violating table and field rules, or index uniqueness if there's any chance of that at all; but those things would also happen to you working unbuffered, and if you don't have them, you don't introduce them just by switching to buffering.

And last but not least: you don't need to establish an edit mode or a save button; you can autosave per timer, or make focus changes trigger TABLEUPDATE(), if you want to come closest to the unbuffered mode. But you see how many more actions unbuffered work on DBFs means for the file system, and why that might be a reason for having problems in the first place. Buffering means less frequent acting on the DBF files, less concurrency, fewer problems.

Bye, Olaf.

Olaf Doschke Software Engineering
https://www.doschke.name

RE: VFP 9.0 :: Application Performance Network Environment

(OP)
Hi,

First of all thanks to everyone who helped me.

I was able to stabilize and solve the problem, and the solution involved:
  • cache settings in SMB
  • opening the tables with the most impact only once, and closing them when the menu is closed.
Best Regards,
Fernando Pereira
