VFP File Lock Issues with Windows Server

rbeitler (IS-IT--Management)
About 2.5 weeks ago we started seeing issues with file locks, typically error 1705, "File access is denied". We have an internal VFP app that runs in 3 locations, each with its own server, and started seeing this at all locations. For the most part, this is code that has been running fine for years with no issues.
The particular code where these issues happen does have some things in common:
  • If a dbf is open shared, but then needs to be opened exclusive, the error will happen even if no one else is using the dbf (usually temporary tables).
  • Calling FOPEN on a log file during a loop where it is repeatedly doing FOPEN and FCLOSE on the same file.
So in both cases it seems to be tripping on itself so to speak.
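For what it's worth, the refactoring we're doing is roughly in this direction (just a sketch; the table name, retry count and delay are made up):

* Retry USE ... EXCLUSIVE a few times before giving up.
DECLARE Sleep IN kernel32 INTEGER dwMilliseconds
LOCAL lnTry, llOpened
llOpened = .F.
FOR lnTry = 1 TO 5
   TRY
      USE temp_batch EXCLUSIVE IN 0   && hypothetical temp table
      llOpened = .T.
   CATCH TO loErr WHEN INLIST(loErr.ErrorNo, 1705, 3, 108)
      Sleep(500)   && wait half a second, then try again
   ENDTRY
   IF llOpened
      EXIT
   ENDIF
ENDFOR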
The files are accessed over the network via Windows Server. We tried to disable oplocks by using Set-SmbShare to set the LeasingMode to None. Every time we reboot the server, everything works for a day or two; then the issues start to come back. We tried disabling antivirus, no joy.

We are in the process of refactoring our code to avoid this situation, such as opening exclusive the first time, keeping log files open, etc.

Has anyone else who still uses VFP seen any similar behavior?
 
So in both cases it seems to be tripping on itself so to speak.

Tripping on itself? That suggests timing issues.

Try inserting some timing delays in your code to see if that solves the problem. If it does, then you will have a better idea how to attack the problem.
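For example, something as simple as this between the FCLOSE() and the next FOPEN() in your loop (handle and file names are placeholders, and the 250 ms value is only a guess):

DECLARE Sleep IN kernel32 INTEGER dwMilliseconds
=FCLOSE(lnLogHandle)
Sleep(250)   && give the server a moment to actually release the handle
lnLogHandle = FOPEN(lcLogFile, 12)   && 12 = read/write, unbuffered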
 
So in both cases it seems to be tripping on itself so to speak.
The files are accessed over the network via Windows Server. We tried to disable oplocks using the set-smbshare and setting the LeasingMode to none.

Years ago we had terms for this scenario, "Deadlock" and "The Deadly Embrace": a situation where two processes compete for access to the same resources. One process is waiting for another process to complete before moving on, yet the second process could, in fact, be waiting for the first process to complete, hence the deadlock.

To avoid a deadlock, consider trapping those situations so that you are not holding a lock on one thing while waiting for access to something else that is locked, because it's possible the other task is doing likewise, waiting for whatever locks you already hold.

@mmerlinn is correct, timing plays a big role. You should try to limit the amount of time you need exclusive use of anything, whether it's a record or a file. Personally, I only open DBFs in shared mode unless I am reindexing or altering the structure, but if you have no workaround, you need to keep those exclusive uses as short as possible and release the exclusive use as quickly as possible.

Back in the 80s, before true shared networking was possible, what people did was work in batches. You wrote your information to a temp file, which you had exclusive use of, then had a way to combine your temp file with the live master later.

You can use the same idea with the log file. Instead of FOPENing the shared log throughout the run, see if you can write to a temp file, which is always exclusive, and then append the result to the complete log at the end. That way, instead of keeping (or repeatedly reopening) the full log the whole time, it only needs to be open for a fraction of a second while your temporary log is appended.
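Sketched out, something like this (names and paths are made up, and real code would check the return values):

* During the batch, write to a private temp file only.
lcTempLog = ADDBS(SYS(2023)) + SYS(2015) + ".log"   && unique name in the local temp folder
lnHandle = FCREATE(lcTempLog)
=FPUTS(lnHandle, "some log line")   && ...repeated inside the loop...
=FCLOSE(lnHandle)
* One short append to the shared log at the very end.
=STRTOFILE(FILETOSTR(lcTempLog), "\\server\share\app.log", 1)   && 1 = append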

HOWEVER, if the system has worked all these years, and you've never needed any of these techniques before, there's still a chance it's a hanging lock from an aborted session. Simply stated, if there are no other people using a file, it should never block you from opening it exclusively.

One scenario I've seen is that a user may have a file open, then their system goes into suspend or hibernate mode, or even reboots or shuts down unexpectedly, and somehow the server doesn't realize they are no longer there. Another scenario I've seen is where a client had a web portal that allowed people to view some of the data online. The web server typically keeps those files open long after the web session closes. Lastly, I've seen backup systems that interfere with file access.

If you have a traditional Windows shared folder, you should be able to see what files are open at any given time and by which account(s) and even know if there are any record locks in use. If nothing else, once you identify who the system thinks is using the file you can determine whether it's even somebody who is no longer logged in.
 
Are you using views in a shared dbc, too? You did explicitly talk about dbfs and log files, I know, and those are neither views. I mention it because for views it's a known issue that at the moment a view is opened, an error 1705 can happen, and the solution to that is to let every user have their own copy of a view dbc and only share a dbc with dbfs.

If a dbf is open shared, but then needs to be opened exclusive, the error will happen even if no one else is using the dbf (usually temporary tables).
Why use temp tables instead of cursors? I guess there is a reason. But then, are they created locally or in the share? Everything you create for a single user, that's used in a report for example, should not even get a remote chance of causing trouble with other users creating the same temp file in the same place. There's TEMP for temp files.
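For example (a sketch only; table, field and variable names are made up):

* A) No file at all - a cursor already lives in the local temp area:
SELECT * FROM batchdata WHERE batch_id = lnBatch INTO CURSOR crBatch READWRITE
* B) If a real DBF is needed, create it in the user's local TEMP, not on the share:
lcTempDbf = ADDBS(GETENV("TEMP")) + "batch_" + GETENV("USERNAME") + ".dbf"
SELECT * FROM batchdata WHERE batch_id = lnBatch INTO TABLE (lcTempDbf)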

Is Terminal Server involved? MS does give each user session a separate local TEMP folder on the terminal server, which avoids double usage of the same temp file, of course, but if you point temp files in your application at something other than the OS default, this could again cause double usage of the same file by different user sessions running on the same terminal server. I therefore strongly recommend always using the Windows TEMP folder, even if you're not using Terminal Server right now; it makes the transition a lot easier if it is ever considered.

If you have a dbf for which the lock problem occurs most often, it pays to use Procmon for file monitoring, to see which processes hold handles on the dbfs and what's happening on them, and whether it's really the same client tripping over its own usage.

Both local AV and server-side AV could potentially act on shared files: the server AV because it examines all server-side files, including shares, and the local AV because a local process is opening files on a server share. So AV might still be the issue.
 
By the way, doing some experimenting:

Using one VFP IDE (process) with one datasession:
Using a dbf shared or exclusive in one work area, then trying
USE thesame.dbf EXCLUSIVE
leads to error 3, not 1705.
Which also means you can even use the same dbf twice in the same datasession under another alias name, i.e.
USE thesame.dbf EXCLUSIVE ALIAS othername AGAIN IN 0
works.

Using one VFP IDE (process) with two datasessions:
Trying USE thesame.dbf EXCLUSIVE also triggers error 3, not 1705.

Using a second VFP IDE (process):
Trying USE thesame.dbf EXCLUSIVE triggers error 1705.

That's exhaustive enough for me, but you might consider further cases, like exclusive access with another alias name and the AGAIN clause in a second datasession (but still the same process). I'm also currently only using a notebook not joined to a network, so actual network challenges are untested, as are actual multiple users.

I'm quite sure, though, that the same user could only trip over their own usage with error 1705 from a separate process, like the same application started twice. If it's a second process, it's not really the same user as far as the file system is concerned, even when that process runs under the same Windows user account on the same client PC, so check whether your application logic considers that case.

Otherwise it should be a second user or process accessing the same file, possibly also AV running client side, server side, or both. As you say this is happening at multiple locations, each with its own server, it points towards a Windows update causing it, but I'm not aware of a new general locking problem in networking beyond the over-a-decade-old oplocks problem. A new user starting the application multiple times to do things in parallel, for example, would also only be a new "pattern" at one location, not at all three locations at the same time, which is very unlikely unless a new feature makes that a logical approach for users. The least probable scenario, but one I've also encountered before, is first-time usage of an old feature, i.e. a feature that never worked but was simply never used. That's also quite unlikely at three locations, so perhaps it's rather a newer feature: have you introduced something around the time the problem began to occur?
 
Thank you for all of the replies.

Few things:
  • We are not using views or dbc. Just dbf "free" tables.
  • The "temporary" dbfs that get zapped, we like them to persist on the shared disk so that users can see the information from the last batch and reprint etc.
  • We have a global error handler that presents the user with the error and the option to "Retry". If they choose this, the RETRY command is issued. When this access denied issue started, using this retry option lets the program continue successfully. In the case of the USE EXCLUSIVE, this is a one-time event and everything works. But in the case of a loop writing to the log file, it can cause repeated errors, and if the user presses retry over and over, it will eventually get through OK (a simplified sketch of that retry branch is below this list).
  • I am also suspecting a Windows update that changed the behavior of SMB.
  • I am refactoring as much of the problematic code to be more resilient (not a bad thing anyway)
  • We are also looking at moving the whole application to be run from the server using RDS or APP-V or whatever it is called now
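The retry branch of the handler boils down to something like this (simplified sketch; the real handler logs the error and shows more detail):

ON ERROR DO GlobalErr WITH ERROR(), MESSAGE(), PROGRAM(), LINENO()

PROCEDURE GlobalErr(tnError, tcMessage, tcProgram, tnLine)
   IF INLIST(tnError, 1705, 108) AND ;
         MESSAGEBOX(tcMessage + "  Retry?", 4 + 48, "Error") = 6
      RETRY   && re-execute the line that failed
   ELSE
      * log, clean up, and bail out
   ENDIF
ENDPROC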
 
The "temporary" dbfs that get zapped, we like them to persist on the shared disk so that users can see the information from the last batch and reprint etc.
That has contradictory aspects, doesn't it? It only remains until the next user reuses these temp dbfs, doesn't it? If you want to keep the last printed data, then either do so locally or, if it stays in the global share, let each user have their own directory, so that concurrent usage is avoided entirely, not just made rare. If you only use one set of DBFs because the report data environments have them baked in, well, think about using USE commands in the report DataEnvironment event code and using GETENV("USERNAME") or ID() for the per-user aspect.
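A sketch of the per-user idea (folder layout and table names are made up):

lcUserDir = "\\server\share\reportdata\" + GETENV("USERNAME") + "\"
IF NOT DIRECTORY(lcUserDir)
   MD (lcUserDir)
ENDIF
SELECT * FROM batchdata WHERE batch_id = lnBatch INTO TABLE (lcUserDir + "lastbatch")
* and in the report DataEnvironment (e.g. its BeforeOpenTables event):
* USE (lcUserDir + "lastbatch") ALIAS lastbatch IN 0 SHARED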
 
On the topic of locks overall, it's not only a problem to USE a dbf exclusive; if you do everything shared you also get locking problems, even without any manual locks, because there's always the need for at least temporarily exclusive access to a file in a FAT or NTFS file system in order to write to it. That's not a VFP mechanism or need, that's by design of the file system. Concurrent writing to DBFs will usually trigger error 108 ("File is in use by another user") in contrast to 1705 ("File access is denied"). Both errors can be resolved by RETRY, indeed, but it could take much too long. It's recommended to let VFP retry at least; that's configurable with SET REPROCESS.
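For example (the values are just examples):

SET REPROCESS TO 5 SECONDS   && let VFP itself retry for up to 5 seconds
* or
SET REPROCESS TO 10          && retry 10 times, then raise the error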

If a file really is in use by another user, that's usually a reason to do something else instead until the file becomes available for exclusive access again, but regarding some shared files for reports, that's not necessary with what I suggested already. The other common suggestion is to drive reports by cursors prepared before the report run and let those reports have NO private datasession, so they access the prepared cursors. Those don't have to be in a specific shared folder, they only have to exist and have specific alias names. To still be able to redo a report, debug report issues, etc., you can persist cursors simply using COPY TO. The persisted data can be either standalone free DBFs or DBC-based DBFs, so the usage of cursors doesn't contradict your needs either. You'd never have a concurrency problem with the cursors themselves, only when reading from source DBFs into cursors, of course, and later when writing to the same DBF files, instead of a per-user structure of data or other means of shared report data storage you could design.
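Sketched out (report, table and variable names are illustrative; the report form would have no private datasession):

SELECT * FROM invoices WHERE batch_id = lnBatch INTO CURSOR crReport NOFILTER
REPORT FORM invoicerpt NOCONSOLE TO PRINTER PROMPT   && the report uses the crReport alias
SELECT crReport
COPY TO (ADDBS(lcUserDir) + "lastbatch")   && persist the same data as a free DBF for reprints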

Anyway, both 1705 and 108 point to concurrent access by at least a different process, and likely another user, not tripping over your own feet.
 
One of the things we are testing to help this issue is instead of running the vfp app on each client and accessing dbf over SMB, we have one user set up to use RDP into the file server and run the vfp app directly on the server. After cleaning up some file path issues, it is running well. Actually extremely well. Speed when processing files is an order of magnitude better. This makes sense as they are on the local hard drive (SSD) vs going over the network through SMB.
So at this point we are planning to move everyone to that setup by enabling remote desktop services so we can have more than 2 people connected.
But I have seen more seamless ways to do this with Citrix, or Microsoft RemoteApp. Did a little research and RemoteApp seems like it would be the way to go. So we are going to give this a shot.
Any caveats implementing RemoteApp with a VFP application?
 
Fine, if that helps. Well, the first topic to think about when running a VFP exe remotely on a server: all users now use that one server (Windows Terminal Server could provide a better basis and also allow running multiple servers). The server is running multiple user sessions, so it has to be dimensioned with enough RAM and CPU cores as if you had a PC with 2, 5, 10, or 100 users sitting in front of it, all logged in in parallel; not only one session active at a time, but all of them. Each user will only see their own desktop on the remote client, but that's one of as many desktops as you run concurrent users. VFP's runtime RAM need is only a few MB on GB-RAM systems, so that's usually not the bottleneck, but all in all the server has to be dimensioned with enough RAM and all other resources for as many users as you want to be able to serve concurrently.
 
The second thing to notice is that the user's client is not running the software anymore, and the server running your EXE does not get a drive letter for a thumb drive a user inserts, for example, or any other attached hardware. RDP (Remote Desktop) allows client drives to be forwarded in the form of \\tsclient\$Driveletter, so \\tsclient\$H for a USB drive mapped as drive letter H: on the client. Other things not available on the server are obviously local printers, so you'll use network printers. Overall, there are such things that require changes in your application. One sin is to have code using local drives that would not collide when every user has their own PC/client, say drive C:\yourapp\somedata\, but which now becomes a shared directory for all users on the server's C:\ drive. So anything you relied on to be exclusive to the user on their own local drives is not exclusive anymore, and any hardware, printers, etc. of the clients are not attached to the server, of course.

Also look into what you get from ID(), GETENV("TEMP") and other such environment variables when running on the server vs. on the client.
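A quick way to compare is to run these both on a client and in a session on the server:

? ID()                 && machine name and user
? GETENV("TEMP")       && per-session temp folder under terminal services
? SYS(2023)            && the temp path VFP actually uses
? SYS(5) + SYS(2003)   && default drive and directory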
 
