Database Corruption on networked data

Status
Not open for further replies.

ELD

Technical User
Jun 29, 2001
US
Environment Netware 5.1 with latest patches
500Mhz Dual Processor 4GB Ram
Token Ring Switched at ring level
(soon migrating to switched Ethernet)
W98SE on all workstations
Novell Client 3.21 on most workstations (Some at 3.3)
Most workstations are Dell 500Mhz or faster 128MB Ram
FoxPro 2.6 Visual 3.0 and Visual 6.0 Apps exist
500 users (300 actively in the files)
8500 Network files open peak
Traditional Netware Volume 65GB.
Backup Exec 8.5 with Open File Option

I am on the network side supporting the programming staff. We have been together for five or six years, have grown from NetWare 3.11 through 4.11 (now being displaced by 5.1), and have worked through a good deal of FoxPro/NetWare issues.

I know that FoxPro database/index corruption will occur. It is occurring far more frequently now, and I am trying to find anything on the network that is contributing. We do have some congestion on some of the rings during peak traffic. All our cable is grounded properly, runs at 16 Mbps, and is certified to CAT5 100 Mbps standards.

The server runs at 6% utilization at times, but 0-1% is more common, rising to 30-35% during nightly backup operations, with 91% long-term cache hits over several weeks.

I now seek your recommendations for optimizing the client, in areas like:
Extended File Handles
Close Behind Ticks
Opportunistic Locking
and any setting that can improve the reliability of the client for the FoxPro environment.

Any ideas would be welcomed. Again, I am on the network support side, but any suggestions you might have for the programmers would also be gratefully received and passed on.

Thanks
 
HI
Just a few tips, if these have not been done already...

1. In all update/write code in your FoxPro PRG files, issue a FLUSH command after the write. This ensures that FoxPro bypasses the write cache and the data is completely written to the hard disk. I have achieved an over-90% success rate in avoiding write corruption this way.
2. Report generation routines almost never cause data corruption, so they should not be a problem for you.
3. Avoid using RLOCK(), or for that matter any record locking, and then handing control back to the user. Such coding always leads to one user holding a lock and another user, after waiting a while, rebooting the system, and that leads to corruption. Between locking a record and releasing the lock there should be no user interaction. This is achievable (I have done it in all my applications), and it also speeds up user access to the database.
4. As for tuning, give FoxPro enough (extended) memory so that the extended version always runs and drives the software. This improves speed.
5. In a network situation, keep the temporary files on a local drive (you can configure this in CONFIG.FP). This avoids network traffic.
6. Always use a resource file of your own. When the application starts, copy your saved default resource file to a temporary name and set that as your resource file. For example:
myRfile = SYS(3)
RUN COPY mySavedRFile.* &myRfile..*
SET RESOURCE TO (myRfile)
...
and delete the temporary resource file when leaving the application.
This keeps the hard disk trim: the smaller the resource file, the better the performance.
These are just some immediate viewpoints from me. Hope this helps.
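Tips 1 and 3 above can be sketched in FoxPro 2.6-style code. This is only an illustration, not code from the application in question; the table, field, and variable names are invented:

```foxpro
* Gather the user's input FIRST, while no lock is held.
lcNewName = SPACE(30)
@ 10,10 SAY "Name: " GET lcNewName
READ

* Then lock, write, flush, and unlock with no user interaction in between.
IF RLOCK()
   REPLACE custname WITH lcNewName
   FLUSH          && push the write through FoxPro's cache to disk (tip 1)
   UNLOCK         && release immediately; never wait on the user here (tip 3)
ELSE
   WAIT WINDOW "Record is in use - please try again."
ENDIF
```

For tip 5, the CONFIG.FP entries involved are TMPFILES, EDITWORK, SORTWORK, and PROGWORK; point them at a local drive such as C:\TEMP (check your FoxPro manual for the entries your version supports).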

ramani :-9
(Subramanian.G)
FoxAcc
ramani_g@yahoo.com
LET KNOW IF THIS HELPED. ENOUGH EXPERTS ARE HERE TO HELP YOU OUT! BEST OF LUCK :)
 
It's been my observation, especially on Novell networks, that most data corruption is due either to bad network hardware (bad NIC, bad cabling, bad switch, etc.) or to bad configuration of the Novell clients. The default configuration is meant for non-database use and does a lot of write-behind caching, opening the possibility of "lost" data.

Novell Documentation:

FoxPro Database corruption with Client32 (Last modified: 10DEC1998)
This document (2920765) is provided subject to the disclaimer at the end of this document.

Symptom
When creating and closing large numbers of files, Client32 seems to corrupt FoxPro databases. The Microsoft NetWare Client does not cause the same problem. After creating over 169, 170, or 171 files, the Client doesn't seem to be able to release the file handle.
Any database that opens more than 170 files can cause corruption when using Client32. The only known database that does this is FoxPro for DOS and Visual Fox Pro. Any Windows-based application, using Windows APIs should not cause this problem.
Made2Manage is one application that uses FoxPro, and opens more than 170 files.


Cause
The client was designed to only open up to 170 DOS files. This was the recommendation of the DOS developers kit. DOS under normal circumstances cannot open more than 170 files, but some Microsoft applications do exceed this limit.


Solution
In the NetWare 5 release of the Novell Client for WIN95, a new setting has been added under Advanced Settings: Use Extended File Handles. This setting is off by default. Some DOS applications such as Fox Pro will require this setting to be set to ON, and that the MAXIMUM FILE LOCKS PER CONNECTION is set at the server for the number of file locks required by the application and client.
Key words: WIN95, Windows 95
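The server-side half of that fix can be applied at the NetWare server console. A sketch only: the value below is an assumption, so size it to the number of file locks your application and clients actually need; to make it persistent, place the same line in AUTOEXEC.NCF.

```
SET MAXIMUM FILE LOCKS PER CONNECTION = 1000
```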

Another User recommends:
"Right-click the red Novell "N" in the system tray.
Select "Novell Client32 Properties."
Click on the Advanced Settings tab.
Set Cache Writes to Off.
Set File Cache Level to zero.
For extra points, you can set Use Extended File Handles to On.

Restart and test.

This is not a bug in the Novell client. It's just a mistake on Novell's part, making it too aggressive in the pursuit of performance."


Rick

 
ELD,

I had the same problem as you for the longest time, until I found out the problem was the difference between the Novell and Microsoft FATs. I was corrupting the databases and indexes by using the DOS COPY command. I solved the problem by using the slower FoxPro COPY.

For example, I had a DBF reindex routine that did this:

USE DBF EXCLUSIVE
USE
RUN COPY (netware file server hard drive.DBF) to (local Win computer hard drive.DBF)
* if there is an FPT, RUN COPY the FPT file also
* Open a reindex database and get the stored index information.
* Reindex the DBF on the local drive.
USE DBF
RUN COPY (local hard drive.DBF) to (File server Drive.DBF)
* if FPT, copy the FPT also
* if CDX, copy the CDX also

What happened is that FoxPro wrote to the DBF/CDX header that it was running on a Windows OS and placed info in the header for that OS. After a short period of time the Novell OS would corrupt the DBF and CDX header.

To solve the problem, I changed the routine to:

USE (file server.DBF) ORDER 1 EXCLUSIVE
COPY TO c:\temp\DBF
USE c:\temp\DBF
COPY TO (file server hard drive.DBF)
SELECT 0
* Open a reindex database and get the stored index information.
* Reindex the DBF on the file server's drive.
USE

The reindex was now being done on the file server, and FoxPro wrote the information for that OS to the header. Corruption of DBFs and CDXs stopped.

In short, do not DOS/Windows-copy a DBF to a local drive, perform a function on it that requires a header action, and then DOS/Windows-copy the file back to the server.

David W. Grewe
Dave@internationalbid.com
ICQ VFP ActiveList #46145644
 
We kind of have the same problem here...

We use Novell 4.2 servers, with hardware ranging from a 386SX-25 with 4 MB RAM to Celeron 500s with 64 MB, under FoxPro for DOS 2.5b (40 sites, retail environment).

clients are VLM and Client32

We have two problems.

One is where records disappear (i.e., paging down in a BROWSE, all the records disappear). To fix this, down the server and bring it back up; the records come back and there are no more problems. Or you can copy the file to a new file, then back again.

The next one is a little nastier: our sales database, ranging from 70-400 MB, hangs when updating (a FoxPro DBF).

This is really strange; the client is totally killed. The routine deletes the file's index tag, adds the records, then reindexes (that change was made as a workaround, trying to get around the problem), and it still dies while the records are being added.

These problems have only occurred since the upgrade from NetWare 3.11/3.12 to 4.2, and it doesn't happen all the time; i.e., it disappeared for a year, then came back, and over one month six sites were hit.

Anyway, some of our sites now have a cron(sp?) job that shuts down the server at 2 am or so and brings it back up on Tuesday nights.

We are limited in the patches we can put on NetWare (some servers have 400 MB HDDs), but the problem still occurs on our newer sites with all the available patches.

Zim
 