Better database engine

Status
Not open for further replies.

clip4ever

Programmer
Oct 11, 2002
43
CA
We have a Clipper 5.2e DOS multi-user LAN application running on Win 95 (XP soon) workstations, connected to a database on a Win 2003 server. Peripherals are a scanner, a laser printer, and a label printer on a COM port. COM port handling is done via a C program. The most used function is: scan a barcode, validate, save, and print a label.

We are experiencing delays when the user scans an item barcode. It works fine for a while (1-2 seconds per item), then it takes 10, 20, 30 seconds.

Speed goes back to the normal 2 seconds after a purge and re-index of the database. The DB is 8 MB and 50,000 records.

We started to wonder if the Clipper database engine is too limited for such a task.

Is this true? Do we need to re-develop the application with a new technology, or is there a solution or a better database engine?

Please help

clip4ever
 
clip4ever

My first OPINION is that you have a small database. Clipper has historically handled much larger databases easily. I doubt that your problem is a weakness of Clipper.

Do you have the code and ability to change it? I am curious as to why a purge and re-index is needed.

Jim Rumbaugh
 
Thanks for responding; this is my opinion as well. I have the code and can modify any of it. The purge removes old records to reduce size. Re-indexing is a regular weekly task we perform because we suspect recurring corruption.

clip4ever
 
clip4ever,

I would echo Jim's comments: the size isn't an issue for the underlying DBF, though it may be for the index. I recreate indexes nightly for most of the DBFs that are connected to my app. Clipper's DBFNTX, as an index scheme, is notorious for getting corrupt.

The delay that you mentioned is probably related to the code being executed, not the DBF itself. I work with millions of records and hundreds of megs of data on a minute-by-minute basis. What you are doing is what Clipper was designed to do, and it still does as good a job as anything else. I am also assuming that the network isn't a culprit in this issue. What is the "PURGE" doing?

Are you using DBFNTX as the index scheme? 8 megs and 50,000 records isn't that big of a deal for Clipper to handle. Has this been going on forever, or did it just start recently?

Going to the next level with a new program and database is fine, but that may not be necessary if the cause is just a badly written routine. Of course there are better database engines, but again, with the size of the data you are working with, it shouldn't be an issue.

Jim C.




 
Thanks for responding.
The purge deletes records older than a month and is followed by a re-index, once a week. Actually, after we added another product, the size has now doubled (15 MB and 100,000 records).
I understand that you re-index every night? Do you recommend this?

If you think it's the code, would it help if you had a look at the main scan program?

Thanks
 
clip4ever,

Yes, I would, and I would also reindex every time you add a large volume of records. It should take a minute or two, and if you still have the delay, that would point to the source code as the problem. A SEEK to a record should be instantaneous; the one to two seconds you mentioned seems like a correct response time.
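To illustrate what "instantaneous" should look like, here is a minimal Clipper sketch (table, index, and variable names are hypothetical, not from clip4ever's app): a keyed SEEK against a healthy NTX comes back immediately, even on 100,000 records.

```clipper
// Hypothetical lookup against an indexed items table.
// With a healthy ITEMS.NTX, SEEK positions directly on the key.
USE ITEMS SHARED NEW
SET INDEX TO ITEMS          // ITEMS.NTX, built with INDEX ON BARCODE
cBarcode := "0012345678905" // value that would come from the scanner
SEEK cBarcode
IF FOUND()
   // validate, save, print the label...
ELSE
   ? "Unknown barcode:", cBarcode
ENDIF
```

If a lookup like this takes tens of seconds, the time is going somewhere other than the index.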

Thanks,

Jim C.
 
Hi, Clip4ever

I agree with the Jims, this does not sound like a large database at all. I have an application running database files over 500 meg in size with 2.2 million records and multiple indexes. I rebuild indexes (NOT reindex) weekly.

How many users are concurrently accessing the database?
Does the application lock the database or multiple records?
Do these slowdowns occur with just a single user active?

If you have access to the server, you can check the open files on the share and also see how many locks are outstanding. If you see the number of locks increasing as time goes on then maybe the app isn't unlocking in a timely fashion.
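For anyone with console access, the built-in command-line tools can show this; a quick sketch, run at a prompt on the Windows 2003 server itself:

```
rem Files currently open across the server's shares, with lock counts
net file

rem Sessions from each workstation (useful for spotting a stuck client)
net session
```

Watching the lock counts in `net file` over time is an easy way to see whether locks are accumulating.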

You also mention index corruption. DBFNTX is really bad for that, but normally it should only happen rarely, such as after workstation crashes, people closing active windows, or network problems.

Has the server been patched to disable opportunistic locking? That can also cause index corruption, and once an NTX is compromised, results can be unpredictable. However, I am not positive that is an issue with Win 2003 server; it certainly is with other Windows versions, though not with Novell, which is what I normally use.
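For reference, disabling oplocks on the server side is a documented registry change (covered in Microsoft's opportunistic-locking KB article; this is a config fragment, so back up the registry first and reboot the server afterwards):

```
Windows Registry Editor Version 5.00

; Server side: tell the LanmanServer service not to grant oplocks.
; Applies to SMB1-era servers such as Windows 2003.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"EnableOplocks"=dword:00000000
```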

Jock
 
Thanks again for taking the time to help and share the knowledge.

Are you aware of any innovation or a new database engine or library (anything) to avoid re-indexing?

We are open to suggestions and willing to spend to fix this problem, in order to avoid spending millions on other options.

Please advise

Clip4ever
 
Thanks Jock for the input. Yes, we have several concurrent users, and we lock one record per user, no more. And no, it apparently happens with many users active.

clip4ever
 
If DBFNTX is bad and buggy, are you aware of another method?



 
Hi, Clip4ever

The DBFCDX driver has a much better reputation for reliability, and it also results in fewer files, since all indexes live in one .cdx file, which means lower overhead.

In terms of newer technology, the Advantage Database Engine is widely used with Clipper.
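Switching is mostly a matter of linking the replaceable database driver and making it the default. A minimal sketch, assuming the DBFCDX RDD shipped with your Clipper version is available to the linker (table and key names are made up):

```clipper
REQUEST DBFCDX                  // force the linker to pull in the RDD
RDDSETDEFAULT( "DBFCDX" )       // subsequent USEs go through DBFCDX

USE ITEMS NEW                   // opens ITEMS.DBF via DBFCDX
INDEX ON BARCODE TAG BARCODE    // tag stored inside ITEMS.CDX
INDEX ON SCANDATE TAG SCANDATE  // second tag, same .cdx file
```

Because all tags sit in one .cdx, there are fewer file handles open per workstation than with one .ntx per index.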

Jock
 
I came across CDX before, when I upgraded to 5.2e, but declined it because it required extensive changes to my app. I will have to reconsider it since you believe it will help.

I will also inquire about the Advantage database, and I welcome any tips or lessons learned about it, to help me sell it to management.


Great response. I'm grateful.

clip4ever
 
Clip4Ever-

Are you packing the db after deleting large amounts of records, deleting the existing indices and indexing from scratch, or what?

David
 
The weekly purge and re-index activity includes:

-- Deleting records older than a specific date.
-- Packing the tables.
-- Indexing the database tables using the INDEX ON statement, which I believe recreates the index.

If you mean physically deleting the NTX file using the DOS DELETE command before re-indexing, then no, we re-index using the same physical NTX file. Is there a difference? Do you suggest physically deleting the FILE.NTX first?

clip4ever
 
clip4ever,

As David mentioned, you should be physically deleting the indexes before executing the INDEX ON statement. If there is corruption, you may be propagating the same corruption if you don't delete the index prior to INDEX ON. Use the

IF FILE( "FILE.NTX" )
   DELETE FILE FILE.NTX
ENDIF

test to see if the file is present before you try to erase it.

On any of the high-volume transactions you may want to recreate the indexes before running the program. Basically, if you add a lot of records, I would recreate the indexes; you're doing that with the PACK and INDEX ON after you purge the old records. That volatility on top of a corrupt index may be causing the continuing issues you see.
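Putting that together, a hedged sketch of the weekly rebuild (file, field, and variable names are placeholders; the table must be opened EXCLUSIVE for PACK, so it needs the outage window anyway):

```clipper
// Rebuild ITEMS.NTX from scratch instead of on top of the old file
IF FILE( "ITEMS.NTX" )
   DELETE FILE ITEMS.NTX        // never reuse a possibly corrupt NTX
ENDIF
USE ITEMS EXCLUSIVE NEW
DELETE FOR SCANDATE < dCutoff   // the weekly purge (dCutoff is assumed)
PACK                            // physically remove deleted records
INDEX ON BARCODE TO ITEMS       // writes a brand-new ITEMS.NTX
USE                             // close; reopen SHARED for normal work
```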

Jim C.


 
Jim,

Deleting the NTX is an interesting idea. I'll try it.
Re-creating the indexes here and there and every night is not possible, because this is a 24/7 application and I need an outage in order to re-index or do anything.

A maintenance release is coming in a week or two, and I'm convinced to try CDX and to delete the index file(s) before re-indexing. I'll let you know the results.

Many thanks again. With some of these ideas I might be able to:

clip4ever
 
Hi,

I wouldn't be quite so quick to jump on the index corruption bandwagon. Sure, they can be problematic, but most of their problems are infrastructure related: poor network cabling, power instability and the like. I'm not 100% convinced they are as buggy as all that. That's not to discount the idea entirely, though.

This speed thing is interesting though and smacks of an ownership issue in Windows.

Does your code open and close the table as and when required, or does it hold it open and just access as needed?

If it's the former, then the Windows server may be using a technique called opportunistic file locking... this is where the workstation asks to open a file in shared mode and the server hands it a local copy in exclusive mode (it makes the server look quicker).

A second workstation trying to open the table at the same time then has to wait while the server and first workstation negotiate the release and reopening of the file.



Regards

Griff
Keep [Smile]ing
 
Interesting analysis. However, in my application I open all database tables in shared mode when I enter the main menu, and they all stay open until I exit back to the sign-on screen, where I close most of them, keeping open a couple of control tables necessary for login.

According to your analysis, I'm fine on this side.

Thanks

clip4ever
 
In that case I'd be looking at the network hardware - duff hdd/switch/router/nic or cable... Clipper can append a large multiple index to the largest file much faster than almost any other db I know.

Regards

Griff
Keep [Smile]ing
 
Clip4ever

What does the code do when you scan an item? Is it a read-only process, or is something written? The fact that a re-index solves the problem, but it slows down as time goes on, sounds like an index expression problem, or the dreaded SET FILTER TO command. My experience with NTX indexes is that when they go bad, you get junk, not a slowdown.
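To illustrate the SET FILTER point (names are invented for the example): a filter is re-evaluated record by record as you move through the table, so it gets slower as the table grows, while a SEEK through the index stays effectively constant.

```clipper
// Slow: every skip re-evaluates the filter over skipped records
SET FILTER TO ITEMS->SCANDATE >= dCutoff
GO TOP                          // can crawl on a 100,000-record table

// Fast: clear the filter and let the index position directly
SET FILTER TO
SET INDEX TO ITEMS
SEEK cBarcode                   // keyed lookup, effectively instant
```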

Jim Rumbaugh
 