
Creating Indexes 3


audiopro

Programmer
Apr 1, 2004
I am saving a DBF file for backup purposes and, for space-saving reasons, have opted not to save the CDX file.
How do I programmatically create a CDX file?
The language reference book I have covers creating CDX files with DBF files but not on their own.


Keith
 
The language reference book I have covers creating CDX files with DBF files but not on their own.
Not sure what you're trying to do here. A CDX file holds index information about the records in the DBF, so you can't create a CDX without there being a DBF for it. The CDX does not have to have the same name as the DBF, but it's meaningless without one.

Geoff Franklin
 
Sorry if I didn't make the situation clear.
The app is a shop till prog and I have a routine which automatically makes a daily backup. I have backed up the DBF file only, to save space. If we need to reinstate one of the backups following corruption etc., a corresponding CDX file will have to be created to make the DBF usable within the app. I need to write a routine which will create an index for the backup file. I thought there was a CREATE INDEX command but I am unable to find any info on it.
Hope this explains the problem a little better.

Keith
 
Keith,

You can only create indexes by opening the dbf and issuing a command like INDEX ON.....

If you only backup the dbf files, then I would think you also need to record somewhere the TAG names & expressions for each cdx.

You can get these using the ATAGINFO function.
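A hedged sketch of that idea, assuming a table called daysales (the name is only a placeholder): capture each tag's definition with ATAGINFO() before the backup, save it somewhere, then replay the saved names and key expressions after a restore.

Code:
* Hedged sketch: record tag definitions so the backup's indexes
* can be rebuilt later. "daysales" is a hypothetical table name.
USE daysales
LOCAL laTags[1], lnTags, lnI, lcTag, lcKey
lnTags = ATAGINFO(laTags)   && columns: name, type, key, filter, order, collate
* ...save laTags with the backup, then after a restore:
FOR lnI = 1 TO lnTags
    lcTag = laTags[lnI,1]
    lcKey = laTags[lnI,3]
    IF laTags[lnI,2] == "REGULAR"
        INDEX ON &lcKey TAG (lcTag)   && ordinary tags only
    ENDIF
ENDFOR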

There is a small page (Index Creation for Tables) in the help on creating indexes which might help.

Hope that helps,

Stewart
PS If you want to get the best response to a question, please check out FAQ184-2483 first.
 
Keith,

This sounds like a bad idea to me.

If the worst happens and you have to restore from a backup, you will not be able to open the DBF if you no longer have the CDX. Any discussion about how to re-index it is academic, because you will not get that far.

If I were you, I would focus on ways of increasing the space you have available for your backups.

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

My sites:
Visual FoxPro (www.ml-consult.demon.co.uk)
Crystal Reports (www.ml-crystal.com)
 
you will not be able to open the DBF if you no longer have the CDX
I suspect you mean that you can't open the dbf automatically.

This works on my PC from the Command Window:

Code:
DELETE FILE Test.Cdx
USE Test

Click on "Ignore" when Fox complains about the missing CDX.

Geoff Franklin
 
Hi Geoff,

nevertheless it's not really helpful. You can't write a script that opens all tables and creates all needed indexes without user interaction; you'd have to click "Ignore" for each table once...

If you want to save backup space, create a backup DBC and copy the data without indexes into that database with COPY TO ..., making sure NOT to use the WITH CDX | WITH PRODUCTION keywords.

Then you have all the data with no CDX overhead, and no indexes are defined, so the missing CDX file is not a problem. Additionally you'd need what ATAGINFO() gives you to re-index the tables after a restore. It's not entirely trivial, as there are several index types that each need special handling; e.g. INDEX ON is not for generating a primary key.
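A minimal sketch of that COPY TO approach, assuming a backup database at c:\backup\backup.dbc and a source table daysales (all paths and names are placeholders):

Code:
* Hedged sketch: copy a table's data into a backup DBC without
* its indexes. Paths and names are hypothetical.
OPEN DATABASE c:\backup\backup.dbc
USE c:\data\daysales IN 0 ALIAS daysales
SELECT daysales
COPY TO c:\backup\daysales DATABASE backup
* No WITH CDX / WITH PRODUCTION clause, so no index file is written.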

I tried to push it even further, as a CSV or SDF export is even smaller than the DBF file, but you get problems with memos and/or binary data stored within fields. A simple CHR(13) in a C(n) field destroys everything. All in all a decent hard drive with enough space and a 1:1 backup of all files is the easiest thing to do.

Bye, Olaf.
 

Another option is to use GenDBC on your database to create a complete script of the database structure, including the indexes, and just use the index portion of the script to recreate the indexes.
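A hedged sketch of running GenDBC, which ships in VFP's Tools folder (the exact path varies by version, and c:\data\mydata.dbc is a placeholder):

Code:
* Hedged sketch: script out the open database, indexes included.
OPEN DATABASE c:\data\mydata.dbc
DO (HOME() + "tools\gendbc\gendbc.prg") WITH "c:\backup\mydata_struct.prg"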

Mike Gagnon

If you want to get the best response to a question, please check out FAQ184-2483 first.
 
In the usual scruffy Griffin style...

This will remove the CDX flag from a .dbf

Then you can create any index you like.

Code:
FUNCTION NOCDX
	PARAMETERS M.FILENAME
	PRIVATE M.FILENAME,M.HANDLE,M.FLG,M.BYTE
	M.FLG = .F.
	IF FILE(M.FILENAME)
		* Open the DBF at low level with read/write access
		M.HANDLE = FOPEN(M.FILENAME,2)
		IF M.HANDLE > 0   && FOPEN() returns -1 on failure
			* Byte 28 of the DBF header is the table-flags byte;
			* bit 0 (0x01) marks the presence of a structural CDX.
			FSEEK(M.HANDLE,28,0)
			M.BYTE = FREAD(M.HANDLE,1)
			FSEEK(M.HANDLE,28,0)
			* Clear only the CDX bit, preserving the memo (0x02)
			* and database (0x04) flags
			FWRITE(M.HANDLE,CHR(BITAND(ASC(M.BYTE),254)))
			M.FLG = .T.
			FCLOSE(M.HANDLE)
		ELSE
			MESSAGEBOX("Unable to Open File",48,"Problem")
		ENDIF
	ELSE
		MESSAGEBOX("Unable to Find File",48,"Problem")
	ENDIF
RETURN(M.FLG)

Regards

Griff
Keep [Smile]ing
 
I think the obvious answer is missing. Before backing up the DBF, open the table and issue DELETE TAG ALL. That will delete the CDX and clear the flag in the DBF.

If you have existing tables, you can write an ON ERROR handler to trap the error and then allow you to open the table without displaying the dialog and clicking Ignore.
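A hedged sketch of both steps, with placeholder paths; error 1707 ("Structural .CDX file not found") is the one to trap, and the handler name is made up.

Code:
* Before the backup: drop the tags so the CDX flag is cleared.
USE c:\data\daysales EXCLUSIVE
DELETE TAG ALL      && erases the CDX and clears the header flag
USE

* For a backup whose flag is still set: trap error 1707 so the
* table opens without the "Ignore" dialog.
LOCAL lcOldError
lcOldError = ON("ERROR")
ON ERROR DO NoCdxError WITH ERROR()
USE c:\backup\daysales
IF EMPTY(lcOldError)
	ON ERROR
ELSE
	ON ERROR &lcOldError
ENDIF

PROCEDURE NoCdxError
	LPARAMETERS tnError
	IF tnError # 1707
		* Not the missing-CDX error: handle or log it here
	ENDIF
ENDPROC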

Craig Berntson
MCSD, Visual FoxPro MVP, Author, CrysDev: A Developer's Guide to Integrating Crystal Reports
 
All in all a decent hard drive with enough space and a 1:1 backup of all files is the easiest thing to do.
Good point. It's all very well doing the clever stuff to save space, but if the company's future (and my job) depended on the backup being 100% reliable, then I'd follow Olaf and just buy more disk space or a bigger tape drive.

Geoff Franklin
 
Hi Geoff,

as disk space is so plentiful and cheap these days, that's perhaps the best idea.

Deleting all tags, as Craig suggests, also has one disadvantage: you first need to copy the whole database, as you surely want to go on using the original database as is, with its CDX files and without having to rebuild all the indexes. That's why I didn't propose it, but rather the COPY TO solution, as that does the copy and the dropping of the indexes in one step.

Of course there is one advantage of not saving indexes: you don't back up index defects, and a restore will build fresh, fully functional indexes. In fact the Stonefield Database Toolkit (SDT) could be a very helpful tool for the kind of backup you had in mind: it stores all the table structure information of a DBC and enables you to rebuild all of the database's indexes from scratch. It also has some repair capabilities built in, to repair a corrupted DBF or DBC with the information it saved while the database was still working. And it may even be a good idea to run SDT's repair routines preventively every night or weekend.

You can get SDT e.g. from hallogram or foxtoolbox.

Bye, Olaf.

 
I seem to have opened a can of worms with this one but it's good to debate these things as other viewers read and learn more about VFP.
Back to the original problem of creating indexes.
I have back ups and numerous copies of the index structure.
A crash routine utilises a skeleton table which contains a few blank records and a full index. In emergencies we run the app with this table and append the data to the latest backup when I can get to the remote site, which is not always possible for several days. This is the reason for this thread: I am looking for an automatic method of resetting the table and its index when Mr. Rushmore sneezes and blows the candle out.
The backup routine zips the current sales table to a file named after the date. The file will be FTP'd to the FTP server once that side works (please see separate thread).
What I want to do is write a routine to index the latest good backup. Space is not the problem, but I am of the old school where you had limited resources and had to write efficient code, quite the opposite of today's bloated progs.
Could I just rename a good index file from a previous save and re-index that?
I'll try that and see what you guys have to say before I post again.

Keith
 
A REINDEX may not repair a corrupted index; it's better to (re)create the indexes from scratch, but maybe you've already thought of that.

Other possibilities to save space: you can erase some files that result from DBC/DBF alterations, namely BAK (DBF backup) and TBK (FPT backup) files. And you can also reduce the size of the DBF and FPT with PACK, or at least PACK MEMO.
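A small housekeeping sketch along those lines (paths are hypothetical, and PACK needs exclusive use):

Code:
* Hedged sketch: remove leftover backup files and shrink the
* table and memo files.
USE c:\data\daysales EXCLUSIVE
PACK MEMO                      && rebuild the FPT without dead space
USE
ERASE c:\data\daysales.bak
ERASE c:\data\daysales.tbk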

If you really want to go in the direction of only backing up the data without index information, seriously consider the COPY TO method. That way you don't need any (old) CDX files to be able to open the tables of the backup, and there's no table header hack like the one Griff suggested. But you first need an empty DBC to COPY all tables TO. And the stored procedures and references (foreign keys) etc. must also remain intact. References of course depend on (primary) indexes, so you can't back up the references and will have to rebuild them if you drop the indexes. Still a bit of work to do...

Another problem is that you can't use the index portion of a GENDBC script to recreate the indexes, as it uses the PRIMARY KEY option of CREATE TABLE to create the primary key indexes, and you can't do that when the DBF already exists from your backup. With INDEX ON you can (re)create regular, unique and candidate indexes; for primary keys you need ALTER TABLE ... ADD PRIMARY KEY eIndexExpression TAG ...
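A hedged sketch of rebuilding the indexes on a restored table; the table, field and tag names are all placeholders:

Code:
* Hedged sketch: recreate indexes on a restored DBF from scratch.
OPEN DATABASE c:\restore\backup.dbc
USE daysales EXCLUSIVE
INDEX ON sale_num TAG sale_num                  && regular tag
INDEX ON UPPER(cust_id) TAG cust_id CANDIDATE   && candidate tag
* A primary key needs ALTER TABLE on a table in a database:
ALTER TABLE daysales ADD PRIMARY KEY id TAG id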

And let me tell you about a thing I would never have thought of: I maintain quite a big application with a decent database, and one day I saw a table defined in the DBC that didn't exist on disk. It had not been accidentally erased; it had been added with a local path, although the database resided on a file server. One developer had added it intentionally to store some meta-information about the database. Now if I did a backup from some workstation other than his, with my COPY TO method and a fresh backup DBC, I wouldn't find his local table and would therefore drop it from the definition of the backup DBC. Of course I wouldn't back up that local table even if I backed up all the DBC files, but at least a restore would leave the information about that additional table intact.
You never know whether there are other files that don't really belong to the database itself but store some additional information, like the files SDT adds to the database directory, or classes that are used by third-party generated stored procedures, etc.

The easiest really is still a backup of the whole DBC directory.

Sorry if I'm getting on your nerves by explaining the possibilities and even advantages of a backup that is economical with disk space, versus the mostly worry-free simple file backup of the whole DBC directory.

As I said, I've thought about the possibilities for quite a while, and I would perhaps do such a backup for a DBC of which I'm the sole and exclusive maintainer, but not as a general solution. The thing that lets you sleep best is SDT plus a normal all-file backup... When it comes to repairing, SDT has never failed me, so the benefit of fresh indexes is not that important, and because drive space is cheap that's no issue either. Next week I'll buy an external 250 GB hard drive with USB 2.0 and FireWire interfaces for only 139 EUR (= 184 USD due to the highly rated EUR). Only two years ago the 256 MB USB stick I own cost almost that price, for a thousandth of the space.

Of course, when it comes to FTPing the data, it's also time-consuming and a matter of bandwidth, so you may still want to give it a try.

Bye, Olaf.
 
I have tried the code on the application machine and the Copy To method doesn't work in that environment.
I have been experimenting with the following code.

Code:
use z:\daycopy exclusive
delete all
pack
shop_ref="D"
append from z:\daysales
	Dest="c:\autoback\"+shop_ref+;
	alltrim(str(day(date())))+;
	alltrim(str(month(date())))+;
	alltrim(str(year(date())))+".zip"
[b]! pkzip &dest "z:\daycopy.dbf"[/b]

The code works as far as appending the data goes and speed isn't the issue.
Pkzip cannot open the file as it is still in use by the VFP prog. Daycopy.dbf is a free table outside the main app.
How can I close that particular table without affecting the other tables in the app?

Keith
 
Code:
m.oldarea = select()
select 0
use z:\daycopy exclusive
delete all
pack
shop_ref="D"
append from z:\daysales
    Dest="c:\autoback\"+shop_ref+;
    alltrim(str(day(date())))+;
    alltrim(str(month(date())))+;
    alltrim(str(year(date())))+".zip"
use
select(m.oldarea)
! pkzip &dest "z:\daycopy.dbf"


Regards

Griff
Keep [Smile]ing
 
One thing though: your use of the date in the file name could cause a problem. For example, 11/2/2005 and 1/12/2005 would each create 1122005.

Try this instead

Code:
m.oldarea = select()
select 0
use z:\daycopy exclusive
zap
shop_ref="D"
append from z:\daysales
    Dest="c:\autoback\"+shop_ref+dtos(date())+".zip"
use
select(m.oldarea)
! pkzip &dest "z:\daycopy.dbf"





Regards

Griff
Keep [Smile]ing
 
Thanks Griff - Hadn't considered that.
Been doing some experimenting and managed to get the 'Copy To' command working.
Code:
select daysales
set order to sale_num
copy to z:\copied.dbf
shop_ref="D"
Dest="c:\autoback\"+shop_ref+;
	alltrim(str(day(date())))+;
	alltrim(str(month(date())))+;
	alltrim(str(year(date())))+".zip"
! z:\pkzip &dest z:\copied.dbf
delete file z:\copied.dbf
The goal posts have now been moved, and after all the debate it has been necessary to copy the IDX as well.




Keith
 
Hi Keith,

well, you figured it out yourself:
COPY TO doesn't need an empty destination table like APPEND FROM does; it creates the table.

As you are working with free tables it also doesn't need the DATABASE ... clause, but it does need WITH CDX or WITH PRODUCTION if you want to copy the CDX. I haven't tested with IDX files though. Unless you're stuck with an old application, CDX indexes are recommended over standalone IDX files.
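A minimal sketch of that clause, assuming a free table at z:\daysales (the path is a placeholder):

Code:
* Hedged sketch: copy a free table together with its structural CDX.
USE z:\daysales
COPY TO z:\copied WITH CDX    && WITH PRODUCTION does the same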

Bye, Olaf.
 