Due to the size of the pointer field into .dbt files, the max size of a dBase III type .dbt file is 16 MB. When you try to add new text or modify existing text beyond that, it just disappears into never-never land.
The big problem with the .DBT file is that it never gets compacted. If you change a memo field and the change pushes it over the next 128-byte record boundary, the whole memo is appended to the end of the .dbt and the previous text is just left there. Doing a PACK of the file does not shrink the .dbt. To do that you have to copy all the records into a new dbf/dbt under another name, then rename them back and reindex. That will likely shrink your .DBT considerably; if not, you have to think about splitting the file up or a different file design.
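Roughly like this, assuming a table called Customer that you can open exclusively (just a sketch from memory; swap in your own file names and rebuild whatever indexes you actually have):

USE Customer EXCLUSIVE NEW
COPY TO CustTemp                      // writes a fresh .dbf/.dbt pair, dropping the orphaned memo blocks
CLOSE DATABASES
FERASE( "Customer.dbf" )
FERASE( "Customer.dbt" )
FRENAME( "CustTemp.dbf", "Customer.dbf" )
FRENAME( "CustTemp.dbt", "Customer.dbt" )
USE Customer EXCLUSIVE NEW
INDEX ON CustNo TO CustNo             // repeat for each index; CustNo is just a placeholder key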
Switching over to the DBFCDX RDD should help on both problems, as the FPT memo files are both compressed, and automagically kept at the smallest possible size.
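Something like this is enough to try it, assuming you have a DBFCDX driver linked in (5.3, SIx or Comix); the table name is just a placeholder, and an existing .dbt-based table has to be copied out once so it ends up with an .fpt memo file. I believe COPY TO creates the target with whatever RDD is the default, but check that the new files really come out as .dbf/.fpt:

REQUEST DBFCDX
RDDSETDEFAULT( "DBFCDX" )

// one-time conversion: open the old table with its old driver and copy it out
// while DBFCDX is the default, so the new copy is written with an .fpt memo
USE Customer VIA "DBFNTX" EXCLUSIVE NEW
COPY TO CustCdx
CLOSE DATABASES
USE CustCdx VIA "DBFCDX" EXCLUSIVE NEW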
BTW, the blocksize of .DBT files is 512 bytes, but the issue is pointed out correctly, of course ;-)
The default blocksize of the FPT memos is 64 bytes, and they use optimal fill.
I have used FPT files of over 50 MB before ('97) and never experienced a crash with them. I know those files later grew to over 100 MB, still without crashing.
This makes me confident enough to trust my textual data to FPT files instead of DBT files.
Another alternative for your situation is to consider FlexFile. It is also an optimizing routine that reuses unused space. I use FlexFile II with Clipper 5.2e, and I have clients with over 1 GB in *.DBV files storing everything from images to text. There is no limit to the storage size; there are image files over 1 MB stored there, and text files are not limited either, but MEMOEDIT() will restrict you to 64K. You could use a TBrowse to view larger text, though.
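For the TBrowse part, the rough idea looks like this (only a sketch: ViewLongText and the 78-column wrap are made up for the example, and it assumes the text is already in a string variable; for really big items you would pull it in page by page through FlexFile's own read functions instead of loading it all at once):

#include "inkey.ch"

// Read-only viewer: a TBrowse over MLCount()/MemoLine() instead of MEMOEDIT()
PROCEDURE ViewLongText( cText )
   LOCAL nWidth := 78
   LOCAL nLines := MLCount( cText, nWidth )
   LOCAL nLine  := 1
   LOCAL nMoved, nKey
   LOCAL oBrw   := TBrowseNew( 1, 0, MaxRow(), MaxCol() )

   oBrw:goTopBlock    := { || nLine := 1 }
   oBrw:goBottomBlock := { || nLine := nLines }
   oBrw:skipBlock     := { | nSkip | nMoved := Max( 1 - nLine, Min( nLines - nLine, nSkip ) ), ;
                           nLine += nMoved, nMoved }
   oBrw:addColumn( TBColumnNew( "", { || MemoLine( cText, nWidth, nLine ) } ) )

   DO WHILE .T.
      oBrw:forceStable()
      nKey := Inkey( 0 )
      DO CASE
      CASE nKey == K_ESC
         EXIT
      CASE nKey == K_DOWN
         oBrw:down()
      CASE nKey == K_UP
         oBrw:up()
      CASE nKey == K_PGDN
         oBrw:pageDown()
      CASE nKey == K_PGUP
         oBrw:pageUp()
      ENDCASE
   ENDDO

   RETURN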