How to log cyclic with a predetermined fixed log size, maintenance free.


Here's a short demo of fixed-size logging with a DBF log file.

The main idea is to pre-populate a fixed number of log records, which means the log table already starts at the size needed to hold as many messages as you want to be able to look back on. That size is then kept constant. At least my check shows a size change of 0 bytes after logging thousands of messages, of which only the last 10 remain in the log table. The demo log size is only 10, but that's a constant you can and obviously will adapt to your needs, depending on the volume of log messages you get per second, hour, day, or year.

To run this, adjust the class property LogTable so that the DBF file can be created, i.e. make sure the directory for the log table already exists.

It will show a browse of the log after it's filled with 10 messages. Then 2 further messages are logged, each followed by another browse. You'll see that the browse record count stays constant: message 11 replaces message 1 and message 12 replaces message 2. To progress through the demonstration, just close the browse window each time it pops up. After the third browse is closed, a test starts that logs thousands of messages (without showing them, of course) and monitors the total size of the log DBF and CDX files to check for any size changes. I didn't detect any bloat, so the log file really keeps a fixed size.

This has just the slight disadvantage of starting with the maximum log file size. On the other hand, the logging process is as simple as it can be: just a LOCATE followed by a REPLACE. Use the log DBF with the chron index active and you have the records in chronological order. Without the index, the oldest record can be anywhere, so always view the log table in index order - or in descending index order to see the latest messages first.
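For example, to see the latest messages first, a minimal sketch (assuming the LogAlias alias and chron tag used in the demo code are in use):

```foxpro
* Browse the log newest-first by reversing the index order
Select LogAlias
Set Order To Tag chron Descending
Go Top
Browse
Set Order To Tag chron Ascending && restore normal chronological order
```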

The logging is also done with the log DBF opened exclusively to avoid any interference; if the logger doesn't get exclusive access, it throws an error and refuses to work. But you may change this to shared usage and, of course, adapt it in any other way for your needs.


* Logger usage
* loLogger = CreateObject("Logger") && once
* loLogger.LogMessage(cMessage) && for each log record

Local loLogger
loLogger = CreateObject("Logger")
Local lnI
* creating 12 messages in a log fixed to 10 records (LOGRECCOUNT defined as 10 for this purpose)
For lnI = 1 To 12
   loLogger.LogMessage(Textmerge('message <<lnI>>'))
   If lnI > 9 && browse to see progress after logging messages 10, 11, and 12
      Browse && first browse displays 1-10, second 2-11, third 3-12, so records 1 and 2 are recycled
   Endif
Endfor lnI

Local lnJ, lnSize, lnOldSize
lnSize = 0

* Check, if the dbf and cdx grow over time
Set Notify Cursor Off
For lnI = 1 To 11
   Release laLogtablesize
   =Adir(laLogtablesize, ForceExt(loLogger.LogTable, '*')) && file info of the log dbf and cdx
   lnOldSize = lnSize
   lnSize = laLogtablesize[1,2] + laLogtablesize[2,2] && sum dbf and cdx file sizes
   If lnOldSize > 0
      ? 'Log size change after 1000 logs:', lnSize - lnOldSize
   Endif
   For lnJ = 1 To 1000
      loLogger.LogMessage(Textmerge('bulk message <<lnJ>>'))
   Endfor lnJ
Endfor lnI
* Result (for me): No size changes. That was the goal.
* Result (for me): No size changes. That was the goal.

* Logger class definition
#Define LOGRECCOUNT 10
Define Class Logger As Custom
   LogTable = 'c:\programming\tests\log.dbf'

   Procedure Init()
      Return This.OpenLogtable()
   Endproc

   Procedure CreateLogtable()
      Use In Select('LogAlias')
      Create Table (This.LogTable) (LogTime Datetime, sortstring Char(10), LogMessage Char(254))
      Local lnI
      For lnI = 1 To LOGRECCOUNT
         Insert Into (Alias()) Values (Datetime(), Sys(2015), '')
      Endfor lnI
      Index On sortstring Tag chron Ascending
      Use Dbf() Alias LogAlias Order Tag chron Again Exclusive
   Endproc

   Procedure OpenLogtable()
      If !Used('LogAlias') Or Not Upper(Dbf('LogAlias')) == Upper(This.LogTable)
         If Adir(aDummy, This.LogTable) = 0
            This.CreateLogtable() && log table doesn't exist yet, so create it
         Else
            Try
               Use (This.LogTable) Alias LogAlias Order Tag chron In 0 Exclusive
            Catch To loException
               loException.UserValue = "Could not get exclusive access to log table."
               Throw
            Endtry
         Endif
      Endif
      Return (Reccount('LogAlias') = LOGRECCOUNT)
   Endproc

   Procedure LogMessage(tcMessage As String)
      If This.OpenLogtable()
         Select LogAlias
         * locate the oldest log record (top record in chron index order)
         Locate
         * replace it with the new log entry
         Replace LogTime With Datetime(), sortstring With Sys(2015), LogMessage With tcMessage In LogAlias
      Else
         Error 'log table reccount not as expected'
      Endif
   Endproc

   Procedure Error()
      Lparameters nError, cMethod, nLine
      If nError = 2059
         ? Message()
      Else
         ? nError, cMethod, nLine
      Endif
   Endproc
Enddefine

Notice how sorting with Sys(2015) works because it returns a string that sorts in chronological order. Sorting by the datetime field is not good enough, as it is only precise to the second and you could have very many records within the same second. SYS(2015) values always ascend alphabetically. In shared usage, records from different client computers may interleave if the computer clocks are not synchronized, but that doesn't stop the mechanism from working: records from the same second on different clients just may not be recycled in exact order. That will never matter, though, as they exceed their "best before" date in the same second anyway.
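A quick way to convince yourself of that ascending behavior (a sketch, not part of the demo):

```foxpro
* Consecutive SYS(2015) values compare ascending as strings
Local lcEarlier, lcLater
lcEarlier = Sys(2015)
lcLater = Sys(2015)
? lcEarlier < lcLater && .T., per the ascending property described above
```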

You can extend this, for example, by archiving log entries that are interesting, adding further fields to the log, and so on. Of course, the log could also start empty, doing an APPEND BLANK before the REPLACE until LOGRECCOUNT is reached, but when the full log size is allocated at first init, you can be sure logging will never fail because of insufficient disk space.
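The start-empty variant could look like this inside LogMessage (a sketch; the Reccount() check replaces the fixed pre-population of the table):

```foxpro
* Grow the log until LOGRECCOUNT is reached, then recycle the oldest record
Select LogAlias
If Reccount() < LOGRECCOUNT
   Append Blank && grow phase: add a fresh record and stay on it
Else
   Locate && log is full: top record in chron order is the oldest
Endif
Replace LogTime With Datetime(), sortstring With Sys(2015), LogMessage With tcMessage
```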


RE: How to log cyclic with a predetermined fixed log size, maintenance free.

This is very interesting, Chris. I assume it arose out of the discussion in thread184-1820328: find non vfp file open in a non vfp application. I wish I had thought of the idea twenty years ago.

I was working on an application with a large number of users, where the management wanted every update (insert, update, delete) to be logged. I did it by inserting a new record into a DBF for each of those events. The DBF very quickly approached 2 GB, so I added some code to periodically check the size and, if necessary, delete the oldest records and then pack the table. Obviously that could only be done when exclusive use was available, which was during the monthly maintenance routine.

It all worked well enough - and has done for the last twenty years. But there is always the fear that the table might suddenly start growing much faster and therefore reach 2 GB before the users could do anything about it.

Having a fixed-length DBF would have been a good solution. It would contain exactly the number of records that would make it a bit under 2 GB, indexed on the datetime field. Every time an action needed to be logged, it would first search for the first record with a blank datetime, and if not found it would then search for the one with the earliest datetime. Once a record was found, it would overwrite it with new data.
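That search could be sketched like this (illustrative only; the LogTable alias, logtime tag, LogData field, and tcNewData parameter are hypothetical names, assuming a table indexed on the datetime field as described):

```foxpro
* Find a slot: first a never-used record, otherwise the earliest one
Select LogTable
Set Order To Tag logtime && index on the datetime field
Locate For Empty(LogTime) && a blank datetime marks an unused slot
If !Found()
   Locate && top record in index order = earliest datetime
Endif
Replace LogTime With Datetime(), LogData With tcNewData
```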

I will keep this in mind if the need ever arises again.


Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: How to log cyclic with a predetermined fixed log size, maintenance free.


Quote (Mike Lewis)

it would first search for the first record with a blank datetime, and if not found it would then search for the one with the earliest datetime
Which is yet another idea to find the relevant record to update, yes.

I avoid needing to look for blank datetimes by filling Datetime() into the initial records, too. By the way, all that would work even more simply with default values, which you can define for DBFs that are part of a DBC.

The LogMessage concept was the starting point of this. If you sort the data chronologically by the index, the top record in sort order is the oldest and is found simply by LOCATE; replacing its datetime with the current datetime then moves it to the bottom of the sort order while it keeps the same recno. That gives you the cyclic nature of the data. Such sorting is rarely done by an index, as you usually get chronological order just from the physical order. The overall effect is that you write into records 1 to LOGRECCOUNT and then start over at record 1 again - another way to see the cycling effect.

