Ah, there: **Buffering = 3**.
Well, still not everything is clear. What is clear: in the save branch it's not the TABLEUPDATE() that saves, it's the GO BOTTOM. With row buffering (Buffering = 3), moving the record pointer off a changed record implicitly commits it. That doesn't hurt the program logic, it just makes the TABLEUPDATE() unnecessary.
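For illustration, a minimal sketch of that implicit save (the alias "mytable" and field cName are just placeholders):

```
* Row buffering: leaving a changed record commits it automatically.
CURSORSETPROP("Buffering", 3, "mytable")
REPLACE cName WITH "changed" IN mytable
GO BOTTOM IN mytable   && moving off the row already saves it here
* a TABLEUPDATE() after this has nothing left to commit
```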
But Dexter says he does the TABLEREVERT() in his self-made MsgBx, when the user clicks cmdCancel of that MessageBox form.
Dexter, if you do GO BOTTOM before the TABLEREVERT() in that button code, then leaving the buffered record saves it in just the same way, before you ever get to revert it. For a TABLEUPDATE() or TABLEREVERT() you must stay on the record you want to affect. Also, my question still applies: is your form working in the same data session at all? If not, anything you do in there is futile.
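So the cancel code shrinks to something like this sketch ("mytable" again a placeholder alias), and it only has an effect if the MsgBx form shares the caller's data session:

```
* cmdCancel.Click: no GO BOTTOM, stay on the record and revert only it.
TABLEREVERT(.F., "mytable")   && .F. = revert just the current row
```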
There's a good reason why buffer mode 5 is best: 1. You can still save each record by moving onto it and doing TABLEUPDATE(.F.), the .F. here meaning "not all rows", i.e. just this one row (there are other parameterizations too, see the help). 2. You can move around in the table to check for duplicates without that movement saving the current record. You already know that tricky part about row buffering; that's why you query all data into the cursor "duplic", so you can LOCATE for a duplicate value there.
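As a sketch of that mode (placeholder names again; table buffering needs SET MULTILOCKS ON):

```
SET MULTILOCKS ON
CURSORSETPROP("Buffering", 5, "mytable")
* ... the user edits the current row ...
LOCAL lnEditedRec
lnEditedRec = RECNO("mytable")
* with table buffering you can now move around freely, e.g. to look
* for duplicates, without committing anything as a side effect
GO lnEditedRec IN mytable         && return to the row you want to save
TABLEUPDATE(.F., .F., "mytable")  && .F. = save just this row
```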
In regard to preventing double entries, the better approach is to make use of INDEXSEEK(), especially with the parameter lMovePointer = .F., so you seek in the index without repositioning; the function just returns .T. for a found record. And since your edited record is only buffered, it isn't itself found by INDEXSEEK(), only a duplicate in another, already stored record is.
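A minimal sketch of that check, assuming a hypothetical table "customers" with an index tag "cname" on the cName column and the new value in tcNewName:

```
* Look up the new value in the index without moving the record pointer.
IF INDEXSEEK(m.tcNewName, .F., "customers", "cname")
   MESSAGEBOX("This name already exists.", 48, "Duplicate")
ELSE
   TABLEUPDATE(.F., .F., "customers")   && save just the current row
ENDIF
```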
The third way is to define an index and let it enforce the restriction by being a candidate key. Then you get an error when saving a duplicate: it's rejected. You can catch that error and then revert, or report the double entry back to the user. That approach sounds the most complicated and indeed needs the most code, but it's really the safest: even if you do a duplicate check, no matter how fast you check and how fast you save after finding no duplicate, there is a time slot in which another user saves that same value, and you still get a duplicate. That's quite improbable in systems with a few users, but to prevent it you let the data itself decide about duplicates via a candidate index. That's foolproof.
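With buffering, a sketch of handling the rejection could look like this (assuming a candidate tag already exists; as far as I know TABLEUPDATE() then returns .F. and AERROR() reports error 1884, "Uniqueness of index violated"):

```
LOCAL laError[1]
IF !TABLEUPDATE(.F., .F., "mytable")
   AERROR(laError)
   IF laError[1] = 1884              && candidate index rejected a duplicate
      TABLEREVERT(.F., "mytable")    && throw the duplicate row away
      MESSAGEBOX("This entry already exists.", 48, "Duplicate")
   ENDIF
ENDIF
```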
I always see developers write code to prevent errors. Error handling, especially exception handling in the form of TRY CATCH, really is a valid way to let errors happen in a controlled way, to prevent error prompts to the end user, and to handle expected errors, e.g. such duplicates, internally. Nothing breaks if you let errors happen and react to them, instead of doing more complicated things to prevent an eventual error. Think of network bandwidth alone: for every save you do a lookup in the index, and in your case you even load the whole data, just to check for an eventual duplicate. That is really counterproductive.
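As a sketch of that TRY CATCH style with an unbuffered write (names are placeholders again): the candidate index rejects the duplicate at write time, and the CATCH handles it without any error prompt reaching the user:

```
TRY
   INSERT INTO mytable (cName) VALUES (m.tcNewName)
CATCH TO loErr WHEN loErr.ErrorNo = 1884   && uniqueness of index violated
   MESSAGEBOX("This entry already exists.", 48, "Duplicate")
ENDTRY
```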
Bye, Olaf.