Just notice again: this only appends records with a new JOB_NO; it will not update TENG_SHTD with edited data from TENG_XREF, i.e. data for a JOB_NO that already exists.
What you need in each and every table is a primary key; that is the only field you have to check for new values. If you make it an integer AutoInc, it also becomes much easier to determine new records, as those are all the records with an ID larger than MAX(ID) within TENG_SHTD.
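As a minimal sketch of that idea, assuming an integer AutoInc primary key named "id" in both tables (the field name is just a placeholder):

[code]
* Anything in the checked-out table with an id above the central table's
* current maximum is a new record.
SELECT MAX(id) FROM TENG_SHTD INTO ARRAY laMaxId
lnMaxId = IIF(_TALLY = 0, 0, NVL(laMaxId[1], 0))   && guard against an empty table

SELECT * FROM TENG_XREF ;
   WHERE id > lnMaxId ;
   INTO CURSOR crsNewRecords NOFILTER
[/code]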
With that you have solved part of the problem of merging back checked-out data, point 2 in my list above.
When you also want to detect changes to existing records, you have to determine which records exist in both TENG_SHTD and TENG_XREF and then take them from TENG_XREF. One easy solution is a DELETE+INSERT strategy instead of an UPDATE: before the step that appends new records, you run [tt]DELETE FROM TENG_SHTD WHERE cust_name + JOB_NO IN (SELECT cust_name + JOB_NO FROM TENG_XREF)[/tt]. Because you delete from TENG_SHTD all the records found in both TENG_SHTD and TENG_XREF, they fall into the category of "new" data (at least when you work with SET DELETED ON). One downside of that approach is that you end up with lots of records marked for deletion and added back, often unchanged. It also causes a problem with unique indexes, as deleting a row unfortunately does not remove it from an index. The best way around that is to PACK TENG_SHTD before the INSERT/APPEND step, but that becomes less attractive the more data you have.
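Put together, the whole sequence could look roughly like this - a hedged sketch assuming VFP 9 (for the subquery in the DELETE's WHERE clause) and exclusive access to TENG_SHTD, since PACK requires it:

[code]
SET DELETED ON
USE TENG_SHTD IN 0 EXCLUSIVE

* 1. Remove all records from TENG_SHTD that also exist in TENG_XREF
DELETE FROM TENG_SHTD ;
   WHERE cust_name + JOB_NO IN (SELECT cust_name + JOB_NO FROM TENG_XREF)

* 2. Physically remove the deleted rows, so unique indexes don't get in the way
SELECT TENG_SHTD
PACK

* 3. The usual "append new records" step now also brings back the edited ones
SELECT * FROM TENG_XREF ;
   WHERE cust_name + JOB_NO NOT IN (SELECT cust_name + JOB_NO FROM TENG_SHTD) ;
   INTO CURSOR crsToAppend NOFILTER
SELECT TENG_SHTD
APPEND FROM DBF("crsToAppend")
[/code]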
And another big disadvantage of the DELETE+INSERT approach is that records get rearranged. In theory this shouldn't matter, as a table is just a set of data, and in (mathematical) set theory the order of the items (tuples) of a set doesn't and shouldn't matter. You do get in trouble with AutoInc and insert/append, but that problem exists anyway, even if you only append new data, so I'll leave it out of the discussion. Primary keys could, and should, be generated differently: if two users check out the same data, automatically incrementing numbers starting from the same initial value will cause duplicate keys when the two users put their data back.
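One possible way to generate such keys, as a hedged sketch: build them from SYS(2015) (a unique name based on date and time) plus a per-user prefix, so two machines can't collide; lcUserCode is a placeholder for however you identify your users.

[code]
FUNCTION NewKey(lcUserCode)
   * character key, unique per user and point in time
   RETURN lcUserCode + SYS(2015)
ENDFUNC

* usage, assuming a character primary key column "id" exists:
* INSERT INTO TENG_XREF (id, cust_name, JOB_NO) VALUES (NewKey("U01"), "ACME", "1234")
[/code]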
If you want existing records to be kept where they are and updated in place, the code has to become much more individual per table, as it involves plenty of [tt]SET InternalTable.field = ExternalTable.field[/tt] clauses within UPDATE-SQL or [tt]REPLACE InternalTable.field WITH ExternalTable.field[/tt] commands; there is no simpler syntax in the xBase REPLACE command either.
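As a hedged sketch of that per-table code, assuming VFP 9 (which allows a FROM clause in UPDATE-SQL); field1 and field2 are placeholders for the table's actual columns:

[code]
UPDATE TENG_SHTD ;
   SET TENG_SHTD.field1 = TENG_XREF.field1, ;
       TENG_SHTD.field2 = TENG_XREF.field2 ;
   FROM TENG_XREF ;
   WHERE TENG_SHTD.cust_name + TENG_SHTD.JOB_NO = ;
         TENG_XREF.cust_name  + TENG_XREF.JOB_NO
[/code]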
Overall you are implementing what TABLEUPDATE() does for you when you work with a single table and buffering: it merges any change of new, edited and deleted buffered data into a central DBF, which is kept unchanged until the TABLEUPDATE() is done. I always warn people about reimplementing this in cases where they could simply learn the few settings needed for working with buffered data and the TABLEUPDATE() solution, maybe even combined with transactions.

What you absolutely and unavoidably miss is the opportunity to detect conflicts, which TABLEUPDATE() can know about from the buffers. Besides buffering new data, the buffer also keeps the old data as it was when it was fetched, and can compare that to the current DBF data and the buffered data to see whether these three states of a field differ and mean a conflict, or whether only the buffered data changed from the fetched and still stored value, so that your commit is the only commit of new data and there is no conflict. The three states are [tt]WorkareaAlias.field[/tt], [tt]OLDVAL(WorkareaAlias.field)[/tt] and [tt]CURVAL(WorkareaAlias.field)[/tt], and they can all be different values. An update is necessary if [tt]WorkareaAlias.field[/tt] differs from [tt]CURVAL(WorkareaAlias.field)[/tt], and a conflict arises if at the same time [tt]OLDVAL(WorkareaAlias.field)[/tt] differs from [tt]CURVAL(WorkareaAlias.field)[/tt], because then another user has already changed that data and you have to ask whether your value or the other user's value is correct, or whether you even need a third or fourth value. E.g. if the field holds the stock number of an item and one user decreased it from 10 to 9 while another user decreased it from the same initial 10 to 8, then actually neither 9 nor 8 is right; overall the stock should decrease by 3 and become 7.

Such data change conflicts are hard to detect when working with offline data. They are most probably easy to handle in a central database when the changes happen at separate times of the day, but they are a hard problem if data is offline all day.
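As a minimal sketch of that per-field conflict test, assuming optimistic table buffering on a work area "orders" with a field "stock" (both names are placeholders):

[code]
IF orders.stock <> CURVAL("stock", "orders")
   * our buffered value differs from the value currently stored in the DBF
   IF OLDVAL("stock", "orders") <> CURVAL("stock", "orders")
      * another user changed the field since we fetched it: a conflict
      ? "Conflict - fetched:", OLDVAL("stock", "orders"), ;
        "current:", CURVAL("stock", "orders"), "ours:", orders.stock
   ELSE
      * only we changed it, so committing is safe
      = TABLEUPDATE(.T., .F., "orders")
   ENDIF
ENDIF
[/code]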
I doubt you will stay happy with your current approach of handling data this way. If you need to support users working in offline environments, I agree a local database with data you check out from the central data is the only really viable solution. VFP does offer offline views, a well-kept secret that is rarely talked about, but that's also because they don't really let you work on offline data with the same code already written for online data: in offline mode you query a view instead of the DBF, so that would cause a major rewrite of the data access code.
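For the sake of completeness, a minimal sketch of offline views; "tengdata" and "v_teng" are placeholders for your database container and a view already defined in it:

[code]
OPEN DATABASE tengdata
IF CREATEOFFLINE("v_teng")       && copies the view's data into a local DBF
   USE v_teng IN 0 ADMIN         && work on the offline copy, detached from the source
ENDIF
* Later, back at the central data, you'd reconnect with USE v_teng ONLINE,
* commit with TABLEUPDATE() and finally detach via DROPOFFLINE("v_teng").
[/code]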
The simplest solution is having no need to go offline with data. In today's everywhere internet connectivity (depends on the country, too) you could think about putting data into the cloud, which doesn't necessarily mean Amazon AWS or Microsoft Azure or Google GCP, this can also be a MySQL database. Though don't get me wrong, online/offline here only has a weak entanglement with internet connectivity, these terms here mean the difference between working on the actual central database (online) and extracts of data (offline). An always online internet connection and a central database accessible via internet just mean all clients don't need offline data extracts, they can work on the online - and this time it means both central and online on the internet - data. That approach also means a major rewrite of code, but adds opportunities to work with the online data in a website.
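Just to give an impression of what such access looks like from VFP, a hedged sketch of talking to a central MySQL database via ODBC; driver name, server, database and credentials are all placeholders:

[code]
lcJobNo = "1234"
lnHandle = SQLSTRINGCONNECT("Driver={MySQL ODBC 8.0 Unicode Driver};" + ;
   "Server=db.example.com;Database=teng;Uid=appuser;Pwd=secret;")
IF lnHandle > 0
   SQLEXEC(lnHandle, "SELECT * FROM teng_shtd WHERE job_no = ?lcJobNo", "crsShtd")
   SQLDISCONNECT(lnHandle)
ENDIF
[/code]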
Bye, Olaf.