Just to answer your questions more directly:
1. Is try catch ok to use?
No. A flaky network connection - unlike one that's clearly gone because of an unpowered router - won't necessarily trigger an error right away; it can cause a file defect that you only detect the next time, not this time. And by putting every UPDATE, REPLACE, INSERT, APPEND, etc. into TRY...CATCH you don't gain enough control to handle all these errors either: some of them may be due to LAN failures, but not all of them. You only overcomplicate everything you do, and it doesn't generally solve the problem.
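For illustration, here's roughly what such wrapping looks like in VFP (a sketch only; the "orders" alias, field names, and variable are made up). It shows the limitation: the CATCH block can't reliably tell a transient LAN glitch from any other error, so all it can really do is revert and report:

```
* Sketch - assumes a buffered cursor "orders" and made-up field names.
TRY
   UPDATE orders SET status = "shipped" WHERE id = m.lnOrderId
   IF NOT TABLEUPDATE(2, .T., "orders")
      * Committing the buffered changes failed.
      TABLEREVERT(.T., "orders")
   ENDIF
CATCH TO loException
   * Read/write errors *may* indicate a LAN failure, but many unrelated
   * errors land here too - you can't cleanly separate them.
   TABLEREVERT(.T., "orders")
   MESSAGEBOX("Save failed: " + loException.Message)
ENDTRY
```

And you'd have to repeat this pattern around every single data change, which is exactly the overcomplication meant above.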
2. Is it even worth testing the LAN connection every time users access tables?
My answer above, in short, is no. I was concentrating on the aspect of whether a router without power causes a file defect, though, not on error prevention in general. Let's look at this situation in more detail: just a router is without power and thus your LAN connection is gone. Both server and client still run, and the client has buffered changed data - otherwise the discussion can end here. If you don't use buffering, we're talking about protecting a single field change from getting lost because an error crashes the application, and it's really not worth worrying about that. So assume there are some changes lingering: even if you knew the LAN connection was lost at that moment, you could prevent an error, but would still only be waiting for the LAN to come back. You would need some local storage of changes, or would need to keep the form up, open, and in its state so as not to lose them. It's not practical.
It really boils down to solving this problem with a hardware solution rather than trying to work around flaky hardware situations (taking power outages into that realm, no matter how good the hardware is in itself) with fault-tolerant software. Let's put it this way: if you asked me to write software that can handle such situations, I'd raise my price to perhaps 10 times that of "just" decent software that relies on the LAN working. You save more by investing in a better hardware structure than by letting me write unfailable code. And "unfailable" could only ever go as far as ensuring that even after a software crash the last state is recoverable. Even if you manage that, it's not where you would usually pick up. As the duration of power outages is unknown, the employees who last worked may not all be available. It's likely that when a user comes back the next day, his recovered local client state is outdated, as others have already taken care of order fulfillment or such things.
Well, and then finally, as I already mentioned at the end of my previous post, a failsafe (in the sense of data health) software system already exists in the form of database servers. There are many to pick from, and all of them, based on transaction logging, can recover to a healthy state - and also prove their healthy state at startup by checking the actual data against what the transaction log says it should be.
It still won't cover the clients and all locally not-yet-stored changes. But as already said, that's likely not even desirable. You already upgrade the quality of your situation significantly by using a SQL database server. It makes health issues in the data backend a very rare occurrence. Notice that not all problems are rooted in power outages: defects of DBFs, and also of server data files, can occur for other reasons, too. But they become really rare.
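As a sketch of what that switch looks like from the VFP side (the connection string, table, and variable names here are assumptions, not your actual setup), SQL pass-through puts every data change under the server's transaction log, so even if the LAN drops mid-statement, the database itself recovers to a consistent state; only the unsent client-side change is lost:

```
* Sketch - connection string and table name are made up.
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;" + ;
   "Database=mydb;Trusted_Connection=Yes")
IF lnHandle > 0
   SQLSETPROP(lnHandle, "Transactions", 2)  && manual transaction mode
   IF SQLEXEC(lnHandle, "UPDATE orders SET status = 'shipped' " + ;
         "WHERE id = ?m.lnOrderId") > 0
      SQLCOMMIT(lnHandle)
   ELSE
      SQLROLLBACK(lnHandle)
   ENDIF
   SQLDISCONNECT(lnHandle)
ENDIF
```

The point is not this particular snippet but that the commit/rollback decision, and the durability of anything committed, is handled by the server's transaction log rather than by file writes to shared DBFs.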
Let me put this in the perspective of your frequency of power outages: the hardware and software used in the moon landing missions was fault tolerant in several ways, but if faults had been expected every few days, those tolerances wouldn't have been good enough to start the mission.
And so power-protecting your server-side hardware, including the server-side hub, with a UPS is a must-have; it's not a discussion point.
Chriss