This is a very normal situation...
I have several large databases that I need to update, archive from, and purge old records from...
What I have done is break the updates down into subsets (groups) of records. Every time the job finishes a group, it commits and moves on to the next. This keeps locks on any one record for a very short time and gives the transaction log a chance to checkpoint...
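A rough sketch of that loop, in Python with the stdlib sqlite3 module and made-up table/column names (`orders`, `orders_archive`, `order_date`, and a cutoff date picked purely for illustration):

```python
import sqlite3

BATCH_SIZE = 1000  # rows per unit of work; tune for your lock/log tolerance

conn = sqlite3.connect("mydb.sqlite")  # hypothetical database file

while True:
    cur = conn.cursor()
    # Grab one batch of old rows.
    cur.execute(
        "SELECT id FROM orders WHERE order_date < '2010-01-01' "
        "ORDER BY id LIMIT ?",
        (BATCH_SIZE,),
    )
    ids = [row[0] for row in cur.fetchall()]
    if not ids:
        break  # nothing left to do

    placeholders = ",".join("?" * len(ids))
    # Archive the batch, then remove it from the live table.
    cur.execute(
        f"INSERT INTO orders_archive SELECT * FROM orders WHERE id IN ({placeholders})",
        ids,
    )
    cur.execute(f"DELETE FROM orders WHERE id IN ({placeholders})", ids)
    conn.commit()  # locks are released and the log can checkpoint here
```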
Any error in a single set of updates will cause the job to fail with a message stating how far the updates got, but you could just as easily log the error and continue... or retry the batch...
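A sketch of the "retry, then log and keep going" variant, reusing the same hypothetical tables as above:

```python
import logging

MAX_RETRIES = 3

def process_batch(conn, ids):
    """Archive and delete one batch; raises on any SQL error."""
    placeholders = ",".join("?" * len(ids))
    cur = conn.cursor()
    cur.execute(
        f"INSERT INTO orders_archive SELECT * FROM orders WHERE id IN ({placeholders})",
        ids,
    )
    cur.execute(f"DELETE FROM orders WHERE id IN ({placeholders})", ids)
    conn.commit()

def process_with_retry(conn, ids):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            process_batch(conn, ids)
            return True
        except Exception:
            conn.rollback()  # throw away the half-done batch
            logging.exception("batch %s..%s failed (attempt %d)", ids[0], ids[-1], attempt)
    # Give up on this batch but let the job carry on with the next one.
    return False
```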
But IMHO you should always break big updates down into much smaller units of work. The key is being able to restart where the job failed... so any error should report enough status to let you restart at that point...
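One way to make that restart point explicit is a tiny progress table. This sketch assumes a `batch_progress` table you create yourself, with `job` as its primary key:

```python
def save_progress(conn, last_id):
    # Remember the last id we finished so a rerun can pick up after it.
    conn.execute(
        "INSERT INTO batch_progress (job, last_id) VALUES ('archive_orders', ?) "
        "ON CONFLICT(job) DO UPDATE SET last_id = excluded.last_id",
        (last_id,),
    )
    conn.commit()

def load_progress(conn):
    row = conn.execute(
        "SELECT last_id FROM batch_progress WHERE job = 'archive_orders'"
    ).fetchone()
    return row[0] if row else 0  # 0 means start from the beginning
```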
The big trick is choosing what to use as a grouping... In my case, using the date was easy... You might want to do all the A's then the B's, or IDs 1 to 1000, then 1001 to 2000, etc...
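Generating the groups themselves is simple; a sketch for fixed-size ID ranges (the 1 to 1000, 1001 to 2000 style):

```python
def id_ranges(low, high, size=1000):
    """Yield (start, end) pairs covering low..high inclusive."""
    start = low
    while start <= high:
        end = min(start + size - 1, high)
        yield start, end
        start = end + 1

# e.g. list(id_ranges(1, 3500)) -> [(1, 1000), (1001, 2000), (2001, 3000), (3001, 3500)]
```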
But for it to be successful you need to make sure you can always start back up where you stopped... This will probably mean you need to order your data as well.
No order, no meaningful failure point to start back from.
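Concretely, that just means the batch query walks the table in key order and starts after the last committed key, so the saved position is a true high-water mark (same hypothetical `orders` table as above):

```python
def next_batch(conn, last_id, batch_size=1000):
    # Ordering by id guarantees every row at or below last_id is already done,
    # so restarting from last_id never skips or repeats work.
    cur = conn.execute(
        "SELECT id FROM orders WHERE id > ? AND order_date < '2010-01-01' "
        "ORDER BY id LIMIT ?",
        (last_id, batch_size),
    )
    return [row[0] for row in cur.fetchall()]
```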
HTH
Rob