Is the central table fairly small? That is, would a temp table containing just the primary keys and join values from the central table be fairly small?
Have you considered enabling ON DELETE CASCADE on your foreign keys, assuming you have any? Letting the database engine handle cascaded deletes is usually more efficient than issuing the child deletes yourself.
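As a sketch, assuming SQL Server and made-up table and column names, cascading looks like this (if a foreign key already exists, you'd drop it and recreate it with the cascade option):

```sql
-- Hypothetical parent/child tables; swap in your own names and keys.
ALTER TABLE dbo.OrderLines
    ADD CONSTRAINT FK_OrderLines_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
    ON DELETE CASCADE;

-- A single delete on the parent now removes the matching child rows too.
DELETE FROM dbo.Orders WHERE OrderDate < '2020-01-01';
```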
Are your tables properly indexed? Having the right clustered index and nonclustered indexes can make all the difference in the world.
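For instance (a minimal sketch with hypothetical names): if your deletes filter on a date and join on a key, an index like this lets the engine seek instead of scan:

```sql
-- Hypothetical: deletes that filter on OrderDate and join on CustomerID
-- can seek on this index instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID);
```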
Have you actually examined the execution plans of your delete statements? Table scans are just as bad in deletes as they are in selects; they are real performance killers.
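If you're on SQL Server, one way to look at the estimated plan without actually running the statement is SHOWPLAN (the delete below is hypothetical):

```sql
-- Ask the engine for the plan XML instead of executing the statement.
SET SHOWPLAN_XML ON;
GO
DELETE FROM dbo.Orders WHERE OrderDate < '2020-01-01';
GO
SET SHOWPLAN_XML OFF;
GO
-- In the plan, watch for Table Scan / Clustered Index Scan operators,
-- and note the join operators: Hash Match, Merge Join, Nested Loops.
```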
What join types are being used: hash, merge, or nested loop?
Improving the performance of ANY query, whether a delete, update, or select, involves all of these factors.
One strategy that has worked for me, though it may not apply in your case (I'm offering it because it might), is to use a simplified select query that makes good use of the table indexes. You select MORE than you need into a separate work/temp table, using conditions that capture every row you need to work with, plus some extras. Then you run additional passes against that smaller table to remove the rows you don't need. Once the work table is finalized, you join to it and delete. I don't know if this would make a difference for you.
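Here's a minimal T-SQL sketch of that pattern; every table, column, and condition here is hypothetical, so treat it as the shape of the idea rather than drop-in code:

```sql
-- Step 1: cheaply grab a SUPERSET of candidate keys with a broad,
-- index-friendly condition.
SELECT o.OrderID
INTO #candidates
FROM dbo.Orders AS o
WHERE o.OrderDate < '2020-01-01';

-- Step 2: run the expensive/extra conditions against the small work
-- table, trimming away rows that should survive.
DELETE c
FROM #candidates AS c
JOIN dbo.Orders AS o ON o.OrderID = c.OrderID
WHERE o.Status = 'OnHold';           -- rows we decide to keep

-- Step 3: join back and delete only what's left in the work table.
DELETE o
FROM dbo.Orders AS o
JOIN #candidates AS c ON c.OrderID = o.OrderID;

DROP TABLE #candidates;
```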
For me this strategy was the difference between a 15-minute query and a series of queries that altogether took about 4 minutes and only hit the main production tables for 3 of those. The extra conditions that had been slowing down the main query took minimal time against the smaller work table I built.