Soft deletes should be done with delete triggers and an archive database. The problem with soft deletes is that the overall database data set grows and grows, while the active working set (the rows which aren't 'deleted') becomes smaller and smaller compared to the full data set, so over time the database performance degrades, probably significantly. Add to that the increased complexity of the code (which results in higher maintenance costs), and a different solution is often preferable.
A cleaner solution is to use an archive catalog, with a schema similar to the one the data lives in, plus a set of delete triggers. The code in the application can then be written normally, and the delete triggers insert each deleted row into the archive catalog's tables. As data which is 'deleted' is only accessed in relatively rare cases (why else access deleted data?), that specific code can be written against the archive database and can simply migrate the data back to the main database when needed.
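A minimal sketch of the trigger approach, using SQLite via Python for illustration (table and column names are hypothetical; a real archive would live in a separate catalog or schema, which SQLite would approximate with ATTACH):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);

    -- The archive table mirrors the live table, plus a deletion timestamp.
    CREATE TABLE customer_archive (
        id INTEGER,
        name TEXT,
        deleted_at TEXT DEFAULT CURRENT_TIMESTAMP
    );

    -- The delete trigger copies the row into the archive before it is removed,
    -- so application code can issue plain DELETE statements and lose nothing.
    CREATE TRIGGER customer_on_delete
    BEFORE DELETE ON customer
    BEGIN
        INSERT INTO customer_archive (id, name) VALUES (OLD.id, OLD.name);
    END;
""")

conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
conn.execute("DELETE FROM customer WHERE id = 1")

live = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
archived = conn.execute("SELECT name FROM customer_archive").fetchone()[0]
print(live, archived)  # 0 Alice
```

The application code contains no soft-delete logic at all; the trigger fires unconditionally, so the row always ends up in the archive.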
Net result: higher performance, cleaner and simpler code, and the deleted data is still available for recovery/auditing.
Please consider the above when implementing soft deletes. The core reason soft deletes are required is, in almost all cases, the ability to look at older data, and that's perfectly possible with an archive catalog. Reverting deleted data is less common as it's very cumbersome to implement: reverting one table is doable, but tables are related to each other, so rolling back one row has implications for its related rows as well; what you want in these cases is to roll back a subgraph. This is very complex. Soft deletes won't help you there either, as the complexity of rolling back the data is the same.
Requiring a filter on the tables in every query is very complex to build in, because every join requires the filter in its ON clause as well. It's not something that's easily implemented and works 100% of the time. Additionally, soft deletes have the drawback that you can't use unique constraints nor foreign key constraints, as they might clash with rows which are 'deleted' but still there.
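The unique-constraint clash can be demonstrated with a hypothetical example (names are made up for illustration): an account table with a unique email and an is_deleted flag. Once a row is soft-deleted, re-registering the same email violates the constraint even though the row is logically gone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        email TEXT UNIQUE,
        is_deleted INTEGER NOT NULL DEFAULT 0
    )
""")
conn.execute("INSERT INTO account (email) VALUES ('a@example.com')")

# 'Delete' the account the soft-delete way: the row stays in the table.
conn.execute("UPDATE account SET is_deleted = 1 WHERE email = 'a@example.com'")

# Re-registering the same email now fails, even though the old row is 'deleted'.
try:
    conn.execute("INSERT INTO account (email) VALUES ('a@example.com')")
    clashed = False
except sqlite3.IntegrityError:
    clashed = True
print(clashed)  # True
```

With a real delete (or the trigger-plus-archive approach) the old row is physically gone, so the unique constraint keeps working as intended.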
As your model is rather big, and you need to implement soft deletes on all tables (which I doubt is necessary btw; who decided that, and why?), the amount of data your application has to wade through every day is IMHO increasing rapidly and will hurt performance pretty badly in the long run. Using an archive catalog with delete triggers is much simpler and cleaner, and has the same effect: no data is lost after deletes, as the triggers always fire.