Here's my situation. We have a system that has to fetch an entity graph and run some business logic over it. The result of that logic is a potentially large number (up to 5,000) of inserts or updates in a particular table.
I want to optimize these inserts/updates as much as possible, and my current theory is that the bottleneck (or at least one bottleneck) is sending that many SQL statements across the network.
What steps can I take to make the emitted SQL text as small as possible, and the processing as fast as possible?
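To illustrate the kind of overhead I suspect, here's a generic sketch (plain Python/sqlite3, nothing to do with LLBLGen, and the table is made up) comparing one statement per row with a single batched call where the SQL text is supplied once and only the parameters vary:

```python
import sqlite3
import time

# Purely illustrative: an in-memory database and an invented table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, i * 1.5) for i in range(5000)]

# Variant 1: one execute() call per row -- 5000 separate statements.
start = time.perf_counter()
for row in rows:
    conn.execute("INSERT INTO orders (id, amount) VALUES (?, ?)", row)
per_row = time.perf_counter() - start
conn.rollback()  # undo, so the next variant starts from an empty table

# Variant 2: one executemany() call -- the SQL text appears once,
# only the parameter sets are repeated.
start = time.perf_counter()
conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", rows)
batched = time.perf_counter() - start

print(f"per-row: {per_row:.4f}s, batched: {batched:.4f}s")
```

An in-memory database obviously has no network, so the numbers only show statement-handling overhead; over a real connection the per-statement round trips would matter far more, which is exactly what I'm trying to cut down.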
This is what I was thinking:
- No refetch after the save.
- Remove all fields from the entity that aren't necessary.
- Process the entity collection as a UnitOfWork (or is an EntityCollection save the fastest method?)
- Don't save the collection/UnitOfWork in a transaction (I'm not even sure this is possible?)
- Remove/simplify the schema name that is emitted (I seem to remember this being possible?)
The last thing I tried before leaving work today was adding each entity to an EntityCollection as it was "discovered", and then saving the collection in one go. This was faster than adding the entities to the fetched graph and saving the whole graph recursively.
Opinions on the above methods? Any other ideas on how I can trim the size of the emitted SQL text, or other ways to make mass inserts/updates faster?
Thanks,
Phil