Fastest Insert/Update Performance

psandler
User
Posts: 540
Joined: 22-Feb-2005
# Posted on: 23-Jan-2007 06:20:01   

Here's my situation. We have a system that has to fetch an entity graph and do some business logic on it. The result of this logic is a potentially large (up to 5000) number of inserts or updates in a particular table.

I want to optimize these inserts/updates as much as possible, and my current theory is that the bottleneck (or one bottleneck) is passing the many SQL statements across the network.

What are some steps I can take to make the SQL text as small as possible, and the processing as fast as possible?

This is what I was thinking:

  1. No refetch after the Save.
  2. Remove all fields from the entity that aren't necessary.
  3. Process the entity collection as a UOW (or is an entitycollection save the fastest method?)
  4. Don't save the collection/uow in a transaction (not even sure this is possible?)
  5. Remove/simplify the schema name that is emitted (I seem to remember this being possible?)

The last thing I tried before leaving work today was adding each entity as it was "discovered" to an entitycollection, and then saving the collection in one go. This was faster than adding to the fetched graph and saving the whole thing recursively.
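In rough sketch form, the collection save I'm describing looks like this (adapter templates; OrderEntity and OrderEntityFactory stand in for the real names):

using SD.LLBLGen.Pro.ORMSupportClasses;
// plus the generated FactoryClasses / DatabaseSpecific namespaces

// Collect entities as the business logic "discovers" them.
EntityCollection toSave = new EntityCollection(new OrderEntityFactory());
// ... toSave.Add(entity) for each discovered insert/update ...

using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    // refetchSavedEntitiesAfterSave = false skips the SELECT round-trip
    // after each INSERT/UPDATE; recurse = false saves only this collection.
    adapter.SaveEntityCollection(toSave, false, false);
}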

Opinions on the above methods? Any other ideas on how I can trim the size of the SQL text that gets emitted, or other ways to make mass inserts/updates faster?

Thanks,

Phil

Walaa
Support Team
Posts: 14950
Joined: 21-Aug-2005
# Posted on: 23-Jan-2007 07:37:05   

Don't refetch after the save. As for trimming fields: LLBLGen generates the SQL query to include only the changed fields. A UOW has a completely different purpose, and it uses a transaction.
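For illustration, a minimal UOW sketch (adapter templates assumed; 'toSave' stands for your collection):

using SD.LLBLGen.Pro.ORMSupportClasses;
// plus the generated DatabaseSpecific namespace for DataAccessAdapter

UnitOfWork2 uow = new UnitOfWork2();
foreach(IEntity2 entity in toSave)
{
    uow.AddForSave(entity); // only queues the work; no SQL is sent yet
}
// Commit plays back all queued actions inside a single transaction.
// 'true' lets the UOW start and commit/rollback the transaction itself.
uow.Commit(new DataAccessAdapter(), true);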

psandler wrote:

The last thing I tried before leaving work today was adding each entity as it was "discovered" to an entitycollection, and then saving the collection in one go. This was faster than adding to the fetched graph and saving the whole thing recursively.

Do you mean you used one EntityCollection to save all the entities in a graph (different types of entities)?

psandler wrote:

I want to optimize these inserts/updates as much as possible, and my current theory is that the bottleneck (or one bottleneck) is passing the many SQL statements across the network.

What does your application architecture look like?

Also please check these threads:
http://www.llblgen.com/TinyForum/Messages.aspx?ThreadID=8658
http://www.llblgen.com/TinyForum/Messages.aspx?ThreadID=7545

psandler
User
Posts: 540
Joined: 22-Feb-2005
# Posted on: 23-Jan-2007 14:49:53   

Walaa wrote:

Do you mean you used one EntityCollection to save all the entities in a graph (different types of entities)?

I use one entityCollection, but all the entities are of the same type. Does an entityCollection always get saved in a transaction?

Walaa wrote:

What does your application architecture look like?

It's basically just a Windows app using LLBL-generated objects. Right now we are only at the proof-of-concept stage. We are rewriting an app that has all the logic in SPs and UDFs. Our POC app performs much faster when doing all the calculations, but the old app's updates and inserts perform faster. My current theory is that this is because the old app's SQL doesn't have to be passed over the network.

Eventually we will likely have a web layer => Business layer => DAO layer => DAL (the two in the middle might be combined into one project/assembly). It's unlikely that it will ever be three-tier.

Thanks,

Phil

jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 23-Jan-2007 16:28:19   

psandler wrote:

Our POC app performs much faster when doing all the calculations, but the old app's updates and inserts perform faster. My current theory is that this is because the old app's SQL doesn't have to be passed over the network.

(Assuming SQL Server) have you looked into bulk inserts? It would mean bypassing the object layer, but IIRC in a past project it improved insert speed by up to 100 times (on 10-20K records).
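Something along these lines, using SqlBulkCopy from System.Data (table and column names are placeholders; you'd fill the DataTable from your calculated results instead of saving entities):

using System.Data;
using System.Data.SqlClient;

string connectionString = "..."; // your connection string

// Build an in-memory table matching the destination table's schema.
DataTable rows = new DataTable();
rows.Columns.Add("OrderId", typeof(int));
rows.Columns.Add("Amount", typeof(decimal));
// ... add one DataRow per calculated insert ...

using(SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.Orders"; // placeholder table name
    bulk.BatchSize = 5000;                    // rows sent per round-trip
    bulk.WriteToServer(rows);                 // streams via the bulk-copy protocol
}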