LLBLGEN versus stored procedures

Perche
User
Posts: 10
Joined: 07-Aug-2008
# Posted on: 08-Aug-2008 19:09:27   

A number of our programmers are leaning towards writing one stored procedure for any transactional update involving several tables, instead of using multiple LLBLGEN updating methods such as SaveEntity, SaveEntityCollection, ActionProcedures, etc.

Their argument is that using a stored procedure instead of multiple LLBLGEN updating methods offers "the best performance" by reducing the number of trips to the database.

Is the performance hit really that significant? We are talking about 10 rows in a single transaction. Does SaveEntityCollection or UnitOfWork2 save overhead versus multiple SaveEntity calls? I prefer programmer productivity and maintainability to machine performance, but I feel I am losing the argument. Does someone have an opinion on the matter?

Thanks.

arschr
User
Posts: 893
Joined: 14-Dec-2003
# Posted on: 08-Aug-2008 20:24:29   

Measure how long it takes.

Perche
User
Posts: 10
Joined: 07-Aug-2008
# Posted on: 08-Aug-2008 21:35:13   

It is hard to measure the processing speed as the update occurs within a web postback. But I would estimate that it takes less than a second.

Part of my question is philosophical: How many milliseconds of overhead (or whatever) should one tolerate before one switches from LLBLGEN methods to writing the entire update transaction as a stored procedure? Are the programmers who advocate stored procedures going overboard to save milliseconds?

The other part is technical: Does LLBLGEN have a "significant" overhead for multi-table update transactions? Would using one SaveEntityCollection instead of several SaveEntity calls reduce overhead? That is, would SaveEntityCollection "batch" updates into one trip to the server? Would unit-of-work processing also "batch" updates?
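For reference, here is roughly what the variants I mean look like in adapter code. This is only a sketch: DataAccessAdapter, OrderEntity-style types, and the order/orderLines variables are placeholders for our own generated code, and exact signatures depend on the LLBLGen Pro version.

    using System.Data;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    using (var adapter = new DataAccessAdapter())
    {
        // Variant 1: several SaveEntity / SaveEntityCollection calls inside one explicit transaction.
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "MultiTableSave");
        try
        {
            adapter.SaveEntity(order);
            adapter.SaveEntityCollection(orderLines);   // one call, still one statement per changed row
            adapter.Commit();
        }
        catch
        {
            adapter.Rollback();
            throw;
        }

        // Variant 2: UnitOfWork2 queues the work and persists it in a single transaction.
        var uow = new UnitOfWork2();
        uow.AddForSave(order);
        uow.AddCollectionForSave(orderLines);
        uow.Commit(adapter, true);   // true = let the unit of work start/commit the transaction itself
    }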

The other programmers perceive LLBLGEN to be inefficient, but I am sceptical that the inefficiency is significant.

Thanks.

Otis
LLBLGen Pro Team
Posts: 39615
Joined: 17-Aug-2003
# Posted on: 08-Aug-2008 21:39:44   

The roundtrips you save are small, as the data you have to send for the update has to be sent in either case (stored proc or dynamic query).

Our queries are actually more efficient: the generated query updates just the fields which were changed, so our UPDATE statements are tailored to exactly what has to be updated. A proc which updates a table with, say, 10 fields always has to accept all 10 parameters (which might be nullable), and it then requires inefficient ISNULL checks to update only the fields whose parameters aren't null.

So if you have 20 entities in a collection and 3 of them have 2 changed fields, you get 3 UPDATE statements in a transaction, each updating just those 2 fields, and only that data is sent. Your procs can't be that specific; otherwise you'd have to write a proc for every possible set of changed fields, which is undoable, so people tend to fall back to less optimal procs with nullable parameters and ISNULL / COALESCE checks.
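To make that concrete, a minimal sketch of the entity-side behaviour (CustomerEntity and DataAccessAdapter stand in for your own generated adapter types, and the SQL in the comment is only an approximation of what gets generated):

    using (var adapter = new DataAccessAdapter())
    {
        var customer = new CustomerEntity(customerId);
        adapter.FetchEntity(customer);

        customer.Phone = "555-0100";        // change 2 of the table's 10 fields
        customer.EmailAddress = "a@b.example";

        // The generated statement only touches the changed columns, roughly:
        //   UPDATE Customer SET Phone = @p1, EmailAddress = @p2 WHERE CustomerId = @p3
        adapter.SaveEntity(customer);
    }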

Updates aren't batched, but that's also not that significant. You win far more with queries crafted specifically for the update required, e.g. only fields a and b are changed and should be updated in the table.

Apart from that, writing a proc for this doesn't really scale: if something changes, you have to update the proc, likely pass more or fewer parameters, etc., while with entities you don't have to do any of that.

But let the sceptical programmers write some test code and measure it. Update 3 tables of 10 fields each, changing 2 fields per table. They can't possibly write a proc for those 2 fields specifically, as on other occasions 3 or more fields might have changed and another proc would be needed. You end up with stuff like: http://weblogs.asp.net/fbouma/pages/7049.aspx

Frans Bouma | Lead developer LLBLGen Pro
jmeckley
User
Posts: 403
Joined: 05-Jul-2006
# Posted on: 08-Aug-2008 22:40:51   

This argument isn't specific to LLBL; it's dynamic SQL vs. stored procs (NH, AR, LLBL, etc.). Stored procs are great for processing thousands or tens of thousands of records at once. If you're talking about a small series of transactions, the gain is not significant.

The "better preformance" argument doesn't fly without statistics. Is there noticeable end user latency? If so where? The database is always the first place to check, but it's not the only culprit for latency.

If you optimize your system prematurely, you will run into problems in the future when the system needs to be extended. And to Otis' point, the stored procs are rarely optimized anyway.

With an ORM you leave the creation of SQL to the framework; after all, that's what it does best. If you're simply using an ORM to execute stored procs then you don't need the ORM; it's just bloating your system. In that case I would roll a simple stored proc DAL.
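For example, a bare-bones stored proc DAL is little more than this (a sketch; the proc name, parameters, and OrderDal class are made up for illustration):

    using System.Data;
    using System.Data.SqlClient;

    public static class OrderDal
    {
        public static void SaveOrder(string connectionString, int orderId, decimal total)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.SaveOrder", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@OrderId", orderId);
                cmd.Parameters.AddWithValue("@Total", total);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }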

I would also ask your coworkers besides "performance" what is at risk using entities over stored procs.

jmeckley
User
Posts: 403
Joined: 05-Jul-2006
# Posted on: 08-Aug-2008 22:43:07   

Perche wrote: "It is hard to measure the processing speed as the update occurs within a web postback."

Too much is happening in an HTTP request to measure anything except total execution time. Write a series of unit tests and use System.Diagnostics.Stopwatch to time the routines.
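Something along these lines (a sketch; SaveViaEntities and SaveViaStoredProc are hypothetical stand-ins for your two code paths):

    using System;
    using System.Diagnostics;

    public class SaveTimingTests
    {
        public void CompareSavePaths()
        {
            const int runs = 100;

            // Time the entity-based path.
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < runs; i++) { SaveViaEntities(); }
            sw.Stop();
            Console.WriteLine("Entities:    {0:F2} ms avg", (double)sw.ElapsedMilliseconds / runs);

            // Time the stored proc path.
            sw = Stopwatch.StartNew();
            for (int i = 0; i < runs; i++) { SaveViaStoredProc(); }
            sw.Stop();
            Console.WriteLine("Stored proc: {0:F2} ms avg", (double)sw.ElapsedMilliseconds / runs);
        }

        void SaveViaEntities() { /* SaveEntityCollection / UnitOfWork2 path */ }
        void SaveViaStoredProc() { /* hand-written stored proc path */ }
    }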