sp_executesql fails to pick correct indexes

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 06-May-2008 19:37:38   

I recently upgraded to LLBLGen 2.5 Final (April 23rd, 2008) after a long time using the older version successfully. I was pleased to find only a few places where I needed to tweak code to adjust for breaking changes, and overall the experience has been good. The back end is SQL Server 2005 Express (this is a desktop application for a small company).

I have one section where I allow our users to create a custom report, classifying clients into different buckets based on user-defined criteria. Because I'm allowing user-defined criteria (there is a UI where they select fields, relations and values), I cannot use a stored procedure for this particular block of code.

Achieving this in LLBLGen was simple: I just loop over the criteria the user selected for a bucket and build a PredicateExpression that updates records if they meet the criteria (and no prior criteria have already been applied).

Now to the problem: LLBLGen generates an sp_executesql call that takes about 55 seconds to run. That 55 seconds is per criteria block, so it adds up to a long time in the end and would require me to bump the query timeout up from the 30 seconds I currently use. No other queries approach 30 seconds (or even 5 seconds, for that matter).

Here is a sample of a long-running query (as captured in Profiler):

exec sp_executesql N'UPDATE [ucpmSQL].[dbo].[Company] SET [ClassifierGuid]=@ClassifierGuid WHERE ( ( ( [ucpmSQL].[dbo].[Company].[ClassifierGuid] IS NULL AND [ucpmSQL].[dbo].[Company].[CompanyGuid] IN (SELECT [ucpmSQL].[dbo].[CompanyAggregate].[CompanyGuid] FROM [ucpmSQL].[dbo].[CompanyAggregate]  WHERE ( [ucpmSQL].[dbo].[CompanyAggregate].[BoundInsureds] >= @BoundInsureds1 AND [ucpmSQL].[dbo].[CompanyAggregate].[HitRatio] >= @HitRatio2 AND [ucpmSQL].[dbo].[CompanyAggregate].[IncomeRecent] >= @IncomeRecent3)))))',N'@ClassifierGuid uniqueidentifier,@BoundInsureds1 int,@HitRatio2 decimal(9,4),@IncomeRecent3 money',@ClassifierGuid='688F18E4-10EC-4A3D-9D01-CC575D4CD366',@BoundInsureds1=2,@HitRatio2=0.3300,@IncomeRecent3=$2000.0000

And here it is formatted clearly:

DECLARE  @ClassifierGuid UNIQUEIDENTIFIER,
         @BoundInsureds1 INT,
         @HitRatio2   DECIMAL(9,4),
         @IncomeRecent3  MONEY

SELECT @ClassifierGuid = '688F18E4-10EC-4A3D-9D01-CC575D4CD366',
       @BoundInsureds1 = 2,
       @HitRatio2 = 0.3300,
       @IncomeRecent3 = $2000.0000

UPDATE [ucpmSQL].[dbo].[Company]
SET    [ClassifierGuid] = @ClassifierGuid
WHERE  ((([ucpmSQL].[dbo].[Company].[ClassifierGuid] IS NULL
          AND [ucpmSQL].[dbo].[Company].[CompanyGuid] IN (SELECT [ucpmSQL].[dbo].[CompanyAggregate].[CompanyGuid]
                                                          FROM   [ucpmSQL].[dbo].[CompanyAggregate]
                                                          WHERE  ([ucpmSQL].[dbo].[CompanyAggregate].[BoundInsureds] >= @BoundInsureds1
                                                                  AND [ucpmSQL].[dbo].[CompanyAggregate].[HitRatio] >= @HitRatio2
                                                                  AND [ucpmSQL].[dbo].[CompanyAggregate].[IncomeRecent] >= @IncomeRecent3)))))

The interesting part: running the non-sp_executesql version is faster. Much faster: it takes 2 seconds or less. A bit of searching on Google turned up links like these showing that the same problem has been encountered before:

http://www.mail-archive.com/ibatis-user-cs@incubator.apache.org/msg00094.html (I find this one interesting because the "solution" they were discussing appears to be to introduce SQL injection risks and violate every recommendation that parameterized queries be used for performance.)

http://www.dbforums.com/archive/index.php/t-370337.html (The person trying to help makes several attempts to throw more indexes at the issue, but my testing confirms that sp_executesql is simply ignoring indexes in favor of full table scans.)

There are plenty more to look at by searching "sp_executesql very slow" and similar search terms.

The end of the story appears to be that sp_executesql uses a different method for choosing indexes, and it appears to be a broken one compared to a plain text query. In other words, this isn't an LLBLGen problem per se. The only reason I bring it up is to ask: is there a plausible workaround within LLBLGen? Creating the query in LLBLGen was so much easier than my prior string-concatenation method (which was also poor programming practice, as the parameters were inlined... yes, I "escaped" the string substitutions, but that is no substitute for real parameters).

arschr
User
Posts: 893
Joined: 14-Dec-2003
# Posted on: 06-May-2008 23:19:35   

Have you looked at the query plan to see why it is choosing table scans?

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 07-May-2008 02:22:20   

arschr wrote:

Have you looked at the query plan to see why it is choosing table scans?

I have examined the plans; they start the same, with an update, an assert, a nested loop (fed from an index seek) and the table update of Company itself.

What differs is that the Table Update on Company under sp_executesql is fed via a chain of nested loops, while the text query is fed via a chain of hash matches. The construction of the views appears identical between the two versions (which I would expect). Unfortunately, the remainder is very complex and the two plans appear to be doing radically different things in the middle; even pulling an "actual" execution plan doesn't give actual execution times for the various sub-components of the query. Really, I can't see any reason for sp_executesql to be choosing anything differently, but it reliably constructs a significantly slower plan.

To verify that indexes weren't being used, I ran the text version and had the tuning advisor suggest indexes, then applied all of them to a copy of the database. I then ran the advisor against the sp_executesql version; it replaced the call with effectively the text version I had created and ran that, which of course ran smoothly. (Apparently the tool doesn't handle sp_executesql statements directly; it expands them as I did.) Even with all the recommended indexes and statistics defined, the sp_executesql query still took 50 seconds, proving the new indexes moot in this case, as a 3-5 second improvement isn't enough to make up for the massive loss. The base query ran in effectively the same time as always, between one and two seconds.

I can't imagine why wrapping a query in sp_executesql (as LLBLGen does) would cause such a performance loss, especially when the query engine should be doing the same substitution of values internally that I did by hand. Searching for similar problems, I find them mentioned, but most people suggest it is a bad cached execution plan (it isn't: I clear the plan cache with no impact on either plan) or the server sniffing "bad" parameter values that lead to a poor execution plan (but in both queries the values are identical). In the other cases I found, I'm not seeing any solutions other than returning to straight text queries.
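
For anyone who wants to poke at this themselves, clearing and checking the plan cache is just the standard commands and 2005 DMVs; a quick sketch (the LIKE filter is only there to narrow it to the statements from this thread):

DBCC FREEPROCCACHE  -- wipe all cached plans (test server only!)

-- Show whatever plans are currently cached for statements touching CompanyAggregate
SELECT cp.usecounts, cp.objtype, st.[text]
FROM   sys.dm_exec_cached_plans AS cp
       CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.[text] LIKE '%CompanyAggregate%'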

The upshot of all this is that code I craft by hand runs great, but LLBLGen queries are running an order of magnitude slower due to the poor performance of SQL 2005's sp_executesql calls. I can't find any explanation for why two equivalent queries behave so differently. At the same time, single-table queries, stored procedure calls, and queries over the same view used here (minus the additional baggage of the subquery) all run in a reasonable fashion.

psandler
User
Posts: 540
Joined: 22-Feb-2005
# Posted on: 07-May-2008 04:49:32   

This sounds very much like a problem I had a few months ago with a query, which turned out to be caused by a phenomenon called "parameter sniffing".

Here is a link that will help you determine whether this might be what is causing your problem:

http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx

You could also try declaring and setting variables inside your sp_executesql statement, like:

exec sp_executesql N'
        DECLARE 
                @batchId1 bigint,
                @batchId2 bigint;

        SELECT @batchID1 = 1583, @batchID2 = 1583;SELECT * FROM (etc.)'

I'm not sure whether any of that will really help you solve the problem. However, I will tell you that for me the problem went away when the distribution of values in the columns I was filtering on became more varied.

Phil

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 07-May-2008 10:12:22   

Just for clarity: we don't actively wrap calls in sp_executesql, the SqlCommand does. If you execute a query with ExecuteNonQuery and just specify a SQL string, your query gets executed through sp_executesql. This is because sp_executesql makes sure the execution plan gets cached; not using sp_executesql doesn't do that (and also doesn't retrieve it from the cache, according to BOL).

So actually... there's little we can do to execute a query and NOT use sp_executesql simple_smile

Frans Bouma | Lead developer LLBLGen Pro
JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 07-May-2008 19:16:47   

psandler wrote:

This sounds very much like a problem I had a few months ago with a query, which turned out to be caused by a phenomenon called "parameter sniffing".

Here is a link that will help you determine whether this might be what is causing your problem:

http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx

You could also try declaring and setting variables inside your sp_executesql statement, like:

exec sp_executesql N'
        DECLARE 
                @batchId1 bigint,
                @batchId2 bigint;

        SELECT @batchID1 = 1583, @batchID2 = 1583;SELECT * FROM (etc.)'

I'm not sure whether any of that will really help you solve the problem. However, I will tell you that for me the problem went away when the distribution of values in the columns I was filtering on became more varied.

Phil

Thanks for the link. You are dead on: the sp_executesql statement can be coaxed into working properly by doing this:

exec sp_executesql N'
declare @ClassifierGuidTest uniqueidentifier,
        @BoundInsuredsTest int,
        @HitRatioTest as decimal(9,4),
        @IncomeRecentTest as money;

select @ClassifierGuidTest=@ClassifierGuid, @BoundInsuredsTest=@BoundInsureds1, @HitRatioTest=@HitRatio2, @IncomeRecentTest=@IncomeRecent3

UPDATE [ucpmSQL].[dbo].[Company] SET [ClassifierGuid]=@ClassifierGuidTest WHERE ( ( ( [ucpmSQL].[dbo].[Company].[ClassifierGuid] IS NULL AND [ucpmSQL].[dbo].[Company].[CompanyGuid] IN (SELECT [ucpmSQL].[dbo].[CompanyAggregate].[CompanyGuid] FROM [ucpmSQL].[dbo].[CompanyAggregate]  WHERE ( [ucpmSQL].[dbo].[CompanyAggregate].[BoundInsureds] >= @BoundInsuredsTest AND [ucpmSQL].[dbo].[CompanyAggregate].[HitRatio] >= @HitRatioTest AND [ucpmSQL].[dbo].[CompanyAggregate].[IncomeRecent] >= @IncomeRecentTest)))))',N'@ClassifierGuid uniqueidentifier,@BoundInsureds1 int,@HitRatio2 decimal(9,4),@IncomeRecent3 money'
,@ClassifierGuid='688F18E4-10EC-4A3D-9D01-CC575D4CD366',@BoundInsureds1=2,@HitRatio2=0.3300,@IncomeRecent3=$2000.0000

This is completely absurd, but it works in the same 1-2 seconds that a "bare" query in QA does. Apparently the plan for the update is formed after the SELECT runs and the values are set, instead of from the wildly wrong guess that a plain sp_executesql call makes. No changes to indexes or statistics were required for this alternate formulation.

So the big question is: is there any way to create queries like this in LLBLGen? This would require the template kit and some heavy modification, would it not? Even so, it might be worth doing, as the alternative is to return to string manipulation to construct a WHERE clause.

Normally, when faced with this kind of failure in a system, I would work around it with a static stored procedure, but because the user can build arbitrary queries on the fly with my tool, that isn't practical in this case. This isn't even that hard a query; a subquery against a view that does summation shouldn't break the optimizer in any reasonable product, but I'm pretty much stuck with Express 2005 for this application due to integration and cost requirements. I should also point out that this query causes the same problems on the full version of SQL 2005 Standard as well as on my Developer edition (which is Enterprise with a license restriction).

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 07-May-2008 19:22:30   

Otis wrote:

Just for clarity: we don't actively wrap calls in sp_executesql, the SqlCommand does. If you execute a query with ExecuteNonQuery and just specify a SQL string, your query gets executed through sp_executesql. This is because sp_executesql makes sure the execution plan gets cached; not using sp_executesql doesn't do that (and also doesn't retrieve it from the cache, according to BOL).

So actually... there's little we can do to execute a query and NOT use sp_executesql simple_smile

Right. The post above shows that it can work through sp_executesql without the massive delay caused by simple parameterization. I'm going to look into the template kits to see whether I'll cause myself more problems than good if I try to route the parameters through the kind of indirection shown above. It works in my tests, and it seems reasonable that inserting such redirection wouldn't be too awful. Considering that it finally avoids a problem I'm seeing a lot of chatter about on the web, with no solid solutions other than abandoning parameters (which isn't a very solid suggestion), it might be worth losing a few days to learning the template system.

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 07-May-2008 20:42:31   

Please post back in this thread if you need help or directions with that simple_smile

I think the solution for this is a custom DQE which produces the query you need for the update. This can be done by deriving a new class from the SQL Server DynamicQueryEngine class and overriding the right update-query method. You might even get away with using a derived DataAccessAdapter class (if you're using adapter) and modifying the update query produced and passed to the OnSaveEntity method.

Frans Bouma | Lead developer LLBLGen Pro
JSobell
User
Posts: 145
Joined: 07-Jan-2006
# Posted on: 14-May-2008 16:40:03   

We've just experienced exactly the same problem: a query runs in 2 seconds when the variable declarations are placed before the statement, but 44 seconds when they are applied via sp_executesql!

This is a major pain, so I guess we'll have to work around it for now by writing stored procs for all the problem queries. I'm reluctant to modify the code in the LLBLGen library because I know it will get stomped on in the next update.

Perhaps this should be considered a known flaw in SQL 2005, and the normal behaviour should be to add the declarations to the start of the command instead of the end? The only problem then, I suppose, is that SQL injection could become more of a risk. The main concern for me is that this issue can pop up at any time in the future in existing applications, particularly since it's a query-plan-related issue that appears most often when very large numbers of records are involved. A search of Google shows many people reporting this issue, but it's interesting to see that most responses simply suggest the developer has an error in their query design simple_smile

Cheers, Jason

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 14-May-2008 18:38:08   

The thing is... you can't avoid sp_executesql with ADO.NET... SqlCommand will always use that... so the only way to work around this is by using procs... disappointed

I know... it sucks..

(edit) I'm now reading the page Phil linked to, and it might be that an optimization hint appended to the query would work? This is appendable in the DataAccessAdapter class (derive a class from it, then override the OnFetch... routine you're using). It's very query specific, but it might help in this case.
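
As a rough sketch of that idea, the hint would simply be appended to the generated statement; whether RECOMPILE (or one of the other hints from that article) is the right choice for this particular query is an assumption that would need testing:

UPDATE [ucpmSQL].[dbo].[Company]
SET    [ClassifierGuid] = @ClassifierGuid
WHERE  [ucpmSQL].[dbo].[Company].[ClassifierGuid] IS NULL
       AND [ucpmSQL].[dbo].[Company].[CompanyGuid] IN
           (SELECT [CompanyGuid]
            FROM   [ucpmSQL].[dbo].[CompanyAggregate]
            WHERE  [BoundInsureds] >= @BoundInsureds1
                   AND [HitRatio] >= @HitRatio2
                   AND [IncomeRecent] >= @IncomeRecent3)
OPTION (RECOMPILE) -- compile a fresh plan against the actual parameter values on every execution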

Frans Bouma | Lead developer LLBLGen Pro
JSobell
User
Posts: 145
Joined: 07-Jan-2006
# Posted on: 15-May-2008 00:56:09   

What I can't quite figure out from the abundance of contradictory reports on other sites is whether this is a reproducible problem caused by the use of sp_executesql, or whether it is simply some sort of 'stale query plan' situation.

Just to add yet more confusion (and possibly some insight?), I've just re-run the queries this morning (Australian time simple_smile ) and now the sp_executesql version runs in the same time as the 'variables prefixed to query' version! The only change is that extra records were added, so I'm wondering whether sp_executesql uses something like a separate query-plan cache which sometimes contains old plans?

I hate these intermittent issues, and I really wish I'd tried a series of index rebuilds and stats updates yesterday while the problem was still present disappointed Are we going to find other situations where T-SQL commands are slow and sp_executesql runs fast?

I suspect that unless someone can demonstrate a reproducible experiment regarding this slowdown, it may be best to put this on the backburner and document it as a known potential issue, perhaps with suggestions such as rebuilding indexes and running UPDATE STATISTICS to bring the server up to date.
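
For completeness, the maintenance meant here is just the usual commands, shown against the Company table from this thread purely as an illustration:

ALTER INDEX ALL ON [dbo].[Company] REBUILD          -- rebuild every index on the table
UPDATE STATISTICS [dbo].[Company] WITH FULLSCAN     -- refresh its statistics from a full scan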

Cheers, Jason

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 15-May-2008 02:37:00   

JSobell wrote:

What I can't quite figure out from the abundance of contradictory reports on other sites is whether this is a reproducible problem caused by the use of sp_executesql, or whether it is simply some sort of 'stale query plan' situation.

Just to add yet more confusion (and possibly some insight?), I've just re-run the queries this morning (Australian time simple_smile ) and now the sp_executesql version runs in the same time as the 'variables prefixed to query' version! The only change is that extra records were added, so I'm wondering whether sp_executesql uses something like a separate query-plan cache which sometimes contains old plans?

I hate these intermittent issues, and I really wish I'd tried a series of index rebuilds and stats updates yesterday while the problem was still present disappointed Are we going to find other situations where T-SQL commands are slow and sp_executesql runs fast?

I suspect that unless someone can demonstrate a reproducible experiment regarding this slowdown, it may be best to put this on the backburner and document it as a known potential issue, perhaps with suggestions such as rebuilding indexes and running UPDATE STATISTICS to bring the server up to date.

Cheers, Jason

From what I can tell, the plan-creation mechanism simply operates differently when presented with a text query (via QA, or via a construct such as the one I demonstrated in my later post, which uses sp_executesql but with a level of indirection on the parameters that forces it to be treated like text). I would swear that in the straightforward case it fails to notice that it can construct a partial result from the subquery and use it to filter the view once, while for whatever reason it does pick that strategy when the query is "text". I hammered on the indexes and statistics and found no change until I did what I'm doing now (indirection of the parameters). Yes, I worry about some other performance problem appearing, but this works for now on the subset of queries I needed to modify.

Hopefully SQL 2008 will avoid this misadventure and this will be a temporary solution. An interesting side note: running this against SQL 2000 does not exhibit this problem, so 2005 apparently is simply trying too hard.

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 15-May-2008 11:12:18   

I think it's because sp_executesql is meant to re-use compiled query plans, so it doesn't re-sniff parameters and can end up using a plan which is very slow with the current parameter values. A query which is sent to the db without sp_executesql makes it sniff the parameter values and then pick a plan, so it might optimize it better.

I'm not sure whether re-ordering the parameters could help (in the query). I mean: if the parameters are in a different order, the query is textually different and the optimizer might see it as a different query. Could any one of you who runs into this in a reproducible manner try to see whether re-ordering the parameters in the query TEXT makes any difference?

Frans Bouma | Lead developer LLBLGen Pro
DvK
User
Posts: 318
Joined: 22-Mar-2006
# Posted on: 15-May-2008 11:16:41   

Sounds a bit like this issue I experienced with SQL 2005....

http://www.llblgen.com/TinyForum/Messages.aspx?ThreadID=12464

I never really cleared this up....except after reorganizing the database.

grtz, Danny

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 15-May-2008 16:13:01   

DvK wrote:

Sounds a bit like this issue I experienced with SQL 2005....

http://www.llblgen.com/TinyForum/Messages.aspx?ThreadID=12464

I never really cleared this up....except after reorganizing the database.

grtz, Danny

What do you mean by reorganizing the database? Schema redesign, physical partitioning, or simply extra indexing and statistics? If there is a root cause that you think was eliminated, I would be curious to try a similar tactic.

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 15-May-2008 16:55:39   

Otis wrote:

I think it's because sp_executesql is meant to re-use compiled query plans, so it doesn't re-sniff parameters and can end up using a plan which is very slow with the current parameter values. A query which is sent to the db without sp_executesql makes it sniff the parameter values and then pick a plan, so it might optimize it better.

I'm not sure whether re-ordering the parameters could help (in the query). I mean: if the parameters are in a different order, the query is textually different and the optimizer might see it as a different query. Could any one of you who runs into this in a reproducible manner try to see whether re-ordering the parameters in the query TEXT makes any difference?

Reordering the parameters doesn't seem to affect the plan (I moved the update parameter to the end, hoping it would work with the subquery parameters first).

I thought it was the cached query plan as well, but I have cleared the query cache and even gone as far as setting up a copy of the database via a backup/restore (not an attach/detach, which would defeat the purpose) on another machine which had never executed any queries against the database. I still got the same result (as long as I run with the simple parameters first, I know I can't have a cached plan yet).

I then thought maybe that meant it didn't have good statistics yet, so I manually rebuilt the statistics and indexes, and it still ran slow. My final trick with the new copy of the database was to install all the recommended statistics and indexes and rebuild those (even though the predicted improvement was 13%, which means the existing indexes are fine in my book): it still runs slow.

Yet as soon as I use indirection on the parameters (via the declare/select trick or QA), it works wonders and returns in less than a couple of seconds. As some of my clients were still using a full version of SQL 2000 to run this program, I tested that and found no problems with either method, which leads me to believe this is a bug in 2005, introduced by some of its more aggressive attempts to optimize queries.

DvK
User
Posts: 318
Joined: 22-Mar-2006
# Posted on: 15-May-2008 17:08:54   

I meant: reorganizing and rebuilding the indexes and pages.

grtz, Danny

DvK
User
Posts: 318
Joined: 22-Mar-2006
# Posted on: 15-May-2008 17:11:54   

JohnL wrote:

Reordering the parameters doesn't seem to affect the plan (I moved the update parameter to the end, hoping it would work with the subquery parameters first).

I thought it was the cached query plan as well, but I have cleared the query cache and even gone as far as setting up a copy of the database via a backup/restore (not an attach/detach, which would defeat the purpose) on another machine which had never executed any queries against the database. I still got the same result (as long as I run with the simple parameters first, I know I can't have a cached plan yet).

I then thought maybe that meant it didn't have good statistics yet, so I manually rebuilt the statistics and indexes, and it still ran slow. My final trick with the new copy of the database was to install all the recommended statistics and indexes and rebuild those (even though the predicted improvement was 13%, which means the existing indexes are fine in my book): it still runs slow.

Yet as soon as I use indirection on the parameters (via the declare/select trick or QA), it works wonders and returns in less than a couple of seconds. As some of my clients were still using a full version of SQL 2000 to run this program, I tested that and found no problems with either method, which leads me to believe this is a bug in 2005, introduced by some of its more aggressive attempts to optimize queries.

Your comment states exactly what I was feeling about the issues I ran into. On SQL 2000 I didn't have these issues either, but I can't really put my finger on it.

grtz, Danny

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 08-Dec-2009 22:38:40   

I noticed I never reported the outcome of my work on this. As of today (2009-12-08), MS SQL 2005 and 2008 still exhibit the issue described above, so the fix described below has been in production for over a year and a half.

Step 1: Create your own subclass of the DataAccessAdapter:

public class YourDataAccessAdapter : DataAccessAdapter
{

}

Step 2: Create the utility functions RewriteQuery and ReplaceVars:

    // Builds a new command text that declares a local copy of every parameter
    // (named @xyz_param), assigns the original parameter values to those locals,
    // and then appends the original statement rewritten to use the locals.
    // Requires System.Text, System.Text.RegularExpressions and System.Data.SqlClient.
    private string RewriteQuery(IRetrievalQuery selectQuery)
    {
        StringBuilder declare = new StringBuilder();
        StringBuilder set = new StringBuilder();
        foreach (SqlParameter param in selectQuery.Parameters)
        {
            declare.AppendLine(String.Format("declare {0}_param {1}", param.ParameterName, param.SqlDbType));
            set.AppendLine(String.Format("set {0}_param = {0}", param.ParameterName));
        }
        return String.Format("{0}\r\n{1}\r\n{2}", declare, set, ReplaceVars(selectQuery.Command.CommandText));
    }

    // Renames every @parameter reference in the command text to @parameter_param.
    private string ReplaceVars(string source)
    {
        Regex reg = new Regex(@"(@\w+)");
        return reg.Replace(source, "$1_param");
    }
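
Applied to the UPDATE from earlier in this thread, the rewritten command text comes out roughly like this (abbreviated; the type names are whatever SqlDbType reports, so sized types appear without a length, which is what the follow-up below addresses for character columns):

declare @ClassifierGuid_param UniqueIdentifier
declare @BoundInsureds1_param Int
declare @HitRatio2_param Decimal
declare @IncomeRecent3_param Money

set @ClassifierGuid_param = @ClassifierGuid
set @BoundInsureds1_param = @BoundInsureds1
set @HitRatio2_param = @HitRatio2
set @IncomeRecent3_param = @IncomeRecent3

UPDATE [ucpmSQL].[dbo].[Company]
SET [ClassifierGuid] = @ClassifierGuid_param
WHERE ( ... the same criteria as before, now filtering on
        @BoundInsureds1_param, @HitRatio2_param and @IncomeRecent3_param ... )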

Step 3: Override the following methods: OnFetchEntity, OnFetchEntityCollection, OnFetchTypedList, OnFetchTypedView and OnGetScalar. In each case the override looks like:

    protected override void OnFetchEntityCollection(IRetrievalQuery selectQuery, IEntityCollection2 entityCollectionToFetch)
    {
        // Swap in the rewritten command text before the base class executes the query.
        selectQuery.Command.CommandText = RewriteQuery(selectQuery);
        base.OnFetchEntityCollection(selectQuery, entityCollectionToFetch);
    }

The one exception is OnGetScalar, where the selectQuery parameter is named scalarQuery instead.

Once this is done, you can build your query as usual; just create your derived adapter wherever you have a problem query (I did not replace all queries this way, just the problem ones).

            using (YourDataAccessAdapter adapter = new YourDataAccessAdapter() )
            {
                adapter.FetchTypedView(...);
            }

Using this technique (which simply declares and then loads local variables to force SQL Server to take a better look at what it is being passed), I have gone from 90-120 second queries to 3-5 second queries in the problem area.

I hope this helps someone else who stumbles across this glitch (which seems to happen only with very complex views and then very, very complex filters on those views).

This can easily be extended to other types of queries if they present problems: simply override and rewrite in the same way.

It is times like these that make you happy that the classes are not SEALED.

(I hope reviving this thread won't annoy anyone; I just wanted a working resolution attached to it, because someone else searched for it, found my username JohnL, and asked if it was me.)

MTrinder
User
Posts: 1461
Joined: 08-Oct-2008
# Posted on: 08-Dec-2009 23:23:35   

Thanks for updating - I'll flag it for Frans to take another look and see whether it is something that can be built in.

Cheers

Matt

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 09-Dec-2009 10:03:05   

Thanks, John, for sharing. Yes, it's a problem which sometimes rears its ugly head. Your code snippet is very small, so we'll look into adding a setting to v3.0's adapter to switch the parameter declaration.

There's one caveat: in v3 we're moving towards a DbProviderFactory approach, so your SqlClient-specific code will fail. The db-specific type will still be available through the persistence info provider, so the code will require an adjustment in v3, but if we manage to squeeze this in before RTM (no promises), you won't need to wink

We won't seal classes; we hate that concept as much as everyone else does wink .

Frans Bouma | Lead developer LLBLGen Pro
JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 09-Dec-2009 15:46:47   

A switch on the adapter would be excellent, but as long as the workaround remains simple in 3.0, I will be happy. I also hope that SQL 2010 will have some way around this as well (like not exhibiting it in the first place, through better query plan/query matching).

JohnL
User
Posts: 47
Joined: 07-Oct-2005
# Posted on: 09-Dec-2009 20:43:38   

I've noticed one condition where the code I posted fails: character columns need their length attached.


declare.Append(String.Format("declare {0}_param {1}", param.ParameterName, param.SqlDbType));
// Character types declared without a size default to a length of 1, so append the size explicitly.
if (param.SqlDbType.ToString().IndexOf("CHAR", StringComparison.CurrentCultureIgnoreCase) != -1)
      declare.Append(String.Format("({0})", param.Size)); // Add character lengths.
declare.AppendLine();

Without it, SQL Server silently assumes a length of one character, which is usually wrong.
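
A quick illustration of that default (the SET truncates silently, with no warning):

DECLARE @unsized varchar       -- no length given: treated as varchar(1)
DECLARE @sized   varchar(10)

SET @unsized = 'Company'
SET @sized   = 'Company'

SELECT @unsized AS Unsized,    -- returns 'C'
       @sized   AS Sized       -- returns 'Company'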

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 10-Dec-2009 10:17:04   

thanks for the clarification simple_smile We'll take that info into account when we're looking into generalizing this.

Frans Bouma | Lead developer LLBLGen Pro
acullen
User
Posts: 7
Joined: 24-Dec-2009
# Posted on: 24-Dec-2009 07:42:27   

I'm having a similar issue, but with updates. We have an update that had been averaging about 3 seconds suddenly jump to 30. I tried to modify this fix to include the Update and Save methods, but I get the following error:

IQuery.Command get isn't implemented in the BatchActionQuery class

Any kind soul out there have a suggestion on a workaround for database updates?
