paging for typed view does not work every time

utaq
User
Posts: 2
Joined: 01-Aug-2005
# Posted on: 01-Aug-2005 22:06:22   

hi otis

I have a couple of places where I am using paging and sorting with typed views. In some of them it works, in others it doesn't.

I am using the call below in all cases:

adapter.FetchTypedView( customerSearch.GetFieldsInfo(), customerSearch, filter, pageSize, sorter, false, null, pageNumber, pageSize );

I am reusing the IRelationPredicateBucket filter to get a count first and the paged result next.

In the cases where paging does not work, I get all results back (as if no paging is applied).

I have traced the query execution in SQL Profiler. The cases that don't work execute a plain query:

SELECT ... FROM ... WHERE ... ORDER BY

instead of the paging variant:

CREATE TABLE #TempTable ( col1, ...

INSERT INTO #TempTable ( col1 ... SELECT DISTINCT TOP xyz FROM ... WHERE ... ORDER BY ...

SELECT ... FROM #TempTable WHERE [__rowcnt] > @__rownoStart AND [__rowcnt] <= @__rownoEnd;

Any help is appreciated.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39624
Joined: 17-Aug-2003
# Posted on: 02-Aug-2005 11:26:03   

Paging for typed views (and also typed lists and dynamic lists) is only reliable if DISTINCT can be applied. The reason is that these objects are filled using a select which most likely contains a join (and with views, that's not determinable from the outside). A join can lead to duplicate rows.

The query construction code checks whether relations are specified. If none are, and no duplicates are allowed (the normal situation with paging), it emits DISTINCT, provided no DISTINCT-incompatible types (like image, ntext, text, or other blob types) appear in the select list. If such types are present in the select list, client-side distinct filtering has to be applied if no duplicates are allowed.

When a duplicate row is present in the resultset, it eats up a row on a page. So if you have 10 duplicate rows at the start of your resultset, and a page of size 10, you get 10 duplicates back, which you don't want.

If DISTINCT can't be applied, the code that produces the SQL switches to client-side distinct filtering. This works fine for entities (duplicates can be filtered out as the rows are processed by the runtime libraries) but not for typed lists, dynamic lists, and typed views, as these are filled with an ADO.NET data adapter.

When DISTINCT isn't usable, the client-side paging logic has to be used, as the SQL-producing code doesn't know whether the query is for a typed list, a typed view, an entity collection, or a single entity. The client-side paging logic simply uses an overload of DataAdapter.Fill() which accepts a start row and an amount; it skips n rows before it reads in the rows necessary for the page.

This does mean the page can contain duplicate rows.
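To make the mechanism concrete, here is a sketch of that Fill-based client-side page read in plain ADO.NET. The table, query, and connection string are invented for illustration; this is not LLBLGen Pro's actual internal code, just the ADO.NET pattern it relies on.

```csharp
using System.Data;
using System.Data.SqlClient;

class ClientSidePagingSketch
{
    static void Main()
    {
        int pageNumber = 3;
        int pageSize = 10;

        using(SqlConnection connection = new SqlConnection("...connection string..."))
        {
            // Hypothetical table and query, for illustration only.
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM Customer ORDER BY CustomerID",
                connection);
            DataSet result = new DataSet();

            // Fill(dataSet, startRecord, maxRecords, srcTable): the full query is
            // executed on the server, but the adapter skips
            // (pageNumber - 1) * pageSize rows while reading from the open data
            // reader, then reads one page worth of rows. The skipped rows still
            // travel over the wire, which is why this is "client-side" paging.
            adapter.Fill(result, (pageNumber - 1) * pageSize, pageSize, "Customer");
        }
    }
}
```

Because the skipping happens while reading, duplicate rows in the resultset are not removed; they simply occupy slots on the page, which is the behavior described above.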

Is this the behavior you're seeing, or is there no paging at all and also no duplicate rows? If the latter, please give me an example of such a view and the types of its columns so I can try to reproduce it here.

Frans Bouma | Lead developer LLBLGen Pro
utaq
User
Posts: 2
Joined: 01-Aug-2005
# Posted on: 02-Aug-2005 15:58:12   

That's it.

I have columns of type ntext in all the views.

Thanks for the help.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39624
Joined: 17-Aug-2003
# Posted on: 02-Aug-2005 16:47:28   

utaq wrote:

That's it.

I have columns of type ntext in all the views. Thanks for the help.

Still, if there are no duplicate rows, it should show the correct data. Is that the case?

Frans Bouma | Lead developer LLBLGen Pro
Marcus avatar
Marcus
User
Posts: 747
Joined: 23-Apr-2004
# Posted on: 03-Dec-2005 13:00:08   

Frans,

Under what other circumstances is client-side paging performed?

We are using:

adapter.FetchEntityCollection(collection, bucket, pageSize, sorter, prefetchPath, pageNumber, pageSize);

with pageSize = 20 and pageNumber = 2, and we are seeing client-side paging being applied rather than server-side paging.

The entities being fetched are full entities including the primary key, so no duplicate objects are even possible. The bucket does include a relation for filtering purposes, however, and the entity contains an XML field.

Is this a bug?

Marcus

Here is the generated query:

exec sp_executesql N'SELECT [ModernArk].[dbo].[Ark_Item].[ItemID], [ModernArk].[dbo].[Ark_Item].[ItemUID], [ModernArk].[dbo].[Ark_Item].[FileUID], 
[ModernArk].[dbo].[Ark_Item].[FSServiceUID], [ModernArk].[dbo].[Ark_Item].[FSSubSystemID], [ModernArk].[dbo].[Ark_Item].[OwnerAccountID], 
[ModernArk].[dbo].[Ark_Item].[ItemTypeID], [ModernArk].[dbo].[Ark_Item].[DefaultLicenseID], [ModernArk].[dbo].[Ark_Item].[ItemName], 
[ModernArk].[dbo].[Ark_Item].[Description],
[ModernArk].[dbo].[Ark_Item].[MetadataXml]
 FROM ( [ModernArk].[dbo].[Ark_Item]  INNER JOIN [ModernArk].[dbo].[Ark_ItemPermission]  ON  
[ModernArk].[dbo].[Ark_Item].[ItemID]=[ModernArk].[dbo].[Ark_ItemPermission].[ItemID]) WHERE ( [ModernArk].[dbo].[Ark_Item].[OwnerAccountID] = @OwnerAccountID1 AND 
[ModernArk].[dbo].[Ark_ItemPermission].[GroupID] IN (@GroupID2))',N'@OwnerAccountID1 int,@GroupID2 int',@OwnerAccountID1=65,@GroupID2=74
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39624
Joined: 17-Aug-2003
# Posted on: 03-Dec-2005 15:07:43   

When DISTINCT can't be applied to the query and there is a join, client-side paging is used, because the join can produce duplicates and paging would then give incorrect results.

Frans Bouma | Lead developer LLBLGen Pro
Marcus avatar
Marcus
User
Posts: 747
Joined: 23-Apr-2004
# Posted on: 03-Dec-2005 15:41:58   

Otis wrote:

When DISTINCT can't be applied to the query and there is a join, client-side paging is used, because the join can produce duplicates and paging would then give incorrect results.

Yes, I can understand that... but if you know the join will not contain any duplicates because of the filter being applied, you don't have a way of specifying "allowDuplicates" for the FetchEntityCollection method. I can also see the reasoning why this has been omitted.

I use the FetchEntityCollection method because I can attach a lot of prefetch paths, which I cannot do with a TypedView or TypedList:


IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.ItemEntity);
prefetchPath.Add(ItemEntity.PrefetchPathTagCollection);
prefetchPath.Add(ItemEntity.PrefetchPathItemCommentCollection)
    .SubPath.Add(ItemCommentEntity.PrefetchPathCreatedByAccountUser)
    .SubPath.Add(AccountUserEntity.PrefetchPathAccountSummary);
prefetchPath.Add(ItemEntity.PrefetchPathDefaultLicense);
prefetchPath.Add(ItemEntity.PrefetchPathLicenseCollection);

The tables concerned have millions of rows, and although the average filter limits this to 50K or so, we cannot bring all those rows back to the middle tier, which is why we employ paging.

As far as I can tell, the only way around this would be to subclass DataAccessAdapter and provide a new FetchEntityCollection overload which takes the paging parameters as well as an allowDuplicates flag. The following code would then need to be altered to pass that flag into the CreateSelectDQ method instead of the hardcoded true / false:

if(relationsPresent)
{
    selectQuery = CreateSelectDQ(fieldsForQuery, persistenceInfo, predicateExpressionToUse,
        maxNumberOfItemsToReturn, sortClauses, filterBucketToUse.Relations, false, null,
        pageNumber, pageSize);
}
else
{
    selectQuery = CreateSelectDQ(fieldsForQuery, persistenceInfo, predicateExpressionToUse,
        maxNumberOfItemsToReturn, sortClauses, null, true, null, pageNumber, pageSize);
}

Is this something you would consider appropriate to include in the product, or shall I go ahead and implement it myself?

Marcus

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39624
Joined: 17-Aug-2003
# Posted on: 04-Dec-2005 12:22:44   

You will never get duplicate objects, as those are filtered out in the entity fetch routine.

But even so, as soon as the resultset contains duplicate rows, the paging is wrong. There is a small chance a join won't return dupes: if the relations are solely inner joins of the form m:1 or 1(fk):1(pk), or, as you said, if the filter doesn't return dupes.

If you still want server-side paging no matter what, you can override CreatePagingSelectDQ and always produce a paging query, even when the query is set to use client-side paging. You should be able to do that as soon as I make ManglePageSelectDQ protected virtual, which should be in the next build.

Frans Bouma | Lead developer LLBLGen Pro
Marcus avatar
Marcus
User
Posts: 747
Joined: 23-Apr-2004
# Posted on: 06-Dec-2005 14:07:02   

Otis wrote:

If you still want server-side paging no matter what, you can override CreatePagingSelectDQ and always produce a paging query, even when the query is set to use client-side paging. You should be able to do that as soon as I make ManglePageSelectDQ protected virtual, which should be in the next build.

I'm not convinced this is the best way to go... This way seems more of a hack than the method above, which I think is more like an extension...

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39624
Joined: 17-Aug-2003
# Posted on: 06-Dec-2005 15:23:17   

Marcus wrote:

Otis wrote:

If you still want server-side paging no matter what, you can override CreatePagingSelectDQ and always produce a paging query, even when the query is set to use client-side paging. You should be able to do that as soon as I make ManglePageSelectDQ protected virtual, which should be in the next build.

I'm not convinced this is the best way to go... This way seems more of a hack than the method above, which I think is more like an extension...

The problem is that this always has to work, and I'm not convinced it always will. For example: will it always give 100% correct results? I don't know. You rely on the fact that the filter won't produce dupes, but what if it does in the future?

Btw, client-side paging isn't that harmful per se: it reads through the open cursor till it has read the page. Normally this doesn't matter much, because it's unlikely you want to load page 100,001 out of 3 million.

Frans Bouma | Lead developer LLBLGen Pro
arschr
User
Posts: 893
Joined: 14-Dec-2003
# Posted on: 06-Dec-2005 15:46:07   

Otis wrote:

Normally this doesn't matter much, because it's unlikely you want to load page 100,001 out of 3 million.

Are there optimizations possible for loading the last page? It is very common for people to want to see the first few pages and the last page.

Marcus avatar
Marcus
User
Posts: 747
Joined: 23-Apr-2004
# Posted on: 06-Dec-2005 18:08:36   

arschr wrote:

Are there optimizations possible for loading the last page? It is very common for people to want to see the first few pages and the last page.

Not in the current release... but if you are running SQL Server 2005, I can post my modifications, which make use of the new ROW_NUMBER function and give much better performance. You'll have to wait, however, as they're not implemented yet. I hadn't thought of making an optimisation for the last page...

We display pages like 1, 2, 3, 4, 5, ... 879, 880, so users will often hit the last and second-last pages. I'm sure there would be an optimisation to be had here (especially if you already know the total row count) by using a descending ORDER BY clause.
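For reference, the ROW_NUMBER style of server-side paging, and the descending-order idea for the last pages, look roughly like this on SQL Server 2005. This is a sketch, not Marcus's actual modifications; the table and column names are invented for illustration.

```sql
-- Server-side paging with ROW_NUMBER (SQL Server 2005 and later).
-- "Item" / "ItemName" are placeholder names.
SELECT ItemID, ItemName
FROM ( SELECT ItemID, ItemName,
              ROW_NUMBER() OVER (ORDER BY ItemName ASC) AS __rowno
       FROM Item ) AS numbered
WHERE __rowno >  (@pageNumber - 1) * @pageSize
  AND __rowno <= @pageNumber * @pageSize;

-- Last-page idea: number the rows in descending order so the final page
-- becomes the first page of the reversed set, then re-sort ascending.
-- Note: if the total row count isn't a multiple of the page size, the
-- boundary needs adjusting with the known total row count.
SELECT ItemID, ItemName
FROM ( SELECT ItemID, ItemName,
              ROW_NUMBER() OVER (ORDER BY ItemName DESC) AS __rowno
       FROM Item ) AS numbered
WHERE __rowno <= @pageSize
ORDER BY ItemName ASC;
```

The descending variant only has to materialise one page's worth of rows instead of scanning up to the last page, which is the optimisation hinted at above.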