Paging + prefetch path
Joined: 23-Apr-2004
Otis wrote:
The extra method is appealing, though it also fragments the logic, which might not be a good choice (which is also one of the reasons I sometimes keep methods together instead of splitting them up into a lot of small methods).
I see and like your logic, but my friends in the XP camp will want justice for such a comment. They would be shouting "Refactor! Refactor!" like a stuck record.
Otis wrote:
In the prefetch path code, there are 2 parts which are candidates for replacement with own code: 1) filter construction for collection fetch 2) merge routines after the fetch has taken place.
If I expose the logic in a way that you can override these 2 parts, it plugs into the existing prefetch framework logic and requires less code, I think
Do you see any pitfalls with that? (As you have already written part 1, it might require extra info.)
Yes, the separation you mention is correct, and it's the reason we need access to the merge routines: after we build the filter and execute the query, we can call back into the merge methods.
The problem is that my new filter construction requires access to AliasStartEntity and AliasEndEntity... when it creates its FilterRelations. This is why I asked for the additional method extraction after you were reluctant to change their internal access.
Looking at it again... I think the original FetchPrefetchPath method could be broken down further to allow its constituent parts to be used separately in a more granular way than it is today...
In my view there are 4 parts to this logic. 1) Build the WHERE filter 2) Add the filters relations 3) Fetch the collection 4) Call Merge and then recurse...
The fetch-collection and merge logic is already "extracted", which leaves 1) and 2) coupled in the same method (at the moment) and is the reason I suggested extracting the add-filter-relations method... If we wanted to keep everything consistent, it might be an idea to extract both 1) and 2) into separate overridable methods. I think this would actually be a neater solution all round...
Here is my current DataAccessAdapterEx class so that you can see where I'm coming from (BTW - I have already changed the relevant accessibilities on the DataAccessAdapterBase class).
/// <summary>
/// Extends DataAccessAdapter with a parameterised prefetch path fetch, used when the number of PK parameters stays below a configurable threshold.
/// </summary>
public class DataAccessAdapterEx : DataAccessAdapter
{
#region Constructors
/// <summary>
/// CTor
/// </summary>
public DataAccessAdapterEx() : base()
{}
/// <summary>
/// CTor
/// </summary>
/// <param name="comPlusContextHost">the COM+ context host for this adapter instance.</param>
/// <remarks>do not call this from your code</remarks>
public DataAccessAdapterEx(IComPlusAdapterContext comPlusContextHost) : base(comPlusContextHost)
{}
/// <summary>
/// CTor
/// </summary>
/// <param name="comPlusContextHost">the COM+ context host for this adapter instance.</param>
/// <param name="connectionString">The connection string to use when connecting to the database.</param>
/// <remarks>do not call this from your code</remarks>
public DataAccessAdapterEx(IComPlusAdapterContext comPlusContextHost, string connectionString) : base(comPlusContextHost, connectionString)
{}
/// <summary>
/// CTor
/// </summary>
/// <param name="keepConnectionOpen">when true, the DataAccessAdapter will not close an opened connection. Use this for multi action usage.</param>
public DataAccessAdapterEx(bool keepConnectionOpen) : base(keepConnectionOpen)
{}
/// <summary>
/// CTor
/// </summary>
/// <param name="connectionString">The connection string to use when connecting to the database.</param>
public DataAccessAdapterEx(string connectionString) : base(connectionString)
{}
/// <summary>
/// CTor
/// </summary>
/// <param name="connectionString">The connection string to use when connecting to the database.</param>
/// <param name="keepConnectionOpen">when true, the DataAccessAdapter will not close an opened connection. Use this for multi action usage.</param>
public DataAccessAdapterEx(string connectionString, bool keepConnectionOpen) : base(connectionString, keepConnectionOpen)
{}
/// <summary>
/// CTor. SqlServer specific
/// </summary>
/// <param name="connectionString">The connection string to use when connecting to the database.</param>
/// <param name="keepConnectionOpen">when true, the DataAccessAdapter will not close an opened connection. Use this for multi action usage.</param>
/// <param name="catalogNameUsageSetting">SqlServer specific. Configures how this data access adapter object treats catalog names in
/// persistence information.</param>
/// <param name="catalogNameToUse">SqlServer specific. The name to use if catalogNameUsageSetting is set to ForceName. Ignored otherwise.</param>
public DataAccessAdapterEx(string connectionString, bool keepConnectionOpen, CatalogNameUsage catalogNameUsageSetting, string catalogNameToUse) : base(connectionString, keepConnectionOpen, catalogNameUsageSetting, catalogNameToUse)
{}
#endregion
#region Properties
private int _parameterisedPrefetchPathThreshold = 100;
/// <summary>
/// Gets/sets the threshold: when the total number of PK parameters for the root entities is at or below
/// this value, the parameterised prefetch path fetch is used instead of the subquery-based fetch.
/// </summary>
public int ParameterisedPrefetchPathThreshold { get { return _parameterisedPrefetchPathThreshold; } set { _parameterisedPrefetchPathThreshold = value; } }
#endregion
#region Paging + Prefetch Overrides
protected override void FetchPrefetchPath(
IEntityCollection2 rootEntities,
IRelationPredicateBucket filterBucket,
long maxNumberOfItemsToReturn,
ISortExpression sortClauses,
IPrefetchPath2 prefetchPath)
{
if (rootEntities.Count <= 0)
return;
int pkCount = rootEntities[0].Fields.PrimaryKeyFields.Count;
int setCount = rootEntities.Count;
if ((setCount * pkCount) <= ParameterisedPrefetchPathThreshold)
FetchParameterisedPrefetchPath(rootEntities, maxNumberOfItemsToReturn, prefetchPath);
else
base.FetchPrefetchPath(rootEntities, filterBucket, maxNumberOfItemsToReturn, sortClauses, prefetchPath);
}
#endregion
#region Public Methods
public virtual void FetchEntityCollection(IEntityCollection2 collectionToFill, IRelationPredicateBucket filterBucket, int maxNumberOfItemsToReturn,
ISortExpression sortClauses, IPrefetchPath2 prefetchPath, int pageNumber, int pageSize)
{
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceInfo, "DataAccessAdapterBase.FetchEntityCollection(5)", "Method Enter");
// first fetch the entity collection.
FetchEntityCollection(collectionToFill, filterBucket, maxNumberOfItemsToReturn, sortClauses, pageNumber, pageSize);
if (prefetchPath != null)
{
// then do the prefetch actions of the path specified.
FetchParameterisedPrefetchPath(collectionToFill, maxNumberOfItemsToReturn, prefetchPath);
}
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceInfo, "DataAccessAdapterBase.FetchEntityCollection(5)", "Method Exit");
}
#endregion
#region Private Methods
private void FetchParameterisedPrefetchPath(IEntityCollection2 rootEntities, long maxNumberOfItemsToReturn, IPrefetchPath2 prefetchPath)
{
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceVerbose, "DataAccessAdapterBase.FetchPrefetchPathFast", "Method Enter");
if (rootEntities.Count <= 0)
{
// no root objects, nothing to do
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceVerbose, "DataAccessAdapterBase.FetchPrefetchPathFast: No root objects.", "Method Exit");
return;
}
if (prefetchPath == null)
{
// no prefetch path specified
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceVerbose, "DataAccessAdapterBase.FetchPrefetchPathFast: No prefetch path.", "Method Exit");
return;
}
for (int i = 0; i < prefetchPath.Count; i++)
{
bool rootEntitiesArePkSide = false;
IPrefetchPathElement2 currentElement = prefetchPath[i];
IRelationPredicateBucket elementFilter = new RelationPredicateBucket();
if (currentElement.TypeOfRelation == RelationType.ManyToMany)
{
// Construct the filter based on the given relation, which is the relation between the start entity and the intermediate entity.
// Start entity in the relation is always the parent entity for this fetch.
for (int m = 0; m < rootEntities.Count; m++)
{
IPredicateExpression filter = new PredicateExpression();
for (int j = 0; j < currentElement.Relation.AmountFields; j++)
{
IEntityFieldCore pkField = rootEntities[m].Fields[currentElement.Relation.GetPKEntityFieldCore(j).Name];
// start entity is always the PK side.
rootEntitiesArePkSide = true;
filter.Add(new FieldCompareValuePredicate(
currentElement.Relation.GetFKEntityFieldCore(j),
null,
ComparisonOperator.Equal,
pkField.CurrentValue,
((EntityRelation)currentElement.Relation).AliasFKSide));
}
elementFilter.PredicateExpression.AddWithOr(filter);
}
} else
{
for (int m = 0; m < rootEntities.Count; m++)
{
IPredicateExpression filter = new PredicateExpression();
// construct the filter based on the given relation. Start entity in the relation is always the parent entity for this fetch.
for (int j = 0; j < currentElement.Relation.AmountFields; j++)
{
if (currentElement.Relation.StartEntityIsPkSide)
{
rootEntitiesArePkSide = true;
IEntityFieldCore pkField = rootEntities[m].Fields[currentElement.Relation.GetPKEntityFieldCore(j).Name];
filter.Add(new FieldCompareValuePredicate(
currentElement.Relation.GetFKEntityFieldCore(j),
null,
ComparisonOperator.Equal,
pkField.CurrentValue,
((EntityRelation)currentElement.Relation).AliasFKSide));
} else
{
IEntityFieldCore fkField = rootEntities[m].Fields[currentElement.Relation.GetFKEntityFieldCore(j).Name];
// root objects are FK side, so the subquery will contain the FK fields, and the entities to fetch the PK fields
filter.Add(new FieldCompareValuePredicate(
currentElement.Relation.GetPKEntityFieldCore(j),
null,
ComparisonOperator.Equal,
fkField.CurrentValue,
((EntityRelation)currentElement.Relation).AliasPKSide));
}
}
elementFilter.PredicateExpression.AddWithOr(filter);
}
}
// append extra relations and filters, if applicable.
if (currentElement.FilterRelations != null)
{
for (int j = 0; j < currentElement.FilterRelations.Count; j++)
{
EntityRelation currentRelation = (EntityRelation)currentElement.FilterRelations[j];
elementFilter.Relations.Add(currentRelation, currentRelation.AliasStartEntity, currentRelation.AliasEndEntity, currentRelation.HintForJoins);
}
}
if (currentElement.Filter != null)
{
if (currentElement.Filter.Count > 0)
elementFilter.PredicateExpression.AddWithAnd(currentElement.Filter);
}
if (rootEntities.ActiveContext != null)
currentElement.RetrievalCollection.ActiveContext = rootEntities.ActiveContext;
// execute the fetch.
FetchEntityCollection(currentElement.RetrievalCollection, elementFilter, 0, currentElement.Sorter, 0, 0);
/////////////
// merge the entities fetched with the entities in the rootEntities collection.
/////////////
if (currentElement.TypeOfRelation == RelationType.ManyToMany)
MergeManyToMany(currentElement, elementFilter, maxNumberOfItemsToReturn, rootEntities);
else
MergeNormal(rootEntities, currentElement, rootEntitiesArePkSide);
if (currentElement.SubPath.Count > 0)
{
FetchPrefetchPath(currentElement.RetrievalCollection, elementFilter, currentElement.MaxAmountOfItemsToReturn, currentElement.Sorter,
currentElement.SubPath);
}
// clear the retrieval collection, so the path can be re-used.
((EntityCollectionBase2)currentElement.RetrievalCollection).Clear();
}
//Trace.WriteLineIf(TraceHelper.PersistenceExecutionSwitch.TraceVerbose, "DataAccessAdapterBase.FetchPrefetchPathFast", "Method Exit");
}
#endregion
}
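To make the intended call pattern concrete, here's a hypothetical usage of the paging + prefetch overload above. The Northwind-style entity, factory and prefetch path names are illustrative only (they're generated-code names in a typical LLBLGen project, not part of this post):

```csharp
// Hypothetical client code; CustomerEntityFactory, EntityType.CustomerEntity and
// CustomerEntity.PrefetchPathOrders are assumed generated names for your own model.
using (DataAccessAdapterEx adapter = new DataAccessAdapterEx())
{
    adapter.ParameterisedPrefetchPathThreshold = 100;

    EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
    IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
    path.Add(CustomerEntity.PrefetchPathOrders);

    // Page 2, 25 rows per page; the orders of each fetched customer are prefetched
    // via the parameterised path when (25 * pkCount) <= ParameterisedPrefetchPathThreshold.
    adapter.FetchEntityCollection(customers, null, 0, null, path, 2, 25);
}
```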
BTW Frans... I know I've said this before but the above paging + prefetch works really well and you are very welcome to stick it into the base class if you like. I've got an IPR transfer letter which I use for contractors or people who "give" me code for my products so that they can't come back in a couple of years and say "Hey... that's my code, I want half your revenues". I'll send you the letter if you like...
But seriously, the above is working very well and performing GREAT. The ability to set the threshold gives you full backward compatibility, and obviously this value should be read from a config file...
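As a sketch of that config idea (the appSettings key name here is my own invention; ConfigurationSettings.AppSettings is the .NET 1.x API of the era):

```csharp
// Hypothetical: pick the threshold up from app.config / web.config, e.g.
//   <appSettings>
//     <add key="parameterisedPrefetchPathThreshold" value="100" />
//   </appSettings>
using System;
using System.Configuration;

public class AdapterFactory
{
    public static DataAccessAdapterEx Create()
    {
        DataAccessAdapterEx adapter = new DataAccessAdapterEx();
        string setting = ConfigurationSettings.AppSettings["parameterisedPrefetchPathThreshold"];
        if (setting != null)
        {
            // when the key is absent, the compiled-in default (100) stays in effect
            adapter.ParameterisedPrefetchPathThreshold = Int32.Parse(setting);
        }
        return adapter;
    }
}
```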
Joined: 17-Aug-2003
Marcus wrote:
Otis wrote:
The extra method is appealing, though it also fragments the logic, which might not be a good choice (which is also one of the reasons I sometimes keep methods together instead of splitting them up into a lot of small methods).
I see and like your logic, but my friends in the XP camp will want justice for such a comment. They would be shouting "Refactor! Refactor!" like a stuck record.
haha
Otis wrote:
In the prefetch path code, there are 2 parts which are candidates for replacement with own code: 1) filter construction for collection fetch 2) merge routines after the fetch has taken place.
If I expose the logic in a way that you can override these 2 parts, it plugs into the existing prefetch framework logic and requires less code, I think
Do you see any pitfalls with that? (As you have already written part 1, it might require extra info.)
Yes, the separation you mention is correct, and it's the reason we need access to the merge routines: after we build the filter and execute the query, we can call back into the merge methods.
The problem is that my new filter construction requires access to AliasStartEntity and AliasEndEntity... when it creates its FilterRelations. This is why I asked for the additional method extraction after you were reluctant to change their internal access.
Looking at it again... I think the original FetchPrefetchPath method could be broken down further to allow its constituent parts to be used separately in a more granular way than it is today...
In my view there are 4 parts to this logic. 1) Build the WHERE filter 2) Add the filters relations 3) Fetch the collection 4) Call Merge and then recurse...
The fetch-collection and merge logic is already "extracted", which leaves 1) and 2) coupled in the same method (at the moment) and is the reason I suggested extracting the add-filter-relations method... If we wanted to keep everything consistent, it might be an idea to extract both 1) and 2) into separate overridable methods. I think this would actually be a neater solution all round...
Good suggestion. I think it then makes it extensible, in an easy way, for everyone who wants to extend it (not too much code required).
I'll change this in 1.0.2005.1. At the moment I'm still developing inheritance runtime/template code, so it will be later in August when I'll dig into this.
Here is my current DataAccessAdapterEx class so that you can see where I'm coming from (BTW - I have already changed the relevant accessibilities on the DataAccessAdapterBase class).
Code looks good! (I have to compound the if statements, but that's minor.) I'll include this in 1.0.2005.1; it's a good tweak, I think. As said, I'm currently not busy with this, so I'll touch it later in August.
Marcus wrote:
BTW Frans... I know I've said this before but the above paging + prefetch works really well and you are very welcome to stick it into the base class if you like. I've got an IPR transfer letter which I use for contractors or people who "give" me code for my products so that they can't come back in a couple of years and say "Hey... that's my code, I want half your revenues". I'll send you the letter if you like...
But seriously, the above is working very well and performing GREAT. The ability to set the threshold gives you full backward compatibility, and obviously this value should be read from a config file...
The threshold setting is required, I think, to be able to tweak performance, and it's a great way to offer a developer a way to tune what's going on. The threshold shouldn't be bigger than 500, I think, or maybe 1000 (1000 is the limit on Oracle), and I think it's not efficient any more to go beyond 100 or so (that's not measured, but I think the more parameters you use, the slower the query will become).
If you could send me the letter that would be great, thanks. The address is on the website: http://www.llblgen.com/pages/about.aspx
Otis wrote:
The threshold setting is required, I think, to be able to tweak performance, and it's a great way to offer a developer a way to tune what's going on. The threshold shouldn't be bigger than 500, I think, or maybe 1000 (1000 is the limit on Oracle), and I think it's not efficient any more to go beyond 100 or so (that's not measured, but I think the more parameters you use, the slower the query will become).
I did some "very unscientific tests" and found that for GUIDs a threshold of > 200 performed worse on SQL Server than a single JOIN, and > 350 performed worse for an m:n JOIN. Of course this will depend greatly on the size of the tables involved, the available indexes, I/O speed vs CPU speed on the target machine, etc. In my tests I had fully optimised joins. I expect much better performance for int parameters.
But all in all, this tweak has eliminated 95% of the JOINs in my application's prefetch queries with a threshold value as low as 100.
[EDIT] I just thought it would actually be VERY easy to write a small calibration application to find an ideal threshold for a given system. The application would simply create a database with 3 tables + indexes etc., fill them with records, and run timed prefetch path tests with increasing threshold values until an optimal threshold value is found. I know this won't be definitively accurate, but it will certainly be a good "guide" for a given machine.
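The calibration idea could be sketched like this; everything here is hypothetical (RunPrefetchTest stands in for the timed fetch against the generated test database):

```csharp
// Hypothetical calibration loop: time the same prefetch-path fetch at
// increasing threshold values and report the fastest one for this machine.
using System;

public class ThresholdCalibrator
{
    public static int FindOptimalThreshold(DataAccessAdapterEx adapter)
    {
        int bestThreshold = 0;
        TimeSpan bestTime = TimeSpan.MaxValue;
        for (int threshold = 25; threshold <= 1000; threshold += 25)
        {
            adapter.ParameterisedPrefetchPathThreshold = threshold;
            DateTime start = DateTime.Now;
            RunPrefetchTest(adapter);   // hypothetical: executes the timed fetch
            TimeSpan elapsed = DateTime.Now - start;
            if (elapsed < bestTime)
            {
                bestTime = elapsed;
                bestThreshold = threshold;
            }
        }
        return bestThreshold;
    }

    private static void RunPrefetchTest(DataAccessAdapterEx adapter)
    {
        // placeholder: fetch a representative collection with a prefetch path here.
    }
}
```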
Otis wrote:
If you could send me the letter that would be great, thanks. The address is on the website: http://www.llblgen.com/pages/about.aspx
Done.
Marcus wrote:
Otis wrote:
The threshold setting is required, I think, to be able to tweak performance, and it's a great way to offer a developer a way to tune what's going on. The threshold shouldn't be bigger than 500, I think, or maybe 1000 (1000 is the limit on Oracle), and I think it's not efficient any more to go beyond 100 or so (that's not measured, but I think the more parameters you use, the slower the query will become).
I did some "very unscientific tests" and found that for GUIDs a threshold of > 200 performed worse on SQL Server than a single JOIN, and > 350 performed worse for an m:n JOIN. Of course this will depend greatly on the size of the tables involved, the available indexes, I/O speed vs CPU speed on the target machine, etc. In my tests I had fully optimised joins. I expect much better performance for int parameters.
But all in all, this tweak has eliminated 95% of the JOINs in my application's prefetch queries with a threshold value as low as 100.
Well, with a tweak to be able to tune performance, people will be happy, I think, as more control is always better. The threshold also optimizes m:1 path fetches, I think (though these will be optimized further in 1.0.2005.1 to eliminate the unnecessary subquery).
Otis wrote:
If you could send me the letter that would be great, thanks. The address is on the website: http://www.llblgen.com/pages/about.aspx
Done.
Thanks, I received it. I'll check it out later this morning and get back to you.
Currently building this in. I'll let you know how it goes.
Edit: why not a FieldCompareRangePredicate? This is IMHO faster than a set of field = var1 OR field = var2 OR field = var3, etc.
edit2: scratch that, that won't work with compound PKs.
edit3: OK! Works like a charm. I added an optimization to the routine: in the case of a 1-field PK/FK relation, it opts for a field IN (value, value, value...) construct instead of a field = value1 OR field = value2 etc. construct.
I also optimized the code a bit so it uses precalculated indexes to find fields, etc.
Otis wrote:
Currently building this in. I'll let you know how it goes.
Cool!
Otis wrote:
Edit: why not a FieldCompareRangePredicate? This is IMHO faster than a set of field = var1 OR field = var2 OR field = var3, etc.
edit2: scratch that, that won't work with compound PKs.
Otis wrote:
Edit: why not a FieldCompareRangePredicate? This is IMHO faster than a set of field = var1 OR field = var2 OR field = var3, etc.
Come to think of it... this was one of the optimisations I was planning:
if (currentElement.Relation.AmountFields == 1), use a FieldCompareRangePredicate; else use field = var1 OR field = var2 OR field = var3, etc.
Marcus
Marcus wrote:
Otis wrote:
Edit: why not a FieldCompareRangePredicate? This is IMHO faster than a set of field = var1 OR field = var2 OR field = var3, etc.
Come to think of it... this was one of the optimisations I was planning:
if (currentElement.Relation.AmountFields == 1), use a FieldCompareRangePredicate; else use field = var1 OR field = var2 OR field = var3, etc.
Marcus
Here's the snippet:
if ((currentElement.Relation.AmountFields == 1) && (rootEntities.Count > 1))
{
// optimization: use a CompareRange predicate instead. We select this route only when there's more than
// one root entity, as a WHERE field IN (value) is very slow on SqlServer compared to WHERE field = value.
ArrayList values = new ArrayList(rootEntities.Count);
int fieldIndex = 0;
string aliasToUse = string.Empty;
IEntityFieldCore fieldToUse = null;
if (currentElement.Relation.StartEntityIsPkSide)
{
rootEntitiesArePkSide = true;
// root entities are the PK side, so read the values from their PK field (mirrors the non-range branch below).
fieldIndex = rootEntities[0].Fields[currentElement.Relation.GetPKEntityFieldCore(0).Name].FieldIndex;
aliasToUse = ((EntityRelation)currentElement.Relation).AliasFKSide;
fieldToUse = currentElement.Relation.GetFKEntityFieldCore(0);
}
else
{
// root entities are the FK side, so read the values from their FK field (mirrors the non-range branch below).
fieldIndex = rootEntities[0].Fields[currentElement.Relation.GetFKEntityFieldCore(0).Name].FieldIndex;
aliasToUse = ((EntityRelation)currentElement.Relation).AliasPKSide;
fieldToUse = currentElement.Relation.GetPKEntityFieldCore(0);
}
for (int m = 0; m < rootEntities.Count; m++)
{
// construct the filter based on the given relation. Start entity in the relation is always the parent entity for this fetch.
values.Add(rootEntities[m].Fields[fieldIndex].CurrentValue);
}
elementFilter.PredicateExpression.Add(new FieldCompareRangePredicate(
fieldToUse, null, aliasToUse, values));
}
else
{
for (int m = 0; m < rootEntities.Count; m++)
{
IPredicateExpression filter = new PredicateExpression();
// construct the filter based on the given relation. Start entity in the relation is always the parent entity for this fetch.
for (int j = 0; j < currentElement.Relation.AmountFields; j++)
{
if (currentElement.Relation.StartEntityIsPkSide)
{
rootEntitiesArePkSide = true;
IEntityFieldCore pkField = rootEntities[m].Fields[currentElement.Relation.GetPKEntityFieldCore(j).Name];
filter.Add(new FieldCompareValuePredicate(
currentElement.Relation.GetFKEntityFieldCore(j),
null,
ComparisonOperator.Equal,
pkField.CurrentValue,
((EntityRelation)currentElement.Relation).AliasFKSide));
}
else
{
IEntityFieldCore fkField = rootEntities[m].Fields[currentElement.Relation.GetFKEntityFieldCore(j).Name];
// root objects are FK side, so the subquery will contain the FK fields, and the entities to fetch the PK fields
filter.Add(new FieldCompareValuePredicate(
currentElement.Relation.GetPKEntityFieldCore(j),
null,
ComparisonOperator.Equal,
fkField.CurrentValue,
((EntityRelation)currentElement.Relation).AliasPKSide));
}
}
elementFilter.PredicateExpression.AddWithOr(filter);
}
}
which is the branch for relations other than m:n. I explicitly test for rootEntities.Count > 1, as it would otherwise result in a field IN (value) query, which is very unoptimized on SqlServer (try it, it's very slow).
Otis wrote:
[the code snippet above]
Looks good...
Otis wrote:
which is the branch for relations other than m:n. I explicitly test for rootEntities.Count > 1, as it would otherwise result in a field IN (value) query, which is very unoptimized on SqlServer (try it, it's very slow).
You would think that the SQL Server optimizer would catch "field IN (value)" and replace it with "field = value"...
Just as a matter of interest: are you using this new method for normal prefetches as well as for prefetches in combination with paging...? I use it for both, by overriding the FetchPrefetchPath method:
protected override void FetchPrefetchPath(
IEntityCollection2 rootEntities,
IRelationPredicateBucket filterBucket,
long maxNumberOfItemsToReturn,
ISortExpression sortClauses,
IPrefetchPath2 prefetchPath)
{
if (rootEntities.Count <= 0)
return;
int pkCount = rootEntities[0].Fields.PrimaryKeyFields.Count;
int setCount = rootEntities.Count;
if ((setCount * pkCount) <= ParameterisedPrefetchPathThreshold)
FetchParameterisedPrefetchPath(rootEntities, maxNumberOfItemsToReturn, prefetchPath);
else
base.FetchPrefetchPath(rootEntities, filterBucket, maxNumberOfItemsToReturn, sortClauses, prefetchPath);
}
Marcus wrote:
Just as a matter of interest: Are you using this new method for normal prefetches as well as prefetches in combination with paging...? I use it for both when I overrode the FetchPrefetchPath method :
protected override void FetchPrefetchPath(
IEntityCollection2 rootEntities,
IRelationPredicateBucket filterBucket,
long maxNumberOfItemsToReturn,
ISortExpression sortClauses,
IPrefetchPath2 prefetchPath)
{
if (rootEntities.Count <= 0)
return;
int pkCount = rootEntities[0].Fields.PrimaryKeyFields.Count;
int setCount = rootEntities.Count;
if ((setCount * pkCount) <= ParameterisedPrefetchPathThreshold)
FetchParameterisedPrefetchPath(rootEntities, maxNumberOfItemsToReturn, prefetchPath);
else
base.FetchPrefetchPath(rootEntities, filterBucket, maxNumberOfItemsToReturn, sortClauses, prefetchPath);
}
I've added the check to my FetchPrefetchPath routine: it either branches out to the parameterized version or continues with the old code. I've added the paging overload as well. I still have to test that one, but I think it works fine.
It also greatly optimizes single-entity prefetch paths with m:1 relations.
[Test]
public void PrefetchPathSingleEntityManyToOne()
{
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
OrderEntity order = new OrderEntity(10254);
PrefetchPath2 path = new PrefetchPath2((int)EntityType.OrderEntity);
path.Add(OrderEntity.PrefetchPathCustomers);
path.Add(OrderEntity.PrefetchPathEmployees);
Assert.IsTrue(adapter.FetchEntity(order, path));
}
}
results in 1.0.2004.2 in the following query (for the customer):
SELECT [Northwind].[dbo].[Customers].[CustomerID] AS [CustomerId],
[Northwind].[dbo].[Customers].[CompanyName] AS [CompanyName],
[Northwind].[dbo].[Customers].[ContactName] AS [ContactName],
[Northwind].[dbo].[Customers].[ContactTitle] AS [ContactTitle],
[Northwind].[dbo].[Customers].[Address] AS [Address],
[Northwind].[dbo].[Customers].[City] AS [City],
[Northwind].[dbo].[Customers].[Region] AS [Region],
[Northwind].[dbo].[Customers].[PostalCode] AS [PostalCode],
[Northwind].[dbo].[Customers].[Country] AS [Country],
[Northwind].[dbo].[Customers].[Phone] AS [Phone],
[Northwind].[dbo].[Customers].[Fax] AS [Fax]
FROM [Northwind].[dbo].[Customers]
WHERE
(
[Northwind].[dbo].[Customers].[CustomerID]
IN
(
SELECT [Northwind].[dbo].[Orders].[CustomerID] AS [CustomerId]
FROM [Northwind].[dbo].[Orders]
WHERE
(
(
[Northwind].[dbo].[Orders].[OrderID] = @OrderId1
)
)
)
)
and in 1.0.2005.1:
SELECT [Northwind].[dbo].[Customers].[CustomerID] AS [CustomerId],
[Northwind].[dbo].[Customers].[CompanyName] AS [CompanyName],
[Northwind].[dbo].[Customers].[ContactName] AS [ContactName],
[Northwind].[dbo].[Customers].[ContactTitle] AS [ContactTitle],
[Northwind].[dbo].[Customers].[Address] AS [Address],
[Northwind].[dbo].[Customers].[City] AS [City],
[Northwind].[dbo].[Customers].[Region] AS [Region],
[Northwind].[dbo].[Customers].[PostalCode] AS [PostalCode],
[Northwind].[dbo].[Customers].[Country] AS [Country],
[Northwind].[dbo].[Customers].[Phone] AS [Phone],
[Northwind].[dbo].[Customers].[Fax] AS [Fax]
FROM [Northwind].[dbo].[Customers]
WHERE
(
(
[Northwind].[dbo].[Customers].[CustomerID] = @CustomerId1
)
)
Very cool.
(edit): OK, paging works fine as well. I'll now port the code to SelfServicing.
OK, it now also works in SelfServicing (the threshold is global there, but that's not a big problem).
I've set the threshold to 50 by default, which is pretty conservative, so users won't run into problems due to a massive number of parameters. I've also added a comment tag with your name and URL, so you might get some extra hits as well: /// Special thanks to Marcus MacInnes (http://www.styledesign.biz) for this optimization code.
If you don't want that, let me know.
Otis wrote:
OK, it now also works in SelfServicing (the threshold is global there, but that's not a big problem).
Glad it all went in so easily!
Otis wrote:
I've set the threshold to 50 by default, which is pretty conservative, so users won't run into problems due to a massive number of parameters. I've also added a comment tag with your name and URL, so you might get some extra hits as well: /// Special thanks to Marcus MacInnes (http://www.styledesign.biz) for this optimization code.
If you don't want that, let me know.
Thanks!
It'll be interesting now to see what the users think of it... or whether they notice any perceived performance improvement. (They should!)
Marcus Mac Innes (with a space)
Joined: 06-Sep-2005
This is a very interesting thread, as large result-set manipulation is so core to many useful apps. I wish I had noticed it a couple of days ago, but I feel compelled to chime in late. I'm new to LLBLGen Pro but am looking forward to trying the approaches the gent from Dublin has proposed and already made work well.
In my previous experiences, paging in a stateless middle-tier environment is not such a horrific task, with the following, admittedly abbreviated, approach having worked well:
1) Use forward scrolling cursors in query execution.
2) If the result-set can be heuristically determined to be of a manageable size, resolve the count upon a first execution of the query. A timeout may have to be enforced in the event that the query result-set is large. In any event, the result-set size may or may not be obtainable.
3) The client (Thin Win app or web page) should specify a page-size in terms of number of rows as well as the index of the page being requested.
4) WHERE clause predicates can be used to prevent scrolling through large volumes of data in the largest result-sets. For smaller sets, it is often performant to simply scroll through the forward-scrolling cursor to the row index range representing the page index requested.
5) If result-sets are known to be large in the application context, the keysets should be built and stored across requests to ensure performance. I think the approach Marcus mentioned involves this as well. I personally like this approach best, but in apps with large user bases it may be rejected on the grounds of not being scalable. I think the scalability limitations can be mitigated effectively, though. Using keysets, an IN clause can be used specifying the qualifying row IDs.
There are probably some details missing in these paragraphs, but I just wanted to chime in. It's great to see the product team is working on this functionality.
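Approach 5) above could be sketched roughly as below; the storage, key layout and helper names are all my own assumptions, not an LLBLGen API:

```csharp
// Hypothetical keyset paging helper: run the expensive query once, keep only
// the ordered primary keys, then serve each page with a cheap
// "WHERE pk IN (pageKeys)" query built from a slice of the cached keyset.
using System;
using System.Collections;

public class KeysetPager
{
    private Hashtable _keysetsByQuery = new Hashtable();

    // Store the ordered PK list produced by the initial query execution.
    public void StoreKeyset(string queryKey, ArrayList orderedPks)
    {
        _keysetsByQuery[queryKey] = orderedPks;
    }

    // Return the PKs for one page; the caller turns these into an IN predicate.
    // Returns null when the keyset is missing and the query must be re-run.
    public ArrayList GetPageKeys(string queryKey, int pageNumber, int pageSize)
    {
        ArrayList allKeys = (ArrayList)_keysetsByQuery[queryKey];
        if (allKeys == null)
        {
            return null;
        }
        int start = (pageNumber - 1) * pageSize;
        if (start >= allKeys.Count)
        {
            return new ArrayList();
        }
        int count = Math.Min(pageSize, allKeys.Count - start);
        return allKeys.GetRange(start, count);
    }
}
```

In a web app, the Hashtable would typically be replaced by the ASP.NET cache with an expiration policy, which also bounds the memory cost that makes this approach contentious at large user counts.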
The paging code without prefetch paths already uses low-level paging technology, but the prefetch code is done at a higher level. This means that the code is generic for all databases and isn't aware of any db-specific elements, like rowids. SqlServer, for example, among other databases, doesn't have rowids; Oracle does. Still, with this approach there is a way to perform paging and prefetching.
Otis,
Is there a way to specify the cursor model in LLBLGen-generated code? Or other DB-level peculiarities, for that matter? Not that cursor models are all that DB-specific, but when a DB does provide extended functionality, what is the general method of exposing it to the developer? I'm still trying to get my head around all this stuff, as our shop uses the tool pretty extensively. Thanks.
Doug_Hettinger wrote:
Otis,
Is there a way to specify the cursor model in LLBLGen-generated code? Or other DB-level peculiarities, for that matter? Not that cursor models are all that DB-specific, but when a DB does provide extended functionality, what is the general method of exposing it to the developer? I'm still trying to get my head around all this stuff, as our shop uses the tool pretty extensively. Thanks.
The code doesn't use cursors, as ADO.NET isn't cursor oriented. It uses datareaders internally, which you can see as a server-side cursor. This means that the data retrieval is as efficient as it can be in ADO.NET: via a forward-only cursor called a datareader. Even Oracle converts its REF CURSOR output parameters to datareaders first, so there's not a lot of choice.
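For anyone new to ADO.NET, a data reader is exactly that forward-only construct; a minimal example (the connection string and table are placeholders):

```csharp
using System;
using System.Data.SqlClient;

// Forward-only, read-only traversal: each Read() advances the cursor one row,
// and there is no way to scroll backwards, which is why it's so cheap.
public class ReaderDemo
{
    public static void DumpCustomers(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", connection);
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetString(1));
                }
            }
        }
    }
}
```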
Otis,
Thanks for the insight on the cursors. I must admit, when I first saw your statement that ADO.NET did not expose the choice of underlying cursor type, I was sure you must be mistaken, as even pre-.NET ADO enabled this specification through a property, and it was enormously useful in tuning performance on the application server. But I went to the docs and, so far, can see no mention of a cursor type property.
Most of my serious product development for the last ten years or so has been at the C++ API level, either ODBC or OLE-DB, where all database features are accessible, so I guess I'll have to deal with this reality in the .NET world. But it is a surprise. I remember from my years at MS that presenting too many choices to the masses can spell trouble, so maybe this was a wise choice on the part of the ADO.NET team. In any event, thanks for the info here.
In another post on this thread (thanks for the thread, Marcus and Otis, it's such an important topic), a reference was made to row IDs as a means of building a keyset. I think this one was just a victim of the translation (and I compliment you on your English skills), but in the generic sense the term row ID (after all, it's not a copyrightable term) simply refers to a concatenation of the keys of the records used to build each row in the underlying result-set. I think this is a pretty standard approach to building application-level keysets (a concatenation of underlying primary keys, that is). Anyway, as always, thanks for the info.