Retrieving a new entities PK without refetch?

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 26-Feb-2008 17:46:26   

Hi all,

Is it possible to force LLBLGen to pass back the new record's PK on Save() (i.e. by using SCOPE_IDENTITY() and an output parameter)?

I'm using SelfServicing without stored procs. Using Profiler I've seen a couple of instances where LLBLGen uses SCOPE_IDENTITY() in the generated SQL string.

e.g.

MyEntity entity = new MyEntity();
// not shown - set some properties
entity.Save();

// interrogating Id here causes a refetch from storage (a whole new SELECT), which I want to avoid
SomeMethod(entity.Id);

Thanks for any feedback

daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 26-Feb-2008 21:45:25   

Hi Matthew,

Please read LLBLGen Pro Help - Using the generated code - SelfServicing - Using the entity classes - Creating a new / modifying an existing entity, to see how LLBLGen treats the refetch of PKs.

What DB (and DB version) are you using? What LLBLGen version are you using? Is the PK auto-generated (identity/auto-increment, uniqueidentifier with NEWSEQUENTIALID, trigger)?

David Elizondo | LLBLGen Support Team
MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 27-Feb-2008 09:08:59   

Hi daelmo

I will read the reference.

We're using SQL Server 2005. The PK is a standard auto-increment int.

As far as LLBLGen goes, we're using v2.5 Pro.

Thanks

Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 27-Feb-2008 09:45:28   

The PK value should be automatically available to you inside the entity just after the Save call returns.
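For example (a minimal sketch; MyEntity and its Id field are placeholder names for your own generated SelfServicing entity):

MyEntity entity = new MyEntity();
// ... set non-PK fields ...
entity.Save();           // the generated INSERT ends with SELECT @p = SCOPE_IDENTITY(),
                         // and the returned value is written into the entity's PK field

int newId = entity.Id;   // the identity value is already present in the entity's fields here
                         // (though, as discussed below, reading a property on an out-of-sync
                         // SelfServicing entity can still trigger a refetch of the row)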

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 27-Feb-2008 10:54:51   

OK - either I've misdiagnosed the problem, or something's out of whack.

Will post when I know more.

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 27-Feb-2008 11:42:03   

It looks like the PK field is there, returned as an output parameter:

SELECT @PageHitId=SCOPE_IDENTITY()

but a refetch is invoked when the .PageHitId property is interrogated.

From the documentation, I think this is by design and configurable. RTFM.

thanks

Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 27-Feb-2008 14:51:10   

Take it easy :) Besides, the manual is too big to be read and remembered in full.

Good Luck

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 28-Feb-2008 11:55:48   

OK, one more thing. I understand the rationale for the design decision to have entities automatically refetch after a save when their properties are read (i.e. the in-memory entity values may be out of sync with storage because of triggers, database defaults, etc.).

That isn't always the case, however, and here it is causing the generation and execution of unnecessary (and performance-impacting) refetches.

I understand from the documentation that changing this behaviour is a global web/app.config setting; it can't be configured at the entity level.
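For reference, if I'm reading the manual right, the global switch is an appSettings key along these lines (key name quoted from memory, so please verify the exact spelling against the v2.5 runtime docs):

<configuration>
  <appSettings>
    <!-- When true, saved entities are marked as Fetched instead of OutOfSync,
         so reading a property after Save() no longer triggers a refetch. -->
    <add key="markSavedEntitiesAsFetched" value="true" />
  </appSettings>
</configuration>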

Making this .config change introduces significant risk (e.g. there may be places in our code where the refetch behaviour is expected).

Is there no way to implement this on an entity level?

Thanks

Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 28-Feb-2008 14:51:44   

For an entity you can do the following:

myEntity.Fields.State = EntityState.Fetched;
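Put together with the save, that looks something like this (a minimal sketch; MyEntity again stands in for your own generated SelfServicing entity, and note that any values set by triggers or database defaults will not be refreshed in memory):

using SD.LLBLGen.Pro.ORMSupportClasses;   // EntityState

MyEntity myEntity = new MyEntity();
// ... set non-PK fields ...
myEntity.Save();                          // the identity PK is written back into the entity

// Mark the entity as Fetched so that reading a property afterwards
// does not trigger the automatic refetch SELECT.
myEntity.Fields.State = EntityState.Fetched;

int newId = myEntity.Id;                  // no extra round trip to the database now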
MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 05-Mar-2008 15:22:11   

Thank you. That should do it.

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 14-Mar-2008 13:39:15   

Hi again,

I have released my modified code, which prevents the (in my case unnecessary) SELECT described above by forcing the fields collection's state to Fetched.

I had hoped that this would resolve a series of timeouts I was getting from an InsertEntity() call, by reducing the row and page locks caused by the subsequent SELECTs.

(The code runs on every page hit, for tracking purposes, so it's relatively high volume. The target table is currently around 12 million rows.)

But - no luck.

I know this is sub-optimal, but as a desperate measure I also implemented this 'fix':

SD.LLBLGen.Pro.DQE.SqlServer.DynamicQueryEngine.UseNoLockHintOnSelects = true;
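(This is a static, application-wide switch, so I set it once at startup; a minimal sketch assuming an ASP.NET Global.asax:)

// Global.asax.cs - applies a NOLOCK hint to ALL generated SELECT statements,
// which allows dirty reads everywhere; hence "sub-optimal".
protected void Application_Start(object sender, EventArgs e)
{
    SD.LLBLGen.Pro.DQE.SqlServer.DynamicQueryEngine.UseNoLockHintOnSelects = true;
}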

But I'm still getting the errors from time to time (typically in batches).

The error text is below:


SD.LLBLGen.Pro.ORMSupportClasses.ORMQueryExecutionException: An exception was caught during the execution of an action query: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.. Check InnerException, QueryExecuted and Parameters of this exception to examine the cause of this exception.
---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at SD.LLBLGen.Pro.ORMSupportClasses.ActionQuery.Execute()
--- End of inner exception stack trace ---
   at SD.LLBLGen.Pro.ORMSupportClasses.ActionQuery.Execute()
   at SD.LLBLGen.Pro.ORMSupportClasses.BatchActionQuery.Execute()
   at SD.LLBLGen.Pro.ORMSupportClasses.DaoBase.ExecuteActionQuery(IActionQuery queryToExecute, ITransaction containingTransaction)
   at SD.LLBLGen.Pro.ORMSupportClasses.DaoBase.AddNew(IEntityFields fields, ITransaction containingTransaction)
   at SN.DL.EntityClasses.SourcedPageHitEntity.InsertEntity()
   at SD.LLBLGen.Pro.ORMSupportClasses.DaoBase.PersistQueue(List`1 queueToPersist, Boolean insertActions, ITransaction transactionToUse, Int32& totalAmountSaved)
   at SD.LLBLGen.Pro.ORMSupportClasses.EntityBase.Save(IPredicate updateRestriction, Boolean recurse)
   at SN.BL.SourcedPageHit.LogSourcedHitMidSession(HttpContext cont)

The generated SQL is (with schema and parameter names obfuscated):


declare @p3 int
set @p3=1884367
exec sp_executesql N'INSERT INTO [dbName].[dbo].[tblName] ([intParam1], [varcharParam], [intparam2], [dateTimeParam]) VALUES (@intParam1, @varcharParam, @intparam2, @dateTimeParam);SELECT @outputId=SCOPE_IDENTITY()',
    N'@outputId int output,@intParam1 smallint,@varcharParam varchar(100),@intparam2 int,@dateTimeParam datetime',
    @outputId=@p3 output,@intParam1=10,@varcharParam='',@intparam2=1927668,@dateTimeParam=''2008-03-14 14:25:44:543''
select @p3

The table is indexed OK, I think (or at least the performance tuning wizard thinks so). I'm a little baffled. Effectively this is a simple insert into a table with an integer primary key - it should be quick.

Any collective wisdom very welcome.

Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 14-Mar-2008 15:28:41   

Well, as you might have guessed, timeout issues are most of the time due to deadlocks.

So the first thing to do is use a database profiler (SQL Profiler) to check for deadlocks; then find out which queries or locks are causing them.

That should be the starting point of the investigation.

MattE
User
Posts: 77
Joined: 11-Sep-2007
# Posted on: 14-Mar-2008 16:51:50   

Fair enough. Running a trace now, tracking deadlock events/metrics.