TransactionScope strange behavior

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 09-Jun-2011 17:30:57   

I'm trying to utilize TransactionScope in a web application and experiencing some confusing behavior, so I think I'm missing something about how this works.

Here's the scenario:

In some cases I want to force several different operations to happen within a transaction. Since these methods are likely to be spread across several different classes and it would be complicated to pass a transaction around with the DAA instance, I am trying to use TransactionScope to wrap them instead.

In order to prevent/limit transactions from escalating (or so I thought), I am reusing a single DataAccessAdapter instance per thread (cached in HttpContext) rather than creating a new one for each data access method.

So with that setup in mind, consider the following code. Note: SaveToDB is just a simple method that calls SaveEntity(entity, true, true) on the passed entity.


'.....DataAccessAdapter is opened at the beginning of the page request

Using txn As New TransactionScope()
     'Save order details
     OrderRepository.SaveToDB(newOrder)

     'Save subscription charge info
     SubscriptionRepository.SaveToDB(newSubscriptionCharge)

     'Commit the transaction
     txn.Complete()
End Using

' ** The above code works fine, executes the transaction and commits successfully **


'.....Here we process a credit card charge and then update various entity fields based on result


'Now we attempt to persist the updated entities back to the DB within a new transaction

Using txn2 As New TransactionScope()
     'Save updated order details
     OrderRepository.SaveToDB(newOrder)

     'Save updated subscription details  
     SubscriptionRepository.SaveToDB(pSubscription) '<---EXCEPTION OCCURS HERE

     'Commit transaction
     txn2.Complete()
End Using

The error on the last SaveToDB call is "The partner transaction manager has disabled its support for remote/network transactions." I checked and verified that all the proper MSDTC settings are configured on both machines, firewall ports opened, etc. Also, the first transaction works fine, so it doesn't seem like a configuration issue.

So is there anything that jumps out as being wrong with this code?

  • Could the reuse of a single DAA instance for the entire thread somehow be causing an issue? If so, why does the first transaction work fine?
  • Could there be a problem with the fact that the DAA is already opened prior to the TransactionScope being started (i.e. will it only get enlisted in the transaction if the DAA is created/opened inside the TransactionScope)? I did notice that at no point are either the IsTransactionInProgress or InSystemTransaction values set to true, which I found odd.
  • Why is the DTC getting involved anyway (based on the error message) since I'm reusing a single DAA instance? Shouldn't that mean the same connection context is being used and therefore the transaction does not require escalation?

FWIW, I'm using LLBL v3 against MS SQL 2005.

MTrinder
User
Posts: 1461
Joined: 08-Oct-2008
# Posted on: 09-Jun-2011 21:57:55   

Does it work successfully if you use a separate adapter instance for each transaction? That's our recommended way of working, rather than keeping cached instances around, as they are fairly lightweight to create.

Thanks

Matt

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 09-Jun-2011 23:43:54   

MTrinder wrote:

Does it work successfully if you use a separate adapter instance for each transaction

Not exactly that, but I did try using separate adapters for each of the 4 calls to the database, and that gave the exact same error. The only difference I noticed when I did it that way was that the InSystemTransaction flag was marked true for each adapter, unlike when I used a single adapter for all. Same end result in both cases though rage

So you're saying I should use one adapter for the first 2 statements inside txn, then dispose it and create a new one to be used for the second 2 statements in txn2? I can try that but should it really make a difference since it did not even work when I used 4 separate adapters?

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 10-Jun-2011 05:23:42   

pt wrote:

I can try that but should it really make a difference since it did not even work when I used 4 separate adapters?

I gave it a shot and still got the same error. Is there a requirement that the same adapter be used for all calls within a TransactionScope, with keepConnectionOpen set to true? I can't imagine that's the case, because that would defeat the purpose of using TransactionScope: I'd have to pass the adapter into all participating methods, at which point I might as well use the built-in adapter transaction methods.

Any other ideas as to why this is not working?

Walaa avatar
Walaa
Support Team
Posts: 14995
Joined: 21-Aug-2005
# Posted on: 10-Jun-2011 11:34:04   

Using txn2 As New TransactionScope()
     'Save updated order details
     OrderRepository.SaveToDB(newOrder)

     'Save updated subscription details
     SubscriptionRepository.SaveToDB(pSubscription) '<---EXCEPTION OCCURS HERE

     'Commit transaction
     txn2.Complete()

I don't think the suggestion was to use one adapter and/or transaction within the same scope.

Rather, the adapters should be created within the scope, so that they can detect they are part of a TransactionScope.

So instead of reusing adapters, try creating the adapters inside OrderRepository.SaveToDB() and SubscriptionRepository.SaveToDB()

(Edit) There are a lot of results on Google for this error. One thread that might be useful is this one: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 10-Jun-2011 16:25:59   

Walaa wrote:

I don't think the suggestion was to use one adapter and/or transaction within the same scope.

Yes, I know Matt was not suggesting that; it was just an idea that occurred to me as a possible reason it was failing. But apparently that's not the case, so never mind...

Walaa wrote:

So instead of reusing adapters, try to create adapters inside OrderRepository.SaveToDB() & SubscriptionRepository.SaveToDB()

I did try it this way in my last test, with no luck. I'm fairly sure at this point the error relates to some obscure firewall or network/server configuration setting somewhere. The requirements for getting this MSDTC stuff working right are pretty intricate, which is why I don't want to use it. Which leads me to my next question...

So my understanding is that the way to prevent MSDTC from getting involved (at least when using SQL 2005) is to ensure that the exact same connection (not connection string, but the actual connection itself) is used for all operations within the scope. Translating that into LLBL terminology, that should mean that if I set keepConnectionOpen on an adapter and reuse that same adapter for all operations within the scope, it should not escalate to a distributed transaction, correct?

So since I'm already using a sort of "singleton" pattern, reusing the same DataAccessAdapter for all operations within a single thread/request, could I just set the keepConnectionOpen property to true when I first create the adapter at the beginning of the request? If so, what would be the downside? Would I run the risk of connections hanging around, or would they automatically close when the request completes or goes out of scope?

daelmo avatar
daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 13-Jun-2011 00:09:36   

pt wrote:

Walaa wrote:

So instead of reusing adapters, try to create adapters inside OrderRepository.SaveToDB() & SubscriptionRepository.SaveToDB()

I did try it this way in my last test, with no luck. I'm fairly sure at this point the error relates to some obscure firewall or network/server configuration setting somewhere. The requirements for getting this MSDTC stuff working right are pretty intricate, which is why I don't want to use it. Which leads me to my next question...

I can't reproduce your problem with a similar scenario (a singleton adapter shared among TransactionScopes, with multiple fetches and saves in different scopes). You should examine the stack trace to get more details. Does this work on a controlled dev machine?

pt wrote:

So my understanding is that the way to prevent MSDTC from getting involved (at least when using SQL 2005) is to ensure that the exact same connection (not connection string, but the actual connection itself) is used for all operations within the scope. Translating that into LLBL terminology, that should mean that if I set keepConnectionOpen on an adapter and reuse that same adapter for all operations within the scope, it should not escalate to a distributed transaction, correct?

Not exactly. I quote the manual:

A DataAccessAdapter object is able to determine if it's participating inside an ambient transaction of System.Transactions. If so, it enlists a Resource Manager with the System.Transactions transaction. The Resource manager contains the DataAccessAdapter object. As soon as a Transaction or DataAccessAdapter is enlisted through a Resource Manager, the Commit() and Rollback() methods are setting the ResourceManager's commit/abort signal which is requested by the System.Transactions' Transaction manager. If multiple transactions are executed on a DataAccessAdapter and one rolled back, the resource manager will report an abort. As soon as the DataAccessAdapter is enlisted in the System.Transactions.Transaction, no ADO.NET transaction is started, it's a no-op. Once one rollback is requested, the transaction will always report a rollback to the MSDTC.

If multiple transactions are executed using the same DataAccessAdapter object, to the DataAccessAdapter it will look like it's still inside the same transaction, so no new transaction is started. This makes sure that an entity which is already participating in the transaction isn't enlisted again, the field values aren't saved again, etc.
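The vote-collecting behavior described in that quote can be illustrated with a bare System.Transactions resource manager. VotingResource below is a hypothetical stand-in for the adapter's internal resource manager, not an LLBLGen class:

```csharp
using System;
using System.Transactions;

// Minimal volatile resource manager illustrating the two-phase voting
// protocol: the transaction manager asks every enlisted resource to
// Prepare (vote), then delivers the outcome (Commit/Rollback).
class VotingResource : IEnlistmentNotification
{
    public bool Prepared { get; private set; }

    public void Prepare(PreparingEnlistment pe)
    {
        Prepared = true;   // this is the "vote"; Prepared() signals commit
        pe.Prepared();
    }
    public void Commit(Enlistment e) => e.Done();
    public void Rollback(Enlistment e) => e.Done();
    public void InDoubt(Enlistment e) => e.Done();
}

class Program
{
    public static bool RunScope()
    {
        var rm = new VotingResource();
        using (var scope = new TransactionScope())
        {
            // Enlist so the transaction manager asks us to vote at commit time.
            Transaction.Current.EnlistVolatile(rm, EnlistmentOptions.None);
            scope.Complete();
        } // Dispose drives the commit: Prepare is called during this phase
        return rm.Prepared;
    }

    static void Main() => Console.WriteLine(RunScope()); // True
}
```

The same shape is what the manual describes: the adapter's Commit()/Rollback() merely record a vote, and the System.Transactions transaction manager collects the votes and decides the outcome.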

pt wrote:

So since I'm already using a sort of "singleton" pattern, reusing the same DataAccessAdapter for all operations within a single thread/request, could I just set the keepConnectionOpen property to true when I first create the adapter at the beginning of the request? If so, what would be the downside? Would I run the risk of connections hanging around, or would they automatically close when the request completes or goes out of scope?

I don't think that would resolve your problem (I think your problem is somewhere else, on some network checkpoint). If you go the singleton way, it is important to call Close() at some point so the open connection goes back to the pool. It's not harmful to keep a connection open in a method with a couple of actions; however, it can be harmful to tuck away an open connection at thread level as you propose, because you don't have control over it.

If you don't want to use MSDTC, I think you can do that. You have some repository methods; what if you pass an existing, scoped DAA to those methods? That DAA could open a native transaction:

using (var adapter = new DataAccessAdapter())
{
     try
     {
          adapter.StartTransaction( ... ); // e.g. an isolation level and a transaction name

          // these repositories have an overload to use the passed DAA to persist the objects
          OrderRepository.SaveToDB(order, adapter);
          SubscriptionRepository.SaveToDB(psubscription, adapter);

          adapter.Commit();
     }
     catch (Exception ex)
     {
          adapter.Rollback();
          throw; // or handle/log the exception here
     }
}
David Elizondo | LLBLGen Support Team
pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 14-Jun-2011 05:56:51   

David,

Thanks for the response and suggestions. Your point about passing the adapter around is actually how I originally started to structure it, until I ran into complications, which is what led me to TransactionScope. Not all DB operations are initiated from within a single method in a single class as my example shows; some may be executed within methods in other classes. Passing the adapter around to all these methods makes it nearly impossible to know when and where to fire the commits, because I don't know whether I've reached the top of the stack. With the TS I don't have to worry about that, since all operations should automatically participate in the same transaction, even if there are sub-transactions created within.

The frustrating thing is it seems to me that my setup should work if I could overcome the 2 main issues which are:

1) If I create adapters on the fly for each command and then immediately dispose them, I run into the MSDTC issues because of the opening of multiple connections.

2) If I instead create the adapter at the beginning of the page request and use the singleton approach throughout that page, the adapter does not enlist in the TransactionScopes because it already exists at the time the scopes are created. And even if I was able to force it to enlist, it would still involve MSDTC, because even though it's the same adapter, each call to SaveEntity opens and closes a new connection.

So it seems I need a way to utilize not just a single adapter but a single connection throughout the page request in order for the transaction to stay lightweight. But I need that single adapter/connection to recognize and enlist in existing transactions, even though those transactions are not created until after the adapter is.

I was thinking using "keepConnectionOpen" when I first create the singleton adapter might solve problem #1, but you mentioned that might result in lots of orphaned connections hanging around? Is there a way around this? Also, I know there is an EnlistTransaction method that might work for enlisting in the transactions to solve problem #2, but I need to attach that to a SqlConnection object, and I'm not sure if or how that's possible with the DataAccessAdapter?

Any ideas how I could make this work without going the native transaction route?

Walaa avatar
Walaa
Support Team
Posts: 14995
Joined: 21-Aug-2005
# Posted on: 14-Jun-2011 11:48:38   

I was thinking using "keepConnectionOpen" when I first create the singleton adapter might solve problem #1 but you mentioned that might result in lots of orphaned connections hanging around?

I guess if you make sure to close the connection and dispose the adapter in the page unload event, you can avoid the orphaned-connections problem.

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 16-Jun-2011 05:36:22   

Walaa wrote:

I guess if you make sure to close the connection and dispose the adapter in the page unload event, you can avoid the orphaned-connections problem.

That's a good idea. I guess I'd have to do the same in the Page_Error event as well, in case an exception caused the unload not to be hit?

Also any idea how I could handle the other piece of the equation, calling the EnlistTransaction method on the adapter's connection? Is this possible with the adapter? I don't see a way to get at its connection object...

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39897
Joined: 17-Aug-2003
# Posted on: 16-Jun-2011 11:36:27   

The adapter has a transaction manager for distributed transactions. The instance for this is stored in its internal member _systemTransactionResourceManager.

When a transaction is started on the adapter, it actually doesn't start an ADO.NET transaction; it simply notes that one is in progress. When a commit or rollback is issued on the adapter, it tells the _systemTransactionResourceManager that a commit or rollback took place. The transaction manager then collects these 'votes' and tells the transaction scope the outcome: commit or rollback.

DB actions are executed as normal. The transaction scope has ordered MS DTC to make things happen within a distributed transaction, so all actions over a connection are within that distributed transaction (hence no real ADO.NET transaction is started).

A TransactionScope is thread-local, so if you span a transaction over multiple requests, it's not going to work. All overhead for the distributed transaction is handled by the transaction scope and MS DTC (which does all the work). A single connection to SQL Server will likely use a lightweight distributed transaction; as soon as you issue two connections, it will create a full distributed transaction. This can be triggered, for example, from the debugger, or when you use multiple adapters inside the same distributed transaction. So sharing the adapter, with the connection open, across classes (in the same thread!) can help keep the transaction from becoming a full distributed transaction.

Now, what I think happens is that you re-use the same adapter from the FIRST scope in the SECOND scope. (sorry if this was already suggested above). That's the only thing I can think of.

Other than that, please use MS DTC monitoring to see whether the transaction indeed succeeds or fails, and use the event log to see why it fails. It could very well be related to security or some other mess. MS DTC is often not configured properly, or fails without logical explanation.

Question: if you share the adapter anyway, why not use a normal ado.net transaction?

Frans Bouma | Lead developer LLBLGen Pro
pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 16-Jun-2011 16:19:22   

Frans,

Thanks for the response and explanation of how things work. Just to be clear in case my intention was not, my goal isn't to figure out how to get MSDTC working, it's how to get transactionscopes working without involving MSDTC unless absolutely necessary. I'd rather not deal with the configuration and firewalls and performance overhead of MSDTC when all my operations are performed on a single database so there should be no need to go to that level if I set things up properly.

Otis wrote:

When a transaction is started on the adapter, it actually doesn't start an ADO.NET transaction; it simply notes that one is in progress.

Doesn't this require the TransactionScope to be created before the adapter is, though? Where I'm running into a problem is that my singleton adapter gets created in the page Init, but I may not create a TS until later in the page when I'm executing some database operations. So the existing adapter does not seem to recognize it is inside a TS (i.e. InSystemTransaction reports false). That's why I was thinking I would need to force the adapter to check whether there is a current system transaction in existence and, if so, call EnlistTransaction on its own connection object. Do you think that would work, and if so, how would I get at the adapter's active connection object?

Otis wrote:

A single connection to SQL Server will likely use a lightweight distributed transaction; as soon as you issue two connections, it will create a full distributed transaction. This can be triggered, for example, from the debugger, or when you use multiple adapters inside the same distributed transaction. So sharing the adapter, with the connection open, across classes (in the same thread!) can help keep the transaction from becoming a full distributed transaction.

Yes, that's exactly what I'm trying to do: open a connection right when the adapter is initialized at the start of the page, with keepConnectionOpen set. Then, on Page_Unload as suggested above, I would close the connection and dispose the adapter. This way a single connection would be used throughout the entire page. Any downsides you can see to this?

Otis wrote:

Now, what I think happens is that you re-use the same adapter from the FIRST scope in the SECOND scope. (sorry if this was already suggested above). That's the only thing I can think of.

Yes in my original code I did do that. I also tried using a new adapter for each of the 4 database operations, still same error. My assumption was that in both cases there were multiple connections being opened and closed, and thus MSDTC was getting involved, which was not what I wanted.

Otis wrote:

Other than that, please use MS DTC monitoring to see whether the transaction indeed succeeds or fails and use the event log to see why this fails. it could very well be it's related to security or other mess. MS DTC is often not configured properly or fails without logical explanation.

I do know the transaction fails and I'm sure it is an MSDTC configuration error, which is exactly why I want to avoid MSDTC completely if I can.

Otis wrote:

Question: if you share the adapter anyway, why not use a normal ado.net transaction?

I had started out that way but I ended up running into a few issues:

  • Sometimes I did not or could not use the singleton adapter for whatever reason (maybe this is indicative of a design flaw), but in any event, if I can't be 100% sure it will always be the same adapter, then the ADO.NET route seems to not be an option.
  • There were often situations where I needed to know at what point in the call stack to issue the commit, since my transaction may span several methods within several classes. The same method may at times be called within a transaction and at other times not, so it became tough to know when to commit and when not.
  • Also, with the ADO.NET adapter, the manual says best practice is to use a try..catch..finally and dispose the adapter in the finally block. But if I'm using a shared adapter, won't that pose a problem if there are additional actions to perform on that page after the failed transaction?

MTrinder
User
Posts: 1461
Joined: 08-Oct-2008
# Posted on: 16-Jun-2011 21:24:04   

since my transaction may span several methods within several classes.

This statement made me think that perhaps you do need to reconsider the design of your application... simple_smile IMO, all you should be doing between starting and committing a transaction is saving objects to the database. If you are having to call out to other classes and methods, you are holding the transaction open for longer than you need to, which is generally considered a "bad thing".

What are you actually doing in these other class and method calls - do they actually NEED to be in the transaction...?

Matt

pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 16-Jun-2011 22:03:36   

MTrinder wrote:

What are you actually doing in these other class and method calls - do they actually NEED to be in the transaction...?

I guess an example would be saving a user's signup information. You may have a UserManager class that saves the contact information, an AuthManager class that saves the login credentials, and several others that populate related database tables as necessary to create the data needed for a new user in the system. They all need to work as a unit (otherwise the data may go out of sync) but all the methods may be in different classes in the business layer because they operate on different entities. Seems to be a fairly basic setup to me, nothing majorly complicated.

A very simplified example would be something like the below (ignore the syntax).


class ProcessSignup {
    //this is where the transaction would start
            UserManager.AddUser(u)
            AuthManager.AddLogin(u)
            //....etc
    //end transaction
}

class UserManager{
      function AddUser(u){
            UserRepository.Add(u)
            //...other operations here
     }
}

class AuthManager{
    function AddLogin(u){
        LoginRepository.Add(u)
        //...other operations here
    }
}

So the transaction needs to surround the whole ProcessSignup operation because it needs to either succeed or fail as a unit. (This, by the way, is another conceptual problem I have with the idea of using ADO transactions because it feels like bad design to have the business layer starting a transaction on the adapter that resides in the data layer.)

Hope that all makes sense.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39897
Joined: 17-Aug-2003
# Posted on: 17-Jun-2011 16:28:57   

pt wrote:

Frans,

Thanks for the response and explanation of how things work. Just to be clear in case my intention was not, my goal isn't to figure out how to get MSDTC working, it's how to get transactionscopes working without involving MSDTC unless absolutely necessary. I'd rather not deal with the configuration and firewalls and performance overhead of MSDTC when all my operations are performed on a single database so there should be no need to go to that level if I set things up properly.

Distributed transactions always use DTC if I'm not mistaken. Anyway, that's not the issue here ->

Otis wrote:

When a transaction is started on the adapter, it actually doesn't start an ADO.NET transaction; it simply notes that one is in progress.

Doesn't this require the TransactionScope to be created before the adapter is, though? Where I'm running into a problem is that my singleton adapter gets created in the page Init, but I may not create a TS until later in the page when I'm executing some database operations. So the existing adapter does not seem to recognize it is inside a TS (i.e. InSystemTransaction reports false). That's why I was thinking I would need to force the adapter to check whether there is a current system transaction in existence and, if so, call EnlistTransaction on its own connection object. Do you think that would work, and if so, how would I get at the adapter's active connection object?

That's indeed the requirement: The TS has to be there, before the adapter is created (so it can enlist itself).
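That ordering can be demonstrated with plain System.Transactions: an object constructed before the scope exists finds no ambient transaction to enlist with. AmbientProbe below is a hypothetical stand-in for the adapter:

```csharp
using System;
using System.Transactions;

class Program
{
    // Captures the ambient transaction at construction time, the way an
    // adapter created before or inside a scope would see it.
    class AmbientProbe
    {
        public readonly Transaction Seen = Transaction.Current;
    }

    public static bool CreatedBeforeScopeSeesNothing()
    {
        var before = new AmbientProbe();          // no scope yet: nothing to enlist with
        using (var scope = new TransactionScope())
        {
            var inside = new AmbientProbe();      // scope active: ambient transaction visible
            scope.Complete();
            return before.Seen == null && inside.Seen != null;
        }
    }

    static void Main() => Console.WriteLine(CreatedBeforeScopeSeesNothing()); // True
}
```

This is exactly why the adapter must be created (or at least open its connection) inside the scope: enlistment happens against whatever Transaction.Current is at that moment.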

Doing that manually will be messy; I wouldn't go there. It's also not necessary: as all actions are done on the same DB and you share the adapter, just use an ADO.NET transaction via adapter.StartTransaction() and adapter.Commit(). But there's a better alternative: UnitOfWork2 simple_smile If you want work done in a single transaction and you have to collect that work in multiple parts of your code base, create a UnitOfWork2 object, add the work to it, and pass that around.

When you're done, simply commit it using an adapter; it will be done in a single transaction which you can auto-commit. The transaction won't use DTC. The downside is that you can't do fetching with the unit of work, but as there's no transaction in progress on the current thread, it's OK to create an adapter in a sub-method somewhere along the way and fetch some data.

Keep an eye on singletons and other constructs which share adapters and other objects like units of work and entities: these objects aren't thread-safe.

Otis wrote:

A single connection to SQL Server will likely use a lightweight distributed transaction; as soon as you issue two connections, it will create a full distributed transaction. This can be triggered, for example, from the debugger, or when you use multiple adapters inside the same distributed transaction. So sharing the adapter, with the connection open, across classes (in the same thread!) can help keep the transaction from becoming a full distributed transaction.

Yes, that's exactly what I'm trying to do: open a connection right when the adapter is initialized at the start of the page, with keepConnectionOpen set. Then, on Page_Unload as suggested above, I would close the connection and dispose the adapter. This way a single connection would be used throughout the entire page. Any downsides you can see to this?

Yes, that's too inefficient. You want to collect work, then quickly open a connection, do the work, and close it again. See above, a unit of work is much easier for this (and designed for this purpose simple_smile )

Otis wrote:

Question: if you share the adapter anyway, why not use a normal ado.net transaction?

I had started out that way but I ended up running into a few issues:

  • Sometimes I did not or could not use the singleton adapter for whatever reason (maybe this is indicative of a design flaw), but in any event, if I can't be 100% sure it will always be the same adapter, then the ADO.NET route seems to not be an option.
  • There were often situations where I needed to know at what point in the call stack to issue the commit, since my transaction may span several methods within several classes. The same method may at times be called within a transaction and at other times not, so it became tough to know when to commit and when not.
  • Also, with the ADO.NET adapter, the manual says best practice is to use a try..catch..finally and dispose the adapter in the finally block. But if I'm using a shared adapter, won't that pose a problem if there are additional actions to perform on that page after the failed transaction?

Adapters are cheap; they're created instantly. Therefore, don't keep them around unless you need to (e.g. passing one around for transaction sharing). As a unit of work is also usable for that (collecting work to be persisted in a single transaction), I'd go that route.

pt wrote:

MTrinder wrote:

What are you actually doing in these other class and method calls - do they actually NEED to be in the transaction...?

I guess an example would be saving a user's signup information. You may have a UserManager class that saves the contact information, an AuthManager class that saves the login credentials, and several others that populate related database tables as necessary to create the data needed for a new user in the system. They all need to work as a unit (otherwise the data may go out of sync) but all the methods may be in different classes in the business layer because they operate on different entities. Seems to be a fairly basic setup to me, nothing majorly complicated.

A very simplified example would be something like the below (ignore the syntax).


class ProcessSignup {
    //this is where the transaction would start
            UserManager.AddUser(u)
            AuthManager.AddLogin(u)
            //....etc
    //end transaction
}

class UserManager{
      function AddUser(u){
            UserRepository.Add(u)
            //...other operations here
     }
}

class AuthManager{
    function AddLogin(u){
        LoginRepository.Add(u)
        //...other operations here
    }
}

So the transaction needs to surround the whole ProcessSignup operation because it needs to either succeed or fail as a unit. (This, by the way, is another conceptual problem I have with the idea of using ADO transactions because it feels like bad design to have the business layer starting a transaction on the adapter that resides in the data layer.) Hope that all makes sense.

This is ideal for a unit of work. Simply pass it along to the methods which create data. Add the entities to the unit of work and, at the end, persist it with a single call:

using(var adapter = new DataAccessAdapter())
{
    myUnitOfWork.Commit(adapter, true);
}

Frans Bouma | Lead developer LLBLGen Pro
pt
User
Posts: 23
Joined: 24-Dec-2009
# Posted on: 17-Jun-2011 18:05:34   

Otis wrote:

This is ideal for a unitofwork. Simply pass it along with the methods which create data.

Well that was my original approach, using UoW and passing it everywhere, but I quickly began to feel like this guy rage for the reasons I mentioned above.

The biggest issue is how to know at what point I'm at the outermost transaction in order to do the commit. I.e., there may be times when method A should commit inside that method, but other times it may be part of a bigger transaction and therefore should not commit yet. I realize that could just indicate the method is doing too much and should be broken into a few smaller pieces, but I don't know if it's as simple as that, because otherwise I would have gone that route originally instead of seeking out alternatives.

The other primary issue, which is more of an aesthetic one I suppose, is that adding an additional parameter to every method that saves data just feels inelegant and unnecessary, not to mention the refactoring involved. Would it be possible/wise/thread-safe to implement a shared UoW, maybe stored in HttpContext as I was doing with the adapter? Then the methods could automatically add their work to it without it having to be passed around?

FWIW I found this thread which echoes exactly what I'm talking about, same suggested solution though. Is there no better solution 3 years later? http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=13620&StartAtMessage=0&#75832

daelmo avatar
daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 19-Jun-2011 21:13:05   

pt wrote:

Otis wrote:

This is ideal for a unitofwork. Simply pass it along with the methods which create data.

Well that was my original approach, using UoW and passing it everywhere, but I quickly began to feel like this guy rage for the reasons I mentioned above.

A UoW is ideal for your scenario. IMHO, you should know when it has to commit.

pt wrote:

The biggest issue is how to know at what point I'm at the outermost transaction in order to do the commit. I.e., there may be times when method A should commit inside that method, but other times it may be part of a bigger transaction and therefore should not commit yet. I realize that could just indicate the method is doing too much and should be broken into a few smaller pieces, but I don't know if it's as simple as that, because otherwise I would have gone that route originally instead of seeking out alternatives.

As I said above, your code should know when it finally has to commit. To overcome the situation you mentioned, you could simply create method overloads. Example:

public void SaveCustomer(CustomerEntity customerToSave)
{...}

public void SaveCustomer(CustomerEntity customerToSave, UnitOfWork2 uowToUse)
{...}

Then use it at a higher level:

var uow = new UnitOfWork2();

// add the call to the uow
CustomerBL.SaveCustomer(theCustomer, uow);

// add other calls
...
uow.Commit(new DataAccessAdapter(), true);

// or: just save the customer directly and forget it
CustomerBL.SaveCustomer(theCustomer);

pt wrote:

The other primary issue, which is more of an aesthetic one I suppose, is that adding an additional parameter to every method that saves data just feels inelegant and unnecessary, not to mention the refactoring involved. Would it be possible/wise/thread-safe to implement a shared UoW, maybe stored in HttpContext as I was doing with the adapter? Then the methods could automatically add their work to it without it having to be passed around?

Sure, you can do that. You will face the same question though: "When should I commit?" But if that is clear to you, you can do it. Doing this is less problematic than keeping a DAA on the thread.
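A sketch of what that shared, per-request unit of work could look like. The names here are assumptions: PendingWork stands in for LLBLGen's UnitOfWork2, and the Hashtable stands in for HttpContext.Current.Items, so the shape is runnable outside a web app:

```csharp
using System;
using System.Collections;

// Hypothetical stand-in for LLBLGen's UnitOfWork2: it only counts the
// entities queued for save, which is enough to show the sharing pattern.
class PendingWork
{
    public int EntityCount { get; private set; }
    public void AddForSave(object entity) => EntityCount++;
}

static class RequestUnitOfWork
{
    private const string Key = "__requestUow";

    // In a real ASP.NET page this would be HttpContext.Current.Items,
    // which is already scoped to a single request/thread.
    public static IDictionary Items = new Hashtable();

    public static PendingWork Current
    {
        get
        {
            if (Items[Key] == null)
                Items[Key] = new PendingWork();   // lazily created once per request
            return (PendingWork)Items[Key];
        }
    }
}

class Program
{
    static void Main()
    {
        // Two unrelated "business methods" add work without the UoW
        // being passed to them as a parameter.
        RequestUnitOfWork.Current.AddForSave(new object());
        RequestUnitOfWork.Current.AddForSave(new object());
        Console.WriteLine(RequestUnitOfWork.Current.EntityCount); // 2
    }
}
```

At the end of the request a single adapter would then commit the collected work (in LLBLGen terms, something like uow.Commit(adapter, true)), which keeps the whole batch on one connection and one transaction.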

pt wrote:

FWIW I found this thread which echoes exactly what I'm talking about, same suggested solution though. Is there no better solution 3 years later? http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=13620&StartAtMessage=0&#75832

Well, the features are there: UnitOfWork, DataAccessAdapter, native transactions, TransactionScopes. I can't think of anything you cannot do with those. There are also many patterns out there you can refer to in order to decide on the best approach for you.

David Elizondo | LLBLGen Support Team