How to implement Global Transactions (per thread)?? Urgent

trevorg
User
Posts: 104
Joined: 15-Nov-2007
# Posted on: 24-Mar-2008 17:49:11   

Environment:

VB.Net 2005 (.Net 2.0 Platform) Oracle 10g LLBLGen 2.5 Final SelfServicing

I want to implement a global (per-thread) transactional architecture, so that when writing code I can simply surround transactional code with Global.BeginTrans(Me) and Global.CommitTrans(Me); EVERY database-centric line of code executed after BeginTrans will then participate in this global transaction (for the current thread).

The reasons for doing this: much simpler, easier-to-read code (no need to add every single transactional object to a TxnManager), and code that is less prone to bugs (no chance of forgetting to add an object to the TxnManager).

I implemented a class LLBTransactionManager that basically maintains an internal dictionary of transactions, one per thread. However, I don't know how to force LLBLGen to use these transactions globally when it is performing CRUD operations. Is this possible?

Note: I'd rather not get into using COM+ or System.Transactions for a variety of reasons....I feel this approach is far simpler, if it can be done.

So let me know if there is a way to get LLBLGen to run all objects on top of this global transaction manager, and if you see any potential pitfalls with this design that I have overlooked.

Perhaps if I post some code it will be more obvious what I am trying to achieve....(note, this is just my first pass at it, but should give you an idea of what I am trying to accomplish):

Namespace MyCompany.TestApp.Domain.Extensions

Public Class LLBTransactionManager

    ' Implements global transaction support, on a per-thread basis.
    ' (Transaction/TransactionBase come from SD.LLBLGen.Pro.ORMSupportClasses.)

    Private Class TransactionInstance
        Public Owner As Object
        Public Transaction As TransactionBase
        Public Count As Integer ' Reference count; resolves recursion and nested transactional methods owned by the same object.
    End Class

    Private _Transactions As System.Collections.Generic.Dictionary(Of Integer, TransactionInstance)
    Private _DefaultIsolationLevel As System.Data.IsolationLevel = System.Data.IsolationLevel.ReadCommitted

    Sub New()
        ' Instantiate our pool of per-thread transactions.
        _Transactions = New System.Collections.Generic.Dictionary(Of Integer, TransactionInstance)
    End Sub

    Public Sub BeginTrans(ByVal TransactionOwner As Object)
        ' ManagedThreadId is guaranteed unique per live thread; GetHashCode is not.
        Dim threadId As Integer = System.Threading.Thread.CurrentThread.ManagedThreadId
        SyncLock _Transactions ' the dictionary itself is shared across threads
            Dim txn As TransactionInstance = Nothing
            _Transactions.TryGetValue(threadId, txn)
            If txn Is Nothing Then
                Dim newTransaction As New TransactionInstance
                newTransaction.Owner = TransactionOwner
                newTransaction.Transaction = New Transaction(_DefaultIsolationLevel, TransactionOwner.ToString())
                newTransaction.Count = 1
                _Transactions.Add(threadId, newTransaction)
            ElseIf txn.Owner.Equals(TransactionOwner) Then
                txn.Count += 1
            Else
                ' Do nothing. We assume this is another object created by the original
                ' TransactionOwner, so we ignore its transaction requests entirely.
            End If
        End SyncLock
    End Sub

    Public Sub CommitTrans(ByVal TransactionOwner As Object)
        Dim threadId As Integer = System.Threading.Thread.CurrentThread.ManagedThreadId
        SyncLock _Transactions
            Dim txn As TransactionInstance = Nothing
            _Transactions.TryGetValue(threadId, txn)
            If txn Is Nothing Then Throw New System.InvalidOperationException("A transaction commit attempt was made while no transaction existed.") ' Should never happen; if it does, it's a bug in the calling code.
            If txn.Owner.Equals(TransactionOwner) Then
                txn.Count -= 1
                If txn.Count < 0 Then Throw New System.InvalidOperationException("A transaction commit attempt was made with no remaining transactions.") ' Should never happen; if it does, it's a bug in the calling code.
                If txn.Count = 0 Then
                    ' The committing object is the one that created the transaction, so it's OK to commit.
                    txn.Transaction.Commit()
                    _Transactions.Remove(threadId)
                End If
                ' Otherwise do nothing: this was a nested begin/commit by the TransactionOwner.
            End If
            ' Commits from objects other than the owner are ignored, mirroring BeginTrans.
        End SyncLock
    End Sub

    Public Sub RollbackTrans()
        Dim threadId As Integer = System.Threading.Thread.CurrentThread.ManagedThreadId
        SyncLock _Transactions
            Dim txn As TransactionInstance = Nothing
            _Transactions.TryGetValue(threadId, txn)
            If txn Is Nothing Then Throw New System.InvalidOperationException("A transaction rollback attempt was made while no transaction existed.") ' Should never happen; if it does, it's a bug in the calling code.
            txn.Transaction.Rollback()
            _Transactions.Remove(threadId)
        End SyncLock
    End Sub

    Public Function TransactionCount() As Integer
        SyncLock _Transactions
            Return _Transactions.Count
        End SyncLock
    End Function

    Public Function Status() As String
        Dim sb As New System.Text.StringBuilder
        SyncLock _Transactions
            sb.AppendFormat("There are currently {0} active transactions.{1}", _Transactions.Count, Environment.NewLine)
            For Each txn As TransactionInstance In _Transactions.Values
                sb.AppendFormat("   {0}   Count: {1}{2}", txn.Owner.GetType().ToString(), txn.Count, Environment.NewLine)
            Next
        End SyncLock
        Return sb.ToString()
    End Function

End Class

End Namespace
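The semantics the class above aims for (one logical transaction per thread, reference-counted by its owner, with begins by non-owners silently joining) can be sketched language-neutrally. Here is a minimal Python analogue with hypothetical names, showing only the bookkeeping, not any real database work:

```python
import threading

class ThreadTransactionManager:
    """One logical transaction per thread, reference-counted by its owner."""

    def __init__(self):
        self._lock = threading.Lock()
        self._transactions = {}  # thread id -> {"owner": obj, "count": int}

    def begin(self, owner):
        tid = threading.get_ident()
        with self._lock:
            txn = self._transactions.get(tid)
            if txn is None:
                # First begin on this thread: this owner controls the transaction.
                self._transactions[tid] = {"owner": owner, "count": 1}
            elif txn["owner"] is owner:
                txn["count"] += 1  # nested call by the same owner
            # begins by any other object are ignored: they join implicitly

    def commit(self, owner):
        tid = threading.get_ident()
        with self._lock:
            txn = self._transactions.get(tid)
            if txn is None:
                raise RuntimeError("commit without an active transaction")
            if txn["owner"] is not owner:
                return  # not the controlling owner: no-op
            txn["count"] -= 1
            if txn["count"] == 0:
                # Outermost commit: the real DB transaction would commit here.
                del self._transactions[tid]

    def active(self):
        with self._lock:
            return threading.get_ident() in self._transactions
```

Only the outermost commit by the owning object actually ends the transaction; nested and foreign begin/commit pairs are no-ops, which is what makes the composed-worker scenario later in the thread possible.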

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 25-Mar-2008 10:42:46   

Why would you reimplement something that's already there? I mean: if you start a transaction in code, THAT THREAD owns the transaction. Unless you share the 'Transaction' object among all threads, a different thread will create a different transaction object.

So if I do:

using(Transaction trans = new Transaction(...))
{
    // do stuff here.
}

every thread will have its own trans instance.

So I really fail to see why you would want to keep transactions per thread in some sort of global storage, because the thing you SHOULDN'T do is share transaction objects among threads. NEVER EVER do that. But, as creating a transaction in code already means it's local to the thread executing the code, you don't have to do a thing.

Frans Bouma | Lead developer LLBLGen Pro
trevorg
User
Posts: 104
Joined: 15-Nov-2007
# Posted on: 25-Mar-2008 16:21:52   

Ok, well now I am totally confused!!

From the help file: Normal native database transactions "LLBLGen Pro's native database transactions and also the COM+ transactions work the same for you: you create an instance of the transaction object with the type you want (COM+ or normal) and add the objects that should participate (use) that transaction to that transaction object. As of that moment the actions you perform on those objects are executed in the transaction of that transaction object."

Example:

// [C#]
// Create the transaction object, pass the isolation level and give it a name
Transaction transactionManager = new Transaction(IsolationLevel.ReadCommitted, "Test");

// create a new order and then 2 new order rows.
try
{

// create new order entity. Use data from the object 'customer'
OrderEntity newOrder = new OrderEntity();

// set the customer reference, which will sync FK-PK values.
// (newOrder.CustomerID = customer.CustomerID)
newOrder.Customer = customer;

newOrder.EmployeeID = 1;
newOrder.Freight = 10;
newOrder.OrderDate = DateTime.Now.AddDays(-3.0);
newOrder.RequiredDate = DateTime.Now.AddDays(3.0);
newOrder.ShipAddress = customer.Address;
newOrder.ShipCity = customer.City;
newOrder.ShipCountry = customer.Country;
newOrder.ShipName = "The Bounty";
newOrder.ShippedDate = DateTime.Now;
newOrder.ShipRegion = customer.Region;
newOrder.ShipVia = 1;
newOrder.ShipPostalCode = customer.PostalCode;

// add this new order to the transaction so actions will run inside the transaction
transactionManager.Add(newOrder);

// save the new order. When this fails, will throw exception which will terminate transaction.
newOrder.Save();

// Create new order row.
OrderDetailsEntity newOrderRow = new OrderDetailsEntity();
newOrderRow.OrderID = newOrder.OrderID; // will refetch order from persistent storage.
newOrderRow.Discount = 0;
newOrderRow.ProductID = 10;
newOrderRow.Quantity = 200;
newOrderRow.UnitPrice = 31;

// add this new orderrow to the transaction
transactionManager.Add(newOrderRow);

// save the new orderrow. When this fails, will throw exception which will terminate transaction.
newOrderRow.Save();

// done, commit the transaction
transactionManager.Commit();

So, in this example, every transactional object has to be explicitly added to the transaction:

transactionManager.Add(newOrder);
transactionManager.Add(newOrderRow);

The design I was describing wouldn't require adding all these objects to the transaction; it would happen automatically. (I think in your reply you were maybe focusing on my reference to threads; that part of the post is inconsequential. The most important point of my post is the global transaction: while one exists, EVERY database interaction participates in that transaction.)

But maybe I am confused: is the example code in the help file not correct? Is the explicit assigning of transactions not actually required?

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 25-Mar-2008 17:31:34   

trevorg wrote:

Ok, well now I am totally confused!!

From the help file: Normal native database transactions "LLBLGen Pro's native database transactions and also the COM+ transactions work the same for you: you create an instance of the transaction object with the type you want (COM+ or normal) and add the objects that should participate (use) that transaction to that transaction object. As of that moment the actions you perform on those objects are executed in the transaction of that transaction object."

I think it's pretty clear: create a transaction object, add the entities / collections to that transaction object and they're controlled by that transaction object.


The design I was describing wouldn't require adding all these objects to the transaction, it would happen automatically.

I don't see how that would be the case. It's not as if a new entity is all of a sudden added to a transaction object; you have to add code to make that happen.

(I think in your reply you were maybe focusing on my reference to threads; that part of the post is inconsequential. The most important point of my post is the global transaction: while one exists, EVERY database interaction participates in that transaction.)

One transaction for the whole application? Why would you want to have 1 transaction for the complete application?

You should isolate units of work to perform, e.g. save a series of entities. For that set of work, you create a transaction, add the entities and save the entities. You then commit the transaction. Done.

But maybe I am confused: is the example code in the help file not correct? Is the explicit assigning of transactions not actually required?

You always have to add the entity to a transaction IF you want to have the actions on that entity (e.g. calling Save, Delete etc. on that entity object) take part in the transaction represented by the transaction object. You can't avoid that. It's key that you follow that design.

It's not possible to break away from that, as the actions are called on the collections/entities, and those objects therefore have to know which transaction to use. So the transaction object has to be KNOWN inside the entity. This is also important for rollbacks: if an entity is saved, its FK fields get synced with a related entity's PK fields, or its PK field(s) get set through sequences, and the transaction then rolls back, those field values have to be restored. That's controlled by the transaction object.
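That rollback point can be made concrete. Below is a plain-Python sketch (hypothetical names, not LLBLGen's actual implementation) of why the transaction object must know each entity: it keeps a before-image of the entity's fields so an aborted transaction can restore in-memory values, such as a PK fetched from a sequence during a save:

```python
class Entity:
    """Toy entity: just a bag of fields."""
    def __init__(self, **fields):
        self.fields = dict(fields)

class Transaction:
    """On add, snapshot the entity's fields; on rollback, restore them."""
    def __init__(self):
        self._snapshots = []  # (entity, before-image of its fields)

    def add(self, entity):
        self._snapshots.append((entity, dict(entity.fields)))

    def commit(self):
        self._snapshots.clear()  # changes stand; drop the before-images

    def rollback(self):
        # Undo in-memory changes made during the transaction, such as
        # PK values obtained from a sequence while saving.
        for entity, before in self._snapshots:
            entity.fields = dict(before)
        self._snapshots.clear()
```

Without the explicit add, the transaction has no before-image to restore, which is one reason the entity-to-transaction wiring cannot simply be skipped.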

You can have multiple objects in a single transaction, or decide to do things in multiple transactions; therefore you have to add each one manually, as there's no other way to tie an entity to a transaction. Doing it automatically using [ThreadStatic] attributes is 'ok' but doesn't work well in all scenarios (like some ASP.NET scenarios).

Frans Bouma | Lead developer LLBLGen Pro
trevorg
User
Posts: 104
Joined: 15-Nov-2007
# Posted on: 25-Mar-2008 22:33:02   

Hi Frans,

You're still somewhat missing my point (and I don't mean that expression in the rude way!):

What I am trying to achieve is the difference between this:

Dim trans As New Transaction(IsolationLevel.ReadCommitted, "whatever")
Dim T1 As New Table1Entity
trans.Add(T1)
T1.Name = "Steve"
T1.Save()

Dim T2 As New Table2Entity
trans.Add(T2)
T2.Name = "Company ABC"
T2.Save()

Dim T1Collection As New Table1Collection
trans.Add(T1Collection)
T1Collection.GetMulti(Nothing)
Dim T2Collection As New Table2Collection
trans.Add(T2Collection)
T2Collection.GetMulti(Nothing)

For Each tempT1 As Table1Entity In T1Collection
    trans.Add(tempT1) ' (I'm not sure if this is necessary, likely not)
    tempT1.TimeStamp = DateTime.Now
    tempT1.Save()
Next

' etc etc

trans.Commit()

And this:

AppCore.TransactionManager.BeginTrans(Me)
Dim T1 As New Table1Entity
T1.Name = "Steve"
T1.Save()

Dim T2 As New Table2Entity
T2.Name = "Company ABC"
T2.Save()

Dim T1Collection As New Table1Collection
T1Collection.GetMulti(Nothing)
Dim T2Collection As New Table2Collection
T2Collection.GetMulti(Nothing)

For Each tempT1 As Table1Entity In T1Collection
    tempT1.TimeStamp = DateTime.Now
    tempT1.Save()
Next

' etc etc

AppCore.TransactionManager.CommitTrans(Me)

So (ignoring that the above example is quite silly), what I am trying to avoid is having to explicitly attach every single database-related entity or collection to a transaction object. Instead, I simply set the global transaction object at the start of the process; then, in the innards of the runtime libraries, presumably when the connection reference is obtained, a check could be made (somehow) for whether a global transaction had been set, and if so it would be used (unless the calling object had a transaction assigned to it explicitly, in which case that one would be used instead).

To appreciate the usefulness of this, you have to imagine the above example multiplied many times over. If you are dealing with only 2 or 3 database objects in a function, it is a minor difference, but in complex functions with over 20 database objects it is much more significant. Furthermore, if a person forgets to explicitly assign an object to the transaction object, data intended to be rolled back will be written to the database, and this is a very easy coding mistake to make.

Of course I realize there are more complex transactional situations where this design would not be sufficient, so it would not be a desirable design for the foundation of LLBLGen's transactional architecture. But for the majority of code in most projects, it provides sufficient transactional support, is simpler to code, and is less prone to transaction-related bugs.

But then the next question is: how would this be implemented? I have searched everywhere in the forums trying to figure out a way to do this, but no luck. I was hoping I could handle/override an event somewhere when a database connection is established and substitute my global transaction at that point, but I haven't found where to do this. Or something similar to Microsoft.Practices.EnterpriseLibrary.Data.Configuration.IDatabaseAssembler.Assemble, if you are familiar with that.
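For what it's worth, the "check for an ambient transaction when the connection is obtained" idea can be sketched with thread-local storage. Everything here is hypothetical (the names, the dict-shaped transaction, the `open_new` factory); LLBLGen's SelfServicing runtime does not expose such a hook, which is exactly the question at hand:

```python
import threading

_ambient = threading.local()  # one "global transaction" slot per thread

def begin_global(txn):
    _ambient.txn = txn

def end_global():
    _ambient.txn = None

def acquire_connection(open_new):
    """What a low-level data-access helper would do: reuse the ambient
    transaction's connection when one is set, else open a fresh one."""
    txn = getattr(_ambient, "txn", None)
    if txn is not None:
        return txn["connection"]
    return open_new()
```

This is the same shape as the EnterpriseLibrary wrapper described above: a single low-level choke point where every operation picks up the ambient transaction if one exists.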

So I hope you can understand what I am getting at?

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 26-Mar-2008 10:20:02   

I do understand and did understand your point, though I don't think it's useful. The problem is that a transaction has a start and an end: when it's created, a connection is opened and a transaction is started. All actions on objects added to that transaction keep rows locked in the DB (due to the RDBMS locking mechanism) till the transaction is committed or rolled back. Until that happens, the actions done on entities aren't finalized.

That's why it's IMHO not useful to have, because if you start a transaction 'somewhere' in the code, you really want to have that transaction committed as soon as the work is done, but that's only known in the routine which does the work, and when the transaction is committed, the instance isn't useful anymore.

You want to 'add' the entities participating in actions automatically to the transaction object alive on that thread. But when should this be done? Right before 'Save' ? Or when the entity is instantiated?

If you want to avoid forgetting to add an entity to the transaction, why not use a UnitOfWork object? Simply add all the work to the UnitOfWork object and commit it in one go, passing a transaction object. It seems what you want is implemented in that.

I could point you to ways to override Save() in an entity class, use the [ThreadStatic] attribute to have a static variable be local to a thread, and add the entity to that transaction instance in your Save override, but I think it will be problematic in the long run as you don't define a start/end for the transaction, nor when what happens.

So I want to point you to the unitofwork object instead.

so your code becomes:

Dim uow As New UnitOfWork()
Dim T1 As New Table1Entity
T1.Name = "Steve"
uow.AddForSave(T1)

Dim T2 As New Table2Entity
T2.Name = "Company ABC"
uow.AddForSave(T2)

Dim T1Collection As New Table1Collection
T1Collection.GetMulti(Nothing)
Dim T2Collection As New Table2Collection
T2Collection.GetMulti(Nothing)

For Each tempT1 As Table1Entity In T1Collection
    tempT1.TimeStamp = DateTime.Now
Next
uow.AddForSave(T1Collection)

' commit
uow.Commit(New Transaction(IsolationLevel.ReadCommitted, "UOW"), True)

No worries about dangling transaction objects, connections which stay open too long etc. etc., everything is encapsulated and you can't forget adding it to the transaction.
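The unit-of-work pattern recommended here reduces to a small sketch (Python, with a fake transaction standing in for LLBLGen's Transaction class; all names are illustrative): work is collected first, then persisted in one place, so a forgotten entity simply never makes it into the batch rather than silently escaping the transaction:

```python
class FakeTransaction:
    """Stand-in for a real DB transaction; records what happens to it."""
    def __init__(self):
        self.log = []

    def save(self, entity):
        self.log.append(f"save:{entity}")

    def commit(self):
        self.log.append("commit")

class UnitOfWork:
    """Collect entities to persist, then save them all inside one transaction."""
    def __init__(self):
        self._for_save = []

    def add_for_save(self, entity):
        self._for_save.append(entity)

    def commit(self, txn):
        # All saves happen here, bracketed by one transaction lifetime.
        for entity in self._for_save:
            txn.save(entity)
        txn.commit()
```

Because the transaction is created and committed inside a single `commit` call, its lifetime is short and explicit, which is the maintenance argument being made in this post.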

Frans Bouma | Lead developer LLBLGen Pro
trevorg
User
Posts: 104
Joined: 15-Nov-2007
# Posted on: 26-Mar-2008 17:10:33   

Frans, Thanks for the reply. And as always, when reading my replies, please don't ever interpret anything I say as disrespectful....it is sometimes too easy to sound that way when writing. simple_smile

"I do understand and did understand your point, though I don't think it's useful."

It is definitely useful. Fewer lines of code is useful. Less complexity is useful. It is useful/valuable to the degree that it reduces the number of lines of code the developer must write, as well as the complexity and clutter (the object/transaction wiring code) and the chance for error. There are also the additional ease-of-coding benefits below (when transactional methods are called in sequence).

" The problem is that a transaction has a start and end: when it's created a connection is opened and a transaction is started. All actions on objects added to that transaction are keeping rows locked in the db (due to the RDBMS locking mechanism) till the transaction is committed or rolled back. Until that happens, the actions done on entities aren't finalized."

Agreed, and my design is no different whatsoever in that respect. As always, you have to pay special attention to efficiency when coding within transactions.

"If you want to avoid missing adding an entity to the transaction, why not use a unitofwork object? Simply add all the work to the unitofwork object and commit it in one way, passing a transcation object. It seems what you want is implemented in that. "

I had looked at UnitOfWork, but it wasn't really what I was looking for. And with that, you have yet another programming paradigm: now you have not only the concept of PersistableEntity.Save but also UnitOfWork.AddForSave. And I would assert that, just like forgetting to add an entity to a transaction object, it is just as easy for a developer to accidentally code .Save on an entity rather than remember to add it to the UnitOfWork object. A very easy mistake to make, and potentially a very hard one to detect. Again, using global transactions, all of these problems disappear. (Always keep in mind that some developers working on a project are quite fresh, so minimizing the possibilities for error is valuable.) Also, I don't think UnitOfWork supports the scenario described below, where transactional methods are possibly (but not necessarily) called sequentially.

One benefit of this design is that worker classes can have transactional methods which will either participate in the existing transaction or create their own. They also have access to any data that has been written but not committed within the existing global transaction. Using the standard transactional architecture in LLBLGen, for any objects of this type I would have to pass a transaction object around between objects and method calls, and within a transactional method I would always have to check whether a transaction is present, create a new one if not, and commit it at the end of the method (but only if the method actually created the transaction; otherwise it would leave it alone). With the design I am describing, none of this complexity is an issue: as usual, all you have to do is put Global.BeginTransaction and CommitTransaction around the transactional part of your code and you're done; everything else is taken care of for you. (From an end-user programmer's perspective, I think it is very similar to using COM+ transactions, where a method can simply be marked as requiring a transaction.)

Example:

Sub x()

    Global.StartTransaction(Me)

    ' Each of the following classes does a variety of transactional database work.
    ' Each of these worker classes may be called individually, or in sequence (as below), in
    ' any arbitrary order. In each case, the work they perform must be executed within
    ' a transaction. If they are called individually, they must begin a new transaction
    ' and commit it when their work is complete. If they are called in sequence (as below)
    ' they must not begin a new transaction. And in this case, each one must be able to
    ' read the data written by the others within the sequence.

    Dim a1 As New Action1Performer
    Dim a2 As New Action2Performer
    Dim a3 As New Action3Performer
    a1.DoStuff()
    a2.DoStuff()
    a3.DoStuff()

    Global.CommitTransaction(Me)
End Sub

Public Class Action1Performer
    Public Sub DoStuff()
        Global.StartTransaction(Me)
        ' In here is a bunch of diverse database reads, writes etc.
        ' Access to data written by other classes within the current transaction is required!
        Global.CommitTransaction(Me) ' NOTE: this only performs a commit if the StartTransaction
        ' within this method was the call that actually created the global transaction.
    End Sub
End Class

"I could point you to ways how to override Save() in an entity class, use the [ThreadStatic] attribute to have a static variable be local to a thread and add the entity to that transaction instance in yuor Save override, but I think it will be problematic in the long run as you don't define a start/end for the transaction and when what happens. "

I wanted to avoid implementing this by overriding Save(), because there are too many database-centric methods at that level of the infrastructure that would require overriding, and I would undoubtedly miss some. Ideally, I was hoping there would be a very low-level place in the LLBLGen infrastructure where the global transaction could be assigned. For example, in the MS Enterprise Library Data block, most everything is implemented via Command objects that can have a transaction assigned to them, so in previous applications I have simply used a very thin wrapper around Microsoft.Practices.EnterpriseLibrary.Data.Database for ExecuteDataSet, ExecuteReader, ExecuteNonQuery, etc. that would detect and use any present global transaction, and it worked very nicely. And there is definitely a start and end to the transaction: the end occurs when the final CommitTrans in the call stack is called.

So, I hope I've managed to convince you somewhat that there is value in this design. There is some, and I guess it may be a matter of personal opinion as to how much value there is.

But the question is: is there a relatively simple way to implement this in the LLBLGen architecture? It was very easy for me to implement on top of the MS Enterprise Data Library, but perhaps that was just luck in that their chosen architecture was agreeable to this sort of thing. I started to drill into the LLBLGen source to see if I could discover anything, but got lost pretty fast. So I was hoping that, if you understand what I am trying to do (whether or not you think it is a good idea), you would know whether it is even possible to wire this functionality into the LLBLGen architecture globally; at a very low level, I would think.

Thanks, I really appreciate your discussion on this!

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 27-Mar-2008 11:14:29   

trevorg wrote:

Frans, Thanks for the reply. And as always, when reading my replies, please don't ever interpret anything I say as disrespectful....it is sometimes too easy to sound that way when writing. simple_smile

No problem there.

You just have to take my word for it that what you want isn't really the easiest approach in terms of maintenance. You will run into problems, probably obscure ones, and I don't think it's worth the effort. Consider dangling inserts, rollbacks of work which was saved some time ago, or user intervention during transaction execution. You really don't want that: you want tight control over when what happens; only then are things controllable and thus maintainable.

Please next time use quote tags wink

I do understand and did understand your point, though I don't think it's useful.

It is definitely useful. Fewer lines of code is useful. Less complexity is useful.

I strongly disagree that what you propose is less complex. Having work rolled back out of the blue isn't something one can debug easily, for example.

It is useful/valuable to the degree that it reduces the number of lines of code the developer must write, as well as the complexity and clutter (the object/transaction wiring code) and the chance for error. There are also the additional ease-of-coding benefits below (when transactional methods are called in sequence).

Adding an entity for save to a UnitOfWork vs. calling Save: I don't see where the former is more complex, especially because it's proven, solid, easy-to-understand code. There's no magic, there's no hidden secret transaction somewhere, and you don't have to babysit the transaction so it commits on time. Your proposal requires you to make sure the transaction commits at the point you want it to commit.

If you want to avoid forgetting to add an entity to the transaction, why not use a UnitOfWork object? Simply add all the work to the UnitOfWork object and commit it in one go, passing a transaction object. It seems what you want is implemented in that.

I had looked at UnitOfWork, but it wasn't really what I was looking for. And with that, you have yet another programming paradigm: now you have not only the concept of PersistableEntity.Save but also UnitOfWork.AddForSave. And I would assert that, just like forgetting to add an entity to a transaction object, it is just as easy for a developer to accidentally code .Save on an entity rather than remember to add it to the UnitOfWork object. A very easy mistake to make, and potentially a very hard one to detect. Again, using global transactions, all of these problems disappear. (Always keep in mind that some developers working on a project are quite fresh, so minimizing the possibilities for error is valuable.) Also, I don't think UnitOfWork supports the scenario described below, where transactional methods are possibly (but not necessarily) called sequentially.

Well, trust me: if you do it your way, you're in for a lot of 'fun' as well. The downside is that once the errors pop up, you'll be posting here asking why data is disappearing and why you get weird errors inside our framework, and it will take a lot of time for both of us to track that down.

If you want to save something inside a transaction, you have to add it to the transaction. If you want to avoid that, use a unit of work. If you hate that too, use adapter.

One benefit of this design is that worker classes can have transactional methods which will either participate in the existing transaction or create their own. They also have access to any data that has been written but not committed within the existing global transaction. Using the standard transactional architecture in LLBLGen, for any objects of this type I would have to pass a transaction object around between objects and method calls, and within a transactional method I would always have to check whether a transaction is present, create a new one if not, and commit it at the end of the method (but only if the method actually created the transaction; otherwise it would leave it alone). With the design I am describing, none of this complexity is an issue: as usual, all you have to do is put Global.BeginTransaction and CommitTransaction around the transactional part of your code and you're done; everything else is taken care of for you. (From an end-user programmer's perspective, I think it is very similar to using COM+ transactions, where a method can simply be marked as requiring a transaction.)

And how are you going to check for this global transaction in your function? Via a property check on a singleton or something? How is that going to work in a multi-threaded (ASP.NET, for example) environment?

Writing code which uses transactions requires attention, agreed, but that's logical: if you screw up and write code which uses transactions badly, you'll run into a lot of issues which you'll probably only discover at runtime in production, when strange deadlocks appear or work disappears because a transaction is rolled back without notice.

I could point you to ways to override Save() in an entity class, use the [ThreadStatic] attribute to make a static variable local to a thread, and add the entity to that transaction instance in your Save override, but I think it will be problematic in the long run, as you don't define a start/end for the transaction or when things happen.
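The [ThreadStatic] approach mentioned here would look roughly like this; it is a sketch of the idea (explicitly not recommended by Frans), with names of my own choosing, and the Save override would go in a derived/partial entity class:

```vbnet
' Per-thread transaction holder; [ThreadStatic] gives each thread its own
' copy of the Shared field, so threads don't share one transaction.
Public Class GlobalTx
    <ThreadStatic()> Private Shared _current As Transaction

    Public Shared Property Current() As Transaction
        Get
            Return _current
        End Get
        Set(ByVal value As Transaction)
            _current = value
        End Set
    End Property
End Class

' In a derived entity class: enlist in the ambient transaction, if any,
' before delegating to the normal Save logic.
Public Overrides Function Save() As Boolean
    If GlobalTx.Current IsNot Nothing Then
        GlobalTx.Current.Add(Me)
    End If
    Return MyBase.Save()
End Function
```

Note that [ThreadStatic] fields are not initialized per thread and do not flow across thread-pool reuse, which is part of why this is fragile in server environments.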

I wanted to avoid going the route of implementing this by overriding Save(), because there are too many database-centric methods at that level of the infrastructure that would require overriding, and I would undoubtedly miss some. Ideally, I was hoping there would be a very low-level place in the LLBLGen infrastructure where the global transaction could be assigned. For example, in the Microsoft Enterprise Library's Data Access block, almost everything is implemented via command objects that can have a transaction assigned to them, so in previous applications I simply used a very thin wrapper around Microsoft.Practices.EnterpriseLibrary.Data.Database for ExecuteDataset, ExecuteReader, ExecuteNonQuery, etc. that would detect and use any present global transaction, and it worked very nicely.
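The thin Enterprise Library wrapper being described would be along these lines; the transaction-overload method shapes come from Microsoft.Practices.EnterpriseLibrary.Data, while GetCurrentGlobalTransaction is the hypothetical per-thread lookup:

```vbnet
' Sketch of a wrapper that routes all commands through an ambient
' transaction when one exists for the current thread.
Public Function ExecuteNonQuery(ByVal db As Database, _
                                ByVal cmd As DbCommand) As Integer
    Dim tx As DbTransaction = GetCurrentGlobalTransaction() ' hypothetical lookup
    If tx IsNot Nothing Then
        ' Enterprise Library exposes overloads taking an explicit transaction.
        Return db.ExecuteNonQuery(cmd, tx)
    Else
        Return db.ExecuteNonQuery(cmd)
    End If
End Function
```

This works in Enterprise Library because every database call funnels through one Database facade; LLBLGen's SelfServicing entities don't offer an equivalent single choke point, which is the problem being raised.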

How can a global transaction work at all... what if there are multiple users/threads in the system? You might think this is a BL layer for a single-user desktop app NOW, but what if 1 year from now you move to another project and some team takes over and has to write a web frontend for it, and decides to use your BL tier: BOOM, everything dies with strange errors at runtime. What will happen then? 10 to 1 they'll ask a question here which we can't answer as the symptoms are so weird.

I'll warn you up front, if I see these symptoms caused by this code they're on their own.

And there is definitely a Start and End of the Transaction. The end occurs when the final CommitTrans in the call stack is called.

So, I hope I've managed to convince you somewhat that there is value in this design. There is some, and I guess it may be a matter of personal opinion as to how much value there is.

I strongly advise you to avoid this. It's not solving the problem you have; it creates problems. Perhaps not now, but it will in the long run. The problem you actually have is the education level of your developers. Developers have to realize at every step that working with transactions requires their attention: do they understand what ACID means? Do they know that any user intervention during a transaction is a showstopper? Etc. If you think a developer isn't up to the task, don't let him/her write transaction code, or abstract the persistence away. We created Adapter for this reason: people can write code without having to worry about persistence, because it is abstracted away in a layer where the adapter is known (but nowhere else, so people have to use that layer to get things done, not their own code).

If you really want this:
- Include a template in the DERIVED class of an entity (so not the base class but the derived class) and override CreateTransaction.
- Instead of calling the base, obtain the global transaction and return that.
- Derive a class from Transaction and make it use soft commits, so a commit isn't a real commit until it's the last one.
- Change the DbUtils template and let it use your transaction object instead of the normal one.
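The soft-commit part of the recipe above could be sketched as a depth-counting subclass; all names here are illustrative, and whether Commit() is actually overridable in your runtime version needs checking against the LLBLGen Pro reference:

```vbnet
' Sketch only: a Transaction whose Commit is a no-op until the outermost
' Commit in the nesting is reached. Assumes MyBase.Commit() is overridable.
Public Class SoftCommitTransaction
    Inherits Transaction

    Private _depth As Integer = 0

    Public Sub New(ByVal isolation As IsolationLevel, ByVal name As String)
        MyBase.New(isolation, name)
    End Sub

    ' Called by each nested Global.BeginTrans.
    Public Sub Enter()
        _depth += 1
    End Sub

    Public Overrides Sub Commit()
        _depth -= 1
        If _depth <= 0 Then
            MyBase.Commit()  ' only the last Commit really commits
        End If
    End Sub
End Class
```

This matches the "the end occurs when the final CommitTrans in the call stack is called" behavior described earlier in the thread, but note it offers no corresponding handling for rollbacks part-way down the stack.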

but I really see no point in this.

Frans Bouma | Lead developer LLBLGen Pro
trevorg
User
Posts: 104
Joined: 15-Nov-2007
# Posted on: 29-Mar-2008 00:12:21   

Hey Frans,

I had a few items to discuss, but unfortunately, I am off on holidays for 3 weeks! rage

Thanks for your help on this, we will be going with your route on this project, and I will continue doing some thinking!

Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 29-Mar-2008 09:26:12   

trevorg wrote:

Hey Frans,

I had a few items to discuss, but unfortunately, I am off on holidays for 3 weeks! rage

haha smile Cheer up, you're going on holiday! wink

Thanks for your help on this, we will be going with your route on this project, and I will continue doing some thinking!

Ok simple_smile I'll close the thread for now; if you want to follow up on this, just post a message in this thread and it reopens.

Frans Bouma | Lead developer LLBLGen Pro