Transaction Isolation Level - Please advise

Posts

karsam
User
Posts: 5
Joined: 09-Aug-2007
# Posted on: 09-Aug-2007 16:51:35   

Hi. In our financial application we need to perform updates to multiple tables within a transaction. These updates will be performed by multiple users concurrently. Here is sample code:

// assumes: using System; using System.Data; using SD.LLBLGen.Pro.ORMSupportClasses;
// plus the generated entity/helper namespaces
class Program
{
    static void Main(string[] args)
    {

        // [C#]
        // create adapter for fetching and the transaction. 
        DataAccessAdapter adapter = new DataAccessAdapter();
        // start the transaction.
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "TwoDebug");

        try
        {
            for(int i=0; i < 2; i++)
            {
                // fetch the two entities
                CustomerEntity customer = new CustomerEntity(1);
                ContactEntity contact = new ContactEntity(1);
                adapter.FetchEntity(customer);
                adapter.FetchEntity(contact);

                customer.TerritoryId += 1;
                customer.ModifiedDate = DateTime.Now;
                contact.FirstName = "Console";

                adapter.SaveEntity(customer);
                adapter.SaveEntity(contact);

            }
            adapter.Commit();
        }
        catch(ORMConcurrencyException ex)
        {
            // abort, roll back the transaction
            Console.WriteLine(ex.ToString());
            adapter.Rollback();
            // bubble up exception
            throw;
        }
        catch (Exception ex)
        {
            // abort, roll back the transaction
            Console.WriteLine(ex.ToString());
            adapter.Rollback();
            // bubble up exception
            throw;
        }
        finally
        {
            // clean up. Necessary action.
            adapter.Dispose();
        }

    }
}

As expected, when I run one instance of the above code (simulating one user), everything works fine: the second iteration sees the customer.TerritoryId value updated by the first iteration.

But when I execute two instances of the above code (simulating concurrent users), I get data integrity issues. I simulated the following sequence of execution by using message boxes:
- Start first instance
- Fetch Customer
- Suspend execution
- Start second instance
- Run through to completion
- Resume first instance
- This instance still has the old values (as expected)

When I used IsolationLevel.RepeatableRead, I ran into deadlock issues. All these behaviors are as expected, but I feel my approach to this requirement is wrong. There has to be a way to address this, as most business applications have similar requirements.

Please advise; this is time-critical and important for us.

Thanks in advance,

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39613
Joined: 17-Aug-2003
# Posted on: 09-Aug-2007 17:36:54   

It's key to know which database you're targeting. (And I presume you're using v2.0?)

Frans Bouma | Lead developer LLBLGen Pro
karsam
User
Posts: 5
Joined: 09-Aug-2007
# Posted on: 09-Aug-2007 17:40:49   

Sorry, for not giving this info earlier:

LLBLGen version: Pro 2.0.0.0 Final
Database: SQL Server 2005

Thanks

jeffreygg
User
Posts: 805
Joined: 26-Oct-2003
# Posted on: 09-Aug-2007 18:16:15   

Not sure if this is your problem, but for writes anything but Serializable will give you inconsistency...

Jeff...

karsam
User
Posts: 5
Joined: 09-Aug-2007
# Posted on: 09-Aug-2007 19:59:24   

Jeff, thanks for your response. After changing my IsolationLevel to Serializable, I am running into deadlocks.

Thanks

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39613
Joined: 17-Aug-2003
# Posted on: 09-Aug-2007 21:05:49   

A transaction places locks on all rows touched over the connection the transaction runs on. So in the code above: you start a transaction, then you read two rows, which places read locks on them. You then save the rows, which places write locks on them. Then the transaction commits and the locks are released.

If you read the rows first, then save one entity, then suspend the transaction, you will likely run into problems, as the other thread can't write the data: it will wait for the read locks to be lifted and for the write lock on the saved row to be released. As the first thread is suspended, everything deadlocks.

Rule of thumb:
- read data first
- then start the transaction
- only WRITE data during the transaction
- commit early

Do not fetch data during a transaction unless the transaction is very quick, and even then it's not recommended. If you need data to perform changes on, fetch it FIRST, perform the changes, THEN write the data.

Repeatable read, like serializable, is an isolation level which holds read locks until the end of the transaction. ReadCommitted doesn't do this; it lifts read locks right away. So the reads inside the transaction can be the problem.


// create adapter for fetching and the transaction.
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    for(int i=0; i < 2; i++)
    {
        // fetch the two entities
        CustomerEntity customer = new CustomerEntity(1);
        ContactEntity contact = new ContactEntity(1);
        adapter.FetchEntity(customer);
        adapter.FetchEntity(contact);

        // modify the entities (still outside the transaction).
        customer.TerritoryId += 1;
        customer.ModifiedDate = DateTime.Now;
        contact.FirstName = "Console";
        try
        {
            adapter.StartTransaction(IsolationLevel.ReadCommitted, "TwoDebug");
            adapter.SaveEntity(customer);
            adapter.SaveEntity(contact);
            adapter.Commit();
        }
        catch
        {
            adapter.Rollback();
            throw;
        }
    }
}

Btw, you HAVE TO use two separate threads to test multi-user. You can't use a loop.
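
For instance, a minimal sketch of simulating two users within one process (Worker is a hypothetical method wrapping the fetch/modify/save code above):

using System.Threading;

class ConcurrencyTest
{
    static void Main()
    {
        // two threads simulate two concurrent users
        Thread userA = new Thread(Worker);
        Thread userB = new Thread(Worker);
        userA.Start();
        userB.Start();
        userA.Join();
        userB.Join();
    }

    static void Worker()
    {
        // each simulated user needs its OWN DataAccessAdapter instance;
        // adapters and their transactions aren't shareable across threads.
        // ... fetch, modify and save entities here ...
    }
}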

If there's discussion about whether the reads should be INSIDE the transaction: all data read out of the db is by definition stale the moment it leaves the db, so reading outside the transaction is perfectly acceptable. You can try SQL Server's snapshot isolation level, but I think that's also not really a fix. If you really must follow the path you outlined and read inside a transaction, use Oracle.
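
If you do want to experiment with snapshot isolation, a sketch (assumes the database was prepared once with ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON; names illustrative):

using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    // System.Data.IsolationLevel.Snapshot (available since .NET 2.0)
    adapter.StartTransaction(IsolationLevel.Snapshot, "SnapshotTest");
    // ... reads now see a consistent snapshot and don't block writers ...
    adapter.Commit();
}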

Frans Bouma | Lead developer LLBLGen Pro
arschr
User
Posts: 893
Joined: 14-Dec-2003
# Posted on: 09-Aug-2007 21:08:02   

How do you want this to work in the face of two users or processes performing conflicting updates to the same data?

A typical approach is not to start the transaction on the reads, but to include concurrency-control fields such as timestamps.

Then the transaction is started only on the save, and a conflict is identified at that point if one has happened.
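
With LLBLGen, such a check can be expressed through IConcurrencyPredicateFactory. A sketch, assuming a ModifiedDate column serves as the stamp (names illustrative):

[Serializable]
public class CustomerConcurrencyFactory : IConcurrencyPredicateFactory
{
    public IPredicateExpression CreatePredicate(
        ConcurrencyPredicateType predicateTypeToCreate, object containingEntity)
    {
        IPredicateExpression toReturn = new PredicateExpression();
        CustomerEntity customer = (CustomerEntity)containingEntity;
        switch(predicateTypeToCreate)
        {
            case ConcurrencyPredicateType.Save:
                // update only if the row still has the ModifiedDate we fetched
                toReturn.Add(CustomerFields.ModifiedDate ==
                    customer.Fields[(int)CustomerFieldIndex.ModifiedDate].DbValue);
                break;
        }
        return toReturn;
    }
}

// usage: customer.ConcurrencyPredicateFactoryToUse = new CustomerConcurrencyFactory();
// a save that matches 0 rows then raises an ORMConcurrencyException.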

Rogelio
User
Posts: 221
Joined: 29-Mar-2005
# Posted on: 10-Aug-2007 04:43:27   

karsam wrote:

Hi. In our financial application we need to perform updates to multiple tables within a transaction. These updates will be performed by multiple users concurrently. (snip: sample code and test description quoted from the first post)

This is what I actually do, using ReadCommitted transactions:

Dim exp As ORM.IExpression

exp = New ORM.Expression(EntityFieldFactory.Create(CustomerFieldIndex.TerritoryId), ExOp.Add, 1)

customer.Fields(CustomerFieldIndex.TerritoryId).ExpressionToApply = exp
customer.Fields.IsDirty = True

This way I am always sure that TerritoryId = Last DB Value + 1
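
For comparison, the same idea in C# (a sketch; the generated CustomerFields helper and field index are assumed from the adapter preset):

// C# equivalent of the snippet above: the increment is applied as an
// expression inside the UPDATE statement, so it is evaluated in the database.
IExpression exp = new Expression(CustomerFields.TerritoryId, ExOp.Add, 1);
customer.Fields[(int)CustomerFieldIndex.TerritoryId].ExpressionToApply = exp;
customer.Fields[(int)CustomerFieldIndex.TerritoryId].IsChanged = true;
customer.Fields.IsDirty = true;
// SaveEntity now emits: SET [TerritoryId] = [TerritoryId] + 1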

Hope this helps.

karsam
User
Posts: 5
Joined: 09-Aug-2007
# Posted on: 10-Aug-2007 07:12:35   

Hi, our requirement has been misunderstood; maybe my bad, I might not have clarified it well enough.

We have a Document Header and Document Detail Lines with a one-to-many relation. We have to process one document line at a time; if any one of them fails, all the changes have to be rolled back.

Pseudo Code:

// I am simulating multiple users by executing two instances of the following code
class Program
{
    static void Main(string[] args)
    {
        Create Adapter;
        Read Document Header;

        Start Transaction;
        try
        {
            // The following for loop processes all the document lines.
            // Every line must go through the posting process below; if any
            // one of them fails the balance edit, all changes done to the
            // summary table have to be rolled back. This means I have to
            // start the transaction before the loop and commit only after
            // all the document lines are posted. Note that every iteration
            // may hit the same row in the summary table (as we might be
            // posting to the same grant).

            for(int i=0; i < NoOfDocumentDetailLines; i++)
            {
                Read data from Summary Table; // e.g. the Grant row
                Read the current balances; //for the document line's Grant
                Add the current document line's amount to the above balances;
                // Balance edit
                Apply some business rule to find if the Grant has enough money;
                if(there is enough money in the Grant)
                {
                    Update balances in summary table with the current document line's amount;
                    Save data to Summary Table;
                }

            }

            Commit Transaction;

        }
        catch (Exception)
        {
            adapter.Rollback();
            throw;
        }
    }

}

Thanks for your help,

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39613
Joined: 17-Aug-2003
# Posted on: 10-Aug-2007 10:26:18   

You won't be able to pull this off the way you want to. The reason is that your code acts as if it's alone on the planet, but it's not: the data it operates on isn't data it owns, it's data that is shared.

This means that if you want to process data in an atomic fashion, you need to read the data to process first, then process it, then persist the results back using concurrency rules (like: update only if field X still has the value you fetched), and otherwise fail.

Otherwise, you have to lock all the rows entirely for yourself; this is effectively what the RepeatableRead or Serializable isolation levels do. The thing is though: if the thread gets suspended, NOTHING can proceed until that thread continues. In general this will work, because a thread isn't suspended till doomsday, but it will hurt performance.
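
Put into code, the recommended shape looks roughly like this (a sketch; GrantSummaryEntity, grantId and amountToPost are illustrative placeholders for your model):

static void PostAmount(int grantId, decimal amountToPost)
{
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        // 1. read the data to process, no transaction yet
        GrantSummaryEntity summary = new GrantSummaryEntity(grantId);
        adapter.FetchEntity(summary);

        // 2. apply the business rules in memory
        summary.Balance += amountToPost;

        // 3. write inside a short transaction. With a concurrency predicate
        //    attached, the UPDATE fails if another user changed the row.
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "PostAmount");
        try
        {
            adapter.SaveEntity(summary);
            adapter.Commit();
        }
        catch(ORMConcurrencyException)
        {
            adapter.Rollback();
            // re-fetch, re-check the balance and retry, or report the conflict
            throw;
        }
    }
}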

I'd also appreciate it if you acknowledged some of the things said in this thread. I now have the feeling I'm spending time answering you without being read or understood at all, while the answer to your problem has already been given a couple of times.

Frans Bouma | Lead developer LLBLGen Pro
karsam
User
Posts: 5
Joined: 09-Aug-2007
# Posted on: 10-Aug-2007 16:24:33   

Hi Frans, I really appreciate your answers and I understood them.

I felt I did not specify our business requirements clearly, which is the reason for my later postings. I am very sorry about that.

I appreciate the responses by Al, Jeff and Rogelio. I will incorporate these, and will probably have to slightly modify our business requirements too, to work around these issues.

Thanks a lot again,

Rogelio
User
Posts: 221
Joined: 29-Mar-2005
# Posted on: 10-Aug-2007 16:54:09   

Hi,

If I were you I would do the following:

  1. Before processing the document's lines, sort them by the same key that you use to seek the summary table. With this ordering you make sure that two threads cannot end up in a deadlock, because they acquire the row locks in the same order.
  2. When saving, use concurrency control on the entities (for example a concurrency predicate factory, as sketched earlier in the thread), so a conflicting update raises an ORMConcurrencyException.
  3. Catch the ORMConcurrencyException; if it happens, roll back. A sketch combining these steps follows below.
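
A sketch of those three steps (DocumentLineEntity, GrantId and ProcessLine are hypothetical stand-ins for the actual model):

// assumes: using System.Collections.Generic; using System.Data;
//          using SD.LLBLGen.Pro.ORMSupportClasses; plus generated code
static void PostDocument(List<DocumentLineEntity> lines)
{
    // 1. sort by the key used to seek the summary table, so concurrent
    //    posters lock the summary rows in the same order (no deadlocks)
    lines.Sort(delegate(DocumentLineEntity a, DocumentLineEntity b)
    {
        return a.GrantId.CompareTo(b.GrantId);
    });

    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "PostDocument");
        try
        {
            foreach(DocumentLineEntity line in lines)
            {
                // 2. each save carries a concurrency predicate
                //    (see the IConcurrencyPredicateFactory sketch above)
                ProcessLine(adapter, line); // hypothetical posting helper
            }
            adapter.Commit();
        }
        catch(ORMConcurrencyException)
        {
            // 3. a conflicting update was detected: roll everything back
            adapter.Rollback();
            throw;
        }
    }
}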