table triggers, LLBL transactions & others!!
Greetings,
The following questions are related to MS SQL (but I guess they are relevant to any DB):
1- How can we simulate the functionality of DELETE, INSERT and UPDATE triggers in the generated entity classes of LLBL (to be executed in a transaction)?
2- If I have a Customer entity that has an ORDERS child collection, and I need to handle some logic in the ORDER entity differently when it is a child entity than when it's not: how can I know if an ORDER entity is actually an entity in a child collection (or a child entity in a 1:1 relation), and how can I get a reference to the parent object?
OMAR
omar wrote:
The following questions are related to MS SQL (but I guess they are relevant to any DB):
1- How can we simulate the functionality of DELETE, INSERT and UPDATE triggers in the generated entity classes of LLBL (to be executed in a transaction)?
You mean OnDelete-kind of events? Only in Adapter: derive a class from DataAccessAdapter and override OnDeleteEntity, OnSaveEntity, etc. Database actions executed on that DataAccessAdapter object will run in the transaction that DataAccessAdapter object is in.
However, be aware that we support actions directly on the database, which circumvent in-memory triggers. These actions were added to avoid having to load thousands of objects in memory just to delete them. Therefore, if you really need transactional triggers, you could opt for implementing them in the database, for example as cascading delete triggers or cascading delete actions. It depends, however, on how much logic you want/need to put in the triggers: if it's a significant portion of your business logic, it's perhaps wise to opt for an in-memory approach in a derived DataAccessAdapter class.
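For illustration, a minimal sketch of such a derived adapter. The override signatures are recalled from the runtime library and may differ per version, and OrderEntity is just an example generated entity; check the ORMSupportClasses reference for your build:

```csharp
using SD.LLBLGen.Pro.ORMSupportClasses;

// A single adapter subclass acts as the in-memory 'trigger' host.
public class TriggeredDataAccessAdapter : DataAccessAdapter
{
    protected override void OnDeleteEntity(IActionQuery deleteQuery, IEntity2 entityToDelete)
    {
        // Runs inside this adapter's transaction, right before the DELETE.
        OrderEntity order = entityToDelete as OrderEntity;
        if (order != null)
        {
            // e.g. create an audit entity here and save it with this same
            // adapter, so it takes part in the same transaction.
        }
        base.OnDeleteEntity(deleteQuery, entityToDelete);
    }

    protected override void OnSaveEntity(IActionQuery saveQuery, IEntity2 entityToSave)
    {
        // Fires for INSERTs and UPDATEs alike; entityToSave.IsNew tells which.
        base.OnSaveEntity(saveQuery, entityToSave);
    }
}
```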
2- If I have a Customer entity that has an ORDERS child collection, and I need to handle some logic in the ORDER entity differently when it is a child entity than when its not. How can I know if an ORDER entity is actually an entity in a child collection (or a child entity if 1:1 relation) and how can I get a reference to the parent object?
An OrderEntity's Customer property is null if it's not part of a Customer's Orders collection. If you're using SelfServicing, you have to check the inner member variable _customer.
You can construct a generic piece of TDL template, add it to a file, and bind that file to the TemplateID Custom_EntityBaseTemplate in a copy of the template set config you're using (VB or C#). This code could for example be a method which returns true/false depending on whether the entity has a parent (in a 1:1 relation where the entity is on the FK side, or in an m:1 relation). If you want me to construct that TDL template for you, please let me know.
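As a quick hand-written illustration for the Adapter model (a sketch; 'order' is assumed to be a fetched OrderEntity):

```csharp
// In Adapter, the parent reference is simply the navigator property:
// null means this order is not attached to a parent Customer.
bool isChild = (order.Customer != null);
if (isChild)
{
    CustomerEntity parent = order.Customer;  // reference to the parent object
    // ...child-specific logic here...
}
```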
You mean OnDelete-kind of events? Only in Adapter: derive a class from DataAccessAdapter and override OnDeleteEntity, OnSaveEntity, etc. Database actions executed on that DataAccessAdapter object will run in the transaction that DataAccessAdapter object is in.
Does this mean I have to derive a DataAccessAdapter for every entity I want to handle these events for? My real intention here is to try to minimize (or eliminate, if I can) any business logic that I might need to put in a table trigger. That leads me to another question: if I want to execute (as a single transaction and a single batch) a database update operation, for example, as I am updating a bill and its detail items I want to update GL, AP, AR, TrxTrail and IP. The idea here is NOT ONLY for these operations to execute as a single transaction but also to execute as a single batch (they would take only ONE trip to the database). Can I do this type of batch operation using Unit Of Work, or is there another way to do it?
However, be aware that we support actions directly on the database, which circumvent in-memory triggers. These actions were added to avoid having to load thousands of objects in memory just to delete them.
I don't understand what you mean by "we support actions directly on the database". Do you mean table triggers or stored procedures? Could you please elaborate using a scenario example?
OMAR
omar wrote:
You mean OnDelete-kind of events? Only in Adapter: derive a class from DataAccessAdapter and override OnDeleteEntity, OnSaveEntity, etc. Database actions executed on that DataAccessAdapter object will run in the transaction that DataAccessAdapter object is in.
Does this mean I have to derive a DataAccessAdapter for every entity I want to handle these events for?
No, you derive 1 class, override OnDeleteEntity etc. and decide in there what to do. I'm not sure what you want to do, or better: why you want to do this (just a question so I understand it better), so my answer is perhaps a bit vague.
My real intention here is to try to minimize (or eliminate, if I can) any business logic that I might need to put in a table trigger. That leads me to another question: if I want to execute (as a single transaction and a single batch) a database update operation, for example, as I am updating a bill and its detail items I want to update GL, AP, AR, TrxTrail and IP. The idea here is NOT ONLY for these operations to execute as a single transaction but also to execute as a single batch (they would take only ONE trip to the database). Can I do this type of batch operation using Unit Of Work, or is there another way to do it?
To do that as a single transaction, first load the graph with a prefetch path, then alter the properties and then save the graph recursively again. This is done in a single transaction (the save). You can also use a serializable transaction (start it with adapter.StartTransaction()) when you LOAD the graph, keep the adapter around and use the same adapter with the connection still open to save the graph, so the locking on the rows will keep others from altering the rows in the meantime.
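A minimal sketch of that pattern in Adapter code. The CustomerEntity/Orders names are illustrative; the StartTransaction/FetchEntity/SaveEntity overloads are the documented ones, but verify them against your runtime version:

```csharp
using System.Data;
using SD.LLBLGen.Pro.ORMSupportClasses;

public void SaveGraphExample(int customerId)
{
    using (DataAccessAdapter adapter = new DataAccessAdapter())
    {
        // Serializable isolation: rows read stay locked until commit.
        adapter.StartTransaction(IsolationLevel.Serializable, "PostBill");
        try
        {
            CustomerEntity customer = new CustomerEntity(customerId);
            IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
            path.Add(CustomerEntity.PrefetchPathOrders);
            adapter.FetchEntity(customer, path);

            // ...alter customer and its Orders in memory...

            adapter.SaveEntity(customer, false, true);  // recurse = true: saves the graph
            adapter.Commit();
        }
        catch
        {
            adapter.Rollback();
            throw;
        }
    }
}
```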
Batching of statements is not performed. The reason for that is that a lot of the providers (for example Oracle, Firebird and Access) don't support statement batching. Could you elaborate a bit on why you want everything in a single statement (i.e. concatenated with ';')? It could cause query problems when the amount of parameters exceeds limits or the query length is greater than the maximum query length.
However, be aware that we support actions directly on the database, which circumvent in-memory triggers. These actions were added to avoid having to load thousands of objects in memory just to delete them.
I don't understand what you mean by "we support actions directly on the database". Do you mean table triggers or stored procedures? Could you please elaborate using a scenario example? OMAR
No, I mean updates directly on the database, or deletes directly on the database. For updates on a lot of entities, for example: say all employees of a given department get a 10% salary increase, and the amount of employees is 5000. Then normally you have to load all the entities in memory, alter each salary property by 10%, and save them all back. That's very inefficient. You can also execute an update query directly, formulated with fields and a filter, so the update is executed as an UPDATE statement on the database, without loading a single entity in memory. This is also true for deletes. See for example "How do I update a series of entities directly in the database?" in the Best practices - How do I? section in the documentation.
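A hedged sketch of that salary example. UpdateEntitiesDirectly is the documented adapter call; the Employee names, the Expression usage and the field-operator predicate are illustrative and may need adjusting to your generated code and runtime version:

```csharp
using SD.LLBLGen.Pro.ORMSupportClasses;

public int GiveRaise(DataAccessAdapter adapter, int departmentId)
{
    // Entity holding the new values: salary = salary * 1.10, applied per row.
    EmployeeEntity newValues = new EmployeeEntity();
    newValues.Fields[(int)EmployeeFieldIndex.Salary].ExpressionToApply =
        new Expression(EmployeeFields.Salary, ExOp.Mul, 1.1m);

    // Filter: only the employees of the given department.
    IRelationPredicateBucket bucket = new RelationPredicateBucket(
        EmployeeFields.DepartmentId == departmentId);

    // One UPDATE statement on the database, no entities fetched into memory.
    return adapter.UpdateEntitiesDirectly(newValues, bucket);
}
```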
Because of that, if business logic is implemented in these in-memory C# 'triggers', it would be circumvented by such direct actions.
But if you could elaborate a bit more about what you want to achieve, I can probably give you a better answer. I'm sorry if you've already tried that; it's not completely clear to me what you want to achieve.
Quote:
You mean OnDelete-kind of events? Only in Adapter: derive a class from DataAccessAdapter and override OnDeleteEntity, OnSaveEntity, etc. Database actions executed on that DataAccessAdapter object will run in the transaction that DataAccessAdapter object is in.
Does this mean I have to derive a DataAccessAdapter for every entity I want to handle these events for?
No, you derive 1 class, override OnDeleteEntity etc. and decide in there what to do. I'm not sure what you want to do, or better: why you want to do this (just a question so I understand it better), so my answer is perhaps a bit vague.
If I wanted an OnDelete trigger for several entities and I derived only one adapter class, this means that I have to check the type of the entity I am performing the delete action on before I implement whatever trigger logic I want performed for that specific entity type. This is what I understood from you saying that I need to derive ONE adapter class to implement OnDelete/OnSave triggers for entities.
You can also use a serializable transaction (start it with adapter.StartTransaction()) when you LOAD the graph, keep the adapter around and use the same adapter with the connection still open to save the graph, **so the locking on the rows will keep others from altering the rows in the meantime.**
Does this mean that LLBL's DQE engine implements some sort of row locking, or is this row locking performed because of the explicit transaction being used?
Could you elaborate a bit on why you want everything in a single statement (i.e. concatenated with ';')? It could cause query problems when the amount of parameters exceeds limits or the query length is greater than the maximum query length.
My main reason for batching statements is performance. I am assuming (with LLBL I learned that a lot of my previous assumptions are not always 100% accurate, like the issue of NOT using stored procedures) that the fewer trips I have to make to the DB:
1- the better the BL performance, since every trip on the wire has to assume a 50% drop in response time compared to NOT making that trip in the first place (not counting, of course, the time taken to perform the actual DB transaction);
2- the better the DB scalability, since a new connection does NOT have to be opened to serve this new request coming over the wire (I think LLBL has alleviated this issue by giving us the ability to keep the connection open during a transaction... am I correct?).
I don't know how accurate I am in these assumptions and I would be very interested in hearing your take on this.
Another issue is exceeding the query length. Is this a restriction of LLBL or of the DB itself?
No, I mean updates directly on the database, or deletes directly on the database. ...
Now I understand what you mean by direct actions on the database, and I can see how this would be a great feature for actions handling large numbers of records.
But if you could elaborate a bit more about what you want to achieve
Our current ERP system uses POSTING logic that is called by a POST stored proc for every table (we thought of POST as a standard transaction, just like a CRUD transaction is, and we actually implemented that in our BL engine). When the table's POST stored proc is called, it would hit other tables (GL, TrxTrail, ...) with the posted records. Since I came to know LLBL, I am inclined to migrate the whole thing to LLBL, so as to forget about the nightmare of maintaining 800+ stored procedures. My question was posted because I wanted to know the best (and most efficient) approach to perform entity tasks such as POSTING. In addition, I have some auditing being done in the tables' update/insert/delete triggers that I would also like to handle in LLBL's code.
I'm sorry if you've already tried that; it's not completely clear to me what you want to achieve.
Not at all. Your responses are prompt and very helpful, and show that you are a very patient guy.
omar wrote:
Does this mean I have to derive a DataAccessAdapter for every entity I want to handle these events for?
No, you derive 1 class, override OnDeleteEntity etc. and decide in there what to do. I'm not sure what you want to do, or better: why you want to do this (just a question so I understand it better), so my answer is perhaps a bit vague.
If I wanted an OnDelete trigger for several entities and I derived only one adapter class, this means that I have to check the type of the entity I am performing the delete action on before I implement whatever trigger logic I want performed for that specific entity type. This is what I understood from you saying that I need to derive ONE adapter class to implement OnDelete/OnSave triggers for entities.
Yes, as the only place where you can safely determine that an entity is deleted (or better: passed to the DeleteEntity() call) is in OnDeleteEntity(). What you do there is up to you; you could for example call a generic method from there, handling the trigger further. However, it's a 'last resort' way of doing things. Higher up in the application there must be (as I see it) a piece of code which has an overview of the business process going on, so that piece of code schedules what's going on, which calls are made and on which entities. So when a set of entities is about to be deleted, you know in that piece of code which entities are deleted, moving the complex 'what to do if' code higher up in the call chain, freeing the lower levels of the application from this complex task. The reason I suggest this is that at a higher level it's easier to retrieve related data to determine what to do than at a lower level. I admit that 'set and forget' code is often easy to use and preferable, but implementing that kind of code at a low level sometimes takes more effort than it saves.
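A sketch of that per-type dispatch inside the single override, placed in the derived adapter class from before. The Handle* methods and the BillEntity type are hypothetical; the LLBLGenProEntityName property is assumed from the runtime library:

```csharp
protected override void OnDeleteEntity(IActionQuery deleteQuery, IEntity2 entityToDelete)
{
    // One override for all entities; branch on the concrete type and
    // hand off to per-type trigger logic.
    switch (entityToDelete.LLBLGenProEntityName)
    {
        case "OrderEntity":
            HandleOrderDelete((OrderEntity)entityToDelete);
            break;
        case "BillEntity":
            HandleBillDelete((BillEntity)entityToDelete);
            break;
    }
    base.OnDeleteEntity(deleteQuery, entityToDelete);
}

private void HandleOrderDelete(OrderEntity order) { /* order-specific trigger logic */ }
private void HandleBillDelete(BillEntity bill) { /* bill-specific trigger logic */ }
```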
You can also use a serializable transaction (start it with adapter.StartTransaction()) when you LOAD the graph, keep the adapter around and use the same adapter with the connection still open to save the graph, **so the locking on the rows will keep others from altering the rows in the meantime.**
Does this mean that LLBL's DQE engine implements some sort of row locking, or is this row locking performed because of the explicit transaction being used?
The locking is performed because of the explicit transaction. LLBLGen Pro doesn't perform database locking; it relies on the RDBMS for that, as locking rows yourself is often a cause of problems and causes performance degradation in applications.
Could you elaborate a bit on why you want everything in a single statement (i.e. concatenated with ';')? It could cause query problems when the amount of parameters exceeds limits or the query length is greater than the maximum query length.
My main reason for batching statements is performance. I am assuming (with LLBL I learned that a lot of my previous assumptions are not always 100% accurate, like the issue of NOT using stored procedures) that the fewer trips I have to make to the DB: 1- the better the BL performance, since every trip on the wire has to assume a 50% drop in response time compared to NOT making that trip in the first place (not counting, of course, the time taken to perform the actual DB transaction)
When you execute actions in a transaction, a single, open connection is used. This means that the statements are sent over the same open connection, taking a little overhead in communication, but that's pretty minor, as every database vendor tries to eliminate that overhead as much as possible. The vast majority of the time a query takes to execute is eaten by the query processing. Due to the same connection being open, the RDBMS (at least the major ones) keeps every temp result in memory until the transaction/connection is closed. This means that, for example, 3 updates using the same filter will not cause a lot of overhead, as the 2nd and 3rd can and will re-use the execution plan of the first query, and re-use the data already read into memory.
If you have links to tests which prove a 50% performance drop, please let me know. However, as I said: for Oracle, for example, this isn't even possible, as statement batching is not supported by ODP.NET.
2- the better the DB scalability, since a new connection does NOT have to be opened to serve this new request coming over the wire (I think LLBL has alleviated this issue by giving us the ability to keep the connection open during a transaction... am I correct?). I don't know how accurate I am in these assumptions and I would be very interested in hearing your take on this.
A transaction is an ADO.NET transaction (also if you use COM+ transactions, the connection is kept open) and is therefore bound to a connection, so within a transaction the statements will use the same open connection to the database. So, for example, if you're saving an object graph, just 1 connection is used.
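For completeness, a sketch using UnitOfWork2, the adapter-model unit of work asked about earlier: it collects work items and commits them over one adapter, so one connection and one transaction. The BillEntity/OrderEntity names are illustrative:

```csharp
using SD.LLBLGen.Pro.ORMSupportClasses;

public void CommitWorkExample(BillEntity bill, OrderEntity obsoleteOrder)
{
    UnitOfWork2 uow = new UnitOfWork2();
    uow.AddForSave(bill);             // graph save is scheduled, not executed yet
    uow.AddForDelete(obsoleteOrder);

    using (DataAccessAdapter adapter = new DataAccessAdapter())
    {
        // Commit starts a transaction on the adapter and runs all
        // collected actions over that single open connection.
        uow.Commit(adapter, true);    // true: auto-commit the transaction
    }
}
```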
Using connection pooling (enabled by default in every ADO.NET provider), it takes up to 20ms to obtain a new open connection, by the way, so the overhead is not that high.
Another issue is exceeding the query length. Is this a restriction of LLBL or of the DB itself?
No, it's not a restriction in my code, but I can imagine the low-level connection protocol with the database server has limitations related to the amount of parameters and the query length (string length). For SQL Server, for example, the maximum length of a batch can be 65535 * 4KB (4KB is the packet length of a TDS packet). Pretty big, but still... The string has to be parsed, which gives extra overhead on the server, and because of the named parameters, the parameters collection has to be searched for every batch. I'm not sure if that's faster than a couple more TDS packets over the wire.
But if you could elaborate a bit more about what you want to achieve
Our current ERP system uses POSTING logic that is called by a POST stored proc for every table (we thought of POST as a standard transaction, just like a CRUD transaction is, and we actually implemented that in our BL engine). When the table's POST stored proc is called, it would hit other tables (GL, TrxTrail, ...) with the posted records. Since I came to know LLBL, I am inclined to migrate the whole thing to LLBL, so as to forget about the nightmare of maintaining 800+ stored procedures. My question was posted because I wanted to know the best (and most efficient) approach to perform entity tasks such as POSTING. In addition, I have some auditing being done in the tables' update/insert/delete triggers that I would also like to handle in LLBL's code.
To be able to process GL, TrxTrail etc., you have to fetch them into memory first, which can be done when you fetch the entity you want to work on, for example. However, that can cause the GL and TrxTrail data to get 'stale' when the entity you want to work on is kept in memory for some time (for example in a GUI, where the user has to change some things, which can take a while). If that's not a problem, you can fetch the related entities with a prefetch path and alter these entities in memory when you're about to save the entity you're working on. This can be done by a POST engine, for example, which consumes the entity you worked on, and with the rules in that engine it alters the GL and TrxTrail entities related to (and reachable through) the entity passed to the POST engine.
If it's not possible to keep GL and TrxTrail objects in memory for a long period of time, you have to fetch them when the entity you worked on is saved. This then has to be done in the POST engine, and it follows the same path as I described above: it alters the GL and TrxTrail objects and saves the entities in one go (1 transaction).
The POST engine, through its built-in rules, has the overview of what to alter when a given entity is passed in for post-processing. You can ease development by keeping the filter (RelationPredicateBucket) object around that you used to fetch the entity you pass in.
So instead of calling a proc inside the DB, you call a method in the POST engine, which consumes rules and the entity passed in. It is then up to you to let that engine do the persistence calls, for example, or to let it do just the altering of the related entities and pass the entity graph back to the POST engine's caller, which will do the persistence logic calls. The advantage of this is that you can first model your BL processes in abstract form in your design/research and then implement them in equivalent code in the POST engine, which gives a tight connection between the functional abstract design of a real-life business process and the actual implementation: you can find the theoretical design for a piece of code easily, and vice versa. This makes maintenance easier, as changes to the design can be mapped onto the code more easily, because the location of the implementation of a given piece of functionality is easier to track down.
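A rough sketch of the shape of such a POST engine method, under the assumption of hypothetical BillEntity/BillDetailEntity/GlEntryEntity/TrxTrailEntity entities and relations (none of this is generated code; it only shows "consume entity, apply rules, save graph in one transaction"):

```csharp
public class PostEngine
{
    // Consumes a worked-on bill, applies the posting rules to related
    // entities and persists the whole graph in one go.
    public void Post(BillEntity bill, DataAccessAdapter adapter)
    {
        // Rule: every detail line produces a GL entry (illustrative rule).
        foreach (BillDetailEntity detail in bill.Details)
        {
            GlEntryEntity gl = new GlEntryEntity();
            gl.Amount = detail.Amount;
            detail.GlEntries.Add(gl);   // attach, so the recursive save picks it up
        }

        // Rule: one audit-trail row per posted bill.
        TrxTrailEntity trail = new TrxTrailEntity();
        trail.Description = "Posted bill " + bill.BillId;
        bill.TrxTrails.Add(trail);

        // One recursive save = one transaction, replacing the POST proc.
        adapter.SaveEntity(bill, false, true);
    }
}
```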
I'm sorry if you've already tried that; it's not completely clear to me what you want to achieve.
Not at all. Your responses are prompt and very helpful, and show that you are a very patient guy.
My pleasure. The great thing is that LLBLGen Pro is used by so many different people, who have different views on the way software has to be developed (and different software problems!), that from all these views great feedback is coming back to us. For example, your remarks on low-level OnDelete trigger-style code constructs in LLBLGen Pro code. Although it is not that easy to add (because, what if IEntity2 gets methods like OnDelete(), what to do in there? The entity doesn't know of any Adapter, so it can't fetch related objects if it needs to. So it needs an adapter object as well, but doesn't that create a reference you don't want? However, IDataAccessAdapter is generic...), I think there could be a reason to have them. I avoided events though, as events require a delegate to handle them, which can cause severe problems if they're not cleaned up correctly, as they hold a reference to an object and thus can keep an object in memory for too long.
An OrderEntity's Customer property is null if it's not part of a Customer's Orders collection. If you're using SelfServicing, you have to check the inner member variable _customer.
You can construct a generic piece of TDL template, add it to a file, and bind that file to the TemplateID Custom_EntityBaseTemplate in a copy of the template set config you're using (VB or C#). This code could for example be a method which returns true/false depending on whether the entity has a parent (in a 1:1 relation where the entity is on the FK side, or in an m:1 relation). If you want me to construct that TDL template for you, please let me know.
I am having problems creating this IsChild property. Frans, can you please help with the TDL code for this property for the Adapter model?
Thank you,
OMAR