FbCommand not disposed on FetchExcludedFields

Posts: 256
Joined: 05-Jul-2010
# Posted on: 04-Jun-2018 11:52:54   

Hi,

Since we switched to 5.4.0.0, we noticed that FetchExcludedFields doesn't release FbCommands fast enough. I think you forgot to dispose the command somewhere.

The error we're getting is "Too many open handles to database".

If I check Firebird's monitoring tables, I indeed see 1 attachment and thousands of queries, which cause the database to hang. I will investigate a bit further.
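For reference, a minimal sketch (in C#, with the FirebirdClient ADO.NET provider; the connection string is a placeholder) of how those monitoring tables can be checked:

    using FirebirdSql.Data.FirebirdClient;

    // Count open attachments and statements via Firebird's MON$ monitoring tables.
    using (var connection = new FbConnection("...your connection string..."))
    {
        connection.Open();
        using (var command = new FbCommand("SELECT COUNT(*) FROM MON$ATTACHMENTS", connection))
        {
            var attachments = command.ExecuteScalar();
        }
        using (var command = new FbCommand("SELECT COUNT(*) FROM MON$STATEMENTS", connection))
        {
            var statements = command.ExecuteScalar();
        }
    }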

kind regards

Alexander

Posts: 256
Joined: 05-Jul-2010
# Posted on: 04-Jun-2018 11:54:38   

The target db is Firebird 3, but I know earlier versions would suffer the same.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 04-Jun-2018 13:07:18   

The queries are tracked in a list to be disposed when the connection is closed. We do that because the readers are returned from the command, and disposing the reader would cause the command to be disposed as well.

So in this particular case, we track the query (which contains the command) in a list, and when the connection is closed we'll dispose them first. With excluded fields, we fetch in batches: if you fetch the excluded fields for 1000 entities, they're fetched in batches of 5 * ParameterisedPrefetchPathThreshold, which is 5 * 50 = 250 by default, so 4 batches (4 queries), and we'll then dispose the queries and close the connection.
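To make the batch math concrete, a rough C# sketch (the 5x multiplier is as described above; ParameterisedPrefetchPathThreshold is the adapter property that controls the batch size):

    using (var adapter = new DataAccessAdapter())
    {
        // 5 * 50 = 250 entities per batch by default
        int batchSize = 5 * adapter.ParameterisedPrefetchPathThreshold;
        // 4 batches (4 queries) for 1000 entities
        int batches = (int)Math.Ceiling(1000 / (double)batchSize);
    }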

This hasn't changed since I think v4, so if you see this problem occurring now, it's a bit strange.

Frans Bouma | Lead developer LLBLGen Pro
Posts: 256
Joined: 05-Jul-2010
# Posted on: 04-Jun-2018 13:38:34   

Otis wrote:

The queries are tracked in a list to be disposed when the connection is closed. We do that because the readers are returned from the command, and disposing the reader would cause the command to be disposed as well.

So in this particular case, we track the query (which contains the command) in a list, and when the connection is closed we'll dispose them first. With excluded fields, we fetch in batches: if you fetch the excluded fields for 1000 entities, they're fetched in batches of 5 * ParameterisedPrefetchPathThreshold, which is 5 * 50 = 250 by default, so 4 batches (4 queries), and we'll then dispose the queries and close the connection.

This hasn't changed since I think v4, so if you see this problem occurring now, it's a bit strange.

Hi

We fixed a bug in our system that accidentally recreated connections all the time, so that might explain why it suddenly appears.

So this is a sequence of mistakes / bugs / features that led to this.

  1. We have a few fields in the latest version that we don't want to fetch immediately anymore. We do this because we switched the datatype from varchar to blob. If we fetched them, we would have a lot of expensive blob roundtrips. Typical for Firebird.

  2. We fixed a "feature" that sorted our internal data structures in parallel. Because of this, our DAL noticed that the db call was on another thread and spawned a NEW db connection. We never fully realized the consequences of this in combination with lazy fields like in 1: it was actually accessing the lazy properties in an async quicksort, causing it to spawn a bunch of short-lived db connections.

  3. Because it now happens single-threaded again, and because it reacts to the worst-case scenario (somebody did a sort on this lazy property), we keep all the commands open. And this goes "object" per "object" (not in batch).

So workarounds for me are now to:

  1. disable the lazy fetching (don't like that) / fetch all the blobs
  2. make sure that sorting doesn't happen on this property (but I'm on the middle layer, so I cannot always tell what other people are doing)

My question to you

Can you maybe limit the queries that stay open, both in time and in number? I assume you keep the prepared statements? Working a lot with the provider directly, I never saw much advantage in keeping these suckers alive too long: whatever you're gaining on the client, you're losing on the server side. Maybe we are suffering from this harder than other people, since we keep our connection open. It dates back to the previous provider, where keeping the connection open was needed because of Firebird events. Changing it now would not be something I would love; I would even hate it, since it would have no added value and would cause a lot of work.

Maybe a setting or something would make my life a whole lot easier? Can I also ask, given that we never close the connection, how many commands would you keep alive? We really have a lot of entities, stuff and liveness (with FbEvents) happening.

kind regards

Alexander

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 05-Jun-2018 13:53:56   

Alexander wrote:

Otis wrote:

The queries are tracked in a list to be disposed when the connection is closed. We do that because the readers are returned from the command, and disposing the reader would cause the command to be disposed as well.

So in this particular case, we track the query (which contains the command) in a list, and when the connection is closed we'll dispose them first. With excluded fields, we fetch in batches: if you fetch the excluded fields for 1000 entities, they're fetched in batches of 5 * ParameterisedPrefetchPathThreshold, which is 5 * 50 = 250 by default, so 4 batches (4 queries), and we'll then dispose the queries and close the connection.

This hasn't changed since I think v4, so if you see this problem occurring now, it's a bit strange.

We fixed a bug in our system that accidentally recreated connections all the time, so that might explain why it suddenly appears.

Ok. For completeness: for the framework, keeping connections open isn't needed. Only do that if you e.g. need to pass an adapter from method to method and perform several actions in a single transaction. Other than that, let the adapter manage the connection open/close: with connection pooling it should be fast and problem-free to open/close lots of connections in a short period of time.
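For illustration, a minimal sketch of that short-lived usage (the entity names are hypothetical):

    using (var adapter = new DataAccessAdapter())
    {
        // The adapter opens the connection for this fetch and closes it right after;
        // with connection pooling enabled this is cheap.
        var customers = new EntityCollection<CustomerEntity>();
        adapter.FetchEntityCollection(customers, null);
    }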

So this is a sequence of mistakes / bugs / features that led to this.

  1. We have a few fields in the latest version that we don't want to fetch immediately anymore. We do this because we switched the datatype from varchar to blob. If we fetched them, we would have a lot of expensive blob roundtrips. Typical for Firebird.

  2. We fixed a "feature" that sorted our internal data structures in parallel. Because of this, our DAL noticed that the db call was on another thread and spawned a NEW db connection. We never fully realized the consequences of this in combination with lazy fields like in 1: it was actually accessing the lazy properties in an async quicksort, causing it to spawn a bunch of short-lived db connections.

This shouldn't be that much of a problem, however, considering connection pooling (if that's enabled, of course; if connection pooling is off, things are not that great with lots of connection open/close actions).
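For reference, a minimal sketch of enabling pooling with the FirebirdClient provider (all values are placeholders):

    var csb = new FbConnectionStringBuilder
    {
        DataSource = "localhost",
        Database = "employee.fdb",
        UserID = "SYSDBA",
        Password = "masterkey",
        Pooling = true,
        MaxPoolSize = 100
    };
    using (var connection = new FbConnection(csb.ToString()))
    {
        connection.Open();
    }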

  3. Because it now happens single-threaded again, and because it reacts to the worst-case scenario (somebody did a sort on this lazy property), we keep all the commands open. And this goes "object" per "object" (not in batch).

So workarounds for me are now to:

  1. disable the lazy fetching (don't like that) / fetch all the blobs
  2. make sure that sorting doesn't happen on this property (but I'm on the middle layer, so I cannot always tell what other people are doing)

If I understand you correctly: if someone sorts on a lazy property, the action to fetch the data for this lazy property should be a single call to FetchExcludedFields, passing the complete collection and the field to fetch, not one call per row. The adapter will then batch things for you internally, create e.g. 4 commands to fetch 4 batches, merge the fields and return your data. If you just let FetchExcludedFields open/close the connection, you'll get 1 open/close action and 4 command executions with dispose.
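For illustration, a minimal sketch of that single call (the entity and field names are hypothetical; the excluded-fields list should match what was excluded at fetch time):

    // 'customers' is the already-fetched collection with the blob field excluded.
    var excludedFields = new ExcludeIncludeFieldsList();
    excludedFields.Add(CustomerFields.Notes);   // hypothetical blob field

    using (var adapter = new DataAccessAdapter())
    {
        // One call for the complete collection; the adapter batches internally
        // and merges the fetched values back into the entities.
        adapter.FetchExcludedFields(customers, excludedFields);
    }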

My question to you

Can you maybe limit the queries that stay open, both in time and in number? I assume you keep the prepared statements? Working a lot with the provider directly, I never saw much advantage in keeping these suckers alive too long: whatever you're gaining on the client, you're losing on the server side. Maybe we are suffering from this harder than other people, since we keep our connection open. It dates back to the previous provider, where keeping the connection open was needed because of Firebird events. Changing it now would not be something I would love; I would even hate it, since it would have no added value and would cause a lot of work.

The areas where query disposal is postponed are FetchExcludedFields and when you fetch a datareader and consume it yourself. The former is what you run into, I think. As you keep the connection open, the commands are never disposed, as that's tied to the connection close.

In all other cases, the commands are disposed after the fact. So fetching e.g. an entity collection will use a using() statement for the query object, and it's disposed after the fetch.

So, as you mention you have to keep the connection open, could you elaborate a bit on why that is? (I'm not familiar with Firebird events nor the use case. Is it similar to what SQL Server has, so you get a call when a row has changed?)

Maybe a setting or something would make my life a whole lot easier? Can I also ask, given that we never close the connection, how many commands would you keep alive? We really have a lot of entities, stuff and liveness (with FbEvents) happening.

At the moment there's no limit, as the design is based on the adapter being short-lived and not kept around; the connection that comes with it is therefore only kept open as long as the operation is running and closed right after.

The datareader is disposed btw; the command isn't, that's disposed when the connection is closed.

If you need to keep the connection open and there's no other way, it might be solvable through reflection. In an override (e.g. in a partial class) of DataAccessAdapter.FetchExcludedFields() (or the async variant if you're using that), first call the base method to do the work. Then, using reflection, call the base class method DisposePostponedDisposeCandidates().

This is a private method. Calling it will make sure the commands are disposed right after the FetchExcludedFields call and cleaned up, so if you then close the connection afterwards, no harm is done and they're not disposed again.
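A minimal sketch of that override (the namespace of the generated DataAccessAdapter and the exact FetchExcludedFields overload may differ in your project; the private method name is the one mentioned above):

    using System.Reflection;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    // Place this partial class in the same namespace as the generated DataAccessAdapter.
    public partial class DataAccessAdapter
    {
        public override void FetchExcludedFields(IEntityCollection2 entities, ExcludeIncludeFieldsList excludedIncludedFields)
        {
            base.FetchExcludedFields(entities, excludedIncludedFields);
            DisposePostponedCandidates();
        }

        private void DisposePostponedCandidates()
        {
            // Walk the type hierarchy to find the private base-class method and invoke it,
            // so the postponed commands are disposed now instead of at connection close.
            for (var type = GetType(); type != null; type = type.BaseType)
            {
                var method = type.GetMethod("DisposePostponedDisposeCandidates",
                    BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.DeclaredOnly);
                if (method != null)
                {
                    method.Invoke(this, null);
                    return;
                }
            }
        }
    }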

Would that help?

Frans Bouma | Lead developer LLBLGen Pro
Posts: 256
Joined: 05-Jul-2010
# Posted on: 05-Jun-2018 14:20:46   

Hi Otis

We keep the connection open for the following reasons.

  1. The Fb connection pooling wasn't really reliable. Jiri is now releasing a new version where it is reworked. We'll test this and then maybe I can switch.

  2. If you used FirebirdEvents, it required an active connection. As a result of this, we wrote our application around this concept. I'm currently not at liberty to change this behavior.

  3. There is a lot of code that nests and reuses the connections.

  4. In Firebird, starting transactions was a heavy operation. Well... in the period between 2000 and 2010; I don't know if anybody still cares. In our application we did, and I cannot suddenly switch the framework.

We have release goals now, and only next year in summer can I consider making the dinosaurs in the software extinct. I'll do it then.

For now I'll work with your workaround. May I ask: are you caching commands on other levels as well, not only in FetchExcludedFields?

kind regards

a

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 05-Jun-2018 17:26:28   

Alexander wrote:

Hi Otis

We keep the connection open for the following reasons.

  1. The Fb connection pooling wasn't really reliable. Jiri is now releasing a new version where it is reworked. We'll test this and then maybe I can switch.

  2. If you used FirebirdEvents, it required an active connection. As a result of this, we wrote our application around this concept. I'm currently not at liberty to change this behavior.

  3. There is a lot of code that nests and reuses the connections.

  4. In Firebird, starting transactions was a heavy operation. Well... in the period between 2000 and 2010; I don't know if anybody still cares. In our application we did, and I cannot suddenly switch the framework.

Fair points, except perhaps point 3, which is a bit of a design flaw, but I understand completely that changing / fixing that isn't going to be feasible (and it likely followed from the bad connection pool code, so you had to do this to make things work).

We have release goals now, and only next year in summer can I consider making the dinosaurs in the software extinct. I'll do it then.

smile

For now I'll work with your workaround. May I ask: are you caching commands on other levels as well, not only in FetchExcludedFields?

No, that's the only place. It actually happens in the FetchDataReader() method, so if you call that, it caches the command for dispose-at-connection-close as well (as it has to). The reflection call should work OK. The FetchDataReader() overload which does the work is virtual, but the one which tracks the query is the one calling it, and that one isn't virtual. As this only occurs in normal behavior with FetchExcludedFields, I'd override that method and use reflection to call the method I mentioned, which should fix most issues, I think. (I don't think you explicitly call FetchDataReader?)

Frans Bouma | Lead developer LLBLGen Pro
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39614
Joined: 17-Aug-2003
# Posted on: 07-Jun-2018 16:50:56   

FB Provider 6.0 has been released: https://www.tabsoverspaces.com/233727-firebirdclient-version-6-0-entity-framework-core-2-x-provider-entity-framework-6-provider

It has some fixes regarding connection pooling; it might solve some of your issues simple_smile

Frans Bouma | Lead developer LLBLGen Pro