Exception Memory could not be allocated

bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 15-Apr-2008 09:45:41   

Hi

I am using Oracle 10.2.0 and ODP.Net. After reading about 2 million entities from the database (with FetchEntityCollection) I always get the exception "Memory could not be allocated" and I have no idea what I am doing wrong.

I can reproduce the error with a loop that opens a new connection and reads a single entity from the database on every iteration. After doing that about 2 million times, the private memory of the process has increased significantly, even though I do not keep any data in memory, and I get this exception.

Does anyone have an idea what causes this error? Thanks

Exception:
Type: Oracle.DataAccess.Client.OracleException
Message: Memory could not be allocated
Stack:
  at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(...)
  at Oracle.DataAccess.Client.OracleDataReader.Read()
  at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.ExecuteMultiRowRetrievalQuery(...)
  at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.FetchEntityCollectionInternal(...)

My code looks roughly like this:

public void ReadData()
{
  while(!exit)
  {
    ReadEntity(i++);
  }
}

private void ReadEntity(long id)
{
  EntityCollection<PersonEntity> personCollection =
    new EntityCollection<PersonEntity>(new PersonEntityFactory());

  IRelationPredicateBucket filter = new RelationPredicateBucket();
  filter.PredicateExpression.Add(PersonFields.Id == id);

  IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.Person);
  prefetchPath.Add(PersonEntity.PrefetchPathAddress);

  using(DataAccessAdapter adapter = new DataAccessAdapter())
  {
    adapter.FetchEntityCollection(personCollection, filter, 0, null, prefetchPath);
  }
}
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 15-Apr-2008 09:59:14   

Runtime library version / build number?

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 15-Apr-2008 10:32:21   

SD.LLBLGen.Pro.DQE.Oracle10g.NET20.dll has product version 2.0.0.060803
SD.LLBLGen.Pro.ORMSupportClasses.NET20.dll has product version 2.0.07.0308

Walaa avatar
Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 15-Apr-2008 11:06:08   

You are using an old runtime library; would you please try using the latest available one.

Also, you can just use a PersonEntity to fetch the needed entity rather than a collection, as follows:

private void ReadEntity(long id)
{
  PersonEntity person = new PersonEntity(id);

  IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.Person);
  prefetchPath.Add(PersonEntity.PrefetchPathAddress);

  using(DataAccessAdapter adapter = new DataAccessAdapter())
  {
    adapter.FetchEntity(person, prefetchPath);
  }
}

Also, to optimize performance, you should use a globally defined adapter (passed as a parameter) with an open connection:

DataAccessAdapter adapter = new DataAccessAdapter(true);

Then close the connection after you finish looping.

adapter.CloseConnection();
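Putting both suggestions together, a minimal sketch could look like the following (it assumes ReadEntity is changed to take the adapter as a parameter; the 'exit' flag and the id counter are placeholders from the code posted above):

public void ReadData()
{
    // One adapter for the whole loop; passing 'true' sets KeepConnectionOpen.
    DataAccessAdapter adapter = new DataAccessAdapter(true);
    try
    {
        long id = 0;
        while (!exit)                  // 'exit' is the stop flag from the original sketch
        {
            ReadEntity(adapter, id++); // reuse the shared adapter instead of creating one per call
        }
    }
    finally
    {
        adapter.CloseConnection();     // close the physical connection once, after the loop
        adapter.Dispose();
    }
}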
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 16-Apr-2008 16:02:49   

Thanks for the help.

I will install the latest release and run my test again. As soon as I know more, I will post it here.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 16-Apr-2008 16:35:35   

Still, Oracle shouldn't claim all the memory. Open/close is just a matter of getting a connection from the pool. Be sure you don't switch OFF connection pooling in the connection string.

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 17-Apr-2008 15:40:16   

I updated LLBLGen to Version 2.5 and still get the same effect. The private memory of the process is increasing until I finally get the exception "Memory could not be allocated".

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 17-Apr-2008 15:51:28   

bernhard wrote:

I updated LLBLGen to Version 2.5 and still get the same effect. The private memory of the process is increasing until I finally get the exception "Memory could not be allocated".

And what happens when you pass in an adapter with an open connection (set KeepConnectionOpen to true in the ctor of the adapter)?

Please check if you have DISABLED connection pooling on the connection in the connection string. You shouldn't disable it (it's enabled by default). We call dispose on all objects, even parameter objects used inside ODP.NET, so these leaks shouldn't happen.
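For reference, a hedged example of what a connection string with pooling explicitly left on could look like (data source, user id and password are placeholders; Pooling=true is the default, and the DataAccessAdapter constructor overload taking a connection string is assumed here):

// Placeholder connection string; only the Pooling attribute matters for this discussion.
string connectionString =
    "Data Source=ORCL;User Id=scott;Password=tiger;Pooling=true;Min Pool Size=1;Max Pool Size=20";

using (DataAccessAdapter adapter = new DataAccessAdapter(connectionString))
{
    // ... fetch as usual ...
}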

also we found this thread: http://forums.oracle.com/forums/thread.jspa?threadID=630531&tstart=45 is this about the same issue?

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 17-Apr-2008 16:06:19   

I have tried using an open connection by passing true in the constructor, but the behavior is the same. And connection pooling is enabled.

Yes, the thread on the Oracle forum is about the same issue; I posted it myself last month.

I have also set statement caching to 1 as suggested in that thread, but again it did not help.

bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 17-Apr-2008 16:47:40   

I also tried moving the construction of the DataAccessAdapter out of the loop (see my code below), so that I really open just one connection and close it after reading the data. I still get the same effect.

public void ReadData()
{
  using(DataAccessAdapter adapter = new DataAccessAdapter())
  {
    while(!exit)
    {
      ReadEntity(adapter, i++);
    }
  }
}

private void ReadEntity(DataAccessAdapter adapter, long id)
{
  EntityCollection<PersonEntity> personCollection =
    new EntityCollection<PersonEntity>(new PersonEntityFactory());

  IRelationPredicateBucket filter = new RelationPredicateBucket();
  filter.PredicateExpression.Add(PersonFields.Id == id);

  IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.Person);
  prefetchPath.Add(PersonEntity.PrefetchPathAddress);

  adapter.FetchEntityCollection(personCollection, filter, 0, null, prefetchPath);
}
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 17-Apr-2008 17:11:06   

Erm... ok, but isn't it obvious that this is causing the problems? I mean: you're reading a lot of data into objects, but the garbage collector isn't running after every loop iteration! It kicks in after the system goes idle or when memory pressure is high.

However, you're running a tight loop, so there's no CPU time left for the garbage collector, and therefore your loop will eat up memory very FAST. By the time the GC should kick in, it's too late: the loop has already eaten too much memory.

I'm not sure what your loop is trying to achieve, but you can't assume garbage collection kicks in after an object goes out of scope.

Remember that every time you fetch data, the data is fetched into NEW objects by default. If you want to fetch into the SAME objects, use a Context (see the manual about Context).
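A rough sketch of what that could look like (the ActiveContext property on the collection is an assumption based on the manual; the entity, factory and field names are taken from the code posted above):

Context context = new Context();

EntityCollection<PersonEntity> persons =
    new EntityCollection<PersonEntity>(new PersonEntityFactory());
persons.ActiveContext = context; // assumed property: fetched entities are added to / re-used from the context

IRelationPredicateBucket filter = new RelationPredicateBucket();
filter.PredicateExpression.Add(PersonFields.Id == 1);

using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    // Rows already known to the context are materialized into the SAME objects instead of new ones.
    adapter.FetchEntityCollection(persons, filter, 0, null, null);
}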

Also, if you want, for testing only! (as you shouldn't call this method in production code!), you can add GC.Collect() at the end of your read entity to collect memory.

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 17-Apr-2008 18:31:41   

I know this sample does not make any sense. I just simplified my real application more and more.

I have also called GC.Collect() after reading maybe 1000 entities, but on a more complex sample, not the one I posted, and I still got the exception. My real application also has a lot of idle time, so the garbage collector has enough chances to clean up the memory.

I know that in the simple example I could fetch the data into the same object, but in my more complex application I cannot. My application has a method which is called maybe 100 times, then there is idle time for 5 minutes, then it is executed another 50 times, and so on.... In my real application I got the "Could not allocate memory" exception after a running time of about 17 hours, with a lot of idle time and GC.Collect() being called at least once every 5 minutes.

Anyway, tomorrow I will call the garbage collector explicitly and also include a Thread.Sleep call to give the system some idle time, and then test again whether the memory is released.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 17-Apr-2008 20:58:31   

If your example isn't representative, I fear we need an example which is.

The main thing is: if you fetch a lot of entities and store them somewhere, they'll be kept in memory. This sounds logical, but consider this example: you fetch a set of customers and you cache them.

Now you fetch a set of orders. You then do: order.Customer = someCachedCustomer; // save order

After the assignment, the order is stored inside the customer's Orders collection. As the customer is cached in the application in this example, so is the order: it is kept in memory by the customer referring to it (which itself is kept in memory).

This can lead to a small memory leak which grows over time.
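A hypothetical illustration of that retention chain (the entity and cache names here are made up for the example):

using System.Collections.Generic;

class OrderProcessor
{
    // A long-lived cache somewhere in the application keeps customers alive.
    static Dictionary<long, CustomerEntity> customerCache = new Dictionary<long, CustomerEntity>();

    void ProcessOrder(OrderEntity order, long customerId)
    {
        CustomerEntity cachedCustomer = customerCache[customerId];

        // The assignment also places 'order' inside cachedCustomer.Orders.
        order.Customer = cachedCustomer;

        // Even after 'order' goes out of scope here, it stays reachable via
        // customerCache -> CustomerEntity -> Orders, so the GC can never collect it.
    }
}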

Pay close attention to entities which are, for example, used for logging. These entities typically have a lot of relations with other entities, but those relations should only be one-sided: from FK to PK. So if the logging entity is the PK side, you should hide all relations from that side, so you won't keep the entities you log in memory inside a collection on the cached entity.

If you don't cache ANYTHING and you don't bind ANY event handler at all, then it's a different scenario, but otherwise it's probably caused by the examples I gave above.

As it happens after a long time, memory profiling is perhaps very hard to do.

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 02-Jun-2008 08:42:49   

Hello

After a while I finally found some time to work on my memory problem again. I was able to make a simpler code example with which I can reproduce the problem.

For the code, see below. After running this code for around 24 hours I get the exception Oracle.DataAccess.Client.OracleException: Memory could not be allocated.

When I run this example I can see that the private memory keeps increasing until I finally get the exception. I am not logging or caching anything, so I cannot understand why I run into that exception, and I have just no idea whether I am doing something wrong, or whether it is an LLBLGen problem or an ODP.NET problem. I also attached the code of my application, the libraries used, and the SQL script which I used to create the schema and the data.

Thanks for the help.

Code:

private void ReadData()
{
  using (DataAccessAdapter theDatabaseAdapter = new DataAccessAdapter())
  {
    while (!_stopGeneration)
    {
      PersonEntity person = LLBLAdapter.ReadPerson(theDatabaseAdapter, _personCounter);

      if (_personCounter % 100000 == 0)
      {
        GC.Collect();
        Thread.Sleep(5000);
      }

      _personCounter++;
    }
  }
}

internal static PersonEntity ReadPerson(DataAccessAdapter theDatabaseAdapter, long id)
{
  PersonEntity person = null;

  EntityCollection<PersonEntity> entityCollection = new EntityCollection<PersonEntity>(new PersonEntityFactory());

  IRelationPredicateBucket relationBucket = new RelationPredicateBucket();

  relationBucket.PredicateExpression.Add(new FieldCompareValuePredicate(PersonFields.PersonId, null, ComparisonOperator.Equal, id));

  theDatabaseAdapter.FetchEntityCollection(entityCollection, relationBucket, 0, null, null);

  if (entityCollection.Count > 0)
  {
    person = entityCollection[0];
  }

  return person;
}

Exception:

Oracle.DataAccess.Client.OracleException: Memory could not be allocated
   at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure)
   at Oracle.DataAccess.Client.OracleDataReader.Read()
   at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.ExecuteMultiRowRetrievalQuery(IRetrievalQuery queryToExecute, IEntityFactory2 entityFactory, IEntityCollection2 collectionToFill, IFieldPersistenceInfo[] fieldsPersistenceInfo, Boolean allowDuplicates, IEntityFields2 fieldsUsedForQuery)
   at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.FetchEntityCollectionInternal(IEntityCollection2 collectionToFill, IRelationPredicateBucket& filterBucket, Int32 maxNumberOfItemsToReturn, ISortExpression sortClauses, ExcludeIncludeFieldsList excludedIncludedFields, Int32 pageNumber, Int32 pageSize)
   at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.FetchEntityCollection(IEntityCollection2 collectionToFill, IRelationPredicateBucket filterBucket, Int32 maxNumberOfItemsToReturn, ISortExpression sortClauses, IPrefetchPath2 prefetchPath, ExcludeIncludeFieldsList excludedIncludedFields, Int32 pageNumber, Int32 pageSize)
   at SD.LLBLGen.Pro.ORMSupportClasses.DataAccessAdapterBase.FetchEntityCollection(IEntityCollection2 collectionToFill, IRelationPredicateBucket filterBucket, Int32 maxNumberOfItemsToReturn, ISortExpression sortClauses, IPrefetchPath2 prefetchPath)
   at MemoryExceptionConsole.LLBLAdapter.ReadPerson(DataAccessAdapter theDatabaseAdapter, Int64 id) in C:\Dokumente und Einstellungen\bwa\Desktop\MemoryTest\MemoryTest 1b (Nolog)\MemoryExceptionConsole\LLBLAdapter.cs:line 21
   at MemoryExceptionConsole.DataReader.ReadData() in C:\Dokumente und Einstellungen\bwa\Desktop\MemoryTest\MemoryTest 1b (Nolog)\MemoryExceptionConsole\DataReader.cs:line 44
   at MemoryExceptionConsole.DataReader.Start() in C:\Dokumente und Einstellungen\bwa\Desktop\MemoryTest\MemoryTest 1b (Nolog)\MemoryExceptionConsole\DataReader.cs:line 20
Walaa avatar
Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 02-Jun-2008 09:26:29   

We will check it out.

In the meantime, could you please try modifying the ReadPerson method to use theDatabaseAdapter.FetchNewEntity() instead of fetching a collection.

I think this would be more efficient, as I think your code keeps the collections alive in memory as long as the entities are being used (in scope).

     internal static PersonEntity ReadPerson(DataAccessAdapter theDatabaseAdapter, long id)
     {
         PersonEntity person = null;

         IRelationPredicateBucket relationBucket = new RelationPredicateBucket();

         relationBucket.PredicateExpression.Add(new FieldCompareValuePredicate(PersonFields.PersonId, null, ComparisonOperator.Equal, id));

         person = (PersonEntity)theDatabaseAdapter.FetchNewEntity(new PersonEntityFactory(), relationBucket);

         return person;
     } 
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 02-Jun-2008 10:01:55   

Walaa wrote:

We will check it out.

In the meantime, could you please try modifying the ReadPerson method to use theDatabaseAdapter.FetchNewEntity() instead of fetching a collection.

I think this would be more efficient.

It's indeed better to fetch an entity using the methods which fetch a single entity, but FetchNewEntity also uses a collection fetch, because it re-uses the polymorphic fetch code in FetchEntityCollection. :) FetchEntity(), where a new instance of PersonEntity is passed in with the ID set, is better, but (as I haven't looked at the code yet) it might be that PersonEntity is a supertype, in which case that routine doesn't work properly for polymorphic fetches.

As the loop fetches entities one after another, it would IMHO be better to fetch a batch of entities, e.g. 100 per batch, using paging. Right now the thread is only put to sleep after every 100000th entity, in other words: there's still no garbage collection until 100000 entities have been read, and GC.Collect therefore also hasn't run (and GC.Collect is also not guaranteed to collect objects immediately).
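A rough sketch of such a batched fetch, based on the FetchEntityCollection overload with pageNumber/pageSize that appears in the stack trace above (the sort-clause syntax is an assumption; entity, factory and field names are taken from the posted code):

int pageSize = 100;
int pageNumber = 1;
ISortExpression sorter = new SortExpression(PersonFields.PersonId | SortOperator.Ascending);

using (DataAccessAdapter adapter = new DataAccessAdapter(true)) // keep the connection open
{
    while (!_stopGeneration)
    {
        EntityCollection<PersonEntity> batch =
            new EntityCollection<PersonEntity>(new PersonEntityFactory());

        // (collection, filter, maxNumberOfItemsToReturn, sort, prefetchPath,
        //  excludedIncludedFields, pageNumber, pageSize)
        adapter.FetchEntityCollection(batch, null, 0, sorter, null, null, pageNumber, pageSize);

        if (batch.Count == 0)
        {
            break; // no more rows
        }

        // ... process the 100 entities here, then let the batch go out of scope ...

        pageNumber++;
    }
}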

As I think your code keeps the collections alive in memory, as long as the entities are being used (in scope).

Indeed: the collection subscribes to the events of the entity, so the entity object gets a reference to the handler routine in the collection (and thus keeps that collection reachable).
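A generic .NET illustration of that mechanism (the types here are made up; this is not LLBLGen code):

using System;

class Publisher                     // stands in for the entity
{
    public event EventHandler Changed;
    public void RaiseChanged() { if (Changed != null) Changed(this, EventArgs.Empty); }
}

class Subscriber                    // stands in for the collection
{
    public void OnChanged(object sender, EventArgs e) { /* react to the change */ }
}

class Demo
{
    static void Main()
    {
        Publisher entityLike = new Publisher();
        Subscriber collectionLike = new Subscriber();
        entityLike.Changed += collectionLike.OnChanged;

        // As long as 'entityLike' is reachable, the delegate stored in its Changed event
        // keeps 'collectionLike' reachable too, even when nothing else references it.
    }
}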

I'll look into the testcode, but I'm afraid there's little we can do about issues inside ODP.NET. As I said before, your code uses a very tight loop where no time is reserved for the garbage collector. ODP.NET has Dispose methods on almost every class, and our runtime calls all these methods when objects go out of scope (parameters, connections, commands), though it's up to ODP.NET's code what to do with that. As ODP.NET's underlying layer is written in Java, it too needs breathing room to clean up memory as Java's GC also has to run somewhere.

This is just hypothetical, but as we haven't seen any unforeseen memory increase (memory that wasn't reclaimed properly) in our SQL Server tests with our fetch benchmark code, the cause is likely outside our code (the same framework code is used for Oracle).

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 02-Jun-2008 10:14:00   

I know I could use FetchEntity to improve performance. This is just a simple example; in my real code I have to use FetchEntityCollection when I query on something other than the primary key.

And I know this is a very tight loop; in my real application the loop is not that tight, but I see the same problems.

I monitored the garbage-collected memory, and in this example it never grew by more than 1 MB; the problem is the private memory, not the garbage collector. The private memory keeps increasing until I finally get that exception.

I am going to send this example to Oracle too. I hope they can give me some information on why this happens.

Thanks for the help.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 02-Jun-2008 11:34:14   

Err... you wanted to kill our Oracle testbox?


    FOR personId IN 1..20000000 LOOP
      INSERT INTO PERSON (PERSON_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER) VALUES (personId, 'FIRST_NAME ' || personId, 'LAST_NAME ' || personId, 'EMAIL ' || personId, 'PHONE_NUMBER ' || personId);
    END LOOP;

I definitely don't think 20 million(!) rows are necessary to test a fetch loop which could be tested with 1 (one) row :( (as you're fetching 1 row at a time).

Anyway, you should first clean up your test code:
1) Batch-fetch the entities per 100 into a collection, using paging: order person by id asc, then fetch rows 1-100, 101-200 etc. per iteration. This is also more efficient than your current routine.
2) If processing is fast (you didn't include any processing, you just loop; your repro code isn't different from any code already posted in this thread), you could keep the connection open in the adapter (you don't do that now), so you don't open/close a connection every time. You're now opening/closing a connection for every entity; that's a lot of open/close/dispose calls in your loop.

As I said before in this thread, which you apparently completely ignored: the garbage collector has to be able to clean up allocated objects and your loop doesn't allow that, on the contrary, as fetching 100000 entities will likely take a couple of seconds.

And above all: you said the example code you posted in this thread wasn't representative of your real code. Now you post a 'repro case' which tries to insert 20 million rows into a test table and has code which is an exact copy of the code you already posted here. I'm sorry, but if you want us to spend time on this, don't try to pull these kinds of stunts, because I'm not amused by this, to say the least.

We're very willing to help, but there's an end to our patience. I don't need your complete solution in source code. What I want is a representative piece of code of what you're doing. And no, I don't need a repro case which inserts 20 million rows into our Oracle testbox, which runs on a virtual machine.

So I'll say it one last time: if you want us to reproduce what you see, send us a REPRESENTATIVE piece of code, not a tight loop which obviously fails because of the issue I explained above. Nothing is done in the loop; all data fetched is discarded. So everything that is allocated has to be cleaned up, but that doesn't happen until there's some breathing room for the GC, OR when memory pressure is high. However, the memory pressure trigger often comes too late, because a tight loop can allocate a lot of data in a very short time.

When you monitor the memory consumption with performance counters, be sure to check whether the memory is really allocated by the CLR or whether it's allocated elsewhere. Remember that you're opening/closing a connection for every entity, which happens about 20 million times in your loop. As said above, ODP.NET is built on top of the Java CLI, and even though we call Dispose() on all of ODP.NET's tiny little objects, it's still up to Oracle to clean up the memory. We call Dispose on parameters, command objects and connections, i.e. on all the objects involved.
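A small sketch of how that check could be done with the standard performance counters ("Process\Private Bytes" versus ".NET CLR Memory\# Bytes in all Heaps"; the counter instance is simply the current process here):

using System;
using System.Diagnostics;

class MemoryCheck
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // Total private memory of the process: managed heaps PLUS native allocations (e.g. ODP.NET's client layer).
        PerformanceCounter privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);

        // Memory held by the CLR's managed heaps only.
        PerformanceCounter clrHeapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);

        Console.WriteLine("Private Bytes:  {0:N0}", privateBytes.NextValue());
        Console.WriteLine("CLR heap bytes: {0:N0}", clrHeapBytes.NextValue());
        // If Private Bytes keeps growing while the CLR heap bytes stay flat,
        // the growth is in native memory, outside the garbage collector's reach.
    }
}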

(edit) According to your stack trace, the out-of-memory exception occurs inside the CLI (client interface, the Java code) during fetching of the data, not inside the CLR. It wouldn't surprise me if the Dispose() calls to the ODP.NET objects have little effect because the Java VM's GC hasn't been able to clean up much of the allocated objects.

Frans Bouma | Lead developer LLBLGen Pro
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 02-Jun-2008 12:35:39   

I've checked our code (v2.5): we always call Dispose on (in this order)
- datareader
- connection
- parameters
- command

(the last two are only for ODP.NET, as all other ADO.NET providers call Dispose on command/parameters when the connection is closed)

I've checked the Dispose routines on OracleCommand, OracleDataReader, OracleConnection and OracleParameter, and they all call into the native code (Java) to free up memory, often inside try/catch clauses which swallow exceptions (so if a free doesn't succeed, you'll never know... perhaps Oracle has a reason for this, I'm not sure).

In the past (before v2.0) we had small memory leaks on ODP.NET where we didn't dispose parameters, but that's been fixed for a long time. We also have a lot of Oracle users, and we haven't received any memory issue reports for some time now.

As you're saying the GC-managed memory isn't increasing above 1 MB, the leak is in native code outside the CLR, and IMHO it's in ODP.NET. But as I've described above: we do call Dispose() on all Oracle objects used, so I'm not sure what else there is to call to make ODP.NET clean up its natively allocated memory.

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 02-Jun-2008 13:44:36   

Thanks for your help.

I will improve the test application and then run it again to see if anything changes. If I still see any memory issues, I guess I will have to contact Oracle. Anyway, if I have more information I will post it here.

Thanks

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 02-Jun-2008 14:18:59   

I'll close the thread for now. If you have any more info, or need information from us, or if Oracle needs information from us, just post a new message in this thread and it reopens automatically (and is added to our queues).

Be aware that Oracle rarely responds on their forums, unfortunately... So if your employer or your client has a service contract with Oracle, use that route; otherwise you'll likely have to wait a looooooooong time... (if they ever get back to you)

Frans Bouma | Lead developer LLBLGen Pro
bernhard
User
Posts: 34
Joined: 15-Jan-2007
# Posted on: 22-Aug-2008 20:21:16   

Just to end this thread here: after installing the latest Oracle patch on my client machine (the ODP.NET DLLs changed from 10.2.0.2.20 to 10.2.0.3.3), the problem disappeared. It seems there was a memory leak in the 10.2.0.2.20 version.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 23-Aug-2008 08:56:13   

Thanks for the info! simple_smile

Frans Bouma | Lead developer LLBLGen Pro
changomarcelo avatar
Posts: 62
Joined: 15-Feb-2007
# Posted on: 09-Sep-2008 16:46:54   

Hi. We are having a similar problem, but how do we obtain the latest version of ODP.NET to update the DLLs to 10.2.0.3.3? I can't find the correct download.

We are running an Oracle 10g Release 2 database on Windows Server 2003.

Thanks!

Walaa avatar
Walaa
Support Team
Posts: 14946
Joined: 21-Aug-2005
# Posted on: 09-Sep-2008 17:07:42   

Please check this link: http://www.oracle.com/technology/software/tech/oci/instantclient/htdocs/winsoft.html

And don't forget to select "Accept" to see all the links.
