Issues with ASP.NET 2.0, Datagrids, LLBLGen, and Memory
Joined: 06-Dec-2004
Hello -
I have been working on this issue for some time and I am lost. Any insight would be greatly appreciated.
We have an ASP.NET application that we recently upgraded to ASP.NET 2.0 and LLBLGen 2. This application used to work fine but now we are experiencing frustrating memory issues. Here is an example:
- We have an accounts table linked to two other tables (Address & BankingInfo). These are simple tables with fewer than 10 columns each. The table itself has 536 rows of data.
- We have a search page that displays account search results in a datagrid.
- Searching with empty criteria returns all 536 records. Doing this query results in a memory jump of 161,588 KB. This memory is never returned unless the application pool is recycled. Each subsequent query uses more memory, and usage never stops climbing until the machine runs out and we receive Out of Memory errors; there appears to be no ceiling.
- I have turned off viewstate on the datagrid. I have set the ObjectCacheTTL to 0 minutes, disabling caching. We are not storing data from this query in the session, and to be safe I set sessions to expire after 2 minutes; waiting 10 minutes shows the memory as still in use. Nothing I have come up with has resolved this issue.
I just cannot figure out what is going on. Why would the machine even need 161 MB of memory to show a simple 500-row result set? Why isn't it being given back? I have searched high and low for an explanation but I cannot find one. I do not think our code is flawed, as it is simple and used to work fine in 1.1.
Any help would be very greatly appreciated. We are using IIS6 and LLBLGen Pro 2.0.0.0 Final built on July 12th, 2006.
Craig
First, you should find out which piece of code is causing this memory use. This can be done by debugging and watching the memory.
Double check: if you find that some lines of code are causing this effect, try skipping them while debugging (step over them). If the memory then does not go up, you have confirmed that those lines are consuming the great deal of memory.
Then post that piece of code here so we can examine it. Also please post the runtime library version you are using (right-click the "SD.LLBLGen.Pro.ORMSupportClasses.NETxx.dll" file, select Properties, then go to the Version tab, where you will find the "File Version" attribute).
Good Luck.
536 entities can't take 161MB of memory, unless you've stored large blobs inside them. When I load 50,000 entities in a collection with 12 fields of strings and ints and guids, the app consumes about 180MB of memory, give or take a few.
Also, if the search results are kept alive, it means that you store a reference to the results which is never released.
This forum has a search feature which pulls data from the db with LLBLGen Pro code. The search can at most return 500 rows at a time. The forum as a whole never exceeds 100 MB of memory usage; it always stays within 90-100 MB for the complete forum.
What you should do is start with the basics: measuring. So you should start with the Windows performance monitor. On your server, do the following:
- Be sure you're starting with a clean slate, so no search has been performed.
- Go to Start -> Control Panel -> Administrative Tools -> Performance.
- Remove all counters currently present.
- Click on the [+] button.
- Under Performance object, select .NET CLR Memory.
- In the list of counters, select:
  - # Gen 0 Collections
  - # Gen 1 Collections
  - # Gen 2 Collections
  - # Bytes in all Heaps
  - # Total committed Bytes
  - # Total reserved Bytes
  - Gen 0 heap size
  - Gen 1 heap size
  - Gen 2 heap size
With Control pressed you can select more than one counter at once.
There are other counters as well for ASP.NET; you should check these out too. Click the Explain button if you're not sure what a counter does or what it means. OK, at the right you can pick the instances the counters have to be read for. You can select Global, which is the aggregate of ALL ASP.NET worker processes combined, or you can pick a single worker process. The latter is a bit tricky if you're running more sites on the box: which worker process do you have to pick?
- For each counter and instance combination, click Add
- When you're done, click Close
To get a proper overview, click the button which gives you report overview. This gives you a numeric overview.
Now you're going to perform a couple of searches, and you should check what happens: how much data is actually allocated by the CLR, how much memory is stuck in the Gen 0, 1 and 2 heaps, and how much is collected by the GC.
You'll likely see that memory consumption will simply go up. This is because the GC only kicks in after a given period of time or when there's memory pressure.
Also be sure that you've given the application enough room to breathe in the application pool. If you've given it 200MB of memory and it eats that in normal operation (and ASP.NET 2.0 is memory intensive), it will throw OutOfMemory exceptions even though there's plenty left on the machine.
The numbers on the counters (heap sizes etc.) are in bytes.
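If it helps to capture values around a specific call, the same counters can also be read programmatically. A minimal sketch, assuming you run it on the web server itself: ".NET CLR Memory" and the counter names are the standard category names, but the instance name is an assumption — pick your worker process (e.g. "w3wp") or "_Global_" for the aggregate:

```csharp
using System;
using System.Diagnostics;

class ClrMemoryProbe
{
    static void Main()
    {
        // "_Global_" aggregates all managed processes; substitute the actual
        // worker process instance name (e.g. "w3wp") to watch just your site.
        string instance = "_Global_";

        using(PerformanceCounter heapBytes = new PerformanceCounter(
                  ".NET CLR Memory", "# Bytes in all Heaps", instance))
        using(PerformanceCounter gen2Collects = new PerformanceCounter(
                  ".NET CLR Memory", "# Gen 2 Collections", instance))
        {
            Console.WriteLine("Bytes in all heaps: {0:N0}", heapBytes.NextValue());
            Console.WriteLine("Gen 2 collections:  {0:N0}", gen2Collects.NextValue());
        }
    }
}
```

Logging these values right before and right after the suspect call pins the allocation on a specific line instead of a whole page load.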
If you're using selfservicing, be sure you're not lazy loading a lot of related entities into memory that you're not aware of. For example, if you bind your address entities to the grid and Address has a field mapped onto a related field in BankingInfo, it triggers lazy loading if you're not prefetching that data as well.
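To make the lazy-loading point concrete, here is a sketch using the entity names from this thread (a hypothetical flow, not code from the actual project): with selfservicing, simply touching a related-entity property fires a query if the data wasn't prefetched.

```csharp
// One SELECT: fetches the Consumer rows only.
ConsumerCollection cc = new ConsumerCollection();
cc.GetMulti(filter);

foreach(ConsumerEntity consumer in cc)
{
    // Without a prefetch path, the access below lazy-loads the related
    // entity: one extra SELECT per row, per related entity touched.
    string city = consumer.Address.City;
}
```

A grid template that reads such properties does the same thing once per rendered row, which is how a 500-row bind can turn into thousands of queries.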
Joined: 06-Dec-2004
Ok, I did a little more work and here is some of the information you asked for (sorry about the code length but it is exactly what gets executed).
Our LLBLGen versions are as follows:
SD.LLBLGen.Pro.ORMSupportClasses.NET20 - Version 2.0.0.0 - Runtime Version v2.0.50727
SD.LLBLGen.Pro.DQE.SqlServer.NET20 - Version 2.0.0.0 - Runtime Version v2.0.50727
Here is the code we are executing and please note that doing an SQL Profiler Trace while running this code results in what appears to be literally thousands of queries being executed against the database. Also, the code that causes the large memory utilization is the ConsumerGrid.DataBind() call.
Again, any help at all would be greatly appreciated.
Craig
Here is the code being executed:
string keyword = Searchbox.Text;

RelationCollection relationsToUse = new RelationCollection();
IPredicateExpression filter = new PredicateExpression();

// general filter...
filter.Add(PredicateFactory.Like(WSCSData.AddressFieldIndex.Address1, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Address2, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.City, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.State, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Phone, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Fax, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Email, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.FirstName, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.LastName, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Login, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Password, "%" + keyword + "%"));

ConsumerGrid.Visible = true;
ConsumerCollection cc = new ConsumerCollection();

relationsToUse.Add(ConsumerEntity.Relations.AccountEntityUsingAccountId);
relationsToUse.Add(ConsumerEntity.Relations.AddressEntityUsingAddressId);
filter.AddWithOr(PredicateFactory.Like(WSCSData.ConsumerFieldIndex.CompanyName, "%" + keyword + "%"));

cc.GetMulti(filter, 0, null, relationsToUse);
ConsumerGrid.DataSource = cc;
Results.Text = "Search Returned " + cc.Count.ToString() + " Results";
ConsumerGrid.DataKeyField = "Id";
ConsumerGrid.DataBind();
Joined: 24-Aug-2005
Hello. I'm working with Craig on this problem.
I've just finished running through some tests using the Windows performance monitor, as recommended by Otis. This is the first time I've used this tool, so it's possible I'm doing something wrong ... however, none of the specific counters listed in Otis' post are touched by our web application. The numbers displayed do not change at all, no matter what I do in our site. I don't know what this means, but it seems significant ...
Oh yes, note that we are running this site in isolation on our server; there are no other websites on here at all to confuse the issue.
Thanks for any suggestions,
rich
legos211 wrote:
Ok, I did a little more work and here is some of the information you asked for (sorry about the code length but it is exactly what gets executed).
Our LLBLGen versions are as follows:
SD.LLBLGen.Pro.ORMSupportClasses.NET20 - Version 2.0.0.0 - Runtime Version v2.0.50727
SD.LLBLGen.Pro.DQE.SqlServer.NET20 - Version 2.0.0.0 - Runtime Version v2.0.50727
That's the .NET version, not the runtime lib version; please right-click the ORM support classes DLL in Explorer -> Properties -> Version tab.
But I don't think it's related to a bug we might have fixed; using the latest runtimes won't hurt though.
Here is the code we are executing and please note that doing an SQL Profiler Trace while running this code results in what appears to be literally thousands of queries being executed against the database. Also, the code that causes the large memory utilization is the ConsumerGrid.DataBind() call.
You should start sqlprofiler, start a new trace, then run your application in DEBUG mode. Place a breakpoint at the start of the routine you expect to be harmful. When you hit the breakpoint, clear the sqlprofiler trace window, then step through the code till you see the explosion of the queries.
Here is the code being executed:
string keyword = Searchbox.Text;

RelationCollection relationsToUse = new RelationCollection();
IPredicateExpression filter = new PredicateExpression();

// general filter...
filter.Add(PredicateFactory.Like(WSCSData.AddressFieldIndex.Address1, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Address2, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.City, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.State, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Phone, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AddressFieldIndex.Fax, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Email, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.FirstName, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.LastName, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Login, "%" + keyword + "%"));
filter.AddWithOr(PredicateFactory.Like(WSCSData.AccountFieldIndex.Password, "%" + keyword + "%"));

ConsumerGrid.Visible = true;
ConsumerCollection cc = new ConsumerCollection();

relationsToUse.Add(ConsumerEntity.Relations.AccountEntityUsingAccountId);
relationsToUse.Add(ConsumerEntity.Relations.AddressEntityUsingAddressId);
filter.AddWithOr(PredicateFactory.Like(WSCSData.ConsumerFieldIndex.CompanyName, "%" + keyword + "%"));

cc.GetMulti(filter, 0, null, relationsToUse);
ConsumerGrid.DataSource = cc;
Results.Text = "Search Returned " + cc.Count.ToString() + " Results";
ConsumerGrid.DataKeyField = "Id";
ConsumerGrid.DataBind();
The problem is: this results in a SINGLE query (executed on the GetMulti() call). Could you please verify that with the debugger step through technique I described above?
However, what's the type of ConsumerGrid. Is that an infragistics grid by any chance? Have you set maxbanddepth to 1?
(edit): it seems like the grid pulls the complete db into memory, which only happens (to our knowledge) with Infragistics grids, as they don't implement support for ITypedList; they trigger lazy loading by literally reading the collection properties, and thus read data on and on, deeper into the graph. This can be stopped by setting MaxBandDepth to 1, forcing the grid not to read data beyond the current level.
rainbird wrote:
Hello. I'm working with Craig on this problem.
I've just finished running through some tests using the Windows performance monitor, as recommended by Otis. This is the first time I've used this tool, so it's possible I'm doing something wrong ... however, none of the specific counters listed in Otis' post are touched by our web application. The numbers displayed do not change at all, no matter what I do in our site. I don't know what this means, but it seems significant ...
Which process did you select at the right side? Global or the wp process?
Oh yes, note that we are running this site in isolation on our server; there are no other websites on here at all to confuse the issue. Thanks for any suggestions, rich
That's OK; on our testbox I had one other ASP.NET 2.0 website running as well when I walked through the steps I described in my post.
Joined: 24-Aug-2005
Which process did you select at the right side? Global or the wp process?
I used the Global process.
To clarify, very nearly the entire 161 MB memory spike comes as a result of the ConsumerGrid.DataBind() call. The ConsumerGrid, as far as I can tell, is not an infragistics grid; it is the standard ASP.NET datagrid (System.Web.UI.WebControls.DataGrid).
I will try redoing the SQL trace as you outlined and get back to you with the results of that shortly.
rich
rainbird wrote:
Which process did you select at the right side? Global or the wp process?
I used the Global process.
To clarify, very nearly the entire 161 MB memory spike comes as a result of the ConsumerGrid.DataBind() call. The ConsumerGrid, as far as I can tell, is not an infragistics grid; it is the standard ASP.NET datagrid (System.Web.UI.WebControls.DataGrid).
Then, still somehow lazy loading is triggered. Do you have fields mapped onto related fields in the entities bound to the grid (and are these fields visible in the grid) ?
I will try redoing the SQL trace as you outlined and get back to you with the results of that shortly. rich
Good, though check every possibility how the grid can trigger lazy loading, i.e.: read related entities while databinding. Also, be sure to use the latest runtime libs, available in the customer area.
Joined: 24-Aug-2005
You should start sqlprofiler, start a new trace, then run your application in DEBUG mode. Place a breakpoint at the start of the routine you expect to be harmful. When you hit the breakpoint, clear the sqlprofiler trace window, then step through the code till you see the explosion of the queries.
I re-ran a trace through the SqlProfiler as outlined here by Otis.
Shortly before I did this, I set up a current back-up copy of the site's production db on our development server, for other items we're working on. So note that what follows relates to a slightly larger dataset (about 2500 records in the Consumer table, instead of 500). Nevertheless, this should still be a very small query, using nothing near the memory we're seeing.
I cleared the trace file, to log only the activity generated by the single command, ConsumerGrid.DataBind(). The trace recorded slightly more than 19,000 queries, and the application's memory use went from 45 MB to over 500 MB ... at which time my development environment reset itself.
Here is one possible cause of this, but I do not understand exactly why. What follows is a sample of the code we are using on the HTML page to load most of the data into the datagrid:
<asp:DataGrid id="ConsumerGrid" runat="server" Width="100%" AutoGenerateColumns="False"
    Font-Names="Arial" Font-Size="X-Small" EnableViewState="True">
    <alternatingitemstyle backcolor="White"></alternatingitemstyle>
    <headerstyle font-bold="True" forecolor="White" backcolor="Green"></headerstyle>
    <columns>
        <asp:templatecolumn headertext="Acct#">
            <itemtemplate>
                <asp:label runat="server" id="Label61">
                    <%# String.Format("{0:D6}", ((WSCSData.EntityClasses.ConsumerEntity)Container.DataItem).Account.Id) %>
                </asp:label>
            </itemtemplate>
        </asp:templatecolumn>
        <asp:TemplateColumn HeaderText="Type">
            <itemtemplate>
                <asp:Label runat="server" ID="Label62">
                    <%# ((WSCSData.EntityClasses.ConsumerEntity)Container.DataItem).Account.AccountType.Name %>
                </asp:Label>
            </itemtemplate>
        </asp:TemplateColumn>
... and so forth, for 7 or 8 columns.
I believe this is what is generating the multiple queries we're seeing, but even so, each query would only be returning a single record. 19,000 records x 10-20 KB is still nothing near the memory draw we're seeing.
I've looked around for some way to identify if a datagrid is using the "lazy loading" process you mentioned, and if so, how to turn it off. I couldn't find any reference to it in relation to ASP 2.0. There were GC problems with ASP 1.0 and 1.1, but the coding community seems to think these issues were resolved in ASP 2.0.
The only thing I could find to try was to shut off the 'EnableViewState' property on the datagrid, which might cause some large memory caching, but that made no difference in this case.
Let us know if this sparks any other ideas. Thanks for your input.
rich
Joined: 10-Mar-2006
When you reference Account.Id, that triggers the lazy loading of the related Account data. When you reference AccountType.Name, that triggers the lazy loading of the related AccountType data.
and so forth for 7 or 8 columns.
Put a test grid on the form. Have that test grid just show a single piece of data that is in the primary table (I think 'Accounts' or 'Consumer' in your case).
Do the query and bind that grid - I think you will see good results.
To make the grid you currently have work, add column by column and test. When adding each column, if it is from another table be sure you add that table to the prefetch path.
Hope this helps!
Wayne
I agree with Wayne that the property usage in the grid is what's triggering lazy loading and that will cause a lot of queries to be executed.
Though what's a bit odd is that this shouldn't result in such a high memory load by itself. What might also be a cause is that you bind 2500 rows in a web grid. This leads to a lot of overhead for the grid, as 2500 rows in a grid is a lot.
Nevertheless, if you need a lot of fields from related entities, you definitely should use a prefetch path. This should lead to just a few queries, likely not more than 2 or 3.
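For the fetch discussed in this thread, the path might look like the sketch below (written against the generated selfservicing API; the AccountType sub-path is an assumption based on the grid markup posted earlier, which reads Account.AccountType.Name — without it, that property still lazy-loads per row):

```csharp
ConsumerCollection cc = new ConsumerCollection();

IPrefetchPath prefetchPath = new PrefetchPath((int)EntityType.ConsumerEntity);
// Prefetch Account, and below it AccountType, because the grid templates
// read Consumer.Account.Id and Consumer.Account.AccountType.Name.
prefetchPath.Add(ConsumerEntity.PrefetchPathAccount)
    .SubPath.Add(AccountEntity.PrefetchPathAccountType);
prefetchPath.Add(ConsumerEntity.PrefetchPathAddress);

// Same GetMulti overload as in the posted code, now with the path appended.
cc.GetMulti(filter, 0, null, relationsToUse, prefetchPath);
```

Each node in the path becomes one extra query, so this whole fetch should stay in the range of a handful of SELECT statements regardless of row count.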
Also, if you're viewing the data in a read-only form, i.e. you're not editing the entities inside the grid as well, you might want to use a typed list in this case, as a typed list was designed for exactly this purpose: read-only lists of data built from multiple entities.
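Usage of such a typed list is roughly as follows; the class name and the exact Fill() overload shown are assumptions — the real name is whatever you define in the designer over Consumer, Account and Address:

```csharp
// Hypothetical typed list defined in the LLBLGen designer over the
// Consumer, Account and Address entities, with just the fields the grid shows.
ConsumerSearchTypedList results = new ConsumerSearchTypedList();
results.Fill(0, null, true, filter);   // maxRows, sort, allowDuplicates, filter

// A typed list is a read-only, DataTable-based resultset: one query,
// flat rows, and no entity objects to lazy-load while the grid renders.
ConsumerGrid.DataSource = results;
ConsumerGrid.DataBind();
```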
Joined: 24-Aug-2005
Thanks for the feedback. I will investigate using prefetch paths for the additionally referenced tables.
However, if this is the problem, then it introduces a related problem for us, in that we now have a complete, custom-written e-commerce site that is quite large, and uses similar data loading methods on literally hundreds of datagrids. Prior to the switch to ASP 2.0 and LLBLGen 2.0, we never saw any kind of memory problem using this method (however inefficient it appears to be, judging by your comments).
So what changed between ASP 1.1 and 2.0 (or LLBLGen 2.0) to cause this to suddenly become such a crippling problem? Manually recoding how all of these datagrids retrieve their data represents potentially a very large chunk of work, and before we start doing it, I'd like to make sure there aren't any other possible solutions.
Thanks again for your help with this.
rich
rainbird wrote:
Thanks for the feedback. I will investigate using prefetch paths for the additionally referenced tables.
However, if this is the problem, then it introduces a related problem for us, in that we now have a complete, custom-written e-commerce site that is quite large, and uses similar data loading methods on literally hundreds of datagrids. Prior to the switch to ASP 2.0 and LLBLGen 2.0, we never saw any kind of memory problem using this method (however inefficient it appears to be, judging by your comments).
So what changed between ASP 1.1 and 2.0 (or LLBLGen 2.0) to cause this to suddenly become such a crippling problem? Manually recoding how all of these datagrids retrieve their data represents potentially a very large chunk of work, and before we start doing it, I'd like to make sure there aren't any other possible solutions.
Thanks again for your help with this. rich
I'm surprised that you didn't see this sluggishness on .NET 1.1, as lazy loading was triggered the same way it is now. If you run the 1.1 site, you'll notice as well that there is a lot of db activity going on in the same form.
The fact is that memory consumption per entity is much lower in LLBLGen Pro v2.0 code than in 1.0.2005.1, so it should be LESS of a problem.
What might be a factor is that the grid you now use is different from what was used in .NET 1.x, and ASP.NET 2.0 is different and binds data differently.
The prefetch path can be set up in the codebehind, so you can pass the prefetch path to the GetMulti() call in the code snippet you posted. You could do this on a per-form basis when a form is giving problems; however, it might be that the majority of the grids don't retrieve properties the way this particular grid does (i.e., they show just the fields of one entity, not the fields of related entities).
Lazy loading is part of selfservicing; it's a core feature and discussed a lot in the documentation. I don't see how you could have missed it when the code was written. It might be that it was never an issue back then because the test data was small and the code was run locally.
As you found this form to be particularly harmful for the application, I think that if you solve this form, it will be a big step forward.
Joined: 24-Aug-2005
Bad news. Since I've never used prefetch paths before, I may be doing it incorrectly. However, I tried it with the form in question, and I'm still seeing exactly the same memory spike.
There is one notable difference ... previously, we were seeing the memory spike as a result of the DataGrid.DataBind() call, but now we're seeing it during the Collection.GetMulti() call.
Here is the modification I made to the previously posted code:
ConsumerCollection cc = new ConsumerCollection();

IPrefetchPath prefetchPath = new PrefetchPath((int)EntityType.ConsumerEntity);
prefetchPath.Add(ConsumerEntity.PrefetchPathAccount);
prefetchPath.Add(ConsumerEntity.PrefetchPathAddress);

relationsToUse.Add(ConsumerEntity.Relations.AccountEntityUsingAccountId);
relationsToUse.Add(ConsumerEntity.Relations.AddressEntityUsingAddressId);
filter.AddWithOr(PredicateFactory.Like(WSCSData.ConsumerFieldIndex.CompanyName, "%" + keyword + "%"));

cc.GetMulti(filter, 0, null, relationsToUse, prefetchPath);
Again, thanks for your help with this.
rich
Joined: 24-Aug-2005
Follow-up: I re-ran the SQL trace, specifically against the Collection.GetMulti() call. This only generated ~200 queries before our test server reset itself due to lack of memory. This is certainly better than the 19,000 queries I saw yesterday; however, it's still way off-track from what we'd expect.
The first three queries generated made sense ... the first two queries were against Consumer, joined to Account and Address. The third query was against Address, but before it completed that query, it fired off about 100 additional queries against a bunch of tables in our database that should have nothing to do with this dataset.
It took 8.5 seconds to complete the third query (after completing the 100 sub-queries). At that point, the .GetMulti() call should have been complete. However, it then fired off another 100 or so queries, which look suspiciously like repeats of the first 100.
For the record, there have been no changes made to our database structure between the ASP 1.1 and ASP 2.0 versions, so whatever is happening is somehow related to the ASP or LLBLGen upgrade.
Any additional suggestions would be greatly appreciated.
rich
The prefetch path should greatly reduce the # of entities fetched, as duplicate entities are no longer fetched; if 2 rows refer to the same account, you get just 1 account entity back.
What I find odd is that you get a lot of queries still. Do you initialize any related entities in Consumer, Account or Address's initclass routines?
Keep in mind that if you want to show a list of say 10 fields from, say 5 related entities, ALL data of these 5 entities is loaded, despite the fact that you just show 10. It can be that you have a blob/text/image field in one of the entity types, which is then loaded. Is that the case?
Did you just migrate the project to asp.net 2.0 and that's it? Or did you change code as well?
Joined: 24-Aug-2005
The Consumer, Account, and Address entities and tables are the only ones this form should be touching. We are not linking them to any other entities or tables at all--or at least, not directly in our code, although some process in ASP or LLBLGen is linking to other tables anyway.
There are only two tables in our database that contain blob/text/image type fields. Neither of these tables are even remotely related to the queries that should be generated for this form, and there is no reason this search should touch either of those tables at all.
Every other table in our entire database is made up exclusively of fields with small pieces of data--255 bytes at the most, and in most cases 50 bytes or less. The fact is, except for the two tables that handle our blobs, the entire database is significantly smaller than the memory spike we're seeing from this form.
Yes, we just migrated the project to ASP 2.0. This did involve code changes (replacing a third-party MasterPage plug-in with ASP 2.0's internal MasterPage system, and similar items), but all changes were specific to the upgrade.
Note that I just ran a SQL trace against the same form/query in the current ASP 1.1 site, and I did see a similar problem. The memory spike was only 130 MB (compared to 500+ MB now), but it did generate a total of about 9,000 queries before the datagrid loaded. So, yes, this issue existed prior to the ASP/LLBLGen 2.0 upgrade, but it just wasn't enough of a problem to cause the server-resource issues we see now.
rich
Follow-up: I just searched through the traces. None of the queries are referencing any of the tables that contain blob/text/image data fields, so these are not being loaded into memory.
Ok, I think we can keep on guessing but I don't think that will lead to a swift solution. So what I'd like to ask you is to pack and zip the following and mail it to support AT llblgen.com. I'll then look into it and report back to you.
- the page in question, including the masterpage: ASPX and codebehind. If the page contains controls, also include these. I just need one page, namely the one with the grid. No need for graphics or stylesheets, but if you don't mind you could include them; it will make testing easier.
- the .lgp file
- the generated code, all of it.
- if there are classes called from the form, please also include these.
- DDL SQL to re-create the schema/catalog. You don't have to send any DATA, although if you have TEST data (thus not production data), that would be great, otherwise I'll generate testdata myself.
If you think other info is also of use, please include that as well.
We're in the timezone GMT+2, so if you could send this before our morning starts (it's 20:18 now), we can do our testing first thing tomorrow and I can report back to you when your day starts.
The memory difference between 1.1 and 2.0 isn't due to extra memory taken by llblgen pro entities, on the contrary, these take at least 40% less memory in LLBLGen Pro v2.0.
Joined: 06-Dec-2004
Otis,
I have e-mailed a zip file to support at llblgen. Unfortunately, for contract reasons, I cannot send you our actual application, it's not my decision to make.
However, what I did do is create a new database with the 4 tables in question. I populated them with dummy data so there are about 3000 records in each of the tables. I then created a new solution with a page using the same code as our actual application.
I am seeing the exact same problem. While the memory usage is obviously concerning, what is more concerning is the fact that the memory is never released. This must be a problem for more than just us. As you will see, this little application is very simple and must be similar to things other people are doing. The nice thing about separating this problem into a new application is that there is no interference from all of our other code, so it shows the issue pretty clearly. Furthermore, I built this application on a machine that has none of the components of our actual application. There is just a base install of VS.NET 2005 and LLBLGen 2.0. There is no SQL Server on this machine.
The zip file I e-mailed includes the following:
- A database creation script and four CSV files with dummy data.
- A Visual Studio 2005 project with a single page containing a search form and a datagrid.
- The LLBLGen project file (found in the dgMedTestData project folder).
The simple search page and datagrid definition use the same exact code as our actual application and the datagrid definition is exact as well.
Let me know what you think.
I truly appreciate all the help you have given me and my developers over the last 2 years. Your product rocks and so does your support of it.
Thanks, Craig Anderson NSR Business Solutions
Joined: 06-Dec-2004
Otis,
I have reviewed your response in all of its detail. Thank you very much, I can't believe the amount of effort you put into that.
However, I am now incredibly frustrated. I have added your code for prefetching and I still cannot get this query to even complete on a machine with 512MB of RAM allocated to the application pool. It just keeps using memory until it dies.
I just don't get it. I am going to do some more review of the memory utilization to see where things are going but it must be something specific to our environment.
I will let you know what I find.
Thanks Again, Craig