Ok, I will explain my reasons fully:
Once again, I do not care about the size of the other objects created as a side effect.
The reason I want to know comes from another "breach" in the structure of Microsoft's .NET architecture. They say in MSDN:
[blue]
The .NET memory manager places all allocations of 85,000 bytes or larger into a separate heap called the large object heap. This heap consists of a series of virtual memory blocks that are independent from the main managed heap. Using a separate heap for larger objects makes garbage collection of the main managed heap more efficient because collection requires moving memory, and moving large blocks of memory is expensive. However, the large object heap is never compacted; this is something you must consider when you make large memory allocations in .NET.
For example, if you allocate 1 MB of memory to a single block, the large object heap expands to 1 MB in size. When you free this object, the large object heap does not decommit the virtual memory, so the heap stays at 1 MB in size. If you allocate another 500-KB block later, the new block is allocated within the 1 MB block of memory belonging to the large object heap.
During the process lifetime, the large object heap always grows to hold all the large block allocations currently referenced, but never shrinks when objects are released, even if a garbage collection occurs. Figure 2.4 shows an example of a large object heap.
[/blue]
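To make the quoted behavior concrete, here is a minimal C# sketch of the 85,000-byte threshold and the 1 MB / 500 KB example from the quote (the array sizes and variable names are mine, and the reuse of the freed LOH region in the last step is not observable from managed code, so the comments just restate what MSDN describes):

[code]
using System;

class LohDemo
{
    static void Main()
    {
        // Allocations of 85,000 bytes or more land on the large object heap
        // (LOH); anything smaller goes on the ordinary generational heap.
        byte[] smallBlock = new byte[84999]; // regular managed heap
        byte[] largeBlock = new byte[85000]; // large object heap

        // The MSDN example: a 1 MB allocation grows the LOH to 1 MB...
        byte[] oneMeg = new byte[1024 * 1024];

        oneMeg = null; // ...releasing the object frees it, but the LOH keeps
        GC.Collect();  // its 1 MB of virtual memory even after a full collect...

        // ...and a later 500 KB allocation is carved out of that same
        // 1 MB region instead of growing the heap further.
        byte[] halfMeg = new byte[512 * 1024];

        GC.KeepAlive(smallBlock);
        GC.KeepAlive(largeBlock);
        GC.KeepAlive(halfMeg);
    }
}
[/code]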
Now, Microsoft's ASP.NET worker process (aspnet_wp.exe) crashes when the upper limit for memory allocation is reached, and this happens quite often on a medium-sized application with 10 simultaneous users (after about 15-20 minutes), losing all in-process information, such as session state, for all of them.
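For reference, if this is ASP.NET 1.x, that upper limit is the memoryLimit attribute of the <processModel> element in machine.config, and what gets lost on a recycle is the in-process session state. A hedged sketch of the two settings involved (the values shown are, I believe, the shipped defaults; adjust for your setup):

[code]
<!-- machine.config: the worker process is recycled once it uses this
     percentage of physical memory (60 is the default, as far as I know). -->
<processModel enable="true" memoryLimit="60" />

<!-- web.config: keeping session state outside aspnet_wp.exe lets sessions
     survive a recycle; 42424 is the default ASP.NET state service port. -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424" />
[/code]

StateServer mode needs the ASP.NET State Service running on the machine named in stateConnectionString, but it keeps sessions alive across a worker-process recycle.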
Any ideas?
s-)
Blessed is he who, in the name of justice and goodwill, shepherds the weak through the valley of darkness...