memory usage of new[]

Dorth
Posts: 931
Joined: Sat May 26, 2007 11:03 pm

Post by Dorth »

Nox wrote: This results in the fact that new int[1000] consumes less memory than 1000 times new int
That's not true. It's quite the opposite.
for (int i = 0; i < 1000; ++i)
{ something = new int; }

will take 1000*sizeof(int) in memory while

something = new int[1000];

will take 1000*sizeof(int) plus the bookkeeping new[] needs to store the array size.

However, the second call is relatively faster, and in a multi-threaded application it will also result in an adjacent block of memory being allocated, while no such guarantee is made for the first method.
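For illustration only (this sketch is not from the original posts), printing the addresses makes the layout difference visible: the elements of one new int[4] sit exactly sizeof(int) apart, while the pointers returned by four separate new int calls are typically spaced further apart, because each block carries its own allocator header and alignment padding.

Code:

#include <iostream>

int main()
{
	// One block: new int[4] packs four ints back to back.
	int* block = new int[4];

	// Four blocks: each new int is a separate allocation; the spacing
	// between them (allocator header + alignment) is implementation-specific.
	int* separate[4];
	for (int i = 0; i < 4; ++i)
		separate[i] = new int;

	for (int i = 0; i < 4; ++i)
		std::cout << "block[" << i << "] at " << static_cast<void*>(&block[i])
		          << "   separate #" << i << " at " << static_cast<void*>(separate[i])
		          << '\n';

	delete[] block;
	for (int i = 0; i < 4; ++i)
		delete separate[i];
	return 0;
}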
Nox
Posts: 304
Joined: Wed Jan 14, 2009 6:23 pm

Post by Nox »

@Dorth just try a while(1) new char[0]; . Wait some time and good luck & have fun trying to get your OS back to life ;) .
The OS saves some hidden information for each new.
Hidden = not in every case counted towards the app's memory consumption.
Dorth
Posts: 931
Joined: Sat May 26, 2007 11:03 pm

Post by Dorth »

Wtf are you talking about? That's completely beside what you or I said earlier. Woohoo, you've discovered the infinite loop, go have a cookie. And indeed, most implementations have an int stored at array[-1], but it's not mandatory and no one cares, 'cuz it still must be stored somewhere, there or elsewhere.

Memory != CPU cycles or running time.
Nox
Posts: 304
Joined: Wed Jan 14, 2009 6:23 pm

Post by Nox »

Didn't you get it? new char[0] => array of zero size => should consume no memory, if one assumes that new itself does not consume memory. The fact: your OS allocates memory to save information for every single allocated memory block. Even for blocks with a size of ZERO. Which means: many news => high memory consumption from the hidden information!
Got it now?

P.S.: only tested under Windows. Don't know how Unix/MacOS/BSD/Linux handles an allocation of zero size.
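A small sketch of the same point (assuming a typical allocator; the per-block cost itself is implementation-specific): every new char[0] must return a distinct, non-null pointer, so repeated zero-size allocations keep consuming memory even though zero usable bytes were requested.

Code:

#include <iostream>
#include <set>

int main()
{
	// Each new char[0] returns a distinct, non-null pointer, so the
	// allocator has to hand out (and track) a real block every time.
	std::set<void*> seen;
	for (int i = 0; i < 100000; ++i)
		seen.insert(new char[0]);   // deliberately leaked for the demonstration

	std::cout << "distinct pointers from 100000 zero-size allocations: "
	          << seen.size() << std::endl;
	return 0;
}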
Dorth
Posts: 931
Joined: Sat May 26, 2007 11:03 pm

Post by Dorth »

First off:
When the object being created is an array, only the first dimension can be a general expression. All subsequent dimensions must be constant integral expressions. The first dimension can be a general expression even when an existing type is used. You can create an array with zero bounds with the new operator. For example:

char * c = new char[0];

In this case, a pointer to a unique object is returned.
Second: Have you even read what I wrote? 'cuz you are trying to argue that new[] uses more memory than new, which was my point exactly AND the inverse of what you first said:

Nox wrote: This results in the fact that new int[1000] consumes less memory than 1000 times new int
Nox
Posts: 304
Joined: Wed Jan 14, 2009 6:23 pm

Post by Nox »

@mods please separate this discussion from the main topic.

@Dorth
'cuz you are trying to argue that new[] uses more memory than new, which was my point exactly AND the inverse of what you first said
Sorry, but where did I argue that new[] uses more memory?


EDIT:

Because I'm tired of arguing, I wrote this test just for you. Maybe you'll believe the facts (tested under Win7 64-bit: the first test takes ~80 MB, the second ~200 MB):

Code:

#include "windows.h"
#include <iostream>

int main(int argc, char argv[])
{
	char in;
	const size_t count = 10 * 1000 * 1000;

	std::cout << "press 1 for the new[] and 2 for the new\t"<<std::flush;
	std::cin >> in;
	
	//i want to be fair so i add the pointer array to both tests although its useless in the first case
	int **v = new int*[count];
	
	if(in == '1')
	{
		int* v2 = new int[count];

		//doing some stuff
		v2[0] = v2[1] = 1;
		for(size_t counter = 2; counter < count; counter++)
			v2[counter] = v2[counter - 1] + v2[counter - 2];

		std::cout << "going to sleep" << std::endl;
		Sleep(60 * 1000);
		delete[] v2;
	}
	if(in == '2')
	{
		for(size_t counter = 0; counter < count; counter++)
			v[counter] = new int;

		//doing some stuff
		*v[0] = *v[1] = 1;
		for(size_t counter = 2; counter < count; counter++)
			*v[counter] = (*v[counter - 1]) + (*v[counter - 2]);

		std::cout << "going to sleep" << std::endl;
		Sleep(60 * 1000);

		for(size_t counter = 0; counter < count; counter++)
			delete v[counter];

	}
	delete v;
	return 0;
}
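As a possible refinement of the test above (not part of Nox's original code), the process can report its own memory use instead of relying on Sleep() and Task Manager. The helper below assumes the psapi.h API is available and that psapi.lib is linked; printWorkingSet is just a name chosen for this sketch.

Code:

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link against psapi.lib
#include <iostream>

// Print the current working set of this process, e.g. right after each
// allocation phase, instead of sleeping and watching Task Manager.
void printWorkingSet(const char* label)
{
	PROCESS_MEMORY_COUNTERS pmc = {0};
	pmc.cb = sizeof(pmc);
	if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
		std::cout << label << ": " << pmc.WorkingSetSize / (1024 * 1024)
		          << " MB working set" << std::endl;
}

Calling printWorkingSet("after new[]") and printWorkingSet("after many new") in the two branches would show the ~80 MB vs ~200 MB difference from inside the program.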
Nox
Posts: 304
Joined: Wed Jan 14, 2009 6:23 pm

Post by Nox »

@Dorth: you believe me now, don't you?
vitek
Bug Slayer
Posts: 3919
Joined: Mon Jan 16, 2006 10:52 am
Location: Corvallis, OR

Post by vitek »

Every allocation from a general-purpose allocator (regardless of whether it is allocated with new, new[], or malloc()) has to keep some bookkeeping information, and allocations are typically padded to slightly more than the requested size. These two factors cause N small allocations to waste more memory than 1 large allocation.

Of course a custom fixed-size allocator can be used to avoid some of this waste, but that is not the situation presented above.
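vitek gives no code for this, but a bare-bones fixed-size pool along the following lines shows the idea: one upfront new[] provides all the storage, and handing out slots from a free list adds no per-object allocator header. Alignment and object construction (placement new) are deliberately ignored here, and the class name is made up for this sketch.

Code:

#include <cstddef>
#include <vector>

// Minimal fixed-size pool: all slots come out of one big new[] block,
// so individual allocations carry no per-block allocator header.
class FixedPool
{
public:
	FixedPool(std::size_t blockSize, std::size_t blockCount)
		: storage(new char[blockSize * blockCount])
	{
		freeList.reserve(blockCount);
		for (std::size_t i = 0; i < blockCount; ++i)
			freeList.push_back(storage + i * blockSize);
	}

	~FixedPool() { delete[] storage; }

	void* allocate()
	{
		if (freeList.empty())
			return 0;                   // pool exhausted
		void* slot = freeList.back();
		freeList.pop_back();
		return slot;
	}

	void deallocate(void* slot)
	{
		freeList.push_back(static_cast<char*>(slot));
	}

private:
	char* storage;                  // one contiguous block for all slots
	std::vector<char*> freeList;    // currently unused slots
};

With FixedPool pool(sizeof(int), count), the ten million ints from the test above cost roughly count * sizeof(int) plus one pointer per slot for the free list; an intrusive free list that stores the next pointer inside each free slot would remove even that.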

Travis
vitek
Bug Slayer
Posts: 3919
Joined: Mon Jan 16, 2006 10:52 am
Location: Corvallis, OR

Post by vitek »

Nox wrote: Don't know how Unix/MacOS/BSD/Linux handles an allocation of zero size.
Zero-byte allocations are required by the C++ Standard to return a unique address. The amount of wasted space might change between implementations, but the fact that there is waste is inevitable for a conforming implementation.

Another issue to consider is that small allocations tend to fragment the heap. This can cause premature out-of-memory conditions, even if the memory is deallocated, or it can cause performance problems when the deallocated blocks on the heap are coalesced.
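One way to see how much the waste differs between implementations (an illustrative sketch, assuming MSVC's _msize or glibc's malloc_usable_size; both report the usable size of a malloc'd block, not the allocator's full per-block overhead):

Code:

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // _msize (MSVC) / malloc_usable_size (glibc)

int main()
{
	for (unsigned request = 0; request <= 16; request += 4)
	{
		void* p = std::malloc(request);
		if (!p)
			continue;   // malloc(0) is allowed to return NULL

#ifdef _MSC_VER
		unsigned long usable = (unsigned long)_msize(p);
#else
		unsigned long usable = (unsigned long)malloc_usable_size(p);
#endif
		std::printf("requested %u bytes, usable block size %lu bytes\n",
		            request, usable);
		std::free(p);
	}
	return 0;
}

On many 64-bit glibc builds, for example, every request in this range lands in the same minimum-size chunk, which is exactly the rounding described above.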

Travis
MasterGod
Posts: 2061
Joined: Fri May 25, 2007 8:06 pm
Location: Israel
Contact:

Post by MasterGod »

Nice closure, vitek.

Cool discussion subject.