CPU usage?
Why do all Irrlicht applications use 100% CPU the whole time they're running? The screen buffer also updates constantly, even when there's no rendering at all. Is it possible to fix this?
I don't think this can be fixed; it's inherent to 3D applications. You try to achieve the highest frame rate possible, so you constantly process all scene nodes as fast as you can, i.e. using up all available resources. You may skip scene nodes that you know don't change the visualization, but that would only increase FPS, not lower CPU usage. The only conceivable solution would be to limit the frame rate and stop processing once enough frames have been drawn (well, you would better skip some frames in between to stay smooth, but that doesn't matter here). That would lower CPU usage, but it's better to limit CPU usage externally, i.e. from the OS: give the process a lower priority and you should be able to use another program, though you won't get good FPS anymore.
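As a concrete sketch of the "give the process a lower priority" suggestion, assuming a Win32 or POSIX system (the function name lowerOwnPriority and the nice value 10 are my own choices for illustration, not anything from Irrlicht):

```cpp
// Sketch: lower the current process's priority so other programs stay
// responsive while the render loop spins. Assumes Win32 or POSIX.
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/resource.h>
#endif

// Returns true if the priority was lowered successfully.
bool lowerOwnPriority()
{
#ifdef _WIN32
    // Drop below normal priority; the render loop still spins, but the
    // scheduler now favors other runnable processes.
    return SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS) != 0;
#else
    // Raise the nice value of the current process (pid 0 = self).
    // Unprivileged processes may only increase niceness, i.e. lower
    // their own priority.
    return setpriority(PRIO_PROCESS, 0, 10) == 0;
#endif
}
```

Note this only changes scheduling fairness: the app still burns its full timeslice, which matches the point above that it won't make the application idle.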
hybrid wrote:Give the process a lower priority and you should be able to use another program
This will not make the Irrlicht application idle; the OS simply assigns more CPU time to the other applications. What I wanted was to make the Irrlicht application idle when there's no rendering, such as in the "User Interface" demo.
Re: CPU usage?
user222 wrote:Is it possible to fix this?
Yes, it is. But not before CVS is available.
Re: CPU usage?
Vox wrote:Yes, it is. But not before CVS will be available
So you think that event-based rendering would be possible based on CVS?
Well, there could be the rare case where a completely static world with no movement of the cameras could be rendered once, and never updated. But in all other cases you have to go at least through the animated scene nodes.
You've probably seen some silly implementations of applications using event polling: even if the event loop just iterates without doing anything, CPU usage goes up to 100%. That's where blocking on event arrival comes in. But how would you block the rendering if something in the scene is moving? As I said, you could save some resources and get higher FPS, like NX++ does in some places, but you won't be able to have your app sleep until the next event occurs. It's directly related to the frame rate: if you do not redraw every x milliseconds, your frame rate is limited to 1000/x. So either restrict your frame rate or use maximum CPU. If that's not the case, you are welcome to post some preliminary code; it should be possible without CVS as well.
Re: CPU usage?
Vox wrote:Yes, it is. But not before CVS will be available
Hmm? Irrlicht is still open source, so you can modify your own version. If you want CVS, check out IrrlichtNX (though with limited commit rights).
You do a lot of programming? Really? I try to get some in, but the debugging keeps me pretty busy.
Crucible of Stars
Re: CPU usage?
hybrid wrote:
Vox wrote:Yes, it is. But not before CVS will be available
So you think that an event based rendering would be possible based on CVS? Well, there could be the rare case where a completely static world with no movement of the cameras could be rendered once, and never updated. But in all other cases you have to go at least through the animated scene nodes.
It is not about that. Irrlicht is processing more data than it should. But I'm not ready to discuss this now.
A really easy way to avoid that is to add a Sleep(0); or Sleep(1); in your main loop (though I wouldn't recommend the latter, as slower machines may start to stutter from an inconsistent dt_frame...).
Sleep(0); simply signals the scheduler that the thread has finished its work for the current timeslice.
On Linux (where the problem is worse than under Windows, because the app can take all the CPU time if it wishes to) you have the much more precise usleep() function, so you can usleep(100) (0.1 ms) for example...
I can't be sure for Linux, but I've been using Sleep(0) under Windows for ages with no problems... Of course, if your app is genuinely busy this won't make any difference.
OK, this might be me, but I thought any running process will take the maximum amount of processing power it requires. Obviously any other processes that are running will get a certain amount of schedulable processor time; however, how much depends on a couple of things.
A, the development environment, because I think each one has a different interface with the kernel.
B, the version of Windows you are using, because I think each kernel deals differently with processes and their time slices.
I don't like this priority thing either. We did a recent test in Java, and I did a similar one in C++, and seem to get weird results. Well, not weird, but not what you would expect; we ended up comparing priorities of threads/processes to a ticket to get to the front of fairground rides. Sometimes you get there first, sometimes you don't...
OK, the first section may be a load of bull, but hey, I'm tired and felt like a rant about processes/threads...
Take a look at this code; I think it fixes your problem.
This limits the frame rate to a maximum of 100fps thus freeing the CPU quite a lot.
Code: Select all
void run()
{
    u32 start, end, elapsed, frame = 10; // 10 ms per frame = 100 Hz
    while (videoDevice->run())
    {
        start = videoDevice->getTimer()->getTime();
        videoDriver->beginScene(true, true, video::SColor(0,87,133,157));
        sceneManager->drawAll();
        gameController->onRenderFrame();
        videoDriver->endScene();
        end = videoDevice->getTimer()->getTime();
        elapsed = end - start;
        if (elapsed < frame)
        {
#ifdef LINUX
            usleep(1000 * (frame - elapsed)); // usleep takes microseconds
#endif
#ifdef WINDOWS
            Sleep(frame - elapsed);
#endif
        }
    }
    videoDevice->drop();
}
Another point here that may have been intended in the original post:
"real" windows multitasking apps (including games) when minimized (alt-tabbed) should use ZERO cpu. In custom frameworks or using DX directly you can usually put some conditions in your winmain loop to only render/update if a window is active. This cuts cpu usage to 0.
The other side is sometimes you want rendering but no logic updates.. again this can be done (even in irrlicht) by putting a clause/condition in the main loop .
I don't know how you would have the flexibility to do this properly without a full winmain function processing messages etc, but this should be looked at asap in my opinion. Alt Tabbed app should use 0 cpu and a windowed app that looses focus should stop (using in game flags to bring up a pause menu) while using very little cpu. Once maximised let it use X% cpu if you want.
"real" windows multitasking apps (including games) when minimized (alt-tabbed) should use ZERO cpu. In custom frameworks or using DX directly you can usually put some conditions in your winmain loop to only render/update if a window is active. This cuts cpu usage to 0.
The other side is sometimes you want rendering but no logic updates.. again this can be done (even in irrlicht) by putting a clause/condition in the main loop .
I don't know how you would have the flexibility to do this properly without a full winmain function processing messages etc, but this should be looked at asap in my opinion. Alt Tabbed app should use 0 cpu and a windowed app that looses focus should stop (using in game flags to bring up a pause menu) while using very little cpu. Once maximised let it use X% cpu if you want.
Anonymous wrote:I don't know how you would have the flexibility to do this properly without a full winmain function processing messages etc
You can do this in Irrlicht with this:
Code: Select all
while (device->run())
{
    if (device->isWindowActive()) // <- will stop rendering if not active
    {
        // draw as usual here
    }
}
How about this code? Sleep probably frees more CPU, as it actually stops execution, while this still processes messages; but it will allow other messages to be handled instead of freezing in that while() loop.
Code: Select all
MSG msg; // needed for the message pump below
while (device->run())
{
    if (device->isWindowActive()) // <- will stop rendering if not active
    {
        // draw as usual here
    }
    else
    {
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            DispatchMessage(&msg);
        }
    }
}
Yeah, sleep works well in conjunction with the (while window active) check posted above, so it only sleeps when the window is NOT active. I now get 0% CPU on focus loss, and 0% CPU plus lower memory use on an alt-tabbed full-screen app!
Code: Select all
while (device->run())
{
    if (device->isWindowActive()) // <- will stop rendering if not active
    {
        // draw as usual here
    }
    else
    {
        Sleep(1); // or Sleep(30) or whatever.. (usleep() on Linux)
    }
}