Asking about what has / what will happen to the software renderer?
Hi everyone. I've come to Irrlicht very late on. I get the impression the software renderer is no longer updated to support much 3D, but what were the intentions before? How far did it get? I've been planning what I could share and what content to upload, and this question came up throughout my game dev notes. 3D is basically the reason I program at all.
Re: Asking about what has / what will happen to the software renderer?
Irrlicht loads 3D models without problems under certain conditions using the software renderer; the first Irrlicht example uses it. Visually it is more detailed. The graphics card is designed to be fast, not detailed. You don't notice it in a static render such as a screenshot, but when it comes to animation the difference is huge: the graphics card seems to round values a lot, giving rougher, less smooth animation, while the CPU is much more accurate, just slower. This used to be discussed, but there is a lot of misinformation online claiming that the GPU renders with the same quality. It doesn't; it is much more prone to inconsistency in how it rounds its calculations. I have rendered with different GPUs, and simply switching GPUs gives different results, while the CPU stays the same.
A reddit comment: "GPUs also natively work with floating-point integers where both bitness and precision is imposed by the manufacturer, meaning you cannot get around precision issues. CPUs do not suffer from such, developers can implement floating-point integers with whatever bitness and precision they want within memory limitations meaning superior accuracy over GPUs which don’t have upgradeable memory either. "
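You can see the precision side of that on the CPU just by picking a wider type. A quick standalone example (nothing GPU-specific, it only shows how much the chosen precision matters once rounding errors accumulate):
Code: Select all

#include <cstdio>

int main() {
    // Add the same small value a million times in 32-bit and in 64-bit floats.
    // Each addition rounds a little; the 32-bit sum drifts far from the exact
    // 100000, the 64-bit sum stays almost exact.
    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (int i = 0; i < 1000000; ++i) {
        sum32 += 0.1f;
        sum64 += 0.1;
    }
    printf("float  sum: %f\n", sum32); // far from 100000
    printf("double sum: %f\n", sum64); // ~100000.000001
    return 0;
}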
Now, most of the examples you try in Irrlicht will give problems with the software renderer; it seems the double buffer is not handled correctly, producing a constant flicker effect, which is very common when the double buffer either does not exist or is not implemented well. But I don't know, maybe it is another problem.
To get the quality of the software renderer even in textures, or at least get close to it, I remember Irrlicht had a function that let you choose the quality of the textures and leave them on "high", so to speak.
If you want, we can build a project together so you can see the real potential; precomputed graphics are always superior when you have much more time to calculate something (it's the principle behind double buffering itself).
**
If you are looking for people with whom to develop your game, even to try functionalities, I can help you, free. CC0 man.

Re: Asking about what has / what will happen to the software renderer?
We can pre-calculate the shadows and lighting into several binary files, create a double buffer, and you would have a realistic game. Using the default camera (not the FPS one), you could have a non-buggy 3D game with the software renderer. Totally possible, and beautiful.
Honestly, it is easier to create CPU-based shaders than GPU-based shaders, because if you look up shader information for the GPU you are pushed towards real-time calculations limited to what the GPU API can handle, while with the CPU you have no limits: you can pre-calculate everything and get excellent results.
Of course, your game will be heavier, since the precomputed data stays in binary files on disk; the CPU only loads it into RAM according to certain in-game conditions.
It is similar to saving images of what a cinematic render would look like as jpg files and then displaying those images, except that the binaries would describe how things are displayed on screen based on coordinates, for example shadows or reflections.
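A rough sketch of what I mean by keeping the precomputed data in binary files and only pulling it into RAM when needed (the file format, struct and names here are invented, just to show the idea):
Code: Select all

#include <cstdio>
#include <cstdint>
#include <vector>

// One precomputed sample: where it goes on screen and the colour it already has
// (lighting, shadows, reflections baked in). Hypothetical format, for illustration only.
struct BakedSample {
    uint16_t x, y;    // screen coordinates
    uint8_t  r, g, b; // precalculated colour
};

// Bake step: done once, offline, with as much time as you want.
void saveBake(const char* path, const std::vector<BakedSample>& samples) {
    FILE* f = fopen(path, "wb");
    if (!f) return;
    uint32_t count = (uint32_t)samples.size();
    fwrite(&count, sizeof(count), 1, f);
    fwrite(samples.data(), sizeof(BakedSample), samples.size(), f);
    fclose(f);
}

// Runtime step: read only the block you need into RAM, no lighting maths at all.
std::vector<BakedSample> loadBake(const char* path) {
    std::vector<BakedSample> samples;
    FILE* f = fopen(path, "rb");
    if (!f) return samples;
    uint32_t count = 0;
    if (fread(&count, sizeof(count), 1, f) == 1) {
        samples.resize(count);
        fread(samples.data(), sizeof(BakedSample), count, f);
    }
    fclose(f);
    return samples;
}

int main() {
    std::vector<BakedSample> frame = { {0, 0, 255, 128, 0} };
    saveBake("frame0000.bin", frame);
    std::vector<BakedSample> loaded = loadBake("frame0000.bin");
    printf("loaded %u samples\n", (unsigned)loaded.size());
    return 0;
}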
**
If you are looking for people with whom to develop your game, even to try functionalities, I can help you, free. CC0 man.

Re: Asking about what has / what will happen to the software renderer?
Ahh yes, using Irrlicht 1.3. Be it on our heads!
I just wanted a clearer answer, not:
Looping music
Models
Maps
Environments
Voice acting
Animations
Concept art
Character bios
Weapons
Triggers / switches
Recording SFX
Mobile / desktop
... And on and on...
You get a little further when you specialise.
Re: Asking about what has / what will happen to the software renderer?
Noie...........
Okay, so after our heavy open discussion... Courtesy of SourceForge... Something I can search for in books...
Pre-calculated
CPU cores
Assembly could help
Hold data in the binary and load into RAM at certain points.
****. That is something.
Re: Asking about what has / what will happen to the software renderer?
The looping music, voice acting, all of that I know how to do. What I don't know how to do, and would like to, is program in assembly to optimise my C++ code. I don't know modern C++ either; I am writing C++ tutorials in HTML, and there I will include all the things I have mentioned to you.
There are many ways to precompute; most of these things I learned without books, basically by trial and error. Build a 3D model without the GPU, get your shadows (raytracing), get the colour coordinates relative to the observer, save them in a binary file, load them into RAM, and create a double buffer to refresh the readout on screen, so that only the result is ever read, and you get a maximally optimised game.
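The double-buffer part of that is nothing exotic; roughly this idea (a sketch with made-up names, and whatever actually pushes the front buffer to the screen is left out):
Code: Select all

#include <cstdint>
#include <cstring>
#include <vector>

// Two pixel buffers: the screen reads the front one while the CPU fills the
// back one from precalculated data, then they swap. Nothing half-finished is shown.
struct DoubleBuffer {
    int width, height;
    std::vector<uint32_t> bufA, bufB; // packed RGBA pixels
    uint32_t* front;
    uint32_t* back;

    DoubleBuffer(int w, int h)
        : width(w), height(h), bufA(w * h, 0), bufB(w * h, 0),
          front(bufA.data()), back(bufB.data()) {}

    // Copy one already-baked frame into the back buffer (no lighting maths here).
    void fillBackFromBake(const uint32_t* bakedFrame) {
        std::memcpy(back, bakedFrame, size_t(width) * height * sizeof(uint32_t));
    }

    // The finished back buffer becomes what the screen reads next.
    void swapBuffers() {
        uint32_t* tmp = front;
        front = back;
        back = tmp;
    }
};

int main() {
    DoubleBuffer db(320, 240);
    std::vector<uint32_t> baked(320 * 240, 0xFF8040FFu); // pretend this came from a bake file
    db.fillBackFromBake(baked.data());
    db.swapBuffers(); // the baked frame is now what gets shown
    return 0;
}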
Consider that God of War ran on the PS2.
PS2:
CPU: Emotion Engine @ ~300 MHz (0.3 GHz)
Memory: 32 MB RAM, 4 MB video RAM
Graphics: Graphics Synthesizer @ 150 MHz (0.15 GHz)
We are talking about God of War having shadow volumes (or something like that) and particles; literally the guy was on fire while he still had shadows. Try making any game with OpenGL or DirectX using shaders with so little RAM and so little CPU; the result is obvious, it doesn't run.
Just including shadows makes it impossible to run. That's because God of War precalculated everything into binary files and then only read it into a small region of RAM. They even made scenes where the character takes time to get from one point to another, locked in with few elements, in order to clear memory and load more data without a loading screen and without overloading the RAM or GPU.
In the old days this is how games were made... everything precalculated.
And we are talking about reading directly from the DVD disc; they didn't even read from a hard disk.
GTA San Andreas ran with those specs on the PS2; on PC it asked for much more, namely:
Memory: 256 MB RAM
Processor: 1000 MHz (1 GHz)
Graphics card: 64 MB video card
And we are talking about how on PC many of the precalculated shaders were removed. You don't even see the damn orange light at the start of the game like on the PS2.
If the PS2 could run those graphics, obviously a Raspberry Pi could do the same.
**
If you are looking for people with whom to develop your game, even to try functionalities, I can help you, free. CC0 man.

Re: Asking about what has / what will happen to the software renderer?
I'm getting quite a few glitches on Lubuntu. The way to report a bug was to sign up, create a secure key, join a team, use a special PGP program, etc. I don't think I'll get that involved. If it fails to work I may have to try another distro.
My book mentioned assembly and profiling tools for optimisation. I did investigate some of the assembly parts, and it uses Intel SSE intrinsics (the book is 20+ years old). You need to install the pack with Visual Studio, or the ISPC package on Linux, but have a look at this:
Code: Select all
#include "xmmintrin.h"
int main() {
__attribute__((aligned(16))) static float x[4] = {1,2,3,4};
__attribute__((aligned(16))) static float y[4] = {5,6,7,8};
__attribute__((aligned(16))) static float z[4] = {0,0,0,0};
__m128 m0, m1, m2;
m0 = _mm_load_ps(x);
m1 = _mm_load_ps(y);
m2 = _mm_add_ps(m0,m1);
_mm_store_ps(z, m2);
// print result
"x 1,2,3,4"
"y 5,6,7,8"
"z 6,8,10,12"
There were also assembly movaps and addps instructions, but those were harder to figure out.
I don't want to invest much time in this because trying to get SSE running in Dev-C++ would be a huge thing.
SIMD runs parallel operations on the CPU and can be used for game graphics etc., with speed-ups of 200% or more.
Re: Asking about what has / what will happen to the software renderer?
I used std::clock (c_start, then c_end = std::clock()) and the outcome was terrible. I'm probably not using Intel's stuff correctly, or maybe not following the whole page on how to benchmark things, but a simple check before and after gave this (it's the first time I've timed code in milliseconds):
_mm commands to store and add two numbers - 1405ms
adding c[0] = a[0] + b[0] in C++ code - 721ms
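Something like this is what I mean by a before-and-after check (a sketch, not exactly what I ran; the iteration count is arbitrary, and an optimising compiler may delete the loops entirely if their results go unused, so treat the numbers as relative only):
Code: Select all

#include <ctime>
#include <cstdio>
#include <xmmintrin.h>

int main() {
    __attribute__((aligned(16))) static float a[4] = {1, 2, 3, 4};
    __attribute__((aligned(16))) static float b[4] = {5, 6, 7, 8};
    __attribute__((aligned(16))) static float c[4] = {0, 0, 0, 0};

    const int iterations = 10000000; // arbitrary, just enough to get a measurable time

    // SSE version: 4 additions per iteration in one instruction
    std::clock_t c_start = std::clock();
    for (int i = 0; i < iterations; ++i) {
        __m128 m0 = _mm_load_ps(a);
        __m128 m1 = _mm_load_ps(b);
        _mm_store_ps(c, _mm_add_ps(m0, m1));
    }
    std::clock_t c_end = std::clock();
    printf("SSE:    %ld ms (c[0]=%g)\n",
           (long)((c_end - c_start) * 1000 / CLOCKS_PER_SEC), c[0]);

    // plain C++ version: 4 scalar additions per iteration
    c_start = std::clock();
    for (int i = 0; i < iterations; ++i) {
        for (int j = 0; j < 4; ++j)
            c[j] = a[j] + b[j];
    }
    c_end = std::clock();
    printf("scalar: %ld ms (c[0]=%g)\n",
           (long)((c_end - c_start) * 1000 / CLOCKS_PER_SEC), c[0]);
    return 0;
}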
There was one file that was odd: a .ispc file, a format Intel designed. It's similar to C++. So perhaps the _mm functions aren't in the right place, or something else is slowing them down... Are they actually using all the cores the way they should?
Still interested to hear your thoughts. Maybe you can't test Intrinsics, so perhaps plain ol' assembly might be better?
Re: Asking about what has / what will happen to the software renderer?
Sorry Noie if I'm not following along with your info. If you can do things by yourself but just need to learn assembly, maybe I'm not needed as such.
But talking with you I did consolidate a lot (hope that's the right word). I think I'm figuring out what I want too, and it does kinda tie into CPU, RAM and video.
Re: Asking about what has / what will happen to the software renderer?
Baked jpegs based on camera position and rotation? Take the pre-calculated coordinates, render them to an image and display that image on screen with effects?
It brings me back to old computers that calculated the render "behind" what was being displayed. But at least it smoothed out the visuals?
Also Barney's helmet in Half-Life was also noted. Shaders... GPUs...
Say under water effects or sun glint on a window pane... Shaders would run on the GPU depending on camera position? Again it's the speed of the GPU.
I've never tried shaders because even the most basic GLSL example had vertex and fragment stuff that I never wanted to learn.
Re: Asking about what has / what will happen to the software renderer?
The idea is to make the GPU calculate as little as possible. It's not just baking to jpg, that's only an example; what you bake are the colours laid out by screen coordinate, something like (x, y) -> (r, g, b), and you get 3D graphics if you add depth: project the 3D world coordinates to screen coordinates and store the colour for each (x, y).
The advantage of calculating in real time is that you have almost unlimited possibilities of coordinates, for example everything between 0.001 and 0.002, while if we precalculate we only bake the frames actually used: at 25 FPS the step is 0.04, so you go (1.04) -> (1.08) -> (1.12), and so on. It is similar to isometric games that pre-render their sprites and then draw the right sprite on screen according to the player's position.
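In code the precalculated version is just snapping the continuous value to the steps that were actually baked (tiny sketch; 25 FPS means a 0.04 s step between baked frames):
Code: Select all

#include <cstdio>

int main() {
    const double fps  = 25.0;
    const double step = 1.0 / fps;      // 0.04 s between baked frames

    double t = 1.057;                   // some arbitrary moment in time
    int frameIndex = (int)(t * fps);    // which baked frame to read
    double snapped = frameIndex * step; // 1.057 snaps to 1.04

    printf("time %.3f -> baked frame %d (t = %.2f)\n", t, frameIndex, snapped);
    return 0;
}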
It would be something like having a loading screen that precalculates within those possibilities; afterwards the CPU is only reading, and so is the GPU.
Now, instead of a long loading screen, you could use hyper-realistic bakes that take hours to calculate on a computer, convert them into binary files indexed by coordinates, and you would have a game more realistic than modern games, and it would run on a Raspberry Pi.
But for that you must know how to create good hyperrealistic renders. For example, Blender can render everything with the CPU, without using the GPU: shadows, reflections and so on. Blender is written in C++ and uses Python as a means of access to the C++ API; the GPU only becomes useful when displaying that image.
What I want is to build a console on a Raspberry Pi with someone; as you can see the PS2 doesn't have great hardware, it just has well-optimised games. I love Irrlicht, but what I'm describing I can do without Irrlicht... so I'm in an awkward spot. Rather than being an ungrateful bastard, I want to leave Irrlicht at least by uploading all the 3D models I have, optimised, and making known the things I think could be useful, even if nobody sees them or finds them useful; it's for my own peace of mind lmao.
I don't need a Raspberry Pi either, but I see the possibility of making new friends with these ideas... I'm just interested in precalculating as much as possible and showing off the RPG game of my dreams...
By the way, I know very little assembly, so I don't know if I can help you with it. Consider that C++ compiles your code to assembly, which is then converted into hexadecimal/binary instructions; it's quite possible the compiler writes better-optimised assembly than we would. Or maybe your timer is slow because of something in the Linux kernel slowing the process down. It's probably better to get familiar with an API rather than raw assembly; same as before, precomputed is faster (in this case, safer).
**
If you are looking for people with whom to develop your game, even to try functionalities, I can help you, free. CC0 man.

Re: Asking about what has / what will happen to the software renderer?
Damn, I do feel very happy looking back at my art 
If only a way to do it more.
Add Bank, Bill, Cal, Date, Sun, Hours, Minutes
Re: Asking about what has / what will happen to the software renderer?
Oh yes, SourceForge is really intermittent / not letting me on.
I guess I'll leave it. I found many of the sites online were difficult to handle. I just wasn't one of them. Like, 99% of them.
Re: Asking about what has / what will happen to the software renderer?
wizard4 wrote: Mon Jan 27, 2025 3:49 pm
Damn, I do feel very happy looking back at my art
If only a way to do it more.
Add Bank, Bill, Cal, Date, Sun, Hours, Minutes
I could help you, but I need you to tell me what your app will do.
I would like to learn shaders someday, to help cutealien with that at least; I feel useless.
**
If you are looking for people with whom to develop your game, even to try functionalities, I can help you, free. CC0 man.

Re: Asking about what has / what will happen to the software renderer?
Quick answer about software renderers: there are 2 of those in Irrlicht, "Software" and "Burningsvideo".
The first one was written by Niko (aka the Irrlicht founder). It is no longer under active development, but we'll keep it around as long as it doesn't cause trouble (so far that's the case). Its main use is quick UI stuff without 3D, so mainly for dialogs before your application starts. It supports a bit of 3D, but mainly just enough to show a single 3D model or so. Anything advanced isn't really supported, so don't use it for 3D scenes. I suspect it might also be useful when trying to get Irrlicht running on a new platform, as it's likely easier to port than the other software renderer (way less complex).
Burnings video is the software renderer from Thomas Alten, and once in a while he still works on it. He usually also reacts quickly when we give him bug reports. That one is for real 3D: as long as you don't use too high a resolution it works pretty well, even for some quite advanced stuff. It also has another feature which I know some (non-forum) users like: the colors are more reproducible than with any of the other drivers (with OpenGL, for example, different drivers might not give you exactly the same results on screen), which can be quite important in some applications (although we never really guaranteed that, so people relying on it live a tiny bit riskily...).
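For completeness, choosing between them is just the driver type passed to createDevice (minimal sketch; window size, clear colour and so on are only example values):
Code: Select all

#include <irrlicht.h>
using namespace irr;

int main() {
    // video::EDT_SOFTWARE      = Niko's old renderer (UI / very simple 3D)
    // video::EDT_BURNINGSVIDEO = Thomas Alten's renderer (real 3D in software)
    IrrlichtDevice* device = createDevice(video::EDT_BURNINGSVIDEO,
                                          core::dimension2d<u32>(640, 480));
    if (!device)
        return 1;

    video::IVideoDriver* driver = device->getVideoDriver();

    while (device->run()) {
        driver->beginScene(true, true, video::SColor(255, 40, 40, 60));
        device->getSceneManager()->drawAll();
        driver->endScene();
    }

    device->drop();
    return 0;
}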
IRC: #irrlicht on irc.libera.chat
Code snippet repository: https://github.com/mzeilfelder/irr-playground-micha
Free racer made with Irrlicht: http://www.irrgheist.com/hcraftsource.htm