Is there really any difference between, say, double and f64? What's the point of the Irrlicht versions of these numbers?
I understand that on really old computers numbers had different sizes, but I don't think Irrlicht would be able to run on those types of machines anyway.
Is it just for portability, e.g. on a 386 an int holds somewhere around 65k values, so when a user compiles on that machine he can specify that s32 is built out of several ints?
The idea is to make it so that you have control over the size [and range] of the values. If you want a 32-bit unsigned integer, you use u32. If you tried to use unsigned or unsigned long you might actually get a 64-bit quantity on some platforms.
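To make the point concrete, here's a rough sketch of what such a typedef header looks like (this is illustrative, not the actual irrTypes.h; the underlying types chosen are an assumption about the platform), with compile-time checks that the assumption holds:

```cpp
// Hypothetical fixed-width typedefs in the style of Irrlicht's irrTypes.h.
// The right-hand sides are per-platform assumptions, verified below.
typedef unsigned int   u32;
typedef signed int     s32;
typedef float          f32;
typedef double         f64;

// If a port picks the wrong underlying type, this fails at compile time
// instead of silently truncating values at run time.
static_assert(sizeof(u32) == 4, "u32 must be exactly 32 bits");
static_assert(sizeof(s32) == 4, "s32 must be exactly 32 bits");
static_assert(sizeof(f32) == 4, "f32 must be exactly 32 bits");
static_assert(sizeof(f64) == 8, "f64 must be exactly 64 bits");
```

Engine code then says u32 everywhere and never cares what the compiler's unsigned int or unsigned long happen to be.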
The f32 typedef could also be swapped out for a fixed-point class on machines without a hardware FPU. And yeah, if an s32/u32 type ended up with fewer than 32 bits you'd be in great trouble.
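For illustration, the fixed-point idea might look like the minimal 16.16 class below (purely hypothetical; Irrlicht never shipped this) that integer-only hardware could substitute for f32:

```cpp
#include <cstdint>

// Minimal 16.16 fixed-point sketch: the integer 'raw' stores value * 65536,
// so all arithmetic is done with integer instructions only (no FPU needed).
struct fixed32
{
    std::int32_t raw;

    static fixed32 fromInt(int v) { return { v << 16 }; }

    fixed32 operator+(fixed32 o) const { return { raw + o.raw }; }

    // Multiply in 64 bits, then shift back down to the 16.16 format.
    fixed32 operator*(fixed32 o) const
    {
        return { static_cast<std::int32_t>(
            (static_cast<std::int64_t>(raw) * o.raw) >> 16) };
    }

    int toInt() const { return raw >> 16; }
};
```

If the engine only ever names the type f32, retargeting it to a class like this is a one-line change in the typedef header.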
Another reason you'd use typedefs is cross-platform compatibility: say you port
your application to platform X, and platform X doesn't support a certain type.
You can just change the typedef to something equivalent without having to edit
the entire engine.
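In practice that usually means one conditional block in the typedef header, something like this sketch (the compiler macros and fallback types here are assumptions for the sake of example):

```cpp
// One place to retarget a type per compiler/platform, instead of
// editing every file in the engine. __int32 is MSVC's built-in
// 32-bit type; elsewhere we assume plain int is 32 bits.
#if defined(_MSC_VER)
typedef __int32 s32;
#else
typedef int s32;
#endif

static_assert(sizeof(s32) == 4, "s32 must be exactly 32 bits");
```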