If you are used to writing software for modern machines, you probably don’t think much about computing something like one divided by three. Modern computers handle floating point quite well. However, in constrained systems, there is a trap you should be aware of. While modern compilers are happy to let you use and abuse floating point numbers, the hardware is often woefully slow. It also tends to eat up lots of resources. So what do you do? Well, as [Low Byte Productions] explains, you can opt for fixed-point math.
In theory, the idea is simple. Just put an arbitrary decimal point in your integers. So, for example, if we have two numbers, say 123 and 456, we could remember that we really mean 1.23 and 4.56. Adding, then, becomes trivial since 123+456=579, which is, of course, 5.79.
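To make that concrete, here's a minimal C sketch of the idea, using the same two-decimal scale (a factor of 100) as the example above. The `fix_t` name and the helpers are just made up for illustration; real projects usually pick a power-of-two scale so the rescaling turns into a shift.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy decimal fixed point: the integer stores value * 100,
   so 123 means 1.23. (fix_t and SCALE are illustrative names;
   a power-of-two scale is the more common choice in practice.) */
typedef int32_t fix_t;
#define SCALE 100

static fix_t from_parts(int32_t whole, int32_t hundredths) {
    return whole * SCALE + hundredths;
}

/* Addition needs no adjustment: both operands share the same scale. */
static fix_t fix_add(fix_t a, fix_t b) { return a + b; }

static void print_fix(fix_t x) {
    printf("%ld.%02ld\n", (long)(x / SCALE), (long)(x % SCALE));
}

int main(void) {
    fix_t a = from_parts(1, 23);   /* 1.23 stored as 123 */
    fix_t b = from_parts(4, 56);   /* 4.56 stored as 456 */
    print_fix(fix_add(a, b));      /* prints 5.79 (stored as 579) */
    return 0;
}
```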
But, of course, nothing is simple. Multiplying those two numbers gives you 56088, and if you keep reading the result with two decimal places, that looks like 560.88 when the real answer is 5.6088. So keeping track of the decimal point is a little more complicated than the addition case would make you think.
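Here's a sketch of one way to handle the multiply in the same toy format: the raw product picks up the scale factor twice, so you divide one of them back out (or shift, if your scale is a power of two). The 64-bit widening and the names are, again, just one reasonable way to write it.

```c
#include <stdio.h>
#include <stdint.h>

typedef int32_t fix_t;     /* same toy format: integer holds value * 100 */
#define SCALE 100

/* The raw product carries SCALE twice (a*100 * b*100 = a*b*10000),
   so one factor of SCALE gets divided back out. Widening to 64 bits
   first keeps the intermediate product from overflowing. */
static fix_t fix_mul(fix_t a, fix_t b) {
    return (fix_t)(((int64_t)a * (int64_t)b) / SCALE);
}

int main(void) {
    fix_t a = 123;                 /* 1.23 */
    fix_t b = 456;                 /* 4.56 */
    fix_t p = fix_mul(a, b);       /* 56088 / 100 = 560, i.e. 5.60 */
    printf("%ld.%02ld\n", (long)(p / SCALE), (long)(p % SCALE));
    return 0;                      /* the exact 5.6088 is truncated to 5.60 */
}
```

Note the precision loss: the exact product 5.6088 can't be represented with only two decimal places, so it gets truncated, which is part of the bookkeeping the video digs into.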
How much more complicated is it? Well, the video covers a lot, but it takes an hour and a half to do it. There's plenty of code and explanation, so if you haven't dealt with fixed-point math, or you just want a refresher, this video is worth the time to watch.
Want to do 3D rendering on an ATMega? Fixed point is your friend. We’ve done our own deep dive on the topic way back in 2016.