# Fast Square Root – Quake III


Computers, and hardware more specifically, are fast these days. You can buy a modern GPU and crunch a huge number of computations in no time. But it wasn't always that way. Engineers and programmers used to get quite creative with mathematical and computational optimizations. Even today, CPUs compute certain operations faster than others: multiplication is faster than division, and taking a square root is many times slower than squaring a number.

To be efficient we always strive to make our code compute faster. This sometimes comes at the cost of a bit of accuracy, but one thing you learn in game development is that accuracy is often what gets sacrificed. You wouldn't want to drop below a certain number of frames per second.

## The math of computers

Before explaining how to efficiently calculate square roots, we need to examine how the computer does math and why accuracy and speed are lost during some operations. Computers can only calculate in binary, a number system with just two digits: 0 and 1.

### Human counting

If we consider our normal way of counting, we count from 0 to 9 and then, once we reach 10, we shift and add a second digit on the left. Our numbering system can also be described by the following formula:

$420 = (4 * 100) + (2 * 10) + (0 * 1) \newline 420 = (4 * 10^{2}) + (2 * 10^{1}) + (0 * 10^{0})$

And this goes on: for every digit we add to the left, we increment the power of 10 that we count with.

### Computer Counting

Computers count in a similar fashion, but they have only two digits to work with, which means they shift digits much more often.

$420_{10} = 110100100_{2} \newline 110100100_{2} = (1 * 2^{8}) + (1 * 2^{7}) + (0 * 2^{6}) + (1 * 2^{5}) + (0 * 2^{4}) + (0 * 2^{3}) + (1 * 2^{2}) + (0 * 2^{1}) + (0 * 2^{0})$

This is how a number is represented in binary format. The smallest number type in your computer is a byte, which consists of 8 bits (a bit being a single 0-or-1 position). Usually we also need to represent signed numbers, so for a negative number the left-most bit acts as the sign (on modern machines via two's complement):

$\overbrace{\underbrace{1}_\text{sign}\underbrace{1111111_{2}}_\text{number}}^{\text{1 Byte}} = -1_{10}$

That is at least how computers count with integers. Floating-point numbers are a bit different: they follow the IEEE 754 standard. A 32-bit float consists of 3 main parts: the sign, the exponent, and the mantissa.

$\underbrace{1}_\text{sign} \underbrace{10100110}_\text{exponent} \underbrace{10100110101001101010011_{2}}_\text{mantissa}$

The exponent field is stored with a bias of 127: the actual exponent is the stored value minus 127.

The mantissa, on the other hand, represents a fraction, meaning it is read like this:

$\underbrace{11100_{2}}_{mantissa} = (1 * 2^{-1}) + (1 * 2^{-2}) + (1 * 2^{-3}) + (0 * 2^{-4})…\newline \text{…or…}\newline \frac{10100110101001101010011_{2}}{2^{23}}$

To turn this into a decimal number, you would use the following formula:

$(-1)^{\text{sign}} * 2^{(\text{exponent} - 127)} * (1.0 + \frac{\text{mantissa}}{2^{23}})$

There is also a connection between a float and the integer you get by reading the same bits as an int. This connection, which follows from how floats are laid out in memory, is described by the following formula (ignoring the sign bit):

$\text{float's integer representation} = \text{exponent} * 2^{23} + \text{mantissa}$

Note: Sometimes when displaying a calculated float, the computer shows something like 1.37438953472e+11. This is just a short way of saying: take the number 1.37438953472 and multiply it by 100000000000 (10 to the 11th), giving 137 438 953 472.

### Calculator

You can take a look at this simple calculator showing floats and integers sharing the same memory space. You can set some bit values and see how the numbers are calculated. I also put dividers for the float's sign|exponent|mantissa fields as a guide.

## Logarithms

I want to give a quick reminder of what logarithms are. Here is the main formula defining a logarithm:

$y = log_{a}(x) \text{ when } x = a^{y}\newline 2^{3} = 8 \text{ then } log_{2}(8) = 3$

It is also considered the inverse of exponentiation.

I want to examine one property in detail: for x between 0 and 1, the base-2 log of 1 + x is almost linear. If we add a small constant correction value, we get a decent approximation using just a linear function, with a formula of the form:

$log_{2}(1 + x) \approx x + k \text{ where } x \in [0,1] \text{ and } k \approx 0.0430$

## Shifting Bits

As you may have noticed in the simple bit calculator above, I've added two shift operators that move the bits to the left and right. One handy property is that shifting an integer's bits to the right divides it by 2 (rounding down), and shifting them to the left multiplies it by 2.

$\text{x}\gg 1 \approx \frac{x}{2} \newline \text{x}\ll 1 \approx 2x$

This also works in an interesting way for floats, since shifting the whole bit pattern shifts both the exponent and the mantissa. Recall that we represent a float's bits with the formula mentioned above:

$\text{float's integer representation} = \text{exponent} * 2^{23} + \text{mantissa}$

Then shifting the bits (with the rounding error) will approximately give us the following equation:

$\text{float's integer representation}\gg 1 \approx \frac{exponent*2^{23} + mantissa}{2}$

## Quake III’s approach

Quake 3 solves the inverse square root, 1 / sqrt(x). This is quite useful by itself, and we can get the square root just by multiplying the inverse square root by the original number. There are also quite a lot of routines (vector normalization, for example) that use the inverse square root directly.

Here is the original source code taken from Quake 3’s open sourced engine:

```c
float Q_rsqrt( float number )
{
	long i;
	float x2, y;
	const float threehalfs = 1.5F;

	x2 = number * 0.5F;
	y  = number;
	i  = * ( long * ) &y;                       // evil floating point bit level hacking
	i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
	y  = * ( float * ) &i;
	y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//	y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

	return y;
}
```

### Math behind the approach

Well, let's combine everything mentioned above and examine how Quake 3's source code manages to calculate inverse square roots so efficiently. We will start by investigating how to represent the log2 of a floating-point number more conveniently. This will all make sense soon, bear with me:

$log_{2}(2^{(\text{exponent} - 127)} * (1.0 + \frac{\text{mantissa}}{2^{23}})) = \newline = log_{2}(2^{(\text{exponent} - 127)}) + log_{2}(1.0 + \frac{\text{mantissa}}{2^{23}}) = \newline = \text{exponent} - 127 + log_{2}(1.0 + \frac{\text{mantissa}}{2^{23}}) \approx \newline \approx \text{exponent} - 127 + \frac{\text{mantissa}}{2^{23}} + k = \newline = \frac{\overbrace{\text{exponent}*2^{23}+\text{mantissa}}^{\text{integer representation}}}{2^{23}} + k - 127$

What this simplification gives us is a representation of the log of a float defined through its integer bit representation. We also introduced the constant correction value k that we mentioned in the logarithm section.

What we actually want to find is how to calculate:

$y = \frac{1}{\sqrt{x}} = x^{-\frac{1}{2}}$

Here comes the real magic. Let's define our solution as y. If we apply the base-2 logarithm to both sides of the equation, we get the following:

$log_{2}(y) = log_{2}(\frac{1}{\sqrt{x}}) \newline log_{2}(y) = log_{2}(x^{-\frac{1}{2}}) \newline log_{2}(y) = -\frac{1}{2}log_{2}(x)$

Then we can replace the logarithms with their bit representations (I write M for mantissa and E for exponent):

$\frac{M_{y} + E_{y}2^{23}}{2^{23}} + k - 127 = -\frac{1}{2}(\frac{M_{x} + E_{x}2^{23}}{2^{23}} + k - 127)$

This could further be solved into:

$\underbrace{M_{y} + E_{y}2^{23}}_{\text{integer representation}} = \frac{3(127 - k)}{2}2^{23} - \frac{\overbrace{M_{x} + E_{x}2^{23}}^{\text{integer representation}}}{2}$

And if we approximate the division on the right with a bit shift (accepting the rounding error, since we are working with integers), we can rewrite the solution as:

$\underbrace{M_{y} + E_{y}2^{23}}_{\text{integer representation}} = K_{\text{correction}} – (\underbrace{M_{x} + E_{x}2^{23}}_{\text{integer representation}} \gg 1)$

### Newton’s Iteration

This initial guess will still be quite a bit off from the correct inverse square root, though it is decent. That is why we need a correction step. One such correction is Newton's method: if we have a good approximation of a root of a function, we can get a better one by iterating with the following formula:

$x_{1} = x_{0} - \frac{f(x_{0})}{f'(x_{0})} \newline f(x) = \frac{1}{x^{2}} - n \newline f'(x) = -\frac{2}{x^{3}}$

We can run as many iterations of Newton's method as we want, but since x0 is already a decent guess, we will do just one. This gives us the following derivation, where n is our original input number:

$x_{1} = x_{0} - \frac{x_{0}^{-2} - n}{-2x_{0}^{-3}} \newline x_{1} = x_{0} - (-\frac{1}{2}x_{0} + \frac{1}{2}nx_{0}^{3}) \newline x_{1} = x_{0} + \frac{1}{2}x_{0} - \frac{1}{2}nx_{0}^{3} \newline x_{1} = \frac{3}{2}x_{0} - \frac{n}{2}x_{0}^{3} \newline x_{1} = x_{0}(\frac{3}{2} - \frac{n}{2}x_{0}^{2})$

And this concludes the mathematical part of the article. The rest is just implementation: we get the integer representation by reinterpreting the float's memory as an int and then apply the formulas above.

## My C++ implementation

```cpp
float QuickInverseSqrt(float n)
{
	// Make float and int variables occupying the same memory space
	union {
		float x_0;
		int integerRepresentation;
	};

	x_0 = n;
	const int K_correction = 0x5f3759df;
	integerRepresentation = K_correction - (integerRepresentation >> 1);
	float x_1 = x_0 * (1.5f - (n * 0.5f) * (x_0 * x_0));

	return x_1;
}

float QuickSqrt(float n)
{
	float y = QuickInverseSqrt(n);
	return y * n;
}
```



## Conclusion

This was a bit of a complex topic, but in game development such optimizations are crucial to keeping the frame rate as high as possible. They save the phone's battery life and make operations faster, which leaves more time to process game objects and render better graphics.