I’m learning about bit packing and bit buffers and I came across this line of code
int NumBits = sizeof(float) << 3;
The output is correct: a float is 4 bytes, and converting that to bits gives 32, which is the value of NumBits.
My question is: how did shifting left by 3 give the correct value?
They could also go the opposite way, converting NumBits back to bytes by shifting right by 3:
int numbytes = NumBits >> 3;
>Solution :
sizeof(float) yields the size of a float in bytes. On most systems (e.g. a PC), a float is 4 bytes, so sizeof(float) equals 4.
The << operator is the left-shift operator. When you left-shift a value by n bits, you effectively multiply it by 2^n. In this case, you are left-shifting by 3 bits, which is equivalent to multiplying by 2^3 = 8.
So, sizeof(float) << 3 is equivalent to 4 << 3, which is equal to 32.
In summary, shifting left by 3 is the same as multiplying by 8, which converts bytes to bits. Likewise, shifting right by 3 is the same as dividing by 8, which converts bits back to bytes.