I know that a number x lies between n and f (f > n > 0), so my idea is to map that range to [0, 0.65535] by computing 0.65535 * (x – n) / (f – n).

Then I could just multiply by 10000, round, and store the integer in two bytes.

Is it going to be an effective use of storage in terms of precision?

I’m doing this for a WebGL 1.0 shader, so I’d like the encoding/decoding math to be simple; I don’t have access to bitwise operations.

### Solution:

Why multiply by `0.65535` and then by `10000.0`? That introduces a second rounding step with an unnecessary loss of precision.
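One way to see the waste (a quick Python sketch, with `n` and `f` picked arbitrarily): scaling to `[0, 0.65535]` and then multiplying by `10000` can only ever produce about 6.5k distinct codes, while scaling directly by `65535` uses the full 16-bit range.

```python
n, f = 2.0, 10.0  # arbitrary example bounds with f > n > 0

# Densely sample the input range.
xs = [n + (f - n) * i / 100000 for i in range(100001)]

# The question's two-step scheme: scale to [0, 0.65535], then by 10000.
two_step = {round(0.65535 * (x - n) / (f - n) * 10000) for x in xs}

# Single-step alternative: scale directly to [0, 65535].
direct = {round((x - n) / (f - n) * 65535) for x in xs}

# The two-step scheme yields roughly 10x fewer distinct codes,
# even though both fit in the same two bytes.
print(len(two_step), len(direct))
```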

The data will be represented well if it is equally likely over the entire range `[n, f]`. But that is not always a reasonable assumption. What you’re doing is similar to creating a fixed-point representation (fixed step size, just not starting at 0 or with steps that are negative powers of 2).
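A minimal sketch of that linear fixed-point idea in Python (the function names `encode16`/`decode16` are mine), quantizing in a single rounding step:

```python
def encode16(x, n, f):
    """Map x in [n, f] linearly to an integer in [0, 65535] in one rounding step."""
    return round((x - n) / (f - n) * 65535.0)

def decode16(v, n, f):
    """Invert encode16: map an integer in [0, 65535] back into [n, f]."""
    return n + (v / 65535.0) * (f - n)

# Round-trip error is at most half a step, i.e. (f - n) / 65535 / 2.
n, f = 2.0, 10.0
x = 3.14159
err = abs(decode16(encode16(x, n, f), n, f) - x)
```

In a GLSL ES 1.00 shader there is no `round()`, but `floor(t + 0.5)` does the same job with equally simple math.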

Floating-point numbers use bigger step sizes for bigger numbers. You could do the same by calculating `log(x/f) / log(n/f) * 65535.0`; decoding is then `f * pow(n/f, v / 65535.0)`.
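As a sketch (again in Python; function names are mine), the log-scaled version and its inverse. Note it maps `f` to code 0 and `n` to code 65535, and the step size grows with `x`, keeping the relative precision roughly constant:

```python
import math

def encode16_log(x, n, f):
    # log(x/f) / log(n/f) is 0 at x == f and 1 at x == n
    return round(math.log(x / f) / math.log(n / f) * 65535.0)

def decode16_log(v, n, f):
    # invert: x = f * (n/f) ** t, with t = v / 65535
    return f * (n / f) ** (v / 65535.0)

n, f = 2.0, 10.0
x = 3.14159
rel_err = abs(decode16_log(encode16_log(x, n, f), n, f) - x) / x
```

The same decode translates directly to GLSL ES 1.00 as `f * pow(n / f, v / 65535.0)`, since `pow` and `log` are available there without any bitwise operations.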