When I write
tf.math.exp(2.0/20.0)
I get this result: <tf.Tensor: shape=(), dtype=float32, numpy=1.105171>
but when I write
tf.math.exp(y[i]/20)  # where y is a NumPy array of float32 values
I get this: <tf.Tensor: shape=(), dtype=float64, numpy=0.9864462929738023>
Why float64 instead of float32?
Of course I can use tf.cast; I'm just wondering why this happens.
>Solution :
As far as I know, in TensorFlow, when you mix values of different numeric types, such as a float32 and an integer, the result is promoted to the higher-precision type, here float64, so that no accuracy is lost in the calculation. Hope this helps!
In your case, I guess, when you calculate tf.math.exp(y[i]/20),
TensorFlow encounters an element from your NumPy array y,
while the constant 20 is a plain Python int (Python ints are arbitrary-precision, not a fixed int64). The mixed division gets widened to float64 before tf.math.exp infers the tensor's dtype. It is also worth checking y.dtype itself: NumPy arrays created without an explicit dtype default to float64, which is a very common source of unexpected float64 results.
If you want to keep the result as float32, you can explicitly make the constant a float32 tensor, like this:
tf.math.exp(y[i] / tf.constant(20.0, dtype=tf.float32))