I have a Python script that finds the derivative of a function containing the gamma function. When I substitute values in, instead of evaluating the digamma term and returning a float, SymPy returns an unevaluated polygamma(0, 1.05) (or whatever the arguments happen to be). Below is my code:
import mpmath
import time
import sympy
from sympy import S, I, pi, gamma, lambdify

x = sympy.symbols('x')
s = sympy.symbols('s')

Original = (sympy.pi**(x/2) * s**x) / sympy.gamma(x/2 + 1) - ((2*s) / x**0.5)**x
# Derivative with respect to x, then turned into a callable
Prime = Original.diff(x)
Prime = lambdify((x, s), Prime, modules='sympy')

# Scan s from 0.1 to 3.0 and x from 0.1 to 15.0 in steps of 0.1
for s_times_10 in range(1, 31):
    s = float(int(s_times_10) / 10)
    for x_times_10 in range(1, 151):
        x = float(int(x_times_10) / 10)
        print("x: " + str(x) + ", s: " + str(s))
        print(Prime(x, s))
        # Stop once Prime starts decreasing, i.e. a maximum has been passed
        if x > 0.3:
            if Prime(x + 0.1, s) < Prime(x, s):
                print("MAXIMUM N LOCATED: " + str(x))
                time.sleep(1)
                break
        print("=======")
        time.sleep(0.5)
And below is the output for the first 5 values of x within the for loop:
x: 0.1, s: 0.1
-0.579691734344519 - 0.432005861274674*polygamma(0, 1.05)
=======
x: 0.2, s: 0.1
-0.175935858863424 - 0.371829906705536*polygamma(0, 1.1)
=======
x: 0.3, s: 0.1
0.0107518316667914 - 0.31889065255819*polygamma(0, 1.15)
=======
x: 0.4, s: 0.1
0.098684205215577 - 0.27256963654143*polygamma(0, 1.2)
=======
x: 0.5, s: 0.1
0.133891927091406 - 0.232239660951436*polygamma(0, 1.25)
MAXIMUM N LOCATED: 0.5
As you can see, instead of giving me a simple float answer, it returns an unevaluated polygamma expression. How do I get rid of this and end up with a float as the final answer?
TLDR: I substituted values into a differentiated gamma function, and instead of returning a float it returned an unevaluated polygamma expression.
Solution:
Often, substituting a float into a symbolic SymPy function leads to automatic floating-point evaluation:
In [36]: sin(1)
Out[36]: sin(1)
In [37]: sin(1.0)
Out[37]: 0.841470984807897
In your case the polygamma call has an exact integer as its first argument alongside a float, so it does not evaluate in floating point automatically:
In [38]: polygamma(0, 0.1)
Out[38]: polygamma(0, 0.1)
In [39]: polygamma(0.0, 0.1)
Out[39]: -10.4237549404111
Really, though, if you want floating-point evaluation you should ask for it explicitly rather than relying on it happening implicitly:
In [40]: polygamma(0, 0.1).evalf()
Out[40]: -10.4237549404111