Cannot turn response.text into a dictionary in Python

I am using the Python requests module and am having trouble converting my response.text into a Python dictionary.

import requests
import json

def my_function():
    url = API_URI["test"]
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    r = requests.post(url, data=json.dumps(api), headers=headers)
    print(r.text)
    data = r.json()  # raises JSONDecodeError here
    return data

my_function()

Here is the printed response from r.text:

{"subscriptionId":"8530989","profile":{"customerProfileId":"507869879","customerPaymentProfileId":"512588514"},"refId":"12345","messages":{"resultCode":"Ok","message":[{"code":"I00001","text":"Successful."}]}}


And the error:

   raise JSONDecodeError("Unexpected UTF-8 BOM (decode using utf-8-sig)",
json.decoder.JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0)

I have tried both json.loads(r.text) and r.json().

I see that the error wants me to decode, but where?

Solution:

As the error points out, the response body begins with a UTF-8 byte order mark (BOM), so it should be decoded using utf-8-sig instead of the default utf-8 decoding.
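
The BOM is the three-byte sequence EF BB BF at the very start of the payload, which the default utf-8 decoder keeps as a '\ufeff' character that the JSON parser then rejects. A quick way to confirm it is present (a sketch, assuming r is the response from the question):

import codecs

# True if the raw body starts with the UTF-8 byte order mark (EF BB BF)
print(r.content[:3] == codecs.BOM_UTF8)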

Something like this:

# Round-trip through bytes so utf-8-sig can strip the leading BOM
decoded_data = r.text.encode().decode('utf-8-sig')
return json.loads(decoded_data)

would probably work to load this data into a dictionary.
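
Putting it together, a minimal sketch of the question's function with that fix applied (API_URI and api are the asker's own names, assumed to be defined elsewhere):

import requests
import json

def my_function():
    url = API_URI["test"]
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    r = requests.post(url, data=json.dumps(api), headers=headers)
    # Decode via utf-8-sig so the BOM is stripped before JSON parsing
    decoded_data = r.text.encode().decode('utf-8-sig')
    return json.loads(decoded_data)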

Edit:

Setting

r.encoding = 'utf-8-sig' 

before running r.json() should also fix this.
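
As a sketch, again assuming r is the response from the POST above:

# Tell requests to decode the body with utf-8-sig, which strips the BOM,
# so r.text (and therefore r.json()) no longer sees the '\ufeff' prefix.
r.encoding = 'utf-8-sig'
data = r.json()  # now parses cleanly into a dict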
