How to handle pandas read_html gracefully when it fails to find a table?

pandas read_html is a quick and convenient way of parsing tables; however, if it cannot find a table matching the attributes specified, it raises an exception and the whole program stops.

I am trying to scrape thousands of web pages, and it is very annoying when a single missing table raises an error and terminates the whole program. Is there a way to capture these errors and let the code continue without terminating?

link = 'https://en.wikipedia.org/wiki/Barbados'
req = requests.get(link)
wiki_table = pd.read_html(req.text, attrs={"class": "infobox vcard"})
df = wiki_table[0]

This causes the whole program to fail. How can I deal with it?
I suspect the answer involves exception handling or error capturing, but I am not familiar enough with Python to know how to do this.

> Solution:

Wrap the pd.read_html call in a try ... except block. When no table matches the given attrs, pd.read_html raises a ValueError ("No tables found"), so that is the exception to catch; the program then logs the failure and moves on instead of terminating.

import requests
import pandas as pd

link = 'https://en.wikipedia.org/wiki/Barbados'
req = requests.get(link)

wiki_table = None
try:
    # pass the response body, not the Response object itself
    wiki_table = pd.read_html(req.text, attrs={"class": "infobox vcard"})
except ValueError as e:  # read_html raises ValueError when no matching table is found
    print(e)  # optional, but useful for seeing which pages failed

if wiki_table:
    df = wiki_table[0]

    # rest of your code
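Since the goal is to scrape thousands of pages, the same pattern extends naturally to a loop: pages without a matching table are logged and skipped, and the rest are collected. Here is a minimal sketch of that loop; to keep it runnable offline, the fetch step is stubbed with inline HTML strings (in real use you would substitute `requests.get(link).text` for each URL), and the page names are made up for illustration.

```python
from io import StringIO

import pandas as pd

# Stand-ins for fetched pages; in practice each value would come from
# requests.get(link).text. "no_table" simulates a page without the infobox.
pages = {
    "has_table": (
        '<table class="infobox vcard">'
        "<tr><th>Country</th></tr><tr><td>Barbados</td></tr>"
        "</table>"
    ),
    "no_table": "<p>No infobox on this page</p>",
}

frames = {}
for name, html in pages.items():
    try:
        # read_html raises ValueError when no table matches the attrs,
        # so a failure on one page never stops the loop
        tables = pd.read_html(StringIO(html), attrs={"class": "infobox vcard"})
        frames[name] = tables[0]
    except ValueError as e:
        print(f"{name}: skipped ({e})")
```

Wrapping the HTML in `StringIO` avoids the deprecation warning recent pandas versions emit for passing literal HTML strings to `read_html`. After the loop, `frames` holds one DataFrame per page that actually contained the infobox.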
