
Optimal way of separating data from a list

Based on a website that contains statistical information, I have implemented basic web-scraping code, shown here:

import requests
from bs4 import BeautifulSoup

response = requests.get("https://www.geostat.ge/ka/modules/categories/26/samomkhmareblo-fasebis-indeksi-inflatsia")
soup = BeautifulSoup(response.content, "html.parser")
# print(soup.prettify())

# Collect every non-empty cell text from the table body
information = []
for row in soup.select("tbody tr"):
    for data in row.find_all("td"):
        if data.text.strip():
            information.append(data.text.strip())
print(information)

It returns the following information:

['2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023', 'საშუალო წლიური წინა წლის საშუალო წლიურთან', '99.1', '99.5', '103.1', '104.0', '102.1', '106.0', '102.6', '104.9', '105.2', '109.6', '111.9', '102.5', 'დეკემბერი წინა წლის დეკემბერთან', '98.6', '102.4', '102.0', '104.9', '101.8', '106.7', '101.5', '107.0', '102.4', '113.9', '109.8', '100.4']

Now, the part before the text containing 'საშუალო' ("average") holds the years, and the rest are the inflation values between the two Georgian labels, so I have implemented this very manual code:


years = []
average_annual = []
december = []

# The two Georgian label strings mark the boundaries between the sections
first_index = information.index('საშუალო წლიური წინა წლის საშუალო წლიურთან')
second_index = information.index('დეკემბერი წინა წლის დეკემბერთან')

for i in range(first_index):
    years.append(int(information[i]))
print(years)

for i in range(first_index + 1, second_index):
    average_annual.append(float(information[i]))
print(average_annual)

for i in range(second_index + 1, len(information)):
    december.append(float(information[i]))
print(december)

It shows the correct separation:

[2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023]
[99.1, 99.5, 103.1, 104.0, 102.1, 106.0, 102.6, 104.9, 105.2, 109.6, 111.9, 102.5]
[98.6, 102.4, 102.0, 104.9, 101.8, 106.7, 101.5, 107.0, 102.4, 113.9, 109.8, 100.4]

My question is: is there a more optimal way of doing this? Thanks in advance.

Edited:

I have tried this version:

import pandas as pd

# read_html already returns a list of DataFrames, so the extra pd.DataFrame() wrapper is not needed
data = pd.read_html("https://www.geostat.ge/ka/modules/categories/26/samomkhmareblo-fasebis-indeksi-inflatsia", encoding='utf-8')[0]
# data.drop(0, axis=0, inplace=True)
# data = data.droplevel(level=0, axis=1)
print(data)

and it returns this result:

                                          0       1   ...      11      12
0                                        NaN  2012.0  ...  2022.0  2023.0
1  საშუალო წლიური წინა წლის საშუალო წლიურთან    99.1  ...   111.9   102.5
2            დეკემბერი წინა წლის დეკემბერთან    98.6  ...   109.8   100.4

[3 rows x 13 columns]

How can I handle this case?

>Solution :

For this site I recommend using pandas.read_html to read the table into a DataFrame. But first you can rename the cells of the first row to header cells (<td> → <th>) so that pandas picks up the correct column names:

from io import StringIO

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = "https://www.geostat.ge/ka/modules/categories/26/samomkhmareblo-fasebis-indeksi-inflatsia"
content = requests.get(url).content
soup = BeautifulSoup(content, "html.parser")

# Promote the cells of the first row to <th> so read_html treats them as the header
for td in soup.tr.select("td"):
    td.name = "th"

df = pd.read_html(StringIO(str(soup)))[0]
df = df.set_index(df.columns[0])
df.index.name = None

print(df)

Prints:

                                           2012   2013   2014   2015   2016   2017   2018   2019   2020   2021   2022   2023
საშუალო წლიური წინა წლის საშუალო წლიურთან  99.1   99.5  103.1  104.0  102.1  106.0  102.6  104.9  105.2  109.6  111.9  102.5
დეკემბერი წინა წლის დეკემბერთან            98.6  102.4  102.0  104.9  101.8  106.7  101.5  107.0  102.4  113.9  109.8  100.4
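From that DataFrame you can recover the same three lists your manual code produced, via the index and `iloc`. A sketch (the small DataFrame below is built by hand from the output above, so the example is self-contained; on the live page you would use the `df` from the code instead):

```python
import pandas as pd

# Hand-built copy of the DataFrame shown above, with years as columns and the
# two Georgian row labels as the index.
columns = [2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023]
df = pd.DataFrame(
    [
        [99.1, 99.5, 103.1, 104.0, 102.1, 106.0, 102.6, 104.9, 105.2, 109.6, 111.9, 102.5],
        [98.6, 102.4, 102.0, 104.9, 101.8, 106.7, 101.5, 107.0, 102.4, 113.9, 109.8, 100.4],
    ],
    index=["საშუალო წლიური წინა წლის საშუალო წლიურთან", "დეკემბერი წინა წლის დეკემბერთან"],
    columns=columns,
)

years = df.columns.tolist()           # the year headers
average_annual = df.iloc[0].tolist()  # first row: annual average vs. previous year
december = df.iloc[1].tolist()        # second row: December vs. previous December
print(years)
print(average_annual)
print(december)
```

This replaces the manual `information.index(...)` bookkeeping: the label strings become the DataFrame index, and each series is one row lookup.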