Using find_all in bs4

When I parse for more than one class (i.e. when I change `find` to `find_all`), I get an error on line 12:

Error: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element

import requests
from bs4 import BeautifulSoup

heroes_page_list=[]


url = f'https://dota2.fandom.com/wiki/Dota_2_Wiki'
q = requests.get(url)
result = q.content
soup = BeautifulSoup(result, 'lxml')

heroes = soup.find_all('div', class_= 'heroentry').find('a')  # line 12: raises the error
for hero in heroes:
    hero_url = heroes.get('href')
    heroes_page_list.append("https://dota2.fandom.com" + hero_url)
# print(heroes_page_list)

with open ('heroes_page_list.txt', "w") as file:
    for line in heroes_page_list:
        file.write(f'{line}\n')

> Solution:


`find_all` returns a `ResultSet` (a list of tags), so you cannot call `find` on it directly. You are searching for a tag inside a list of `div` tags, so call `find` on each element of the list instead:

heroes = soup.find_all('div', class_= 'heroentry')
a_tags = [hero.find('a') for hero in heroes]
for a_tag in a_tags:
    hero_url = a_tag.get('href')
    heroes_page_list.append("https://dota2.fandom.com" + hero_url)

`heroes_page_list` then looks like this:

['https://dota2.fandom.com/wiki/Abaddon',
 'https://dota2.fandom.com/wiki/Alchemist',
 'https://dota2.fandom.com/wiki/Axe',
 'https://dota2.fandom.com/wiki/Beastmaster',
 'https://dota2.fandom.com/wiki/Brewmaster',
 'https://dota2.fandom.com/wiki/Bristleback',
 'https://dota2.fandom.com/wiki/Centaur_Warrunner',
 ....
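As an alternative, the same result can be had in a single call with a CSS selector, which matches the `<a>` inside every `div.heroentry` at once and avoids the `find_all`-then-`find` step entirely. A minimal, self-contained sketch (the HTML string below is a made-up stand-in for the real wiki page, not its actual markup):

```python
from bs4 import BeautifulSoup

# Stand-in HTML mimicking the hero entries on the wiki page.
html = """
<div class="heroentry"><a href="/wiki/Abaddon">Abaddon</a></div>
<div class="heroentry"><a href="/wiki/Axe">Axe</a></div>
"""
soup = BeautifulSoup(html, "html.parser")

# select() takes a CSS selector: every <a> nested under a div.heroentry.
heroes_page_list = [
    "https://dota2.fandom.com" + a.get("href")
    for a in soup.select("div.heroentry a")
]
print(heroes_page_list)
# → ['https://dota2.fandom.com/wiki/Abaddon', 'https://dota2.fandom.com/wiki/Axe']
```

This also sidesteps the original error, since `select` already returns the flat list of `<a>` tags you want to iterate over.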