Obtaining a list of donations from a GoFundMe campaign

It’s a long story as to why I need this information, but I’m looking to obtain a list of the donations on a GoFundMe campaign. I would have thought this would be possible through an API, but much to my distress GoFundMe doesn’t seem to have one. This led me down a rabbit… Read More Obtaining a list of donations from a GoFundMe campaign

This Version Of ChromeDriver Only Supports 114

I keep getting an error in the console saying "selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114". I downloaded ChromeDriver version 114 from the official website and set the executable path like this: service = Service(executable_path=r"C:\Python 3rd Party Installs\Selenium Driver(s)\114\chromedriver.exe") Still the same message. It is saying this as… Read More This Version Of ChromeDriver Only Supports 114
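This error means the installed Chrome browser’s major version no longer matches the driver’s: ChromeDriver 114 can only drive Chrome 114, and Chrome auto-updates. Since Selenium 4.6, Selenium Manager resolves a matching driver automatically when no executable_path is supplied, so upgrading Selenium and dropping the manual path is the usual fix. A minimal sketch of the version check itself (the helper names here are hypothetical illustrations, not Selenium API):

```python
import re

def major_version(version_string):
    """Extract the leading major version from a string like
    '114.0.5735.90' or 'Google Chrome 116.0.5845.96'."""
    match = re.search(r"(\d+)\.\d+\.\d+", version_string)
    if match is None:
        raise ValueError(f"no version found in {version_string!r}")
    return int(match.group(1))

def driver_matches_browser(driver_version, browser_version):
    """ChromeDriver only supports the Chrome major version it was built for."""
    return major_version(driver_version) == major_version(browser_version)

# Driver 114 against a browser that has auto-updated to 116:
print(driver_matches_browser("114.0.5735.90", "Google Chrome 116.0.5845.96"))  # False
```

With Selenium ≥ 4.6, `driver = webdriver.Chrome()` with no service path lets Selenium Manager download the correct driver for whatever Chrome is installed.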

Is there a way to extract regular expression patterns from links in a Pandas dataframe?

I am trying to extract regex patterns from links in a Pandas table generated from the page. The code generating the Pandas DataFrame is given below: import pandas as pd import re url = 'https://www.espncricinfo.com/records/year/team-match-results/2005-2005/twenty20-internationals-3' base_url = 'https://www.espncricinfo.com' table = pd.read_html(url, extract_links = "body")[0] table = table.apply(lambda col: [link[0] if link[1] is None else… Read More Is there a way to extract regular expression patterns from links in a Pandas dataframe?
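With extract_links="body", read_html returns each cell as a (text, href) tuple, so a regex can be applied to the href half of each tuple with map. A minimal sketch using a small DataFrame of the same shape (the sample links and the column name are hypothetical stand-ins for the live page):

```python
import re
import pandas as pd

# Hypothetical (text, href) tuples in the shape pd.read_html(..., extract_links="body")
# produces for a scorecard column:
table = pd.DataFrame({
    "Scorecard": [
        ("T20I # 1", "/series/australia-in-new-zealand-2004-05-61233/full-scorecard"),
        ("T20I # 2", "/series/england-in-australia-2004-05-61234/full-scorecard"),
    ]
})

base_url = "https://www.espncricinfo.com"

# Pull the numeric series id out of each relative href with a regex...
table["series_id"] = table["Scorecard"].map(
    lambda cell: re.search(r"-(\d+)/", cell[1]).group(1)
)
# ...and build absolute URLs from the relative links.
table["url"] = table["Scorecard"].map(lambda cell: base_url + cell[1])

print(table["series_id"].tolist())  # ['61233', '61234']
```

The same map-over-tuples pattern works on any column of the extracted frame; apply it per column if several columns carry links.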

selenium.common.exceptions.TimeoutException Error

I aimed to scrape address data from the website indicated in the code below, and I encountered this error. File "c:\Users\efede\OneDrive\Masaüstü\Scrapping\pythonfiles\gCharge.py", line 15, in <module> WebDriverWait(driver, 30).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#map_canvas > div > div > div:nth-child(2) > div:nth-child(2) > div > div:nth-child(4) > div > div > div > div.gm-style-iw.gm-style-iw-c > div > div > table… Read More selenium.common.exceptions.TimeoutException Error
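A TimeoutException from WebDriverWait simply means the condition never became true within the window, which with a brittle nth-child chain like this usually points to a selector that matches nothing (or an element inside an iframe). Under the hood an explicit wait is just a poll loop; a pure-Python sketch of that pattern (wait_until and fake_condition are hypothetical names, not Selenium API):

```python
import time

class TimeoutException(Exception):
    """Stands in for selenium.common.exceptions.TimeoutException."""

def wait_until(condition, timeout=30, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds
    pass -- the same pattern WebDriverWait(driver, 30).until(...) implements."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutException(f"condition not met within {timeout}s")

# A condition that succeeds on the third poll, standing in for something
# like EC.visibility_of_all_elements_located(...):
attempts = []
def fake_condition():
    attempts.append(1)
    return ["element"] if len(attempts) >= 3 else False

print(wait_until(fake_condition, timeout=5, poll=0.01))  # ['element']
```

If the condition can never become true (wrong selector, wrong frame), no timeout value is long enough; shortening the CSS path to a stable class like .gm-style-iw is usually more robust than a deep nth-child chain.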

NoSuchElement exception even when element appears in Inspect

I’m running into an error trying to scrape data from a website with Python and Selenium. The website has plenty of time to load, and I can see the element I want to grab via the webdriver browser’s Inspect. But find_element still fails. from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait #waiting properly for things… Read More NoSuchElement exception even when element appears in Inspect

Selenium Index out of range

from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By elemental_list = [] chrome_driver = Service(executable_path="Users\David\Desktop\Python\chromedriver_win32\chromedriver.exe") driver = webdriver.Chrome(service=chrome_driver) for page in range(21): page_url = "https://www.fastexpert.com/top-real-estate-agents/florida/?page=" + str(page) driver.get(page_url) title = driver.find_elements(By.XPATH, "//h3/a") location = driver.find_elements(By.XPATH, "//div[contains(@class, 'RTLOCATION')]/p/a") for i in range(len(title)): elemental_list.append([title[i].text, location[i].text]) for element in elemental_list: print(element) driver.close() This is my… Read More Selenium Index out of range
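The IndexError here most likely comes from the two find_elements calls returning lists of different lengths, e.g. when one agent card has a title link but no location link, location[i] runs past the end. zip() pairs items and stops at the shorter list, so it never over-indexes. A sketch with hypothetical sample data in place of the live page:

```python
titles = ["Agent A", "Agent B", "Agent C"]   # three //h3/a matches
locations = ["Miami, FL", "Tampa, FL"]        # only two location links found

# Indexing by range(len(titles)) raises IndexError at i == 2:
try:
    rows = [[titles[i], locations[i]] for i in range(len(titles))]
except IndexError:
    rows = None
print(rows)  # None

# zip() pairs items and truncates at the shorter list:
rows = [[t, l] for t, l in zip(titles, locations)]
print(rows)  # [['Agent A', 'Miami, FL'], ['Agent B', 'Tampa, FL']]
```

Note that zip silently drops the unpaired extras, which can misalign titles with the wrong locations if a card in the middle lacks a location; finding each card element first and locating the title and location relative to it is the more robust fix.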

When scraping data, I get only first result despite using Array.from

I want to scrape data from a webpage. Here’s the code I have. It is supposed to get all the authors, but it only gets the first one ('Simon Butler'). Array.from(document.querySelectorAll('#author-group')) .map(e => e.querySelector('[class="button-link workspace-trigger button-link-primary"]')) .map(e => e.querySelector('[class="button-link-text"]')) .map(e => e.querySelector('[class="react-xocs-alternative-link"]')) .map(e => e.querySelector('[class="given-name"]').textContent + ' ' + e.querySelector('[class="text surname"]').textContent) .join(', ') As I… Read More When scraping data, I get only first result despite using Array.from