DataAnalysis/Web Crawling
-
[python] COVID-19 dashboard web crawling and data preprocessing: static-page web crawling with beautifulsoup (DataAnalysis/Web Crawling, 2022. 4. 25. 19:53)
Crawling the data on the COVID-19 dashboard

import requests
from bs4 import BeautifulSoup
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

result = []
url = 'http://ncov.mohw.go.kr/bdBoardList_Real.do?brdId=1&brdGubun=13&ncvContSeq=&contSeq=&board_id=&gubun='
response = requests.get(url)
if response.status_code == 200:
    html = response.text
    soup = BeautifulSoup(html, 'html.parser')
    tbody = soup.fin..
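A self-contained sketch of the same static-page pattern: parse a table with BeautifulSoup and collect the rows into a DataFrame. The inline markup and column names here are assumptions for illustration, not the MOHW dashboard's actual structure (the live page may have changed since the post).

```python
import pandas as pd
from bs4 import BeautifulSoup

# Inline sample standing in for the dashboard's table markup; the post
# fetches it with requests.get(url) and checks status_code == 200 first.
html = """
<table><tbody>
  <tr><th>Seoul</th><td>100</td></tr>
  <tr><th>Busan</th><td>50</td></tr>
</tbody></table>
"""

soup = BeautifulSoup(html, 'html.parser')
tbody = soup.find('tbody')

result = []
for row in tbody.find_all('tr'):
    # <th> holds the region label, <td> the count
    region = row.find('th').get_text(strip=True)
    count = int(row.find('td').get_text(strip=True))
    result.append({'region': region, 'confirmed': count})

df = pd.DataFrame(result)
print(df)
```

Swapping the inline string for `response.text` from a `requests.get` call reproduces the post's flow.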
-
Subway nationwide store web crawling (DataAnalysis/Web Crawling, 2022. 4. 25. 00:58)
import requests
from bs4 import BeautifulSoup
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

result = []
# Subway
for pageNum in range(1,8):
    url = f'https://www.subway.co.kr/storeSearch?page={pageNum}&rgn1Nm=&rgn2Nm=#storeList'
    response = requests.get(url)
    print('-------------------------- " ', pageNum)
    if response.status_code == 200:
        html = response.content
        soup = BeautifulS..
-
QooQoo web crawling: fetching nationwide store info (beautifulsoup) (DataAnalysis/Web Crawling, 2022. 4. 22. 23:29)
import requests
from bs4 import BeautifulSoup
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

result = []
# QooQoo
for pageNum in range(1,8):
    url = f'http://www.qooqoo.co.kr/bbs/board.php?bo_table=storeship&&page={pageNum}'
    response = requests.get(url)
    print('-------------------------- " ', pageNum)
    if response.status_code == 200:
        html = response.content
        soup = BeautifulSoup(h..
-
Coffeebay web crawling: fetching store info (beautifulsoup) (DataAnalysis/Web Crawling, 2022. 4. 22. 22:55)
import requests
from bs4 import BeautifulSoup
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

result = []
# Coffeebay
for pageNum in range(1,34):
    url = 'https://www.coffeebay.com/home/store/store_area?sido=&gugun=&sc_column=&sc_string=&svc_wifi_at=&svc_catering_at=&svc_event_at=&svc_24hr_at=&svc_smok_at=&svc_park_at=&svc_kidsrm_at=&svc_bookrm_at=&svc_pwdrrm_at=&svc_meetrm_at=&svc..
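The Coffeebay URL carries a long run of empty filter fields; passing them to `requests` as a `params` dict keeps the loop readable and lets requests handle the encoding. A sketch under assumptions: only the field names visible in the truncated URL are used, and the name of the paging parameter is a guess, since the preview cuts off before that part of the query string.

```python
import requests

base_url = 'https://www.coffeebay.com/home/store/store_area'

def build_params(page):
    # Field names copied from the post's URL; all sent empty, as there.
    # 'page' is an assumed parameter name (the preview is truncated).
    return {'sido': '', 'gugun': '', 'sc_column': '', 'sc_string': '', 'page': page}

# Build the URL without hitting the network, to show what gets sent:
prepared = requests.Request('GET', base_url, params=build_params(3)).prepare()
print(prepared.url)
```

`requests.get(base_url, params=build_params(pageNum))` inside the loop would then replace the hand-built query string.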
-
Pascucci store info crawling (Beautifulsoup, computing per-region column totals) (DataAnalysis/Web Crawling, 2022. 4. 19. 14:48)
Full code for fetching the Pascucci store info

import requests
from bs4 import BeautifulSoup
import pandas as pd

result = []
for pageNum in range(1,54):
    url = f'https://www.caffe-pascucci.co.kr/store/storeList.asp?page={pageNum}'  # f-string; %-formatting works too
    response = requests.get(url)
    print('-------------------------- " ', pageNum)
    if response.status_code == 200:
        html = response.text
        soup = BeautifulSoup(html, 'html.parser')
        tb..
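The per-region totals mentioned in the title can be computed with a pandas groupby once the crawled rows are in a DataFrame. A sketch over made-up data: the region names and store labels below are illustrative, not values crawled from the Pascucci site.

```python
import pandas as pd

# Toy store list standing in for the crawled result
df = pd.DataFrame({
    'region': ['Seoul', 'Seoul', 'Busan', 'Daegu', 'Busan'],
    'store':  ['A', 'B', 'C', 'D', 'E'],
})

# Count stores per region, analogous to the post's per-region column totals
per_region = df.groupby('region')['store'].count().sort_values(ascending=False)
print(per_region)
```

The same `groupby(...).count()` call works unchanged on the real crawled DataFrame, keyed on whatever column holds the region.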
-
Python Kyochon Chicken domestic store crawling (Selenium dynamic web crawling) (DataAnalysis/Web Crawling, 2022. 4. 17. 00:15)
from selenium import webdriver
import time

driver = webdriver.Chrome('chromedriver.exe')
driver.implicitly_wait(10)
url = 'http://www.kyochon.com/shop/domestic.asp'
driver.get(url)
sido1 = driver.find_element_by_css_selector('#sido1')
sido1.click()
sido_list = driver.find_elements_by_css_selector('#sido1 > option')
length = len(sido_list)
time.sleep(0.2)
sido1.click()
for sido in range(1,length):..