The site I'm scraping uses AJAX, so I wanted to use Selenium to grab the data. But Selenium was too slow, so I switched to PhantomJS to speed things up. Now it throws an error when I run it; could anyone take a look?
The target site is http://www.ncbi.nlm.nih.gov/pubmed?term=(%222013%22%5BDate%20-%20Publication%5D%20%3A%20%222013%22%5BDate%20-%20Publication%5D)
What I'm collecting is the link to each paper.
PS: Selenium was installed with pip install -U selenium,
so it should be the latest version. PhantomJS is version 1.9.7.
#coding=utf-8
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time
import re

domain = "http://www.ncbi.nlm.nih.gov/"
url_tail = "pubmed?term=(%222013%22%5BDate%20-%20Publication%5D%20%3A%20%222013%22%5BDate%20-%20Publication%5D)"
url = domain + url_tail

browser = webdriver.PhantomJS()
browser.get(url)

def extract_data(browser):
    links = browser.find_elements_by_css_selector("div.rprt div.rslt p.title a")
    return [link.get_attribute("href") for link in links]

page_start, page_end = 1, 3

# Always start from page_start
page_number_box = browser.find_element_by_xpath("//*[@id='pageno']").clear()
page_number_box_cleared = browser.find_element_by_xpath("//*[@id='pageno']")
page_number_box_cleared.send_keys(str(page_start) + Keys.RETURN)

# Open links.txt in append mode
with open('links.txt', 'a') as f:
    for page in range(page_start, page_end + 1):
        f.write("page" + str(page) + '\n')
        WebDriverWait(browser, 20).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "div.rprt p.title"))
        )
        # Write the trailing numbers of each link
        for line in extract_data(browser):
            line_re = re.findall('\d+', line)
            f.write(str(line_re) + '\n')
        print "page %d" % page
        time.sleep(5)
        # Click "next page"
        browser.find_element_by_css_selector("div.pagination a.next").click()
browser.close()
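As an aside: re.findall returns a list, so str(line_re) writes things like ['24510474'] into the file. A small pure-regex sketch of pulling just the trailing numeric ID out of a PubMed article link (the sample URL and the helper name pubmed_id are mine, just for illustration):

```python
import re

def pubmed_id(link):
    # PubMed article links end in the numeric PMID; grab the final run of digits.
    match = re.search(r'(\d+)/?$', link)
    return match.group(1) if match else None

print(pubmed_id("http://www.ncbi.nlm.nih.gov/pubmed/24510474"))  # -> 24510474
```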
The error:
Traceback (most recent call last):
File "scrape_url.py", line 25, in <module>
page_number_box = browser.find_element_by_xpath("//*[@id='pageno']").clear()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 232, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 664, in find_element
{'using': by, 'value': value})['value']
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 175, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 166, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with xpath '//*[@id='pageno']'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"101","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:46413","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"xpath\", \"sessionId\": \"99f43900-cb27-11e4-99bd-f371f24cc826\", \"value\": \"//*[@id='pageno']\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/99f43900-cb27-11e4-99bd-f371f24cc826/element"}}
Screenshot: available via screen
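My own guess is that the #pageno box simply isn't in the DOM yet when PhantomJS looks for it, since the listing is rendered by AJAX, so the same explicit-wait idea used inside the loop might need to guard this first lookup too. To make sure I understood the pattern, I sketched the retry logic in plain Python (poll_until and fake_lookup are my own names, not Selenium APIs):

```python
import time

def poll_until(predicate, timeout=20, interval=0.5):
    # Re-check predicate until it returns a truthy value or the timeout
    # expires, mirroring what Selenium's WebDriverWait.until does.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise RuntimeError("condition not met within %s seconds" % timeout)

# Toy usage: a lookup that only succeeds on the third poll.
state = {"calls": 0}
def fake_lookup():
    state["calls"] += 1
    return "found" if state["calls"] >= 3 else None

print(poll_until(fake_lookup, timeout=5, interval=0.01))  # -> found
```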
Thanks in advance, everyone.