
I am attempting to extract data from an HTML table, but it appears that the HTML isn't loading correctly when using requests.get(). Instead, a line in the source reads:

"JavaScript is not enabled and therefore this page may not function correctly."

When I navigate to the page in Google Chrome, the HTML appears as it should.

How do I get a Python script to load the proper HTML?


asked Jun 1, 2014 by toolshed · edited Jun 7, 2014 at 2:43 by alecxe

  • It's most likely retrieving the exact same HTML. It's just that in the browser, JavaScript runs and hides this line or replaces it with something else. – user2357112, Jun 1, 2014
  • Have you solved the problem? Has any of the answers helped? – alecxe, Jun 7, 2014

2 Answers


Welcome to the wonderful world of web crawling. The problem you are experiencing is that requests.get() only gets you the initial page that the browser receives at the beginning of a page load. That is often not the page you see in the browser, because a lot can happen afterwards to build the final page: JavaScript function calls, AJAX calls, etc.
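A quick sanity check along these lines (a minimal sketch — the helper name and marker strings are illustrative, not from the question) is to scan the HTML that requests.get() hands back for the "JavaScript is not enabled" warning before trying to parse a table out of it:

```python
def looks_js_rendered(html):
    """Heuristic: does the raw HTML carry a 'JavaScript required' warning,
    suggesting the real content is built client-side?"""
    markers = (
        "JavaScript is not enabled",
        "Please enable JavaScript",
    )
    return any(marker in html for marker in markers)

# With requests you would feed it the raw response body:
#   html = requests.get(url).text
html = ("<noscript>JavaScript is not enabled and therefore this page "
        "may not function correctly.</noscript>")
print(looks_js_rendered(html))  # True: time to reach for a real browser
```

If the check fires, no amount of tweaking headers will help — the table simply isn't in the document yet.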

If you want to programmatically get the HTML you see when you view the source in a web browser after the page has loaded, you need a real browser. This is where selenium can be a good option:

from selenium import webdriver

browser = webdriver.Firefox()
browser.get(url)
print(browser.page_source)  # HTML after JavaScript has run

Note that selenium itself is very powerful in terms of locating elements - you don't need a separate HTML parser for extracting the data out of the page.

Hope that helps.

If you are sure you have to deal with JavaScript, webdriver will handle it better and save you a lot of trouble.

from selenium.common.exceptions import NoSuchElementException
from selenium import webdriver
from time import sleep

browser = webdriver.Firefox()
browser.get("http://yourwebsite.com/html-table")
browser.find_element_by_id("some-js-triggering-elem").click()
# Poll until the element that signals the table has finished loading appears
while True:
    try:
        browser.find_element_by_id("elem-that-makes-you-know-that-table-is-loaded")
        break
    except NoSuchElementException:
        sleep(1)
html = browser.find_element_by_xpath("//*").get_attribute("outerHTML")
# Use PyQuery or something else to parse the html and get data from table
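The final comment above can be fleshed out with the standard library alone. This is one way to do it (a sketch — the table contents here are made up for illustration), using html.parser to pull cell text out of the captured HTML:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text of each <td>/<th> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows = []          # finished rows
        self._row = None        # cells of the row currently being built
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

html = """
<table>
  <tr><th>name</th><th>score</th></tr>
  <tr><td>alice</td><td>9</td></tr>
  <tr><td>bob</td><td>7</td></tr>
</table>
"""
extractor = TableExtractor()
extractor.feed(html)
print(extractor.rows)  # [['name', 'score'], ['alice', '9'], ['bob', '7']]
```

PyQuery or BeautifulSoup give you CSS-selector lookups on top of the same idea, but for a plain table the stdlib parser is enough.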

Tags: javascript · Why is requests.get() retrieving different HTML using Python than browser · Stack Overflow