how to bypass robots.txt while crawling
Can anyone tell me if there is a way to ignore or bypass robots.txt while crawling? Is there a way to modify the script so that it ignores robots.txt and goes on with crawling? Or is there any other way to achieve the same thing?
The target site's robots.txt looks like this:
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow:
asked Jan 21, 2015 at 15:00 by Pratik
- robots.txt is a suggestion, not a requirement. If you want to ignore it, you just ignore it. – Blazemonger Commented Jan 21, 2015 at 15:01
- Maybe you have problems with Cross-Origin-Requests, not with robots.txt? – Boldewyn Commented Jan 21, 2015 at 15:03
2 Answers
If you are writing your crawler in mechanize (Python) and it has robots.txt handling enabled, use the following commands to disable it:
import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)
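A minimal usage sketch of the same mechanize setup (the URL and user-agent string below are placeholders, not from the original answer):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                       # do not fetch or obey robots.txt
br.addheaders = [("User-agent", "Mozilla/5.0")]   # optional: placeholder user-agent string
response = br.open("http://example.com/")         # placeholder URL
print(response.read())                            # raw HTML of the fetched page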
If you are writing a crawler then you have complete control of it. You can make it behave nicely or you can make it behave badly.
If you don't want your crawler to respect robots.txt, then just write it so it doesn't. You might be using a library that respects robots.txt automatically; if so, you will have to disable that (usually via an option you pass to the library when you call it), as in the sketch below.
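For example, if the library happens to be Scrapy (just one possibility, not named in the answer), the robots.txt check is controlled by a single project setting:

# settings.py of a Scrapy project (Scrapy used here only as an illustration)
ROBOTSTXT_OBEY = False  # when False, Scrapy's RobotsTxtMiddleware does not fetch or enforce robots.txt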
There is no way to use client-side JavaScript to make a crawler that reads the page embedding the JS stop respecting robots.txt.