Question about paginating with Scrapy

Ewig · Jan 16, 2019 · 3378 views
    def get_page_content(self, response):
        next_page = response.xpath('//div[@class="page"]/a[contains(@ka,"page-next")]/@href').extract()
        print(response.url)
        linkList = response.xpath('//div[@class="sub-li"]/a[contains(@class,"company-info")]/@href').extract()
        linkList = [response.urljoin(link) for link in linkList]
        if linkList:
            for link in linkList:
                yield scrapy.Request(url=link, callback=self.final_parsre, dont_filter=True)

        next_page = response.xpath('//div[@class="page"]/a[contains(@ka,"page-next")]/@href').extract()
        print(next_page)
        if next_page is not None:
            next_page = response.urljoin(next_page[0])
            yield scrapy.Request(url=next_page, callback=self.get_page_content, dont_filter=True)

    https://www.zhipin.com/gongsi/_zzz_c101200100_iy100101_t801_s302/?page=1&ka=page-1

    This site paginates by clicking "next page", and I don't know how many pages there are. I have to click a lot of filter buttons (city, funding stage) and then open each detail page to scrape data. Since the page count is unknown, the only way to find the pages is to keep clicking "next page". How should I write this?
    14 replies    2019-01-18 16:03:04 +08:00
layorlayor    #1    Jan 16, 2019
    if len(next_page) != 0: yield xxxx ???
layorlayor    #2    Jan 16, 2019
Using class="next" seems better, because when there is no next page that tag simply isn't rendered.
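A minimal sketch of what #2 suggests, as a drop-in for the OP's get_page_content (the a.next selector is taken from this reply, not verified against the site):

    next_href = response.xpath('//a[@class="next"]/@href').extract_first()
    if next_href:  # the tag is absent on the last page, so this is None there
        yield scrapy.Request(response.urljoin(next_href), callback=self.get_page_content, dont_filter=True)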
Ewig (OP)    #3    Jan 16, 2019
    @layorlayor https://www.zhipin.com/gongsi/_zzz_c101200100_iy100101_t801_s302/

    On this site I first enter every detail page, then turn to the next page and enter those detail pages to scrape the data.
Ewig (OP)    #4    Jan 16, 2019
This one is hard to handle.
xpresslink    #5    Jan 16, 2019
try: get the next page; yield the next page; except: pass
largecat    #6    Jan 16, 2019 via Android
Recurse into the next page.
Return the scraped data, package it at the top level, and hand it to the pipeline.
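A sketch of #6's idea, accumulating results through response.meta and handing the whole package to the pipeline only once the last page is reached (field names are illustrative, selectors borrowed from the OP):

    def parse_list(self, response):
        collected = response.meta.get('collected', [])
        collected += response.xpath('//div[@class="sub-li"]/a[contains(@class,"company-info")]/@href').extract()
        next_href = response.xpath('//a[@class="next"]/@href').extract_first()
        if next_href:
            # not done yet: carry the accumulator forward to the next page
            yield scrapy.Request(response.urljoin(next_href), callback=self.parse_list,
                                 meta={'collected': collected}, dont_filter=True)
        else:
            # last page reached: emit the top-level package for the pipeline
            yield {'companies': collected}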
Ewig (OP)    #7    Jan 16, 2019
@xpresslink Why try?
kr380709959    #8    Jan 16, 2019
I've crawled Lagou, which paginates similarly. As I recall I did page += 1, then something like:

    if not item:
        break
    else:
        items.append(item)

    // item is the job listing
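A sketch of #8's page-counter approach, stopping when a page comes back empty (the URL pattern is taken from the OP's link; the listing selector is an assumption):

    import scrapy

    class CompanySpider(scrapy.Spider):
        name = 'companies_by_page'
        base = 'https://www.zhipin.com/gongsi/_zzz_c101200100_iy100101_t801_s302/?page=%d'
        start_urls = [base % 1]

        def parse(self, response):
            rows = response.xpath('//div[@class="sub-li"]')
            if not rows:  # an empty listing means we ran past the last page
                return
            # ... yield detail-page requests for each row here ...
            page = response.meta.get('page', 1) + 1
            yield scrapy.Request(self.base % page, callback=self.parse,
                                 meta={'page': page}, dont_filter=True)

Note that #12 below reports this particular site keeps returning the last page's content for any page number, so the emptiness check alone won't terminate there.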
quere    #9    Jan 16, 2019
The scrapy-redis framework has automatic request deduplication built in; you could crawl with that framework.
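For reference, a sketch of turning on scrapy-redis' shared dupe filter in settings.py (the Redis URL is an assumption for a local instance):

    # settings.py: route scheduling and request dedup through Redis
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    REDIS_URL = "redis://localhost:6379"

Be aware that the OP's requests set dont_filter=True, which bypasses any dupe filter.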
xpresslink    #10    Jan 16, 2019
@Ewig Because on the last page you definitely can't get the next-page URL, or you get None, so yielding a Request object for the next page will raise an error. Scrapy has its own exception-handling mechanism, so that doesn't stop the other Request objects from running; the error message just goes into the log.

Catching the exception where it happens versus checking before you act are two different philosophies.
Python and some other dynamic languages lean toward the first one.
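A sketch of both styles applied to the OP's snippet. Note that extract() always returns a list, so the OP's `if next_page is not None` check never fails; the check-first fix is to test the list's truthiness, as #1 suggested:

    # EAFP: just index into the result and let the last page raise
    try:
        href = response.xpath('//div[@class="page"]/a[contains(@ka,"page-next")]/@href').extract()[0]
        yield scrapy.Request(response.urljoin(href), callback=self.get_page_content, dont_filter=True)
    except IndexError:
        pass  # no next-page link: this was the last page

    # LBYL: check before leaping; an empty list is falsy
    hrefs = response.xpath('//div[@class="page"]/a[contains(@ka,"page-next")]/@href').extract()
    if hrefs:
        yield scrapy.Request(response.urljoin(hrefs[0]), callback=self.get_page_content, dont_filter=True)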
houzhimeng    #11    Jan 16, 2019
There are plenty of approaches. 1. First crawl everything on the list page and check whether it still has content: if not room_list: return.
2. Then parse the detail pages.
The dedup approach mentioned above works too, or use CrawlSpider.
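A sketch of the CrawlSpider route from #11, reusing the OP's XPaths as link extractors (untested against the site):

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class CompanySpider(CrawlSpider):
        name = 'companies'
        start_urls = ['https://www.zhipin.com/gongsi/_zzz_c101200100_iy100101_t801_s302/?page=1']
        rules = (
            # keep following "next page" links; no callback means follow=True by default
            Rule(LinkExtractor(restrict_xpaths='//div[@class="page"]/a[contains(@ka,"page-next")]')),
            # hand every company detail page to the parser
            Rule(LinkExtractor(restrict_xpaths='//div[@class="sub-li"]/a[contains(@class,"company-info")]'),
                 callback='parse_company'),
        )

        def parse_company(self, response):
            pass  # extract the detail-page fields here

Since these requests don't set dont_filter=True, Scrapy's default dupe filter drops a repeated "next" request, which, if the endless last page keeps linking to the same URL, is also how #9's dedup suggestion would terminate the crawl.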
Ewig (OP)    #12    Jan 18, 2019
@houzhimeng On this site any page number you request has content, because past the end it just serves the last page's content again.
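Given that behavior an emptiness check never fires, so a sketch of a stop condition that fingerprints each listing and halts on a repeat instead (built on the OP's selectors and callback name):

    def get_page_content(self, response):
        links = response.xpath('//div[@class="sub-li"]/a[contains(@class,"company-info")]/@href').extract()
        fingerprint = tuple(links)
        seen = getattr(self, 'seen_listings', set())
        if not links or fingerprint in seen:
            return  # same listing as an earlier page: we ran past the end
        seen.add(fingerprint)
        self.seen_listings = seen
        for link in links:
            yield scrapy.Request(response.urljoin(link), callback=self.final_parsre, dont_filter=True)
        next_href = response.xpath('//div[@class="page"]/a[contains(@ka,"page-next")]/@href').extract_first()
        if next_href:
            yield scrapy.Request(response.urljoin(next_href), callback=self.get_page_content, dont_filter=True)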
Ewig (OP)    #13    Jan 18, 2019
@xpresslink Is there anything wrong with what I wrote?
Ewig (OP)    #14    Jan 18, 2019
@quere What does that have to do with deduplication?