Getting Started with Scrapy (Part 1): Scraping the Maoyan Rankings



Installing Scrapy

pip3 install scrapy

Creating a Project

scrapy startproject maoyan

Directory Structure

- scrapy.cfg: the project configuration file
- spiders/: holds your Spider files, i.e. the .py scripts that do the crawling
- items.py: defines containers for scraped data, which behave much like dicts
- middlewares.py: implementations of Downloader Middlewares and Spider Middlewares
- pipelines.py: Item Pipeline implementations for cleaning, storing, and validating data
- settings.py: global settings
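After running `scrapy startproject maoyan`, the generated project looks roughly like this (the exact layout may vary slightly between Scrapy versions):

```
maoyan/
├── scrapy.cfg
└── maoyan/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```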

Defining the Item

Edit items.py and add the following:

```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MaoyanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    index = scrapy.Field()        # ranking position
    title = scrapy.Field()        # movie title
    star = scrapy.Field()         # starring actors
    releasetime = scrapy.Field()  # release date
    score = scrapy.Field()        # rating
```

Defining the Spider

Create a file named maoyan.py in the spiders folder.

Alternatively, Shift + right-click in that folder, choose "Open command window here", and run: scrapy genspider maoyan <the site to crawl>

```python
# -*- coding: utf-8 -*-
import scrapy
from maoyan.items import MaoyanItem


class MaoyanSpider(scrapy.Spider):
    name = 'maoyan'
    allowed_domains = ['maoyan.com']
    start_urls = ['https://maoyan.com/board/7/']

    def parse(self, response):
        dl = response.css('.board-wrapper dd')
        for dd in dl:
            item = MaoyanItem()
            item['index'] = dd.css('.board-index::text').extract_first()
            item['title'] = dd.css('.name a::text').extract_first()
            item['star'] = dd.css('.star::text').extract_first()
            item['releasetime'] = dd.css('.releasetime::text').extract_first()
            # default='' guards against a TypeError if either part is missing
            item['score'] = dd.css('.integer::text').extract_first(default='') \
                + dd.css('.fraction::text').extract_first(default='')
            yield item
```

A few quick notes on analyzing the page elements.

If you're interested, see the official documentation: https://scrapy-chs.readthedocs.io/zh_CN/1.0/topics/selectors.html

Scrapy provides two handy shortcuts: response.xpath() and response.css().

.xpath() and .css() return a SelectorList instance, which is a list of new selectors. This API lets you quickly extract nested data.

To extract the actual text data, you need to call .extract() (or .extract_first() for just the first match).

Running the Spider

In the project directory, run scrapy crawl <name>, where <name> is the value of name defined in the MaoyanSpider class:

scrapy crawl maoyan

This run should come back with a 403, which tells us Maoyan has some basic anti-scraping measures in place.

Edit settings.py, uncomment the USER_AGENT line, and change it to:

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'

Run the spider again and it should work.
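If you'd rather not change the global settings, a per-spider custom_settings dict also works; Scrapy merges it over settings.py for that spider only. A sketch (config fragment, to be placed on the spider class):

```python
class MaoyanSpider(scrapy.Spider):
    name = 'maoyan'
    # Overrides the value in settings.py for this spider only
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    }
```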

Saving the Output

scrapy crawl maoyan -o maoyan.csv

scrapy crawl maoyan -o maoyan.xml

scrapy crawl maoyan -o maoyan.json
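One caveat with the JSON output: by default, Chinese text is escaped as \uXXXX sequences. Adding this line to settings.py (a feed-export setting available since Scrapy 1.2) keeps the titles readable:

```python
# settings.py
FEED_EXPORT_ENCODING = 'utf-8'
```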
ArithmeticJia — www.guanacossj.com — Life is Short, You need Python
