Bypassing Anti-Crawler Restrictions with Scrapy

The back-and-forth between crawlers and anti-crawler measures, and the strategies on both sides.

Basic concepts of crawlers and anti-crawlers

  • Crawler: a program that automatically fetches website data; the key point is fetching it in bulk.
  • Anti-crawler: technical measures used to block crawler programs.
  • False positives: anti-crawler measures that misidentify normal users as crawlers; if the false-positive rate is too high, the measure is unusable no matter how effective it is.
  • Cost: the human and machine resources that anti-crawling requires.
  • Interception: successfully blocking crawlers; generally, the higher the interception rate, the higher the false-positive rate.

What anti-crawler measures are meant to stop

  • Novice crawlers: crude and careless, they ignore server load and can easily bring a site down.
  • Data protection
  • Runaway crawlers: crawlers that were forgotten about or can no longer be shut down.
  • Commercial competitors

The crawler vs. anti-crawler arms race

Scrapy architecture and source code analysis

Scrapy architecture

The Engine pulls requests from the Spider, queues them through the Scheduler, sends them via the Downloader Middlewares to the Downloader, and routes responses back through the Spider Middlewares to the Spider; items produced by the Spider go to the Item Pipelines. The Downloader Middlewares are the hook used below to rotate User-Agents and proxies.

Rotating the User-Agent randomly via a DownloaderMiddleware

Method 1: set a random User-Agent in the request headers

import random

# user_agent_list is assumed to be a predefined list of User-Agent strings
random_index = random.randint(0, len(user_agent_list) - 1)
random_agent = user_agent_list[random_index]
headers = {
    "HOST": "www.zhihu.com",
    "Referer": "https://www.zhihu.com",
    "User-Agent": random_agent
}
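
In a spider, these headers are then attached to the outgoing request; a minimal sketch (the URL and the parse_detail callback are placeholders, not from the original):

# attach the randomized headers to a Request inside a spider method
# the URL and the parse_detail callback are placeholders
yield scrapy.Request("https://www.zhihu.com/", headers=headers, callback=self.parse_detail)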

Method 2: rotate the User-Agent randomly with a DownloaderMiddleware

The middleware below can be reused directly; it requires installing the fake_useragent package (pip install fake-useragent).

# middlewares.py
from fake_useragent import UserAgent  # a package that provides a large pool of random User-Agent strings


class RandomUserAgentMiddlware(object):
    # randomly rotate the User-Agent
    def __init__(self, crawler):
        super(RandomUserAgentMiddlware, self).__init__()
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get("RANDOM_UA_TYPE", "random")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        def get_ua():
            return getattr(self.ua, self.ua_type)

        request.headers.setdefault('User-Agent', get_ua())

# settings.py
DOWNLOADER_MIDDLEWARES = {
    # disable Scrapy's built-in UserAgentMiddleware, otherwise it would overwrite our header
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'ArticleSpider.middlewares.RandomUserAgentMiddlware': 542,
}
RANDOM_UA_TYPE = 'random'
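
To confirm the rotation works, you can log the User-Agent that was actually sent from any spider callback; a quick sanity check, not part of the original middleware:

# inside a spider callback: log the User-Agent header the request went out with
def parse(self, response):
    self.logger.info("User-Agent used: %s", response.request.headers.get('User-Agent'))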

Implementing an IP proxy pool with Scrapy


Crawl the free IP proxies from Xici (xicidaili.com):

import requests
from scrapy.selector import Selector
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", user="root", passwd="meiyi8013", db="article_spider", charset="utf8")
cursor = conn.cursor()


def crawl_ips():
    # crawl the free IP proxies from xicidaili.com
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0"}
    for i in range(1568):
        re = requests.get("http://www.xicidaili.com/nn/{0}".format(i), headers=headers)

        selector = Selector(text=re.text)
        all_trs = selector.css("#ip_list tr")

        ip_list = []
        for tr in all_trs[1:]:
            speed_str = tr.css(".bar::attr(title)").extract()[0]
            print(speed_str)
            if speed_str:
                speed = float(speed_str.split("秒")[0])
            all_texts = tr.css("td::text").extract()

            ip = all_texts[0]
            port = all_texts[1]
            proxy_type = all_texts[5]

            ip_list.append((ip, port, proxy_type, speed))

        for ip_info in ip_list:
            cursor.execute(
                "insert into proxy_ip(ip, port, speed, proxy_type) VALUES('{0}', '{1}', {2}, '{3}')".format(
                    ip_info[0], ip_info[1], ip_info[3], ip_info[2]
                )
            )

            conn.commit()


class GetIP(object):
    def delete_ip(self, ip):
        # remove an invalid ip from the database
        delete_sql = """
            delete from proxy_ip where ip='{0}'
        """.format(ip)
        cursor.execute(delete_sql)
        conn.commit()
        return True

    def judge_ip(self, ip, port):
        # check whether the ip is usable
        http_url = "http://www.baidu.com"
        proxy_url = "http://{0}:{1}".format(ip, port)
        try:
            proxy_dict = {
                "http": proxy_url,
            }
            response = requests.get(http_url, proxies=proxy_dict)
        except Exception as e:
            print("invalid ip and port")
            self.delete_ip(ip)
            return False
        else:
            code = response.status_code
            if code >= 200 and code < 300:
                print("effective ip")
                return True
            else:
                print("invalid ip and port")
                self.delete_ip(ip)
                return False

    def get_random_ip(self):
        # fetch a random usable ip from the database
        random_sql = """
            SELECT ip, port FROM proxy_ip
            ORDER BY RAND()
            LIMIT 1
        """
        result = cursor.execute(random_sql)
        for ip_info in cursor.fetchall():
            ip = ip_info[0]
            port = ip_info[1]

            judge_re = self.judge_ip(ip, port)
            if judge_re:
                return "http://{0}:{1}".format(ip, port)
            else:
                return self.get_random_ip()


# print(crawl_ips())
if __name__ == "__main__":
    crawl_ips()
    # get_ip = GetIP()
    # get_ip.get_random_ip()
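
The script assumes a proxy_ip table already exists in the article_spider database. The original does not show its schema; a sketch consistent with the insert statement above (the column sizes are assumptions) could be:

# assumed schema for the proxy_ip table, not shown in the original
cursor.execute("""
    CREATE TABLE IF NOT EXISTS proxy_ip (
        ip VARCHAR(20) NOT NULL,
        port VARCHAR(10) NOT NULL,
        speed FLOAT,
        proxy_type VARCHAR(10),
        PRIMARY KEY (ip)
    )
""")
conn.commit()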

SQL for pulling a random record: here we randomly select one row whose ip and port together form the proxy address. Note that ORDER BY RAND() scans the whole table, which is fine for a small proxy table but slow on large ones.

SELECT ip, port FROM proxy_ip
ORDER BY RAND()
LIMIT 1

Scrapy's Selector can also be used on its own, outside a spider:

import requests
from scrapy.selector import Selector

html = requests.get(url)                      # url is whatever page you want to parse
selector = Selector(text=html.text)
selector.xpath("//title/text()").extract()    # any XPath expression; "//title/text()" is just an example

The following middleware template for dynamically setting an IP proxy can be reused directly:

# middlewares.py
from tools.crawl_xici_ip import GetIP


class RandomProxyMiddleware(object):
    # dynamically set an IP proxy for each request
    def process_request(self, request, spider):
        get_ip = GetIP()
        request.meta["proxy"] = get_ip.get_random_ip()

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'ArticleSpider.middlewares.RandomProxyMiddleware': 541,
}

CAPTCHA recognition via YunDaMa (a cloud CAPTCHA-solving service)

  • Programmatic recognition (tesseract-ocr); a minimal sketch follows this list
  • Online CAPTCHA-solving platforms (YunDaMa, Ruokuai)
  • Manual solving
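
For the programmatic route, here is a minimal OCR sketch; pytesseract and Pillow are my assumptions (the original only names tesseract-ocr), and captcha.png is a placeholder for an image you have already saved:

# minimal OCR sketch, assuming tesseract-ocr, pytesseract and Pillow are installed
import pytesseract
from PIL import Image

captcha_text = pytesseract.image_to_string(Image.open("captcha.png"))  # captcha.png is a placeholder path
print(captcha_text.strip())

Plain OCR only handles simple captchas; distorted or noisy images are usually sent to the online platforms or solved manually, as listed above.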

Disabling cookies, automatic rate limiting, and per-spider custom settings

If you don't need cookies (for example, on sites that don't require login), don't send your cookies to the target at all.

#settings.py
COOKIES_ENABLED = False
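
The section title also mentions automatic rate limiting; Scrapy ships this as the AutoThrottle extension, configured in settings.py. A minimal sketch (the delay values are illustrative, not from the original):

# settings.py
AUTOTHROTTLE_ENABLED = True        # adjust the delay dynamically based on server response times
AUTOTHROTTLE_START_DELAY = 5       # initial delay in seconds, illustrative value
AUTOTHROTTLE_MAX_DELAY = 60        # upper bound on the delay, illustrative value
# DOWNLOAD_DELAY = 3               # a fixed per-request delay is the simpler alternative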

custom_settings: overrides values from the project-wide settings. For example, to override the headers defined in settings.py, put the header values into custom_settings on the spider; when the spider runs, those values take precedence over the ones in settings.py.

Custom per-spider settings

class ZhihuSpider(scrapy.Spider):
    custom_settings = {
        'DEFAULT_REQUEST_HEADERS': {
            'User-Agent': None,
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en',
        }
    }
