Web Crawler (Web Page Scraping)


Source code:

import requests

# step 1: specify the url
url = 'https://www.sogou.com/web'
kw = input('input what you need')
param = {'query': kw}
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}
# step 2: send the request
response = requests.get(url=url, params=param, headers=header)
# step 3: get the response data; .text returns the response body as a string
page_text = response.text
# step 4: save the page to a local file
path = kw + '.html'
with open(path, 'w', encoding='utf-8') as fp:
    fp.write(page_text)

Coding workflow (a more defensive variant of the same four steps is sketched right after this walkthrough):

1. Specify the URL

url = 'https://www.sogou.com/web'

2. Send the request

response = requests.get(url=url, params=param, headers=header)

3. Get the response data

page_text = response.text

4. Save the data

with open(path, 'w', encoding='utf-8') as fp:
    fp.write(page_text)
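
The four steps above are the minimal happy path. Below is a slightly more defensive sketch of the same flow; the timeout, the raise_for_status() check, the encoding guess, and the filename sanitization are illustrative additions that are not part of the original script.

import re
import requests

url = 'https://www.sogou.com/web'
kw = input('input what you need')
param = {'query': kw}
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}

response = requests.get(url=url, params=param, headers=header, timeout=10)
response.raise_for_status()                     # fail loudly on 4xx/5xx instead of saving an error page
response.encoding = response.apparent_encoding  # guess the charset from the body, not only the headers

# strip characters that are not allowed in file names (hypothetical sanitization, not in the original)
safe_kw = re.sub(r'[\\/:*?"<>|]', '_', kw) or 'result'
path = safe_kw + '.html'
with open(path, 'w', encoding='utf-8') as fp:
    fp.write(response.text)

raise_for_status() turns a blocked or failed request into an exception, and apparent_encoding helps when the page declares its charset only inside the HTML rather than in the Content-Type header.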

Explanation:

header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}

The header is used to get around the User-Agent based anti-crawling check: the server inspects the User-Agent to decide whether the request comes from a real browser. Without this header, requests sends its own default User-Agent, and the page you get back is not the real search result.
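
One way to see why the header matters is to compare the User-Agent that requests sends by default with the browser-style one used above. The sketch below is illustrative; what Sogou actually returns to a bare script request (an error page, a verification page, or a redirect) can vary and is not guaranteed.

import requests

url = 'https://www.sogou.com/web'
param = {'query': 'python'}
browser_ua = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}

# by default requests identifies itself as a script, e.g. 'python-requests/2.31.0'
print(requests.utils.default_user_agent())

bare = requests.get(url, params=param, timeout=10)                           # no browser User-Agent
disguised = requests.get(url, params=param, headers=browser_ua, timeout=10)  # browser-style User-Agent

# comparing status codes and body sizes shows which request got the real result page
print(bare.status_code, len(bare.text))
print(disguised.status_code, len(disguised.text))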

 param = {'query': kw}

param stores the search keyword in dictionary form; requests encodes this dictionary into the query string that is appended to the URL.
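
To see how the dictionary becomes part of the final URL, you can build a prepared request locally without sending anything over the network; the snippet below is just for illustration.

import requests

param = {'query': 'python 爬虫'}
prepared = requests.Request('GET', 'https://www.sogou.com/web', params=param).prepare()

# requests URL-encodes the dictionary into the query string automatically
print(prepared.url)
# e.g. https://www.sogou.com/web?query=python+%E7%88%AC%E8%99%AB

This is why the script only hands requests the base URL and the param dictionary; there is no need to concatenate '?query=' + kw by hand or to worry about percent-encoding Chinese characters.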
