
[Python Web Scraping Notes] 07: A Targeted Stock Data Crawler Example

2018-12-13 21:22

Functional Description

Goal: retrieve the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges
Output: save the results to a file
Technical route: requests + bs4 + re

Choosing a Candidate Data Site

Sina Finance (stocks): http://finance.sina.com.cn/stock/
Baidu Gupiao: http://gupiao.baidu.com/stock/
Eastmoney stock list: http://quote.eastmoney.com/stocklist.html

  1. Selection criteria:
    The stock information should be present statically in the HTML page, not generated by JavaScript, and crawling should not be forbidden by the site's Robots protocol.

  2. How to check:
    Open the browser's developer tools (F12), view the page source, and so on; a small check script is sketched after this list.
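A quick way to apply the second point is to fetch the candidate page with requests and search the raw response for stock codes: if the codes do not appear in the HTML text itself, they are most likely filled in by JavaScript and the requests + bs4 approach will not see them. The helper below is only an illustrative sketch (looks_static and the exact check are assumptions, not part of the program that follows):

import re
import requests

def looks_static(url, pattern=r"[s][hz]\d{6}"):
    # Fetch the page and search the raw HTML for stock codes such as sh600000.
    # An empty result suggests the data is rendered client-side by JavaScript.
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        return bool(re.search(pattern, r.text))
    except requests.RequestException:
        return False

print(looks_static('http://quote.eastmoney.com/stocklist.html'))  # True if codes appear in the page source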

Program Structure Design

import requests
from bs4 import BeautifulSoup
import traceback  # not used below; handy if you want traceback.print_exc() when debugging
import re

def getHTMLText(url, code='utf-8'):
    # Fetch a page and return its text, or "" on any failure.
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    # Collect stock codes (e.g. sh600000, sz000001) from the links on the list page.
    html = getHTMLText(stockURL, 'GB2312')
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue
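
# Note on encodings: getStockList requests the list page with 'GB2312',
# presumably because the Eastmoney list page is served in that encoding,
# while getHTMLText defaults to 'utf-8' for the Baidu detail pages.
# If you are unsure of a page's encoding, requests can guess it
# (illustrative only, not part of the original program):
#   r = requests.get(stockURL, timeout=30)
#   print(r.apparent_encoding)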

def getStockInfo(lst, stockURL, fpath):
    # For each stock code, fetch its Baidu Gupiao page, parse the fields,
    # and append them to the output file.
    count = 0
    for stock in lst:
        url = stockURL + stock + '.html'
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            # collect this stock's fields in a dictionary
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})

            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名称': name.text.split()[0]})  # '股票名称' = stock name

            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val
            # open the output file in append mode and write one dict per line
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
                count = count + 1
                # '\r当前速度' overwrites the console line with a progress percentage
                print('\r当前速度:{:.2f}%'.format(count * 100 / len(lst)), end='')
        except:
            count = count + 1
            print('\r当前速度:{:.2f}%'.format(count * 100 / len(lst)), end='')
            continue

if __name__ == '__main__':
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'http://gupiao.baidu.com/stock/'
    output_file = 'E://BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)
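
As a quick sanity check of the pattern used in getStockList, the regular expression r"[s][hz]\d{6}" pulls Shanghai (sh...) and Shenzhen (sz...) six-digit codes out of each link's href. A small illustrative run with made-up hrefs (not real page content):

import re

sample_hrefs = [
    'http://quote.eastmoney.com/sh600000.html',
    'http://quote.eastmoney.com/sz000001.html',
    'http://quote.eastmoney.com/center.html',   # no stock code in this link
]
for href in sample_hrefs:
    print(re.findall(r"[s][hz]\d{6}", href))
# -> ['sh600000'], ['sz000001'], []

Each stock that parses successfully is written to E://BaiduStockInfo.txt as one str(infoDict) line, with the page's dt texts as keys and dd texts as values.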