Python scraper — Qingdao University safety education exam
2018-01-09 21:58
The idea is simple: use Python to download the answer-key txt file I keep updated, convert it into a dictionary, and then print the matching answer for each question on the saved exam page.
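The answer key is a plain-text file with one "question----answer" pair per line, where the question is identified by the anchor's href on the exam page (the sample keys and answers below are hypothetical, just to illustrate the format the parsing code assumes):

```python
# Each line of result.txt maps a question identifier (the <a> tag's href)
# to its answer, separated by "----". Sample data is made up for illustration.
sample_lines = [
    "javascript:void(0);#q1----A\n",
    "javascript:void(0);#q2----ABD\n",
]

result = {}
for line in sample_lines:
    key, sep, answer = line.partition('----')  # split on the first "----" only
    if sep:                                    # skip malformed lines
        result[key] = answer.strip()

print(result["javascript:void(0);#q1"])  # -> A
```

Using `partition` instead of a bare `split` means a line without "----" is simply skipped rather than raising an IndexError.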
# -*- coding: utf-8 -*-
"""
Created on Tue Jan 9 12:05:34 2018
@author: admin
"""
from bs4 import BeautifulSoup
import requests
import os

print('========Welcome to use QduSafeExamHelper========')
print('|                                              |')
print('|              Made with python3.6             |')
print('|                                              |')
print('=========== Version: 1.0.6 by ARCHer ===========\n')
print('请将考试页面保存(Ctrl+s)在当前程序同一目录下并按照下面的说明操作!')
print('1.请确认保存的网页名为 青岛大学安全知识考试系统.html')
print('2.html文件与本程序在同一目录')
print('3.确认前两项无误后按回车确认')
input()

# Download the latest answer key and save it locally.
update_url = 'https://kaoshidaan.oss-cn-qingdao.aliyuncs.com/result.txt'
r = requests.get(update_url)
with open('result.txt', 'wb') as f:
    f.write(r.content)

# Build a question -> answer dictionary from the "question----answer" lines.
result = {}
with open('result.txt', 'r', encoding='utf-8') as file:
    for line in file:
        result_data = line.split('----', 1)
        if len(result_data) == 2:  # skip malformed lines
            result[result_data[0]] = result_data[1].strip()

# Parse the saved exam page: each question is a <dd><a> element whose
# href identifies the question and whose text is the question number.
info = []
with open('青岛大学安全知识考试系统.html', encoding='utf-8') as wb_data:
    soup = BeautifulSoup(wb_data, 'lxml')
    for question in soup.select('dd > a'):
        info.append({
            'question': question.get('href'),
            'number': question.get_text(),
            'answer': result.get(question.get('href')),
        })

os.remove('result.txt')  # clean up the downloaded answer key

for i in info:
    print('题目序号:', i['number'], '题目答案:', i['answer'])
input('Finished')
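The whole approach hinges on the exam page putting each question inside a `<dd><a>` element. Here is a minimal sketch of what `select('dd > dd > a'.replace('dd > dd', 'dd'))`-style CSS matching returns, using a toy HTML snippet (the structure is assumed to mirror the real page; the stdlib `html.parser` backend is used here so no lxml install is needed):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# A toy snippet mimicking the structure the script assumes:
# each question is a <dd><a href="...">number</a></dd> element.
html = """
<dl>
  <dd><a href="#q1">1</a></dd>
  <dd><a href="#q2">2</a></dd>
</dl>
"""

soup = BeautifulSoup(html, 'html.parser')  # stdlib parser; lxml also works
for a in soup.select('dd > a'):            # CSS selector: <a> directly inside <dd>
    print(a.get_text(), a.get('href'))     # question number, question identifier
```

If the site ever changes its markup, this selector is the first thing to update.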