Machine Learning Foundations Notes, Lecture 2: Learning to Answer Yes/No
2015-12-19 20:31
Perceptron
A Simple Hypothesis Set: the ‘Perceptron’
A perceptron is analogous to a neuron in a neural network; the threshold is like needing 60 points to pass an exam.
Vector Form of Perceptron Hypothesis
Each ‘tall’ w represents a hypothesis h and is multiplied with a ‘tall’ x; we will use the tall versions to simplify notation.
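The vector form above can be sketched in a few lines. This is a minimal illustration (not the course's code); the weights and the exam-score example are assumptions chosen to match the "60 points to pass" analogy:

```python
import numpy as np

def perceptron_h(w, x):
    """Perceptron hypothesis h(x) = sign(w^T x).

    Both w and x are 'tall' vectors in the augmented form
    x = (1, x1, ..., xd), so the threshold is absorbed as w0 = -threshold.
    """
    return 1 if np.dot(w, x) > 0 else -1

# Illustrative weights: one feature (exam score), pass threshold 60.
w = np.array([-60.0, 1.0])                      # w0 = -threshold
print(perceptron_h(w, np.array([1.0, 75.0])))   # score 75 -> +1 (pass)
print(perceptron_h(w, np.array([1.0, 40.0])))   # score 40 -> -1 (fail)
```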
Perceptrons in R²
Fun time
Select g from H
Enumerating every hypothesis in H is infeasible, so we iterate instead.
Perceptron Learning Algorithm
A fault confessed is half redressed. Since $w_t^T x_{n(t)} = \|w_t\|\,\|x_{n(t)}\|\cos(w_t, x_{n(t)})$, the inner product is negative when the angle between $w_t$ and $x_{n(t)}$ exceeds 90°, and positive otherwise.
Fun time
What does this tell us? Why is ② incorrect?
Implementation
Start from some $w_0$ (say, $0$; the initialization is not random) and ‘correct’ its mistakes on D. The next point can follow a naïve cycle (1, …, N) or a precomputed random cycle.
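The naïve-cycle variant described above can be sketched as follows. This is an illustrative implementation under the lecture's conventions ($w_0 = 0$, $x_0 = 1$ already appended to each point), not code from the course:

```python
import numpy as np

def pla(X, y, max_iters=10000):
    """Perceptron Learning Algorithm with a naive cycle over the data.

    X: (N, d+1) array, each row already augmented with x0 = 1.
    y: (N,) array of +1/-1 labels.
    Starts from w = 0 (as in the lecture) and 'corrects' a mistake
    whenever sign(w^T x_n) != y_n. Halts only if D is linearly separable.
    """
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iters):
        mistake = False
        for n in range(N):                    # naive cycle 1, ..., N
            if np.sign(X[n] @ w) != y[n]:     # sign(0) = 0 also counts as a mistake
                w = w + y[n] * X[n]           # correct it: w <- w + y_n x_n
                mistake = True
        if not mistake:                       # a full pass with no mistakes: halt
            return w
    raise RuntimeError("PLA did not halt; D may not be linearly separable")

# Tiny separable example: label is the sign of the single feature.
X = np.array([[1., 2.], [1., -3.], [1., 1.], [1., -1.]])
y = np.array([1, -1, 1, -1])
w = pla(X, y)
```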
(note: made $x_i \gg x_0 = 1$ for visual purposes) Why?
Issues of PLA
Linear Separability
Assume D is linearly separable; does PLA always halt?
halts!
Since $\frac{w_f^T w_T}{\|w_f\|\,\|w_T\|} \le 1$, T must have an upper bound.
PLA Fact: wt Gets More Aligned with wf
$w_t$ appears more aligned with $w_f$ after the update. Really?
PLA Fact: wt Does Not Grow Too Fast

$$w_f^T w_T \ge w_f^T w_{T-1} + \min_n y_n w_f^T x_n \ge \cdots \ge w_f^T w_0 + T \min_n y_n w_f^T x_n \ge T \min_n y_n w_f^T x_n \quad (A)$$

$$\|w_T\|^2 \le \|w_{T-1}\|^2 + \max_n \|y_n x_n\|^2 \le \cdots \le \|w_0\|^2 + T \max_n \|y_n x_n\|^2 = T \max_n \|x_n\|^2 \le T R^2 \quad (B)$$

where $R^2 = \max_n \|x_n\|^2$ (and $\|y_n x_n\|^2 = \|x_n\|^2$ since $y_n = \pm 1$).
Note that $w_0 = 0$ in this derivation; substituting (A) and (B) then gives answer ②.
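Spelling out that substitution (a sketch, with $\rho \triangleq \min_n \frac{y_n w_f^T x_n}{\|w_f\|}$ introduced here for brevity, as in the course quiz):

$$\frac{w_f^T w_T}{\|w_f\|\,\|w_T\|} \;\ge\; \frac{T \rho \|w_f\|}{\|w_f\| \cdot \sqrt{T}\, R} \;=\; \sqrt{T}\cdot\frac{\rho}{R} \;\le\; 1 \quad\Longrightarrow\quad T \;\le\; \frac{R^2}{\rho^2}$$

The numerator uses (A) with $\min_n y_n w_f^T x_n = \rho \|w_f\|$, and the denominator uses (B), $\|w_T\| \le \sqrt{T}\,R$; the cosine bound $\le 1$ is exactly the statement above that forces T to be finite.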
This gives only an upper bound on T, and it cannot be computed exactly because $w_f$ is unknown.
Even if $w_0 \ne 0$, an upper bound can still be proven.
Properties of PLA
Learning with Noisy Data
Finding the weights with the fewest mistakes on noisy data is an NP-hard problem.
Pocket Algorithm
Modify the PLA algorithm (black lines) by keeping the best weights so far in a ‘pocket’.
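The pocket idea can be sketched like this. It is an illustrative greedy variant, not the course's reference code; the random-mistake selection and the `max_updates` budget are assumptions:

```python
import numpy as np

def pocket(X, y, max_updates=100, seed=0):
    """Pocket algorithm: run PLA-style updates, keep the best weights seen.

    Since finding the minimum-mistake weights exactly is NP-hard, we
    greedily keep in our 'pocket' the w with the fewest mistakes so far.
    X rows are augmented with x0 = 1; y holds +1/-1 labels.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    errors = lambda v: int(np.sum(np.sign(X @ v) != y))
    w_pocket, best = w.copy(), errors(w)
    for _ in range(max_updates):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:
            return w                      # w separates D perfectly: done
        n = rng.choice(wrong)             # pick a random mistake
        w = w + y[n] * X[n]               # PLA-style correction
        if errors(w) < best:              # better than the pocket? replace it
            w_pocket, best = w.copy(), errors(w)
    return w_pocket

# On separable data the pocket run still ends with zero mistakes.
X = np.array([[1., 2.], [1., -3.], [1., 1.], [1., -1.]])
y = np.array([1, -1, 1, -1])
w = pocket(X, y)
```

On noisy (non-separable) data, the plain PLA loop would never halt; the pocket version always returns the best weights found within its update budget.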