A Python Implementation of Batch Gradient Descent
2015-08-02 11:43
Copyright notice: This is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Link to this article: https://blog.csdn.net/brian_gong/article/details/47205791
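The script below implements batch gradient descent for univariate linear regression: every update uses the gradient of the squared-error cost over the entire training set. With design matrix $X$ (shape $m \times 2$: a column of ones plus the feature), target vector $y$, and learning rate $\alpha$, each iteration performs

$$\theta \leftarrow \theta - \frac{\alpha}{m} X^{\top} (X\theta - y)$$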
```python
# -*- coding: utf-8 -*-
"""
Created on Sun Aug 02 09:51:35 2015
@author: brian
"""
import numpy as np

if __name__ == "__main__":
    # Load the training data: one float per line in each file.
    with open('ex2x.dat', 'r') as fx:
        x = [float(line) for line in fx]
    with open('ex2y.dat', 'r') as fy:
        y = [float(line) for line in fy]

    m = len(y)                      # number of training examples
    y = np.array(y).reshape(m, 1)   # targets as an (m, 1) column vector

    # Design matrix: a column of ones for the intercept, then the feature.
    x = np.array(x).reshape(m, 1)
    x = np.column_stack([np.ones([m, 1]), x])   # shape (m, 2)

    # Batch gradient descent
    theta = np.zeros([2, 1])
    MAX_ITR = 1500
    alpha = 0.07

    for i in range(MAX_ITR):
        # Gradient of the squared-error cost over the whole training set.
        grad = (1.0 / m) * np.dot(x.T, np.dot(x, theta) - y)
        theta = theta - alpha * grad

    print(theta)
```
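The script assumes the `ex2x.dat` / `ex2y.dat` files from the exercise are on hand. If they are not, here is a minimal, self-contained sketch that writes stand-in data in the same one-value-per-line format and tracks the cost $J(\theta) = \frac{1}{2m}\lVert X\theta - y\rVert^2$ every few hundred iterations to confirm convergence. The file names match the script above; the ground-truth intercept and slope (2.0, 0.5), the feature range, and the noise level are arbitrary assumptions for illustration.

```python
# Minimal sketch (not from the original post): create stand-in data files and
# run the same batch update while printing the cost to verify convergence.
import numpy as np

rng = np.random.default_rng(0)
m = 50

# Hypothetical ground truth: intercept 2.0, slope 0.5, small Gaussian noise.
feat = rng.uniform(0.0, 5.0, size=m)
target = 2.0 + 0.5 * feat + rng.normal(0.0, 0.1, size=m)

np.savetxt('ex2x.dat', feat)    # one value per line, as the script expects
np.savetxt('ex2y.dat', target)

# Cost-tracking variant of the training loop above.
X = np.column_stack([np.ones(m), feat])   # (m, 2) design matrix
y = target.reshape(m, 1)
theta = np.zeros([2, 1])
alpha = 0.07

for i in range(1500):
    residual = X @ theta - y              # (m, 1)
    if i % 300 == 0:
        cost = (residual.T @ residual).item() / (2 * m)
        print('iter %4d  J(theta) = %.6f' % (i, cost))
    theta -= (alpha / m) * (X.T @ residual)

print(theta)   # should land near [[2.0], [0.5]]
```

With the full-batch update, $J(\theta)$ should decrease monotonically; if it grows instead, the learning rate 0.07 is too large for the data's scale.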
"""
Created on Sun Aug 02 09:51:35 2015
@author: brian
"""
import numpy as np
if __name__ == "__main__":
x = []
y = []
fx = open('ex2x.dat','r')
for line in fx:
x.append(float(line))
fx.close()
fy = open('ex2y.dat','r')
for line in fy:
y.append(float(line))
fy.close()
m = len(y)
y = np.array(y)
y.shape = (50,1)
# Gradient descent
x = np.array([x])
x.shape = (m,1)
x = np.column_stack([np.ones([m,1]) ,x])
theta = np.zeros([2,1])
MAX_ITR = 1500
alpha = 0.07
i = 0
while i < MAX_ITR:
grad = (float(1)/m) * np.dot(x.T , ( np.dot(x , theta) - y))
theta = theta - alpha * grad
i = i+1
print theta