【Stanford Machine Learning Open Course】15. Week 6 Programming Assignment Solutions
2012-10-24 12:20
These are my study notes for Stanford's online machine learning course. Course page: https://class.coursera.org/ml-2012-002/lecture/index
Programming assignment page: https://class.coursera.org/ml-2012-002/assignment/index
This week's exercises cover:
1. Cost function and gradient for regularized linear regression
2. Learning curve
3. Feature expansion (poly features)
4. Validation curve
Implementation (the source can be downloaded here):
1. Cost function and gradient for regularized linear regression
```matlab
function [J, grad] = linearRegCostFunction(X, y, theta, lambda)
% Regularized linear regression cost and gradient.
m = length(y); % number of training examples

% Cost: mean squared error plus L2 penalty (theta(1), the bias, is not penalized)
J = 1/2/m*sum( (X*theta-y).^2 ) + lambda/2/m*sum( theta(2:end,1).^2 );

% Gradient: add lambda/m * theta for every term except the bias
grad = 1/m*((X*theta-y)'*X)' + lambda/m*[zeros(1); theta(2:end,1)];
grad = grad(:);
end
```
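As a quick cross-check of the same formulas, here is a NumPy sketch (the function name and the toy data are mine, not part of the assignment). With `theta = [1, 1]` the line fits the three points exactly, so both the cost and the gradient come out zero:

```python
import numpy as np

def linear_reg_cost(X, y, theta, lam):
    """Regularized linear-regression cost and gradient; theta[0] (bias) is not penalized."""
    m = len(y)
    err = X @ theta - y                          # prediction errors, shape (m,)
    reg = np.concatenate(([0.0], theta[1:]))     # zero out the bias term
    J = (err @ err) / (2 * m) + lam * (reg @ reg) / (2 * m)
    grad = (X.T @ err) / m + lam / m * reg
    return J, grad

# Tiny check: X already contains a bias column; y = x + 1 is fit exactly by theta = [1, 1]
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])
J, grad = linear_reg_cost(X, y, np.array([1.0, 1.0]), 0.0)
```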
2. Learning curve
```matlab
function [error_train, error_val] = ...
    learningCurve(X, y, Xval, yval, lambda)
m = size(X, 1);
error_train = zeros(m, 1);
error_val   = zeros(m, 1);

% Add the intercept column to both sets
X = [ones(m,1) X];
mval = size(Xval,1);
Xval = [ones(mval,1) Xval];

for i = 1 : m
    % Train on the first i examples only
    X1 = X(1:i,:);
    y1 = y(1:i,:);
    theta = trainLinearReg(X1, y1, lambda);
    % Report errors without the regularization term
    error_train(i) = 1/2/i*sum( (X1*theta-y1).^2 );
    error_val(i)   = 1/2/mval*sum( (Xval*theta-yval).^2 );
end
end
```
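The same loop sketched in NumPy, as an illustration rather than a drop-in replacement: the course's `trainLinearReg` helper uses an iterative optimizer, so a closed-form ridge solve stands in for it here (an assumption on my part). On noiseless linear data both curves drop to zero almost immediately:

```python
import numpy as np

def train_linear_reg(X, y, lam):
    # Closed-form ridge solution standing in for the course's trainLinearReg;
    # pinv copes with the rank-deficient X'X that occurs for very small i.
    L = np.eye(X.shape[1]); L[0, 0] = 0.0        # bias term is not regularized
    return np.linalg.pinv(X.T @ X + lam * L) @ (X.T @ y)

def learning_curve(X, y, Xval, yval, lam):
    m = X.shape[0]
    err_train, err_val = np.zeros(m), np.zeros(m)
    for i in range(1, m + 1):
        theta = train_linear_reg(X[:i], y[:i], lam)   # train on first i examples
        # Errors are reported without the regularization term
        err_train[i-1] = np.mean((X[:i] @ theta - y[:i]) ** 2) / 2
        err_val[i-1]   = np.mean((Xval @ theta - yval) ** 2) / 2
    return err_train, err_val

# Noiseless line y = 2x + 1 (bias column included): errors vanish once i >= 2
x = np.arange(1.0, 6.0)
X = np.column_stack([np.ones_like(x), x])
y = 2 * x + 1
err_train, err_val = learning_curve(X, y, X, y, 0.0)
```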
3. Feature expansion (poly features)
```matlab
function [X_poly] = polyFeatures(X, p)
% Map column vector X to its powers [X, X.^2, ..., X.^p]
X_poly = zeros(numel(X), p);
for i = 1:p
    X_poly(:,i) = X.^i;
end
end
```
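In NumPy the same expansion needs no loop at all; broadcasting an `(m, 1)` column against the exponent vector `(p,)` produces the `(m, p)` power matrix directly (the function name is mine):

```python
import numpy as np

def poly_features(x, p):
    """Expand x into the matrix [x, x^2, ..., x^p] via broadcasting."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)  # (m, 1) column
    return x ** np.arange(1, p + 1)                # (m, 1) ** (p,) -> (m, p)

X_poly = poly_features([1, 2, 3], 3)
# rows are [x, x^2, x^3] for x = 1, 2, 3
```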
4. Validation curve
```matlab
function [lambda_vec, error_train, error_val] = ...
    validationCurve(X, y, Xval, yval)
% Selected values of lambda (you should not change this)
lambda_vec = [0 0.001 0.003 0.01 0.03 0.1 0.3 1 3 10]';

error_train = zeros(length(lambda_vec), 1);
error_val   = zeros(length(lambda_vec), 1);

% Add the intercept column to both sets
m = size(X,1);
X = [ones(m,1) X];
mval = size(Xval,1);
Xval = [ones(mval,1) Xval];

for i = 1:length(lambda_vec)
    lambda = lambda_vec(i);
    theta = trainLinearReg(X, y, lambda);
    % Report errors without the regularization term
    error_train(i) = 1/2/m*sum( (X*theta-y).^2 );
    error_val(i)   = 1/2/mval*sum( (Xval*theta-yval).^2 );
end
end
```
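The lambda sweep can be sketched the same way in NumPy (again substituting a closed-form ridge fit for the course's `trainLinearReg` helper, which is my assumption, not the assignment's code). On noiseless linear data the validation error is smallest at lambda = 0, since any regularization only shrinks an already-perfect fit:

```python
import numpy as np

def validation_curve(X, y, Xval, yval, lambdas):
    L = np.eye(X.shape[1]); L[0, 0] = 0.0                      # bias not regularized
    err_train, err_val = [], []
    for lam in lambdas:
        theta = np.linalg.pinv(X.T @ X + lam * L) @ (X.T @ y)  # ridge fit at this lambda
        # Errors are reported without the regularization term
        err_train.append(np.mean((X @ theta - y) ** 2) / 2)
        err_val.append(np.mean((Xval @ theta - yval) ** 2) / 2)
    return np.array(err_train), np.array(err_val)

# Noiseless line y = 3x - 1 (bias column included): best lambda is 0
x = np.arange(1.0, 6.0)
X = np.column_stack([np.ones_like(x), x])
y = 3 * x - 1
lambdas = [0, 0.01, 0.1, 1, 10]
err_train, err_val = validation_curve(X, y, X, y, lambdas)
```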