Andrew Ng Machine Learning Week 3 Review Answers: Logistic Regression
2017-09-25 10:50
2823 views
Question 1: Suppose that you have trained a logistic regression classifier, and it outputs on a new example x a prediction hθ(x) = 0.4. This means (check all that apply):
[Y] Our estimate for P(y=0|x;θ) is 0.6.
[Y] Our estimate for P(y=1|x;θ) is 0.4.
WRONG Our estimate for P(y=1|x;θ) is 0.6.
WRONG Our estimate for P(y=0|x;θ) is 0.4.
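The two correct choices follow from the probabilistic reading of the hypothesis: hθ(x) estimates P(y=1|x;θ), so P(y=0|x;θ) = 1 − hθ(x) = 1 − 0.4 = 0.6. A minimal Python sketch of this reading (the function names are my own, not from the course materials):

import numpy as np

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, x):
    # hθ(x) = g(θ^T x) estimates P(y=1 | x; θ)
    p_y1 = sigmoid(theta @ x)
    return p_y1, 1.0 - p_y1  # (P(y=1), P(y=0))

# Example: any (theta, x) with θ^T x = ln(0.4/0.6) gives hθ(x) = 0.4
theta = np.array([np.log(0.4 / 0.6)])
x = np.array([1.0])
p1, p0 = predict_proba(theta, x)
print(p1, p0)  # ≈ 0.4, 0.6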
Question 2: Which of the following are true? Check all that apply.
[Y] J(θ) will be a convex function, so gradient descent should converge to the global minimum.
[Y] Adding polynomial features (e.g., instead using hθ(x) = g(θ0 + θ1x1 + θ2x2 + θ3x1^2 + θ4x1x2 + θ5x2^2)) could increase how well we can fit the training data.
WRONG The positive and negative examples cannot be separated using a straight line. So, gradient descent will fail to converge. (The logistic regression cost is convex whether or not the data are linearly separable, so gradient descent still converges; it simply cannot reach zero training error with a linear boundary.)
WRONG Because the positive and negative examples cannot be separated using a straight line, linear regression will perform as well as logistic regression on this data.
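Why polynomial features help: the hypothesis stays linear in θ (so J(θ) remains convex), but the decision boundary g(θ^T φ(x)) = 0.5 can now be a curve. A small Python sketch of the quadratic feature map (illustrative only; the course's Octave exercises use a similar helper called mapFeature):

import numpy as np

def map_quadratic_features(x1, x2):
    # Map (x1, x2) to the degree-2 terms used in
    # hθ(x) = g(θ0 + θ1*x1 + θ2*x2 + θ3*x1^2 + θ4*x1*x2 + θ5*x2^2)
    return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

phi = map_quadratic_features(0.5, -1.0)
print(phi)  # [1.0, 0.5, -1.0, 0.25, -0.5, 1.0]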
Question 3: Which of the following is a correct gradient descent update for logistic regression with a learning rate of α? Check all that apply.
WRONG θj := θj − (α/m) ∑_{i=1}^m (hθ(x^(i)) − y^(i)) x^(i) (simultaneously update for all j). (The last factor should be the scalar xj^(i), not the vector x^(i).)
[Y] θj := θj − (α/m) ∑_{i=1}^m (1/(1 + e^(−θ^T x^(i))) − y^(i)) xj^(i) (simultaneously update for all j).
WRONG θ := θ − (α/m) ∑_{i=1}^m (θ^T x^(i) − y^(i)) x^(i). (This uses the linear regression hypothesis θ^T x rather than hθ(x) = g(θ^T x).)
[Y] θj := θj − (α/m) ∑_{i=1}^m (hθ(x^(i)) − y^(i)) xj^(i) (simultaneously update for all j). (Equivalent to the second option, since hθ(x) = 1/(1 + e^(−θ^T x)).)
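A runnable Python sketch of the correct update, vectorized over all j at once (my own illustration with made-up data, not the course's Octave starter code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    # X: (m, n) design matrix, y: (m,) labels in {0, 1}
    m = X.shape[0]
    h = sigmoid(X @ theta)        # hθ(x^(i)) for every example
    grad = (X.T @ (h - y)) / m    # (1/m) ∑_i (hθ(x^(i)) − y^(i)) xj^(i), all j
    return theta - alpha * grad   # simultaneous update for all j

# Tiny usage example
X = np.array([[1.0, 0.5], [1.0, -1.5], [1.0, 2.0]])  # first column is the intercept
y = np.array([0.0, 0.0, 1.0])
theta = np.zeros(2)
for _ in range(100):
    theta = gradient_step(theta, X, y, alpha=0.1)
print(theta)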
Question 4: Which of the following statements are true? Check all that apply.
[Y] The cost function J(θ) for logistic regression trained with m ≥ 1 examples is always greater than or equal to zero.
WRONG For logistic regression, sometimes gradient descent will converge to a local minimum (and fail to find the global minimum). This is the reason we prefer more advanced optimization algorithms such as fminunc (conjugate gradient/BFGS/L-BFGS/etc.). (J(θ) is convex, so gradient descent has no local minima to get stuck in; the advanced optimizers are preferred because they are often faster and need no hand-picked learning rate.)
WRONG Since we train one classifier when there are two classes, we train two classifiers when there are three classes (and we do one-vs-all classification). (One-vs-all trains one classifier per class, so three classes need three classifiers.)
[Y] The one-vs-all technique allows you to use logistic regression for problems in which each y^(i) comes from a fixed, discrete set of values.
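The first statement holds because J(θ) = −(1/m) ∑_{i=1}^m [ y^(i) log(hθ(x^(i))) + (1 − y^(i)) log(1 − hθ(x^(i))) ], and each term −log(p) with 0 < p < 1 is nonnegative. For the one-vs-all statements, here is a minimal Python sketch (the function names are mine; assume the labels y take values in a fixed discrete set):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_vs_all(X, y, classes, alpha=0.1, iters=1000):
    # One binary logistic regression classifier per class:
    # classifier k models P(y = k | x; θ_k).
    m = X.shape[0]
    thetas = {}
    for k in classes:
        yk = (y == k).astype(float)  # relabel: 1 for class k, 0 otherwise
        theta = np.zeros(X.shape[1])
        for _ in range(iters):
            h = sigmoid(X @ theta)
            theta -= alpha * (X.T @ (h - yk)) / m  # gradient step from Question 3
        thetas[k] = theta
    return thetas

def predict_one_vs_all(thetas, x):
    # Predict the class whose classifier is most confident
    return max(thetas, key=lambda k: sigmoid(thetas[k] @ x))

With three classes, the loop above trains three classifiers, one per class, which is exactly why the "two classifiers for three classes" statement is wrong.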
Question 5: the answer choices are four figures (decision-boundary plots, not recoverable here); the fourth figure is the correct answer. [Y]