Coursera Week 1 - Linear Regression Cost Function & Gradient Descent   2016-09-28



1. Linear Regression

How do we choose the parameters $\theta_0$ and $\theta_1$ so that the hypothesis fits the training data well?

2. Cost Function

Choose $\theta_0, \theta_1$ so that $h_{\theta}(x)$ is close to $y$ for our training examples $(x, y)$.

| Item | Definition |
| --- | --- |
| Hypothesis | $h_{\theta}(x) = \theta_0 + \theta_1 x$ |
| Parameters | $\theta_0, \theta_1$ |
| Cost Function | $J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2$ |
| Goal | $\underset{\theta_0, \theta_1}{\text{minimize}}\ J(\theta_0, \theta_1)$ |
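As a quick sketch of how this cost is computed (Python/NumPy here; the toy data set is an illustrative assumption, not from the original notes):

```python
import numpy as np

def compute_cost(X, y, theta0, theta1):
    """J(theta0, theta1) = 1/(2m) * sum over i of (h(x^(i)) - y^(i))^2."""
    m = len(y)
    predictions = theta0 + theta1 * X      # h_theta(x) for every training example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy training set where y = x, so (theta0, theta1) = (0, 1) fits perfectly.
X = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(X, y, 0.0, 1.0))   # 0.0
print(compute_cost(X, y, 0.0, 0.5))   # ~0.583
```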

3. Simplified Form

Set $\theta_0 = 0$. The hypothesis simplifies to $h_{\theta}(x) = \theta_1 x$, and the cost function becomes a function of a single parameter, $J(\theta_1)$.
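For example, on the toy training set $\{(1,1), (2,2), (3,3)\}$ (an illustrative data set): $J(1) = 0$, because $h_{\theta}(x) = x$ passes through every point, while $J(0.5) = \frac{1}{2 \cdot 3} \left( 0.5^2 + 1^2 + 1.5^2 \right) \approx 0.58$.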


4. Cost Function Visualization


Think of $x$ and $y$ as vectors, fixed vectors, and then think of each vector as a single fixed number. In the end $J(\theta_1)$ is just a quadratic function of $\theta_1$; thinking of it abstractly this way may make it easier to understand.

  • contour plots (also called contour figures), which visualize $J(\theta_0, \theta_1)$ as level curves over both parameters; see the sketch below
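A minimal sketch of how such a contour plot can be drawn with NumPy and matplotlib (the grid ranges, level values, and toy data are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Same toy training set as above.
X = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(y)

# Evaluate J(theta0, theta1) on a grid of parameter values.
theta0_vals = np.linspace(-2.0, 2.0, 100)
theta1_vals = np.linspace(-1.0, 3.0, 100)
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
J = np.zeros_like(T0)
for i in range(T0.shape[0]):
    for j in range(T0.shape[1]):
        predictions = T0[i, j] + T1[i, j] * X
        J[i, j] = np.sum((predictions - y) ** 2) / (2 * m)

# Each level curve connects parameter pairs with equal cost; the minimum
# sits at the center of the innermost ellipse, (0, 1) for this data.
plt.contour(T0, T1, J, levels=np.logspace(-2, 2, 20))
plt.xlabel(r'$\theta_0$')
plt.ylabel(r'$\theta_1$')
plt.show()
```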


5. Gradient Descent Goal

Start with some initial $\theta_0, \theta_1$, and keep changing them to reduce $J(\theta_0, \theta_1)$, until we hopefully end up at a minimum.

6. Gradient Descent Visualization

Depending on where the parameters start, gradient descent can end up in different local optima (local optimization). A convex function is bowl-shaped and has no local optimum other than its single global optimum, so on a convex cost function gradient descent always reaches the global optimum (global optimization).

7. Gradient Descent Algorithm

Repeat until convergence:

$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$, simultaneously for $j = 0$ and $j = 1$

$ \alpha $ : learning rate, which controls how large a step each update takes. Both parameters must be updated simultaneously: compute both right-hand sides from the old values first, then assign.
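A minimal sketch of the update loop, using a hypothetical one-parameter cost $J(\theta) = (\theta - 3)^2$ purely for illustration (the cost, its derivative, and $\alpha$ are assumptions, not from the course):

```python
def dJ(theta):
    """Derivative of the hypothetical cost J(theta) = (theta - 3)**2."""
    return 2 * (theta - 3)

alpha = 0.1      # learning rate
theta = 0.0
for _ in range(50):
    theta = theta - alpha * dJ(theta)   # theta := theta - alpha * dJ/dtheta
print(theta)     # ~2.99996, approaching the minimum at theta = 3
```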

8. Gradient Descent with Only $\theta_1$

Gradient descent for one parameter, $\theta_1$:

$\theta_1 := \theta_1 - \alpha \frac{d}{d\theta_1} J(\theta_1)$

When the slope $\frac{d}{d\theta_1} J(\theta_1)$ is positive, the update decreases $\theta_1$; when it is negative, the update increases $\theta_1$; either way $\theta_1$ moves toward the minimum. If $\alpha$ is too small, gradient descent is slow; if $\alpha$ is too large, it can overshoot the minimum, fail to converge, or even diverge. A fixed $\alpha$ can still converge, because the derivative, and with it the step size, shrinks as $\theta_1$ approaches a minimum.
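To make the learning-rate behavior concrete, here is a sketch that runs the one-parameter update on the toy data for three values of $\alpha$ (the specific values are illustrative assumptions):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(y)

def dJ(theta1):
    # d/dtheta1 of 1/(2m) * sum((theta1*x - y)^2) = 1/m * sum((theta1*x - y) * x)
    return np.sum((theta1 * X - y) * X) / m

for alpha in (0.05, 0.2, 0.5):
    theta1 = 0.0
    for _ in range(20):
        theta1 -= alpha * dJ(theta1)
    print(alpha, theta1)
# alpha=0.05: ~0.995 (converging, but slowly)
# alpha=0.2 : ~1.0   (converges quickly)
# alpha=0.5 : far from 1 and growing in magnitude (overshoots and diverges)
```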

9. Linear Regression Model

Plugging the linear regression cost function into the update rule and computing the partial derivatives gives:

$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)$

$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x^{(i)}$

(updated simultaneously). For linear regression $J$ is convex, so gradient descent converges to the global minimum.
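The derivative step for $\theta_1$, written out (a standard chain-rule computation):

$$\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{\partial}{\partial \theta_1} \frac{1}{2m} \sum_{i=1}^m \left( \theta_0 + \theta_1 x^{(i)} - y^{(i)} \right)^2 = \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x^{(i)}$$

The $\theta_0$ case is the same except that the inner derivative is $1$ instead of $x^{(i)}$.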

9.1 Batch Gradient Descent

Batch: each step of gradient descent uses all $m$ training examples.

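A compact, self-contained sketch of batch gradient descent for this model (Python/NumPy; the learning rate, iteration count, and toy data are illustrative assumptions):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.1, iterations=1000):
    """Fit h(x) = theta0 + theta1 * x by batch gradient descent.
    Every step sums the error over ALL m training examples ("batch")."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        errors = (theta0 + theta1 * X) - y             # h(x^(i)) - y^(i)
        # Simultaneous update: compute both from the old parameters, then assign.
        temp0 = theta0 - alpha * np.sum(errors) / m
        temp1 = theta1 - alpha * np.sum(errors * X) / m
        theta0, theta1 = temp0, temp1
    return theta0, theta1

X = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(batch_gradient_descent(X, y))   # approaches (0.0, 1.0)
```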


