As I try to understand the mathematical side of the different algorithms, things get messy and cloudy in my head. That's why I decided to write all the formulae in one place and keep updating them as and when I learn a new one.
Hypothesis Function
Linear Regression:
\hat{y} = h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n = \theta^T x
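As a quick sanity check (my own sketch, not part of the original notes), here is the hypothesis in NumPy, assuming the design matrix X already includes the bias feature x_0 = 1 as its first column:

```python
import numpy as np

def linear_hypothesis(theta, X):
    """Vectorized h_theta(x) = theta^T x for every row of X.

    X is assumed to be an (m, n+1) matrix whose first column is all
    ones (the x0 = 1 bias feature); theta is an (n+1,) vector.
    """
    return X @ theta

# Tiny usage example with made-up numbers
theta = np.array([1.0, 2.0, 3.0])        # theta0, theta1, theta2
X = np.array([[1.0, 0.5, -1.0]])         # one example: x0=1, x1=0.5, x2=-1
print(linear_hypothesis(theta, X))       # -> [-1.0]
```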
Logistic Regression:
h_\theta(x) = g(z), \text{ where } z = \theta^T x

g(z) = \frac{1}{1 + e^{-z}}
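A minimal sketch of the sigmoid and the logistic hypothesis (again my own illustration; X is assumed to include the bias column of ones):

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_hypothesis(theta, X):
    """h_theta(x) = g(theta^T x) for every row of X."""
    return sigmoid(X @ theta)
```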
Neural Network:
h_\theta(x) = a_1^{(3)} = g\left(\theta_{10}^{(2)} a_0^{(2)} + \theta_{11}^{(2)} a_1^{(2)} + \theta_{12}^{(2)} a_2^{(2)} + \theta_{13}^{(2)} a_3^{(2)}\right)
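To make the indices concrete, here is a hypothetical forward pass for the small network this formula describes (three inputs, three hidden units plus a bias in layer 2, one output unit in layer 3). The weight shapes and random values are my own assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(Theta, a_prev):
    """Compute the activations of the next layer.

    a_prev: activations of the previous layer, without the bias unit.
    Theta:  weight matrix of shape (units_next, units_prev + 1);
            column 0 multiplies the bias unit a_0 = 1.
    """
    a_prev = np.concatenate(([1.0], a_prev))   # prepend the bias unit a_0 = 1
    return sigmoid(Theta @ a_prev)

# Assumed layer sizes: 3 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(3, 4))   # maps layer 1 (3 units + bias) to layer 2
Theta2 = rng.normal(size=(1, 4))   # maps layer 2 (3 units + bias) to layer 3
x = np.array([0.2, -0.5, 1.0])

a2 = forward_layer(Theta1, x)      # hidden activations a^(2)
h = forward_layer(Theta2, a2)      # h_theta(x) = a_1^(3)
print(h)
```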
Cost Function
Linear Regression:
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)^2 = \frac{1}{2m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2
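A vectorized NumPy sketch of this cost (my own code; X again includes the bias column of ones):

```python
import numpy as np

def linear_cost(theta, X, y):
    """Mean squared error cost J(theta) = 1/(2m) * sum((h - y)^2)."""
    m = y.shape[0]
    errors = X @ theta - y
    return (errors @ errors) / (2 * m)
```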
Regularized Linear Regression:
J(\theta) = \frac{1}{2m} \left[ \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \lambda \sum_{j=1}^{n} \theta_j^2 \right]
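The regularized version only adds the penalty term. By convention the penalty starts at j = 1, i.e. the bias term theta_0 is not penalized, which the compact formula leaves implicit; the sketch below makes that assumption explicit:

```python
import numpy as np

def regularized_linear_cost(theta, X, y, lam):
    """J(theta) = 1/(2m) * [sum((h - y)^2) + lambda * sum(theta_j^2, j >= 1)]."""
    m = y.shape[0]
    errors = X @ theta - y
    penalty = lam * np.sum(theta[1:] ** 2)   # skip theta_0
    return ((errors @ errors) + penalty) / (2 * m)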
Logistic Regression:
J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(h_\theta(x^{(i)})\right) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]
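The same cross-entropy cost as a NumPy sketch; the small epsilon that clips the predictions away from exactly 0 or 1 is my addition to keep the log finite, not part of the formula:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, eps=1e-12):
    """Cross-entropy cost J(theta) for logistic regression."""
    m = y.shape[0]
    h = sigmoid(X @ theta)
    h = np.clip(h, eps, 1 - eps)             # avoid log(0)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
```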
Regularized Logistic Regression:
J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(h_\theta(x^{(i)})\right) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2
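Again the regularized variant just adds the penalty, with theta_0 excluded by convention (a sketch, same epsilon assumption as above):

```python
import numpy as np

def regularized_logistic_cost(theta, X, y, lam, eps=1e-12):
    """Logistic cost plus the L2 penalty lambda/(2m) * sum(theta_j^2, j >= 1)."""
    m = y.shape[0]
    h = np.clip(1.0 / (1.0 + np.exp(-(X @ theta))), eps, 1 - eps)
    cross_entropy = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    return cross_entropy + (lam / (2 * m)) * np.sum(theta[1:] ** 2)
```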
Neural Network:
J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ y_k^{(i)} \log\left(\left(h_\theta(x^{(i)})\right)_k\right) + \left(1 - y_k^{(i)}\right) \log\left(1 - \left(h_\theta(x^{(i)})\right)_k\right) \right] + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left(\theta_{j,i}^{(l)}\right)^2
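A sketch of the network cost, assuming the labels are one-hot encoded into a matrix Y of shape (m, K) and the predictions H come from a forward pass. The penalty skips each layer's bias column, matching the triple sum over the non-bias weights:

```python
import numpy as np

def nn_cost(H, Y, Thetas, lam, eps=1e-12):
    """Neural-network cost.

    H:      (m, K) matrix of predictions (h_theta(x^(i)))_k.
    Y:      (m, K) one-hot matrix of labels y_k^(i).
    Thetas: list of weight matrices, one per layer; column 0 is the bias.
    """
    m = Y.shape[0]
    H = np.clip(H, eps, 1 - eps)                            # avoid log(0)
    cross_entropy = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    penalty = sum(np.sum(T[:, 1:] ** 2) for T in Thetas)    # exclude bias weights
    return cross_entropy + (lam / (2 * m)) * penalty
```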
Gradient Descent
Linear / Logistic Regression:
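For reference, the standard update rule (repeated until convergence, updating every j simultaneously, with α the learning rate). The same expression covers both models; only h_θ(x) differs between linear and logistic regression:

\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}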
Regularized Linear/Logistic Regression:
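With regularization, θ_0 keeps the plain update and every other θ_j picks up the shrinkage term (again the standard form, with α the learning rate; only h_θ(x) changes between the two models):

\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_0^{(i)}

\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right], \quad j \ge 1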
I will keep on updating this post as and when I learn a new one.
Thank You!