
Multiple Linear Regression: Explained, Coded & Special Cases

This article was first published by IBM Developer at developer.ibm.com and was authored by Casper Hansen. Here is the direct link.

Linear Regression is famously known for being a simple algorithm and a good baseline to compare more complex models against. In this article, we explore the algorithm, turn the math into code, and then run the code on a dataset to get predictions on new data.

Table of Contents

  1. What Is Linear Regression?
  2. Multiple Linear Regression
  3. Special Case 1: Simple Linear Regression
  4. Special Case 2: Polynomial Regression

What Is Linear Regression?

The linear regression model consists of a single equation that is linear in its variables (also called parameters or features), together with a coefficient estimation algorithm called least squares, which finds the coefficients that best fit the observed data.

Linear regression models are known to be simple and easy to implement, because no advanced mathematical knowledge is needed beyond a bit of linear algebra. For this reason, many people use a linear regression model as a baseline, to check whether a more complex model can outperform such a simple one.

Multiple Linear Regression

Multiple linear regression is a model that can capture a linear relationship between multiple variables/features – assuming that there is one. The general formula for multiple linear regression looks like the following; a short numeric sketch follows the list of terms below.

$$ y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_i x_i + \varepsilon $$
  • $\beta_0$ is known as the intercept
  • $\beta_1$ to $\beta_i$ are known as the coefficients
  • $x_1$ to $x_i$ are the features of our dataset
  • $\varepsilon$ is the residual (error) term
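To make the formula concrete, here is a minimal sketch that evaluates the equation for a single observation. The coefficient and feature values are made up purely for illustration; in practice the coefficients come from the fitted model and the features from the dataset.

```python
# Minimal sketch: evaluating y = beta_0 + beta_1*x_1 + beta_2*x_2 for one
# observation, with made-up (hypothetical) values purely for illustration.
beta_0 = 1.0           # intercept
betas = [0.5, -2.0]    # beta_1, beta_2
x = [3.0, 1.5]         # feature values x_1, x_2

# epsilon is the unobserved error term, so the prediction omits it
y_hat = beta_0 + sum(b * x_i for b, x_i in zip(betas, x))
print(y_hat)  # 1.0 + 0.5*3.0 + (-2.0)*1.5 = -0.5
```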

We can also represent the formula for linear regression in vector notation. When representing the formula in vector notation, we have the advantage of using some operations from linear algebra, which in turn makes it easier to code.

$$ \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad X = \begin{bmatrix} \mathbf{x}_1^{\mathsf{T}} \\ \mathbf{x}_2^{\mathsf{T}} \\ \vdots \\ \mathbf{x}_n^{\mathsf{T}} \end{bmatrix}, \quad \boldsymbol{\beta} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{bmatrix}, \quad \boldsymbol{\varepsilon} = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix} $$
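In this notation the model is written compactly as $\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, and the least squares coefficients can be obtained with a few lines of linear algebra. The sketch below is not the article's full implementation; it only illustrates the idea on a small made-up dataset, using NumPy's built-in least squares solver.

```python
import numpy as np

# Made-up dataset: 4 observations, 2 features (purely for illustration).
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.0],
              [4.0, 3.0]])
y = np.array([3.0, 2.5, 4.0, 7.0])

# Prepend a column of ones so the intercept beta_0 is estimated
# together with the other coefficients.
X_design = np.column_stack([np.ones(len(X)), X])

# Ordinary least squares: beta_hat minimizes ||y - X beta||^2.
# np.linalg.lstsq is used instead of explicitly inverting X^T X,
# which is more stable when features are nearly collinear.
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)

y_pred = X_design @ beta_hat   # fitted values y_hat = X beta_hat
print(beta_hat)
print(y_pred)
```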
[...]

Source - Continue Reading: https://mlfromscratch.com/linear-regression-from-scratch/

