A 'Vector' has only one direction



If we were to talk about Linear Algebra, literally every concept in it, we might end up learning a ton of concepts barely used in Machine Learning. Let's narrow it down and try to gulp just as much Math as is needed to better understand the ML concepts, rather than focus on pure Math and be lost in the super big universe..... What am I even talking about? Let's just get started.

Vector : A geometric object with a magnitude and direction.

Suppose A is a vector in the 2-D plane,
A = (a, b); then the magnitude/length of A = √(a² + b²)


If we consider not just 2, but "N" dimensions, then the length of the vector A = (a₁, a₂, ....., aₙ) would be:

|A| = √(∑ᵢ₌₁ⁿ aᵢ²) = √(a₁² + a₂² + .... + aₙ²)

Note that the square root sits outside the sum: we sum the squares first, then take the root.
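To make that concrete, here is a minimal sketch in Python (assuming NumPy is available; the numbers are made up):

import numpy as np

A = np.array([3.0, 4.0])             # the vector (a, b)
magnitude = np.sqrt(np.sum(A ** 2))  # sum the squares first, then take the root
print(magnitude)                     # 5.0, same as np.linalg.norm(A)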
Since the vector A makes an angle "THETA" (θ) with the x-axis, we can use a little bit of trigonometry and define the following:

tanθ = b/a ⇒ θ = arctan(b/a) = tan⁻¹(b/a)
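In code, math.atan2(b, a) is the safer way to recover θ, since it handles a = 0 and picks the right quadrant; a quick sketch (values made up):

import math

a, b = 3.0, 4.0
theta = math.atan2(b, a)    # angle of the vector (a, b) with the x-axis
print(math.degrees(theta))  # ~53.13 degrees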

Another interesting thing to note in the Cartesian Plane is the distance between any two points, calculated using our favorite "Pythagoras Theorem":


Say Luffy's friend asked him the distance between their current location at a point (b₁, b₂) and his home located at (a₁, a₂). Now Luffy is a smart and creative guy, so he draws a Cartesian plane to flaunt his Math skills. Smart as he is, using the Pythagoras Theorem he says the distance is equal to the SQUARE ROOT of THE SUM of SQUARES of the ADJACENT and the OPPOSITE sides:

√((a₁ − b₁)² + (a₂ − b₂)²)
 
Similarly, if it's an N-Dimensional space, then the distance would be calculated using the "Super Pythagoras Theorem" as:
 
√(∑ᵢ₌₁ⁿ (aᵢ − bᵢ)²) = √((a₁ − b₁)² + .... + (aₙ − bₙ)²)
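A minimal sketch of that distance in Python (assuming NumPy; Luffy's coordinates here are invented):

import numpy as np

a = np.array([1.0, 2.0, 3.0])         # home
b = np.array([4.0, 6.0, 3.0])         # current location
dist = np.sqrt(np.sum((a - b) ** 2))  # squares first, then one root over the sum
print(dist)                           # 5.0, same as np.linalg.norm(a - b)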
 
What do you call a Vector with length/magnitude "1"? A UNIT vector.
 
 A "1 x N" matrix is called as a Row Vector and a "N x 1" matrix a Column Vector.


 Vector Addition: 


Two vectors P and Q can be added as follows:
 
P = (a₁, a₂, ....., aₙ), Q = (b₁, b₂, ....., bₙ)
P + Q = (a₁ + b₁, ...., aₙ + bₙ)
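In NumPy this is just the + operator (vectors made up):

import numpy as np

P = np.array([1.0, 2.0, 3.0])
Q = np.array([4.0, 5.0, 6.0])
print(P + Q)  # [5. 7. 9.] -- element-wise addition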
 
Think of a SCALAR as a constant, so if we multiply a scalar and a vector, the magnitude changes but not the direction (a positive scalar, that is; a negative one flips the direction). That makes sense, because each value in the vector is scaled up while the vector keeps pointing in the same direction as before.
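And scaling is just the * operator; note how the length triples while the direction stays put:

import numpy as np

P = np.array([1.0, 2.0, 3.0])
print(3 * P)                                      # [3. 6. 9.] -- every component scaled by 3
print(np.linalg.norm(3 * P) / np.linalg.norm(P))  # 3.0 -- magnitude scaled too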

Dot Product:

Now the sum of vectors isn't all that fancy, but MULTIPLICATION is. It's a frequently used operation in Machine Learning, making it worth understanding. It's got a fancy name (swaaaag) - The DOT PRODUCT.
P = (a₁, a₂), Q = (b₁, b₂)
P·Q = (a₁ * b₁) + (a₂ * b₂)
The corresponding Matrix representation is:

P·Q = PQᵀ

that is, the "1 x 2" row vector P multiplied by the "2 x 1" column vector Qᵀ, which gives a single number.
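A quick check of both forms in NumPy (vectors made up):

import numpy as np

P = np.array([1.0, 2.0])
Q = np.array([3.0, 4.0])
print(np.dot(P, Q))  # 11.0 = 1*3 + 2*4
# matrix form: row vector times column vector -> a 1x1 matrix
print(P.reshape(1, -1) @ Q.reshape(-1, 1))  # [[11.]]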
 
We have seen the Matrix definition of the dot product. Now let's see how it can be understood using Geometry. If θ is the angle between the two vectors, then:

P·Q = |P| |Q| cosθ

Hence if the vectors are PERPENDICULAR, that is θ = 90° (cos 90° = 0), then the DOT PRODUCT will be ZERO.
Note:
The Dot Product follows the Commutative Law: P·Q = Q·P
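That geometric form also lets us recover the angle between two vectors; a small sketch (vectors made up):

import numpy as np

P = np.array([1.0, 0.0])
Q = np.array([0.0, 2.0])
cos_theta = np.dot(P, Q) / (np.linalg.norm(P) * np.linalg.norm(Q))
print(np.degrees(np.arccos(cos_theta)))  # 90.0 -- perpendicular, so P.Q = 0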

Cross Product:
 
P = (a₁, a₂), Q = (b₁, b₂)
P × Q = (a₁ * b₂) − (a₂ * b₁)

(In 2-D this gives a single number, the signed area of the parallelogram the two vectors span; in 3-D the cross product is itself a vector.)
 
Note:
The Cross Product doesn't follow the Commutative Law; in fact P × Q = −(Q × P).
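A tiny sketch of the 2-D cross product, computed straight from the formula (vectors made up):

import numpy as np

P = np.array([1.0, 2.0])
Q = np.array([3.0, 4.0])
print(P[0] * Q[1] - P[1] * Q[0])  # -2.0 = 1*4 - 2*3
print(Q[0] * P[1] - Q[1] * P[0])  #  2.0 -- swapping the order flips the sign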
 
Vector Projection:

The projection of a vector "a" onto another vector "b" is the "shadow" "p" that "a" casts along "b". Its length is |a| cosθ (θ being the angle between them), and the projected vector is:

p = ((a·b) / |b|²) b
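A minimal NumPy sketch of that projection formula (vectors made up):

import numpy as np

a = np.array([2.0, 3.0])
b = np.array([4.0, 0.0])
p = (np.dot(a, b) / np.dot(b, b)) * b  # (a.b / |b|^2) * b
print(p)  # [2. 0.] -- the shadow of a along the direction of b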
 

 
