


This intermediate-level course introduces the mathematical foundations to derive Principal Component Analysis (PCA), a fundamental dimensionality reduction technique. We'll cover some basic statistics of data sets, such as mean values and variances, we'll compute distances and angles between vectors using inner products, and we'll derive orthogonal projections of data onto lower-dimensional subspaces. Using all these tools, we'll then derive PCA as a method that minimizes the average squared reconstruction error between data points and their reconstruction.

At the end of this course, you'll be familiar with important mathematical concepts and you can implement PCA all by yourself. If you're struggling, you'll find a set of Jupyter notebooks that will allow you to explore properties of the techniques and walk you through what you need to do to get on track. If you are already an expert, this course may refresh some of your knowledge.

The lectures, examples and exercises require:
1. Some ability of abstract thinking
2. Good background in linear algebra (e.g., matrix and vector algebra, linear independence, basis)
3. Basic background in multivariate calculus (e.g., partial derivatives, basic optimization)
4. Basic knowledge in Python programming and NumPy

Disclaimer: This course is substantially more abstract and requires more programming than the other two courses of the specialization. However, this type of abstract thinking, algebraic manipulation and programming is necessary if you want to understand and develop machine learning algorithms.

In the last video, we derived orthogonal projections of vectors onto m-dimensional subspaces. In this video, we'll run through a simple example. We're going to define x to be the three-dimensional vector [2, 1, 1], which is this point over here, and we define two basis vectors for our two-dimensional subspace: b1 = [1, 2, 0] and b2 = [1, 1, 0]. That means U, the subspace spanned by b1 and b2, is effectively this plane and its extension. The orthogonal projection was given as pi_U(x) = B*lambda, where we now define B to be b1 and b2 concatenated as the columns of a 3-by-2 matrix, and lambda = (B^T B)^(-1) B^T x.
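To make this formula concrete, here is a minimal NumPy sketch, assuming the columns of B are linearly independent; the function name project_onto_subspace and the use of np.linalg.solve instead of an explicit matrix inverse are my own choices rather than anything prescribed in the lecture.

    import numpy as np

    def project_onto_subspace(B, x):
        """Orthogonally project x onto the subspace spanned by the columns of B.

        Computes pi_U(x) = B (B^T B)^{-1} B^T x by solving the normal equations
        (B^T B) lam = B^T x instead of forming the inverse explicitly.
        """
        lam = np.linalg.solve(B.T @ B, B.T @ x)  # coordinates of the projection w.r.t. the columns of B
        return B @ lam, lam                      # projected vector and its coordinate representation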
For our example, B^T x is the vector [4, 3], and B^T B is the 2-by-2 matrix [[5, 3], [3, 2]]. Now we solve for lambda = (B^T B)^(-1) B^T x, which means we find lambda such that (B^T B) lambda = B^T x. Using Gaussian elimination (worked out below), we arrive at lambda = [-1, 3].
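For completeness, the elimination can be written out on the augmented matrix; the row operations shown are my own working, not steps given in the lecture.

$$
\left(\begin{array}{cc|c} 5 & 3 & 4 \\ 3 & 2 & 3 \end{array}\right)
\;\xrightarrow{R_2 \,\to\, 5R_2 - 3R_1}\;
\left(\begin{array}{cc|c} 5 & 3 & 4 \\ 0 & 1 & 3 \end{array}\right)
\;\xrightarrow{R_1 \,\to\, R_1 - 3R_2}\;
\left(\begin{array}{cc|c} 5 & 0 & -5 \\ 0 & 1 & 3 \end{array}\right),
$$

so lambda_2 = 3 and lambda_1 = -5/5 = -1.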

This implies that our projection of x onto the space spanned by the two b vectors is -1*b1 + 3*b2, which is [2, 1, 0]. In our diagram over here, this would correspond to this vector here. The result makes sense because our projected point has zero as its third component, and our subspace requires that the third component is always zero. Our projected vector is still a three-dimensional vector, but we can represent it using two coordinates if we use the basis defined by b1 and b2. Therefore, lambda = [-1, 3] is the compact representation of the projection of x onto this lower-dimensional subspace.
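As a quick check of the numbers above (my own verification rather than part of the lecture), the following NumPy snippet reproduces B^T B, B^T x, lambda and the projected vector:

    import numpy as np

    b1 = np.array([1.0, 2.0, 0.0])
    b2 = np.array([1.0, 1.0, 0.0])
    B = np.column_stack([b1, b2])      # 3-by-2 matrix whose columns span the subspace U
    x = np.array([2.0, 1.0, 1.0])

    BtB = B.T @ B                      # [[5., 3.], [3., 2.]]
    Btx = B.T @ x                      # [4., 3.]
    lam = np.linalg.solve(BtB, Btx)    # [-1., 3.]  -- the compact representation
    pi_U_x = B @ lam                   # [2., 1., 0.]
    print(BtB, Btx, lam, pi_U_x, sep="\n")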
