The Kernel PCA

Kernel PCA is an extension of the PCA algorithm. In particular, we want to (i) transform our existing dataset into a higher-dimensional space and then (ii) perform PCA on the data in that space.

In this blog post, I will give a very quick, non-rigorous overview of Kernel PCA and demonstrate some connections to other forms of dimension-reduction.

Why would this be useful? Consider this dataset (stolen from the scikit-learn documentation):

As we can see, the original PCA projection is fairly useless. Applying a kernel produces a much better projection.
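For the curious, here is a rough sketch (not the code behind the original figure) that sets up a similar experiment with scikit-learn's `make_circles` dataset and `KernelPCA`; the dataset parameters and `gamma` value are illustrative choices.

```python
# Sketch: plain PCA vs. RBF-kernel PCA on two concentric circles.
# Dataset parameters and gamma are illustrative, not the post's originals.
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

X_pca = PCA(n_components=2).fit_transform(X)    # a linear projection
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# In the kernel PCA projection the two circles become linearly separable;
# the plain PCA projection is essentially just a rotation of the input.
```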

As with kernels in several other problems, the trick is to avoid explicitly transforming the data points and to leverage dot products instead.

The technique is as follows:

$ \Phi $ is a mapping from our existing point-space to a higher-dimensional space $ \mathcal{F} $.

After a certain amount of linear algebra, PCA in the space $ \mathcal{F} $ can be expressed as an eigendecomposition of the kernel matrix.

So, the algorithm is expressible as follows:

  • Compute the kernel matrix $ K $ where $ K_{ij} = \Phi(x_i) \cdot \Phi(x_j) $.

  • This matrix can be constructed efficiently since the kernel function gives us these dot products directly from the original points, without ever computing $ \Phi(x) $.

  • The SVD of $ K $ gives us $ USV^{T} $; $ U $ and $ S $ can be used to construct a reduced-dimension dataset (see the sketch after this list).
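A minimal numpy sketch of these steps, assuming an RBF kernel and including the centering of $ K $ that the summary above glosses over:

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # K_ij = exp(-gamma * ||x_i - x_j||^2) = Phi(x_i) . Phi(x_j)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    K = rbf_kernel_matrix(X, gamma)
    # Center K: equivalent to centering the data in the feature space F.
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # For a symmetric PSD matrix, the SVD coincides with the eigendecomposition.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    top = np.argsort(eigvals)[::-1][:n_components]
    U, S = eigvecs[:, top], eigvals[top]
    # Coordinates of the points in the reduced space: U scaled by sqrt(S).
    return U * np.sqrt(np.maximum(S, 0))
```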

Now that the intro is out of the way, I wanted to demonstrate some simple connections between algorithms I’ve covered recently:

MDS with Euclidean Distances

In previous blog posts, we saw that MDS and PCA are equivalent. A simple argument shows that MDS and Kernel PCA are the same thing:

  • The proximity matrix in the MDS algorithm is built from distances. We can express the squared distance between vectors $ x $ and $ y $ as:

$$ d^2(x,y) = x \cdot x + y \cdot y - 2x \cdot y $$

  • Thus, distances can be expressed in terms of dot products, i.e., a kernel; a small sketch of this conversion follows below. The upshot of this is that the MDS algorithm itself is an instance of Kernel PCA.
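Assuming the proximity matrix holds squared Euclidean distances, the conversion is the standard double-centering step:

```python
import numpy as np

def gram_from_squared_distances(D2):
    # Double centering: B = -0.5 * J @ D2 @ J with J = I - (1/n) * 11^T.
    # B_ij = (x_i - mean) . (x_j - mean), i.e. a kernel (gram) matrix,
    # so running (kernel) PCA on B is exactly classical MDS.
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D2 @ J
```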

Isomap

The Isomap algorithm (covered in a previous post) replaces Euclidean distances with shortest-path distances in a nearest-neighbor graph. The entries in this proximity matrix are surrogates for distances, and thus the Isomap algorithm is an instance of Kernel PCA as well.
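A hedged sketch of this reading of Isomap (the neighborhood size is an illustrative choice, and the neighbor graph is assumed to be connected):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap(X, n_neighbors=10, n_components=2):
    # 1. Nearest-neighbor graph with Euclidean edge weights.
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    # 2. Shortest-path (geodesic) distances stand in for Euclidean distances.
    D = shortest_path(G, method="D", directed=False)
    # 3. The MDS / kernel PCA step: double-center the squared distances
    #    and keep the top eigenpairs.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return v[:, top] * np.sqrt(np.maximum(w[top], 0))
```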

It is amazing how several different approaches to dimension-reduction are variants of a single theme.


Multidimensional Scaling and PCA are the Same Thing

There is a very simple argument that shows that MDS and PCA achieve the same results.

This argument has two important components. The first shows that an eigendecomposition of a gram matrix can be used for dimension-reduction.

PCA leverages the singular value decomposition (SVD). Given a matrix $ X $, the SVD is $ X = USV^{T} $.

We obtain a reduced-dimension dataset as $ US_{t} $, where $ S_{t} $ is $ S $ with all but the top $ t $ singular values (rows and columns) dropped.

This is also conveniently obtained using an eigendecomposition of the covariance matrix.

Working with the gram matrix $ XX^{T} $ and expressing it in terms of $ U $, $ S $, and $ V $, we have $ XX^{T} = (USV^{T})(VSU^{T}) $.

Simple algebra tells us that this is equal to $ US^{2}U^{T} $. The spectral theorem tells us that an eigendecomposition of the gram matrix will return exactly this decomposition. $ U $ and $ S $ can be retrieved and a dataset with fewer dimensions can be obtained.
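A quick numerical check of this identity (a sketch with random data):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(6, 4)

U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Eigendecomposition of the gram matrix X X^T.
w, Q = np.linalg.eigh(X @ X.T)
top = np.argsort(w)[::-1][:len(S)]            # keep the non-zero eigenvalues
w, Q = w[top], Q[:, top]

print(np.allclose(w, S ** 2))                 # eigenvalues are squared singular values
print(np.allclose(np.abs(Q), np.abs(U)))      # eigenvectors match U up to sign
```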

The second part of the argument involves proving that a matrix of distances is indeed a gram matrix. This argument was discussed in a previous post.


Next Step

Last week, I dropped out of the PhD program at LTI, SCS, Carnegie Mellon University with a masters degree. The specifics of this decision will remain private. If you are going through issues in your PhD program, I will be happy to talk to you and give you advice (should you desire any). You can email me at: shriphanip@gmail.com. You can also call me at 425–623–2604.


Implementing Truncated Matrix Decompositions for Core.Matrix

Eigendecompositions and Singular Value Decompositions appear in a variety of settings in machine learning and data mining. The eigendecomposition looks like so:

$$ \mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} $$

$ \mathbf{Q} $ contains the eigenvectors of $ \mathbf{A} $ and $ \mathbf{\Lambda} $ is a diagonal matrix containing the eigenvalues.

The singular value decomposition looks like:

$$ \mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^* $$

$ \mathbf{U} $ contains the eigenvectors of the covariance matrix $ \mathbf{A}\mathbf{A^T} $. $ \mathbf{V} $ contains the eigenvectors of the gram matrix $ \mathbf{A^T}\mathbf{A} $.

The truncated variants of these decompositions allow us to compute only a few eigenvalues (and eigenvectors) or singular values (and singular vectors).

This is important since (i) the smaller eigenvalues are often discarded anyway, and (ii) you don't want to compute the entire decomposition and then retain only a few rows and columns of the computed matrices each time.
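As a rough illustration of what "truncated" buys you, here is a Python sketch using scipy's iterative solvers (not the core.matrix implementation discussed below):

```python
import numpy as np
from scipy.sparse.linalg import eigsh, svds

A = np.random.randn(1000, 200)

# Truncated SVD: only the 10 largest singular triplets are computed.
U, S, Vt = svds(A, k=10)
order = np.argsort(S)[::-1]                   # sort explicitly, largest first
U, S, Vt = U[:, order], S[order], Vt[order]

# Truncated eigendecomposition of the symmetric matrix A^T A:
# which="LM" asks for the 10 largest-magnitude eigenpairs only.
w, V = eigsh(A.T @ A, k=10, which="LM")
```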

For core.matrix, I implemented these truncated decompositions in Kublai. Details below.


The Isomap Algorithm

This is part of a series on a family of dimension-reduction algorithms called non-linear dimension reduction. The goal here is to reduce the dimensions of a dataset, i.e., to represent the data with fewer columns.

In previous posts, I discussed the MDS algorithm and presented some key ideas. In this post, I will describe how those ideas are leveraged in the Isomap algorithm. A clojure implementation based on core.matrix is also included.


Powerful Ideas in Manifold Learning

In a previous post, I described the MDS (multidimensional scaling) algorithm. This algorithm operates on a proximity matrix which is a matrix of distances between the points in a dataset. From this matrix, a configuration of points is retrieved in a lower dimension.

The MDS strategy is:

  • We have a matrix $ D $ of distances between points in the data. This matrix is symmetric.
  • We express distances as dot products (using a proof from Schoenberg). This means that $ D $ can be converted into a matrix of the form $ X^T X $. (Observe that $ X^T X $ is a matrix of dot products.)
  • Once we have $ X^T X $, dimension reduction is trivial. Running an eigendecomposition on this matrix produces centered coordinates, and the low-dimension embedding is recovered by discarding the smallest eigenvalues (and their eigenvectors); see the sketch after this list.
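A minimal numpy sketch of these three steps (classical MDS, assuming $ D $ holds Euclidean distances):

```python
import numpy as np

def classical_mds(D, n_components=2):
    # D: an n x n symmetric matrix of pairwise (Euclidean) distances.
    n = D.shape[0]
    # Express distances as dot products via double centering: B = -0.5 * J D^2 J.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of B yields centered coordinates.
    w, v = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]  # discard the smallest eigenpairs
    return v[:, top] * np.sqrt(np.maximum(w[top], 0))
```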

Thus, assuming that we work with Euclidean distances between points, we retrieve the embedding that PCA itself would produce: MDS with Euclidean distances is identical to PCA.

Then what exactly is the value of running MDS on a dataset?

First, PCA is not always the most powerful approach. For certain datasets, Euclidean distances do not capture the shape of the underlying manifold. Running the MDS steps on a different distance matrix (one that does not contain Euclidean distances) can lead to better results, a technique that the Isomap algorithm exploits.

Second, PCA requires a vector representation for the points. In several situations, the objects in the dataset do not live in a vector space (strings, for example). We can retrieve distances between objects (say, edit distance for strings) and then obtain a vector representation for the objects using MDS.
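As a toy illustration (the words are made up, and the MDS step repeats the sketch above so the snippet stands alone):

```python
import numpy as np

def edit_distance(a, b):
    # Levenshtein distance via the standard row-by-row dynamic program.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

words = ["kitten", "sitting", "mitten", "sitter", "bitten"]
n = len(words)
D = np.array([[edit_distance(a, b) for b in words] for a in words], dtype=float)

# Classical MDS on the edit-distance matrix. Edit distances are not Euclidean,
# so some eigenvalues can be negative; we keep only the largest ones.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)
top = np.argsort(w)[::-1][:2]
coords = v[:, top] * np.sqrt(np.maximum(w[top], 0))   # 2-D vectors for the strings
```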

In the next blog post, I will describe and implement the Isomap algorithm that leverages the ideas in the MDS strategy. Isomap constructs a distance matrix that attempts to do a better job at recovering the underlying manifold.



Low Dimension Embeddings for Visualization

Representation learning is a hot area in machine learning. In natural language processing (for example), learning long vectors for words has proven quite effective on several tasks. Often, these representations have several hundred dimensions. To perform a qualitative analysis of the learned representations, it helps to visualize them. Thus, we need a principled approach to drop from this high-dimensional space to a lower-dimensional one ($ \mathbb{R}^2 $, for instance).

In this blog post, I will discuss the Multidimensional scaling (MDS) algorithm - a manifold learning algorithm that recovers (or at least tries to recover) the underlying manifold that contains the data.

MDS aims to find a configuration in a lower-dimension space that preserves distances between points in the data. In this post, I will provide intuition for this algorithm and an implementation in Clojure (Incanter).


The Rothko

No. 14 at SF Moma

For over five centuries the Sistine Chapel ceiling has been among the greatest things man has produced. I would give two limbs for a magnum opus of its caliber.

In contrast, the first time I saw a Rothko in my early teens, I concluded that this was the outcome of giving a child with severe OCD a set of crayons.

Over the last few weeks, amidst a very tough and frustrating period (this is far too complex for this one post) in my life, I had a chance to reflect on one of Rothko’s signature pieces and study the underlying process through a MOMA video [1]. I felt a new sense of respect for Rothko’s works. A Rothko is quite literally a metaphor for life. Our visible exterior is the product of several layers that comprise our experiences.

Rothko had a famous quote:

The people who weep before my pictures are having the same religious experience I had when I painted them.

It took eight years for me to have this experience. I am better off for it.

[1] The Painting Techniques of Mark Rothko: No. 16 (Red, Brown, and Black)


