Note that this list may not be updated as frequently as others; see my arXiv page and Google Scholar for the latest.
2021
Generative Locally Linear Embedding
2021.
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method. It has two main steps, which are linear reconstruction and linear embedding of points in the input space and embedding space, respectively. In this work, we propose two novel generative versions of LLE, named Generative LLE (GLLE), whose linear reconstruction steps are stochastic rather than deterministic. GLLE assumes that every data point is caused by its linear reconstruction weights as latent factors. The proposed GLLE algorithms can generate various LLE embeddings stochastically while all the generated embeddings relate to the original LLE embedding. We propose two versions for stochastic linear reconstruction, one using expectation maximization and another with direct sampling from a derived distribution by optimization. The proposed GLLE methods are closely related to and inspired by variational inference, factor analysis, and probabilistic principal component analysis. Our simulations show that the proposed GLLE methods work effectively in unfolding and generating submanifolds of data.
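For context, the deterministic LLE pipeline that GLLE randomizes can be sketched roughly as below. This is a minimal NumPy sketch of the classic two-step method only; the neighborhood size, regularization constant, and toy interface are illustrative assumptions, and GLLE's stochastic sampling of the reconstruction weights is not shown.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def lle(X, k=10, d=2, eps=1e-3):
    """Classic LLE: deterministic linear reconstruction + linear embedding."""
    n = X.shape[0]
    # Step 1: linear reconstruction -- express each point as a weighted
    # combination of its k nearest neighbors, with weights summing to one.
    dists = cdist(X, X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dists[i])[1:k + 1]        # skip the point itself
        Z = X[idx] - X[i]                          # centered neighbors
        G = Z @ Z.T                                # local Gram matrix
        G += eps * np.trace(G) * np.eye(k)         # regularize if near-singular
        w = np.linalg.solve(G, np.ones(k))
        W[i, idx] = w / w.sum()                    # enforce sum-to-one constraint
    # Step 2: linear embedding -- bottom eigenvectors of M = (I - W)^T (I - W),
    # discarding the constant eigenvector with eigenvalue ~0.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = eigh(M)
    return vecs[:, 1:d + 1]
```

In GLLE, as described in the abstract, the per-point weights computed in step 1 would be treated as latent factors and sampled from a distribution rather than solved for deterministically.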
Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey
arXiv preprint arXiv:2101.00734.
2021.
Laplacian-Based Dimensionality Reduction Including Spectral Clustering, Laplacian Eigenmap, Locality Preserving Projection, Graph Embedding, and Diffusion Map: Tutorial and Survey
arXiv preprint arXiv:2106.02154.
2021.
Reproducing Kernel Hilbert Space, Mercer's Theorem, Eigenfunctions, Nyström Method, and Use of Kernels in Machine Learning: Tutorial and Survey
arXiv preprint arXiv:2106.08443.
2021.
2020
Fisher Discriminant Triplet and Contrastive Losses for Training Siamese Networks
arXiv preprint arXiv:2004.04674.
2020.
Locally Linear Embedding and its Variants: Tutorial and Survey
arXiv preprint arXiv:2011.10925.
2020.
Multidimensional scaling, Sammon mapping, and Isomap: Tutorial and survey
arXiv preprint arXiv:2009.08136.
2020.
Roweisposes, Including Eigenposes, Supervised Eigenposes, and Fisherposes, for 3D Action Recognition
arXiv preprint arXiv:2006.15736.
2020.
Sampling algorithms, from survey sampling to Monte Carlo methods: Tutorial and literature review
arXiv preprint arXiv:2011.00901.
2020.
Stochastic neighbor embedding with Gaussian and Student-t distributions: Tutorial and survey
arXiv preprint arXiv:2009.10301.
2020.
2019
Feature selection and feature extraction in pattern analysis: A literature review
2019.
The theory behind overfitting, cross validation, regularization, bagging, and boosting: tutorial
2019.
Linear and Quadratic Discriminant Analysis: Tutorial
2019.
Unsupervised and supervised principal component analysis: Tutorial
2019.
Fisher and kernel fisher discriminant analysis: Tutorial
2019.
Quantized Fisher Discriminant Analysis
2019.
Distributed Voting in Beep Model
2019.
Hidden Markov Model: Tutorial
engrXiv preprint.
2019.
Eigenvalue and Generalized Eigenvalue Problems: Tutorial
2019.
Fitting A Mixture Distribution to Data: Tutorial
2019.
Addressing the Mystery of Population Decline of the Rose-Crested Blue Pipit in a Nature Preserve using Data Visualization
2019.
Roweis Discriminant Analysis: A Generalized Subspace Learning Method
October 2019.
We present a new method which generalizes subspace learning based on eigenvalue and generalized eigenvalue problems. This method, Roweis Discriminant Analysis (RDA), is named after Sam Roweis, to whom the field of subspace learning owes a great deal. RDA is an infinite family of algorithms in which Principal Component Analysis (PCA), Supervised PCA (SPCA), and Fisher Discriminant Analysis (FDA) are special cases. One of the extreme special cases, which we name Double Supervised Discriminant Analysis (DSDA), uses the labels twice; it is novel and has not appeared elsewhere. We propose a dual for RDA for some special cases. We also propose kernel RDA, generalizing kernel PCA, kernel SPCA, and kernel FDA, using both dual RDA and representation theory. Our theoretical analysis explains previously known facts such as why SPCA can use regression but FDA cannot, why PCA and SPCA have duals but FDA does not, why kernel PCA and kernel SPCA use the kernel trick but kernel FDA does not, and why PCA is the best linear method for reconstruction. Roweisfaces and kernel Roweisfaces are also proposed, generalizing eigenfaces, Fisherfaces, supervised eigenfaces, and their kernel variants. We also report experiments showing the effectiveness of RDA and kernel RDA on some benchmark datasets.
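The shared computational core of the methods RDA unifies is a (generalized) eigenvalue problem. The sketch below shows that core with the standard textbook scatter matrices for PCA and FDA; it is only an illustration under those assumptions, not RDA's exact formulation, which interpolates between such special cases as defined in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def top_generalized_eigvecs(A, B, d):
    """Solve A w = lambda B w and return the d leading eigenvectors."""
    vals, vecs = eigh(A, B)          # ascending eigenvalues for symmetric A, PD B
    return vecs[:, ::-1][:, :d]      # keep the eigenvectors of the largest eigenvalues

def fda_directions(X, y, d):
    """FDA as the generalized eigenproblem S_B w = lambda S_W w."""
    mean = X.mean(axis=0)
    S_W = np.zeros((X.shape[1], X.shape[1]))
    S_B = np.zeros_like(S_W)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
        S_B += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class scatter
    S_W += 1e-6 * np.eye(S_W.shape[0])                    # keep S_W positive definite
    return top_generalized_eigvecs(S_B, S_W, d)

def pca_directions(X, d):
    """PCA as the same eigenproblem with the second matrix set to the identity."""
    Xc = X - X.mean(axis=0)
    return top_generalized_eigvecs(Xc.T @ Xc, np.eye(X.shape[1]), d)
```

Setting the second matrix to the identity recovers an ordinary eigenvalue problem (PCA, SPCA), while a nontrivial second matrix gives a generalized one (FDA), which is the family the abstract describes RDA as spanning.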