Modern applications of statistical learning often revolve around high-dimensional data sets, whether genomic data, images, text, high-resolution time series, or other modalities. This raises challenges not only in terms of computational resources, but also in terms of designing efficient algorithms with solid, established guarantees.

Manifold Learning encompasses a family of non-linear dimension reduction techniques that grew particularly popular over the past two decades. They are now commonly used as a step in Machine Learning or Deep Learning pipelines, but also across all branches of science as a powerful way to visualize and interpret high-dimensional datasets. The analysis of these algorithms relies on the assumption that the observed data are concentrated around thin, low-dimensional structures of the ambient space, called submanifolds.
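As a rough illustration of what such an algorithm does in practice, here is a minimal sketch (not taken from the text above) using scikit-learn's Isomap on a synthetic "S-curve", a 2-dimensional submanifold of R^3; the choice of algorithm, dataset, and parameters is purely illustrative.

```python
# Illustrative sketch: embed a synthetic 2D submanifold of R^3 into the plane
# with Isomap, one of the classical Manifold Learning algorithms.
import matplotlib.pyplot as plt
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Sample 2000 points lying (up to noise) on an S-shaped surface in R^3.
X, color = make_s_curve(n_samples=2000, noise=0.05, random_state=0)

# Isomap approximates geodesic distances on a k-nearest-neighbor graph,
# then applies classical multidimensional scaling to those distances.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Points that were close along the surface stay close in the 2D embedding.
plt.scatter(embedding[:, 0], embedding[:, 1], c=color, s=5)
plt.title("Isomap embedding of the S-curve")
plt.show()
```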

Here's a short overview of the topic, here's scikit-learn's presentation page for its Manifold Learning algorithms, with figures and real-data applications, and here's the Allen Brain Cell Atlas, a 2-dimensional UMAP representation of 4.3 million single-cell transcriptomes from the adult mouse brain, labeled by source brain region.