SCIENCE CHINA Information Sciences, Volume 62, Issue 9: 192101(2019) https://doi.org/10.1007/s11432-018-9702-7

## Applying Ricci flow to high dimensional manifold learning

Yangyang LI$^{1,2,*}$
• Received: Mar 21, 2018
• Accepted: Sep 11, 2018
• Published: May 14, 2019

### Abstract

In machine learning, a high dimensional data set such as the digital image of a human face is often viewed as a point set distributed on a differentiable manifold. In many cases, the intrinsic dimension of this manifold is low, while the representation dimension of the data points is high. To ease data processing requirements, manifold learning (ML) techniques can be used to reduce a high dimensional manifold (HDM) to a low dimensional one while keeping essential geometric properties, such as relative distances between points, unchanged. Traditional ML algorithms often assume that the local neighborhood of any point on an HDM is roughly equal to the tangent space at that point. Under this assumption, neighborhoods with very different curvatures are treated identically when projected to a lower dimensional space, so in regions where the curvature varies strongly, traditional dimension reduction is ineffective at preserving neighborhood structure. To overcome this obstacle, we "operate" on the HDM with the Ricci flow before reducing its dimension. More precisely, with the Ricci flow we transform each local neighborhood of the HDM into a patch of constant curvature; the HDM as a whole is then transformed into a subset of a sphere with constant positive curvature. We compare the proposed algorithm with other traditional manifold learning algorithms. Experimental results show that the proposed method outperforms the other ML algorithms with a better neighborhood preserving rate.
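The core mechanism, evolving each patch toward constant curvature, can be illustrated qualitatively. The sketch below is not the paper's Ricci flow update (its Eqs. (11)–(13) act on the metric and are not reproduced in this excerpt); it only relaxes a hypothetical set of patch curvatures toward their common mean, mimicking how the normalized Ricci flow equalizes curvature while preserving its total:

```python
import numpy as np

# Hypothetical scalar curvatures of local patches on the manifold.
K = np.array([2.0, -0.5, 1.2, 0.3, 4.0])
K_bar = K.mean()  # the normalized flow preserves the average curvature

step = 0.1
for _ in range(200):
    # Linear relaxation toward the mean: dK/dt = (K_bar - K).
    # This is only a caricature of the curvature-equalizing
    # behavior of the normalized Ricci flow.
    K = K + step * (K_bar - K)

# All patches end up with (nearly) the same constant curvature.
print(np.allclose(K, K_bar, atol=1e-6))
```

Note that each relaxation step leaves the total curvature `K.sum()` unchanged, just as the normalized flow preserves average curvature.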

### Acknowledgment

This work was supported by National Key Research and Development Program of China (Grant No. 2016YFB1000902), National Natural Science Foundation of China (Grant Nos. 61472412, 61621003), Beijing Science and Technology Project, and Tsinghua-Tencent-AMSS-Joint Project.


• Figure 1

The intuitive process of the RF-ML algorithm. The sub-images from left to right show input data points distributed on the manifold $\mathcal{M}$, overlapping patches, overlapping patches flowing into discrete spherical patches under the Ricci flow, and alignment of the spherical patches to a subset of a sphere, respectively. At the bottom is the representation of the data points in a low dimensional Euclidean space.

• Figure 2

(Color online) The neighborhood preserving rate (NPR) of a two-dimensional ellipsoid embedded in 3-dimensional Euclidean space, computed under different neighborhood size parameters $K$ for five manifold learning algorithms.
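The neighborhood preserving rate compared here can be computed as the average overlap between each point's $K$ nearest neighbors before and after embedding. A minimal sketch, assuming this definition (the function name `npr` and the random test data are illustrative, not from the paper):

```python
import numpy as np

def npr(X, Z, K=10):
    """Average fraction of each point's K nearest neighbors in the
    high dimensional data X that remain among its K nearest
    neighbors in the low dimensional embedding Z."""
    def knn(A):
        D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)          # exclude the point itself
        return np.argsort(D, axis=1)[:, :K]  # indices of K nearest neighbors
    nx, nz = knn(X), knn(Z)
    overlap = [len(set(a) & set(b)) / K for a, b in zip(nx, nz)]
    return float(np.mean(overlap))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
print(npr(X, X[:, :2]))   # lossy projection onto two coordinates
print(npr(X, 2.0 * X))    # isometry up to scale preserves all neighbors: 1.0
```

A perfect embedding scores 1.0; the closer an algorithm's score is to 1.0 for a given $K$, the better it preserves local structure.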

• Figure 3

(Color online) The intuitive embedding of the punctured sphere database under different algorithms. (a) The original database; (b) Isomap; (c) LLE; (d) LEP; (e) LTSA; (f) RF-ML.

• Figure 4

(Color online) The intuitive embedding of Gaussian database under different algorithms. (a) The original database; (b) Isomap; (c) LLE; (d) LEP; (e) LTSA; (f) RF-ML.

• Figure 5

(Color online) The curvature distributions of the (a) ORL face database, (b) Yale face database, (c) YaleB face database, and (d) USPS database.

• Figure 6

(Color online) The classification accuracies of different manifold learning algorithms under different low dimensions $d$.

• Algorithm 1 Applying Ricci flow to manifold learning (RF-ML)

Input: Training data points $\{x_1,x_2,\ldots,x_N\}\in\mathbb{R}^D$, neighbor size parameter $K$.

1. for $i=1$ to $N$ do
2. Find the $K$-nearest neighbors of $x_i$;
3. Compute the tangent space $T_{x_i}\mathcal{M}$;
4. end for
5. repeat
6. Update the Ricci flow equations using Eqs. (11)–(13);
7. until convergence;
8. Align the deformed patches $\{Y_1,Y_2,\ldots,Y_N\}$ to a complete subset $P$ of a sphere;
9. Use traditional manifold learning algorithms to reduce the dimension of the spherical data points.

Output: Low dimensional representations $\{z_1,z_2,\ldots,z_N\}\in\mathbb{R}^d$.
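A high level skeleton of this pipeline might look as follows. This is a sketch under stated assumptions, not the paper's implementation: the flow step is a placeholder (Eqs. (11)–(13) are not reproduced in this excerpt and are stood in for by a simple neighborhood-smoothing update), the spherical alignment is replaced by projection onto a unit sphere, and plain PCA stands in for an arbitrary final manifold learning step. All function names are illustrative.

```python
import numpy as np

def rf_ml(X, K=10, d=2, n_flow_steps=50, step=0.1):
    """Sketch of RF-ML: build K-NN patches, deform them toward
    constant curvature, align to a sphere, then reduce dimension."""
    # Steps 1-4: K-nearest-neighbor patches for every point.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    neighbors = np.argsort(D, axis=1)[:, :K]

    # Steps 5-7: placeholder for the Ricci flow update; pulling each
    # point toward its patch centroid smooths the local geometry.
    Y = X.copy()
    for _ in range(n_flow_steps):
        centroids = Y[neighbors].mean(axis=1)
        Y = Y + step * (centroids - Y)

    # Step 8: alignment to a subset of a sphere, replaced here by
    # projecting the deformed points onto the unit sphere.
    Y = Y - Y.mean(axis=0)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)

    # Step 9: reduce dimension; PCA via SVD stands in for any
    # traditional manifold learning algorithm.
    Y = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return Y @ Vt[:d].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z = rf_ml(X, K=10, d=2)
print(Z.shape)  # (200, 2)
```

The key design point the sketch preserves is the ordering: the deformation toward constant curvature happens before, and independently of, the final dimension reduction.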

• Table 1   Number of neighborhoods in each dimension$^{\rm a)}$

| Databases | ORL Face | Yale Face | YaleB Face | Weizmann | Swiss Roll | Sphere |
|---|---|---|---|---|---|---|
| Org-dim | 1024* | 1024* | 1024* | 200* | 3* | 3* |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 0 | 0 | 0 | 90 | 1000 | 984 |
| 3 | 0 | 0 | 38 | 317 | – | 16 |
| 4 | 0 | 0 | 238 | 366 | – | – |
| 5 | 0 | 0 | 362 | 315 | – | – |
| 6 | 6 | 0 | 415 | 251 | – | – |
| 7 | 214 | 9 | 772 | 349 | – | – |
| 8 | 182 | 156 | 583 | 387 | – | – |
| 9 | – | – | 6 | – | – | – |
| Total | 400* | 165* | 2414* | 2075* | 1000* | 1000* |
| $K$ | 10 | 10 | 10 | 10 | 10 | 10 |
| Ratio | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 |

a)

• Table 2   Neighborhood preserving ratio. In this experiment, we fix the neighborhood-size parameter $K=10$$^{\rm a)}$

| Methods | PCA | Isomap | LLE | LEP | LTSA | TLE | RF-ML |
|---|---|---|---|---|---|---|---|
| Swiss Roll | $0.5137$ | $0.8594$ | $0.6187$ | $0.3981$ | $0.6121$ | $\textbf{0.8701}$ | $0.8594$ |
| Ellipsoid | $0.4399$ | $0.6914$ | $0.6205$ | $0.7506$ | $0.4390$ | $0.7246$ | $\textbf{0.8702}$ |
| Sphere | $0.4815$ | $0.6467$ | $0.5213$ | $0.7720$ | $0.5465$ | $0.6871$ | $\textbf{0.8684}$ |
| Gaussian | $0.9969$ | $0.9261$ | $0.9359$ | $0.6406$ | $\textbf{0.9970}$ | $0.9421$ | $0.9909$ |

a) The bold fonts represent the highest value of the neighborhood preserving ratio.

• Table 3   Neighborhood preserving ratio$^{\rm a)}$

| Method | PCA | LPP | LLE | LEP | LTSA | RF-ML |
|---|---|---|---|---|---|---|
| ORL DB | $52.50 \pm 1.2$ | $62.50 \pm 2.1$ | $61.87 \pm 1.6$ | $63.82 \pm 2.3$ | $60.31 \pm 1.6$ | $65.04 \pm 1.3$ |
| Yale DB | $35.73 \pm 1.4$ | $71.24 \pm 1.8$ | $69.05 \pm 2.2$ | $72.06 \pm 1.5$ | $70.43 \pm 1.4$ | $73.28 \pm 1.6$ |
| YaleB DB | $63.72 \pm 1.7$ | $67.26 \pm 1.2$ | $60.46 \pm 1.7$ | $72.62 \pm 1.8$ | $65.76 \pm 1.3$ | $73.95 \pm 2.1$ |
| USPS DB | $86.69 \pm 0.9$ | $91.61 \pm 1.3$ | $84.62 \pm 1.5$ | $92.53 \pm 1.1$ | $86.12 \pm 1.4$ | $93.01 \pm 1.7$ |

a)

Copyright 2020 Science China Press Co., Ltd. All rights reserved.