Table of Links
- Some recent trends in theoretical ML
2.1 Deep Learning via continuous-time controlled dynamical system
2.2 Probabilistic modeling and inference in DL
-
3.1 Kuramoto models from the geometric point of view
3.2 Hyperbolic geometry of Kuramoto ensembles
3.3 Kuramoto models with several globally coupled sub-ensembles
- Kuramoto models on higher-dimensional manifolds
4.1 Non-Abelian Kuramoto models on Lie groups
4.2 Kuramoto models on spheres
4.3 Kuramoto models on spheres with several globally coupled sub-ensembles
-
5.1 Statistical models over circles and tori
5.2 Statistical models over spheres
5.3 Statistical models over hyperbolic spaces
5.4 Statistical models over orthogonal groups, Grassmannians, homogeneous spaces
-
6.1 Training swarms on manifolds for supervised ML
6.2 Swarms on manifolds and directional statistics in RL
6.3 Swarms on manifolds and directional statistics for unsupervised ML
6.4 Statistical models for the latent space
6.5 Kuramoto models for learning (coupled) actions of Lie groups
6.6 Grassmannian shallow and deep learning
6.7 Ensembles of coupled oscillators in ML: Beyond Kuramoto models
- Examples
7.2 Linked robot’s arm (planar rotations)
7.3 Linked robot’s arm (spatial rotations)
7.4 Embedding multilayer complex networks (Learning coupled actions of Lorentz groups)
6.6 Grassmannian shallow and deep learning
Some data sets are naturally embedded into Grassmannian manifolds [131]. This fact has motivated research efforts in optimization and learning over Grassmannian manifolds. Applications studied so far are mainly in image classification and recognition.
We refer to [132] for some applications of shallow and deep Grassmannian learning. Shallow learning problems are typically stated as optimization problems over Grassmannian manifolds; these include classification, spectral clustering of high-dimensional data, and low-rank matrix completion. The DL problems solved by Grassmannian methods include transfer learning (transferring knowledge from one domain to another) and feature extraction. Applications are in image-set/video-based recognition and classification, MIMO communications, and recommender systems [132].
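To make the notion of optimization over Grassmannians concrete, the following sketch shows the standard way a point on Gr(n, k) is represented in practice (an orthonormal basis of a k-dimensional subspace of R^n) and how the geodesic distance between two such points is computed from principal angles. This is generic textbook material, not a reconstruction of the specific methods in [131, 132]:

```python
import numpy as np

def grassmann_distance(U, V):
    """Geodesic distance between the subspaces spanned by the
    orthonormal columns of U and V (points on Gr(n, k)).
    The principal angles are recovered from the SVD of U^T V."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    # Clip to guard against round-off pushing singular values above 1.
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)

# Two 2-dimensional subspaces of R^4, orthonormalized via QR.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))
V, _ = np.linalg.qr(rng.standard_normal((4, 2)))
print(grassmann_distance(U, U))  # ≈ 0 (same subspace)
print(grassmann_distance(U, V))
```

Distances of this kind are the basic ingredient of Grassmannian classification and spectral clustering: a kernel or affinity matrix is built from pairwise subspace distances instead of Euclidean ones.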
The paper [133] proposes a deep NN architecture that operates on Grassmannian input data. The weights of such a NN are matrices, and training is implemented by backpropagation over matrix groups.
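A minimal sketch of what "backpropagation over matrix groups" involves, under generic assumptions (not the specific update of [133]): the Euclidean gradient of a matrix-valued weight with orthonormal columns is projected onto the tangent space of the Stiefel manifold, and the result is retracted back to the manifold, here with a QR decomposition:

```python
import numpy as np

def stiefel_sgd_step(W, G, lr=0.1):
    """One illustrative Riemannian gradient step for a weight W with
    orthonormal columns (a point on the Stiefel manifold).
    G is the ordinary (Euclidean) backprop gradient at W."""
    # Project G onto the tangent space at W.
    sym = (W.T @ G + G.T @ W) / 2.0
    G_tan = G - W @ sym
    # Retract the updated matrix back onto the manifold via QR.
    Q, R = np.linalg.qr(W - lr * G_tan)
    # Fix column signs so the retraction depends continuously on W.
    Q = Q * np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(1)
W, _ = np.linalg.qr(rng.standard_normal((5, 2)))
G = rng.standard_normal((5, 2))   # stand-in for a backprop gradient
W_new = stiefel_sgd_step(W, G)
# Columns remain orthonormal after the update:
print(np.allclose(W_new.T @ W_new, np.eye(2)))  # True
```

The design choice here (tangent projection followed by QR retraction) is one common scheme; exponential-map and Cayley retractions are standard alternatives.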
Systems of Riccati ODEs on Grassmannians [82, 7] (see Section 4.3) can be used for ML on these manifolds. Such an approach was reported in [88], with applications to some problems of shallow Grassmannian learning.
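The following sketch illustrates the classical fact behind such Riccati systems: a linear flow Y' = AY on full-rank n×k frames induces, in the graph chart W = Y₂Y₁⁻¹, the matrix Riccati ODE W' = A₂₁ + A₂₂W − WA₁₁ − WA₁₂W on Gr(n, k). This is a generic illustration of Riccati dynamics on Grassmannians, not a reconstruction of the specific systems in [82, 7, 88]. Both flows are integrated with a crude Euler scheme and compared:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2
A = 0.3 * rng.standard_normal((n, n))
A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]

Y = np.vstack([np.eye(k), np.zeros((n - k, k))])  # initial frame
W = np.zeros((n - k, k))                          # its chart coordinate

dt, steps = 1e-4, 10_000
for _ in range(steps):
    Y = Y + dt * (A @ Y)                                  # linear flow
    W = W + dt * (A21 + A22 @ W - W @ A11 - W @ A12 @ W)  # Riccati flow

# Both evolutions describe the same subspace (up to Euler error):
W_from_Y = Y[k:] @ np.linalg.inv(Y[:k])
print(np.max(np.abs(W - W_from_Y)))  # small discretization error
```

The practical appeal is that the Riccati flow evolves intrinsic coordinates on the Grassmannian directly, without re-orthonormalizing a frame at every step.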
Author:
(1) Vladimir Jacimovic, Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put bb., 81000 Podgorica, Montenegro ([email protected]).
This paper is
