This project develops a numerically stable and scalable hyperbolic deep learning framework based on Proper Velocity (PV) space and extends it to large-scale transformer architectures for scientific applications. While hyperbolic neural networks provide a natural inductive bias for hierarchical and structured data, existing approaches based on the Poincaré and Lorentz models suffer from numerical instabilities when scaled to large models. The project addresses this limitation by establishing a complete Riemannian operator toolkit in PV space and deriving fundamental neural network layers, including linear, convolutional, normalization, and classification layers. Building on this stable infrastructure, the team constructs intrinsic hyperbolic transformers, enabling large-scale hyperbolic models. The framework will be validated on electroencephalography (EEG) decoding tasks, where hierarchical and structured dynamics make non-Euclidean representations particularly suitable.
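To illustrate the kind of operations such a toolkit must provide, here is a minimal sketch of a geodesic distance in PV coordinates, assuming the standard identification of PV space with the spatial part of the Lorentz hyperboloid (the function names are hypothetical and not taken from the project's codebase). A PV point u in R^n lifts to the hyperboloid as x = (sqrt(1 + |u|^2), u), and the distance follows from the Lorentzian inner product without ever leaving unconstrained Euclidean coordinates, which is one source of the model's numerical robustness.

```python
import numpy as np

def pv_to_lorentz(u):
    # Lift a PV-space point u in R^n to the Lorentz hyperboloid:
    # x = (sqrt(1 + |u|^2), u), which satisfies -x0^2 + |u|^2 = -1.
    u = np.asarray(u, dtype=float)
    x0 = np.sqrt(1.0 + np.dot(u, u))
    return np.concatenate(([x0], u))

def pv_distance(u, v):
    # Geodesic distance between two PV points, expressed directly in
    # PV coordinates via the Lorentz inner product:
    #   d(u, v) = arccosh( sqrt(1+|u|^2) * sqrt(1+|v|^2) - <u, v> ).
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    inner = np.sqrt(1.0 + u @ u) * np.sqrt(1.0 + v @ v) - u @ v
    # Clamp to the valid domain of arccosh to guard against round-off.
    return np.arccosh(np.maximum(inner, 1.0))
```

As a sanity check, a point at PV radius sinh(t) lies at geodesic distance t from the origin, e.g. `pv_distance([np.sinh(1.0), 0.0], [0.0, 0.0])` evaluates to 1.0.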
Awarded Resources: 90,000 node hours
System Partition: Leonardo BOOSTER
Allocation Period: July 2026 – January 2027