Geometry Meets Attention: Interpretable Transformers via SVD Inspiration

Self-attention is a cornerstone of modern deep learning, yet its dense dot-product formulation offers limited interpretability and lacks explicit structural constraints. We propose SVD-inspired Attention (SVDA), a novel self-attention mechanism that introduces normalized query/key projections and a...
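The abstract is truncated here and the record does not specify the full SVDA mechanism, but it does name normalized query/key projections as one ingredient. The following is a minimal sketch, assuming L2-normalized query/key projections on top of standard scaled attention; the function name `normalized_qk_attention` and all dimensions are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def normalized_qk_attention(x, w_q, w_k, w_v):
    """Sketch of attention with L2-normalized query/key projections.

    Only illustrates the normalization idea named in the abstract;
    the remaining SVDA components are not specified in this record.
    """
    q = F.normalize(x @ w_q, dim=-1)  # unit-norm queries
    k = F.normalize(x @ w_k, dim=-1)  # unit-norm keys
    v = x @ w_v
    # With unit-norm rows, scores are cosine similarities in [-1, 1].
    scores = q @ k.transpose(-2, -1)
    return F.softmax(scores, dim=-1) @ v

# Usage: batch of 2 sequences, length 5, model dimension 8 (all hypothetical)
x = torch.randn(2, 5, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out = normalized_qk_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([2, 5, 8])
```

One consequence of normalizing queries and keys is that attention logits are bounded cosine similarities rather than unbounded dot products, which is a common route to more stable, more interpretable attention maps.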

Bibliographic Details
Main Authors: Vasileios Arampatzakis, George Pavlidis, Nikolaos Mitianoudis, Nikos Papamarkos
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access:https://ieeexplore.ieee.org/document/11072340/