Optimizing the Learnable RoPE Theta Parameter in Transformers
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11084811/
Summary: Rotary Position Embedding (RoPE) enhances Transformer models by encoding relative positions through a frequency parameter $\theta$, but conventional implementations fix $\theta$, constraining adaptability. We conduct the first systematic study of learnable RoPE $\theta$, introducing four optimization strategies—separate learning rates, layer-wise initialization, cosine annealing scheduling, and sigmoid-based constraints—to stabilize and refine positional learning. Our approach demonstrates modest but consistent benefits across multiple datasets including Tiny Shakespeare, WikiText-103, and IWSLT’14, achieving measurable gains in validation loss, perplexity, and BLEU scores relative to a fixed-$\theta$ baseline while maintaining high inference throughput and requiring minimal architectural modifications. Ablation experiments quantify each strategy’s contribution and offer practical integration guidelines. This adaptive position encoding framework provides a foundation for large-scale pretraining and diverse sequence modeling applications.
ISSN: 2169-3536
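
For readers wondering what a learnable RoPE $\theta$ looks like in practice, the sketch below is a minimal PyTorch illustration, not the authors' implementation (no code is linked in this record). It treats the frequency base $\theta$ as a single trainable scalar behind a sigmoid reparameterization, covering the sigmoid-based constraint and separate-learning-rate strategies named in the summary; the class name `LearnableThetaRoPE`, the bounds `theta_min`/`theta_max`, and the learning rates are assumptions for illustration. Layer-wise initialization would amount to instantiating one such module per layer with a different `theta_init`.

```python
# Minimal sketch of a learnable, range-constrained RoPE theta (assumed names;
# not the paper's released code).
import math
import torch
import torch.nn as nn


class LearnableThetaRoPE(nn.Module):
    """Rotary position embedding whose base frequency theta is trainable."""

    def __init__(self, head_dim: int, theta_init: float = 10000.0,
                 theta_min: float = 100.0, theta_max: float = 100000.0):
        super().__init__()
        assert head_dim % 2 == 0, "RoPE needs an even head dimension"
        self.head_dim = head_dim
        self.theta_min = theta_min
        self.theta_max = theta_max
        # Learn theta in an unconstrained space; a sigmoid maps it back into
        # [theta_min, theta_max] (the "sigmoid-based constraint" idea).
        frac = (theta_init - theta_min) / (theta_max - theta_min)
        self.theta_logit = nn.Parameter(torch.tensor(math.log(frac / (1.0 - frac))))

    def theta(self) -> torch.Tensor:
        return self.theta_min + (self.theta_max - self.theta_min) * torch.sigmoid(self.theta_logit)

    def forward(self, x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_heads, head_dim); positions: (seq_len,) integer positions.
        half = self.head_dim // 2
        exponents = torch.arange(half, device=x.device, dtype=x.dtype) / half
        inv_freq = self.theta() ** (-exponents)                       # (half,)
        angles = positions.to(x.dtype)[:, None] * inv_freq[None, :]   # (seq_len, half)
        cos = angles.cos()[None, :, None, :]                          # broadcast over batch/heads
        sin = angles.sin()[None, :, None, :]
        x1, x2 = x[..., :half], x[..., half:]
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


if __name__ == "__main__":
    rope = LearnableThetaRoPE(head_dim=64)
    x = torch.randn(2, 16, 8, 64)                 # (batch, seq_len, n_heads, head_dim)
    out = rope(x, torch.arange(16))
    print(out.shape, float(rope.theta()))         # torch.Size([2, 16, 8, 64]), ~10000.0

    # Separate learning rate for theta: give theta_logit its own optimizer
    # param group; a cosine annealing schedule (also named in the summary)
    # can then be attached to the optimizer as usual.
    backbone = nn.Linear(64, 64)                  # stand-in for the rest of the model
    optimizer = torch.optim.AdamW([
        {"params": backbone.parameters(), "lr": 3e-4},
        {"params": rope.parameters(), "lr": 1e-2},  # larger LR only for theta
    ])
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10_000)
```

The sigmoid reparameterization keeps $\theta$ strictly inside a chosen positive range no matter how large the gradient steps are, which is one straightforward way to keep a single shared scalar from drifting to extreme values during training.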