Knowledge distillation for spiking neural networks: aligning features and saliency
Spiking neural networks (SNNs) are renowned for their energy efficiency and bio-fidelity, but their widespread adoption is hindered by challenges in training, primarily due to the non-differentiability of spiking activations and limited representational capacity. Existing approaches, such as artific...
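The abstract frames knowledge distillation as a way to align a spiking student with a stronger teacher at both the feature and output levels. As a generic illustration only, not the paper's specific method, a combined feature-alignment and softened-response distillation loss can be sketched as below; all function names, the temperature `T`, and the mixing weight `alpha` are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_feat, teacher_feat,
                      student_logits, teacher_logits,
                      T=4.0, alpha=0.5):
    # Feature alignment: MSE between (e.g. time-averaged) student spike
    # features and the teacher's analog activations.
    feat_loss = np.mean((student_feat - teacher_feat) ** 2)
    # Response distillation: KL divergence between temperature-softened
    # teacher and student class distributions (Hinton-style KD).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # T**2 rescales the KL term so gradients stay comparable across T.
    return alpha * feat_loss + (1 - alpha) * (T ** 2) * kl
```

When student and teacher agree exactly, both terms vanish; in practice the feature term is computed at chosen intermediate layers, possibly after a projection to match dimensions.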
Main Authors: Yifan Hu, Guoqi Li, Lei Deng
Format: Article
Language: English
Published: IOP Publishing, 2025-01-01
Series: Neuromorphic Computing and Engineering
Online Access: https://doi.org/10.1088/2634-4386/ade821
Similar Items
- Spiking Neural Networks for Multimodal Neuroimaging: A Comprehensive Review of Current Trends and the NeuCube Brain-Inspired Architecture
  by: Omar Garcia-Palencia, et al.
  Published: (2025-06-01)
- Implementing Holographic Reduced Representations for Spiking Neural Networks
  by: Vidura Sumanasena, et al.
  Published: (2025-01-01)
- Improving Low-Resource Neural Machine Translation With Teacher-Free Knowledge Distillation
  by: Xinlu Zhang, et al.
  Published: (2020-01-01)
- SpiNeRF: direct-trained spiking neural networks for efficient neural radiance field rendering
  by: Xingting Yao, et al.
  Published: (2025-07-01)
- Neuromorphic touch for robotics—a review
  by: Tianyi Liu, et al.
  Published: (2025-01-01)