A Comprehensive Survey of Explainable Artificial Intelligence Techniques for Malicious Insider Threat Detection
Malicious insider threats remain a persistent and formidable challenge for organizations, primarily due to their covert nature and the severe impact they can have on critical systems and sensitive data. Traditional detection mechanisms often struggle to uncover such threats, underscoring the need fo...
| Main Authors | Khuloud Saeed Alketbi, Abid Mehmood |
|---|---|
| Format | Article |
| Language | English |
| Published | IEEE, 2025-01-01 |
| Series | IEEE Access |
| Online Access | https://ieeexplore.ieee.org/document/11075748/ |
Similar Items
- Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
  by: Daniele Pelosi, et al. Published: (2025-07-01)
- Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance
  by: Cagla Acun, et al. Published: (2025-07-01)
- Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
  by: Kazi Fatema, et al. Published: (2025-05-01)
- Deriving equivalent symbol-based decision models from feedforward neural networks
  by: Sebastian Seidel, et al. Published: (2025-07-01)
- Exploration and practice of human-machine trustworthy mechanism in XAI
  by: LUO Zhongyan, et al. Published: (2025-07-01)