Evaluating Adversarial Robustness of No-Reference Image and Video Quality Assessment Models with Frequency-Masked Gradient Orthogonalization Adversarial Attack

Neural-network-based models have made considerable progress in many computer vision areas over recent years. However, numerous studies have exposed their vulnerability to malicious manipulation of input data, that is, to adversarial attacks. Although many recent works have thoroughly examined the adversarial...


Bibliographic Details
Main Authors: Khaled Abud, Sergey Lavrushkin, Dmitry Vatolin
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Big Data and Cognitive Computing
Online Access: https://www.mdpi.com/2504-2289/9/7/166