Classification-Friendly Sparse Encoder and Classifier Learning

Sparse representation (SR) and dictionary learning (DL) have been widely used for feature encoding, aiming to extract latent, classification-friendly features from observed data. Existing methods use a sparsity penalty and a learned dictionary to enhance the discriminative capability of the sparse codes. However, training a dictionary for SR is time consuming, and the resulting discriminative capability is limited. Rather than learning a dictionary, we propose to employ the dictionary at hand, e.g., the training set, as a class-specific synthesis dictionary to pursue an ideal discriminative property of the SR of the training samples: each sample can be represented only by samples of its own class. In addition to the discriminative property, we also introduce a smoothing term that enforces the representation vectors to be uniform within each class. The discriminative property helps to separate data from different classes, while the smoothing term tends to group data from the same class and further strengthens the separation. The SRs are used as new features to train a sparse encoder and a classifier. Once the sparse encoder and the classifier are learnt, the test stage is simple and highly efficient: the label of a test sample is computed by multiplying the test sample by the sparse encoder and then by the classifier. We call our method Classification-Friendly Sparse Encoder and Classifier Learning (CF-SECL). Extensive experiments show that our method outperforms several state-of-the-art model-based methods.
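The test stage described in the abstract reduces to two matrix products. The sketch below is an illustration of that idea only, not the authors' code: the matrices `P` (sparse encoder) and `W` (classifier) are hypothetical stand-ins for the learnt parameters, initialized randomly here.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, c = 64, 32, 3                # feature dim, code dim, number of classes
P = rng.standard_normal((k, d))    # hypothetical learnt linear sparse encoder
W = rng.standard_normal((c, k))    # hypothetical learnt linear classifier

def predict(x, P, W):
    """Label of a test sample: multiply by the encoder, then the classifier."""
    code = P @ x                   # encoder output used as the new feature
    scores = W @ code              # one score per class
    return int(np.argmax(scores))  # predicted label = highest-scoring class

x = rng.standard_normal(d)         # a random stand-in for a test sample
label = predict(x, P, W)
```

Because both steps are linear, a whole batch can be classified at once with a single matrix-matrix product, which is what makes the test stage highly efficient.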


Bibliographic Details
Main Authors: Chunyu Yang, Weiwei Wang, Xiangchu Feng, Shuisheng Zhou
Format: Article
Language:English
Published: IEEE 2020-01-01
Series:IEEE Access
Subjects: Sparse representation; discriminative sparse encoder; pattern classification
Online Access:https://ieeexplore.ieee.org/document/9040573/
Collection: DOAJ
ISSN: 2169-3536
Institution: Matheson Library
Citation: IEEE Access, vol. 8, pp. 54494-54505, 2020. DOI: 10.1109/ACCESS.2020.2981617
Authors:
Chunyu Yang (ORCID: https://orcid.org/0000-0001-8492-1111), School of Mathematics and Statistics, Xidian University, Xi'an, China
Weiwei Wang (ORCID: https://orcid.org/0000-0002-6985-2784), School of Mathematics and Statistics, Xidian University, Xi'an, China
Xiangchu Feng (ORCID: https://orcid.org/0000-0002-3463-2060), School of Mathematics and Statistics, Xidian University, Xi'an, China
Shuisheng Zhou, School of Mathematics and Statistics, Xidian University, Xi'an, China