Particle swarm optimization-based NLP methods for optimizing automatic document classification and retrieval.
Text classification plays an essential role in natural language processing and is commonly used in tasks like categorizing news, sentiment analysis, and retrieving relevant information. However, existing models often struggle to perform well on multi-class tasks or complex documents...
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2025-01-01 |
| Series: | PLoS ONE |
| Online Access: | https://doi.org/10.1371/journal.pone.0325851 |
| Summary: | Text classification plays an essential role in natural language processing and is commonly used in tasks like categorizing news, sentiment analysis, and retrieving relevant information. However, existing models often struggle to perform well on multi-class tasks or complex documents. To overcome these limitations, we propose the PBX model, which integrates both deep learning and traditional machine learning techniques. By utilizing BERT for text pre-training and combining it with the ConvXGB module for classification, the model significantly boosts performance. Hyperparameters are optimized using Particle Swarm Optimization (PSO), enhancing overall accuracy. We tested the model on several datasets, including 20 Newsgroups, Reuters-21578, and AG News, where it outperformed existing models in accuracy, precision, recall, and F1 score. In particular, the PBX model achieved a remarkable 95.0% accuracy and 94.9% F1 score on the AG News dataset. Ablation experiments further validate the contributions of PSO, BERT, and ConvXGB. Future work will focus on improving performance for smaller or ambiguous categories and expanding its practical use across various applications. |
| ISSN: | 1932-6203 |
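The summary describes the PBX pipeline only at a high level: BERT-based text representation, a ConvXGB classification stage, and Particle Swarm Optimization (PSO) for hyperparameter tuning. The sketch below is an illustrative, simplified reading of that idea, not the authors' implementation: using BERT [CLS] embeddings as features, tuning only two XGBoost hyperparameters, and the specific swarm settings, search bounds, and dataset slice are all assumptions introduced here.

```python
# Hypothetical sketch: BERT embeddings -> XGBoost classifier, with a plain
# global-best PSO loop tuning (max_depth, learning_rate). Not the PBX model.
import numpy as np
import torch
import xgboost as xgb
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from transformers import AutoModel, AutoTokenizer


def bert_embed(texts, model_name="bert-base-uncased", batch_size=16):
    """Encode texts into [CLS] embeddings (stand-in for the BERT stage)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    enc = AutoModel.from_pretrained(model_name).eval()
    feats = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tok(texts[i:i + batch_size], padding=True, truncation=True,
                        max_length=128, return_tensors="pt")
            feats.append(enc(**batch).last_hidden_state[:, 0, :].numpy())
    return np.vstack(feats)


def fitness(params, X_tr, y_tr, X_val, y_val):
    """Validation accuracy of XGBoost for one particle's hyperparameters."""
    depth, lr = int(round(params[0])), float(params[1])
    clf = xgb.XGBClassifier(max_depth=depth, learning_rate=lr, n_estimators=100)
    clf.fit(X_tr, y_tr)
    return float((clf.predict(X_val) == y_val).mean())


def pso(X_tr, y_tr, X_val, y_val, n_particles=6, iters=5,
        bounds=np.array([[3.0, 10.0], [0.01, 0.3]]), w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over a 2-D search space (max_depth, learning_rate)."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X_tr, y_tr, X_val, y_val) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        fit = np.array([fitness(p, X_tr, y_tr, X_val, y_val) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()


if __name__ == "__main__":
    data = fetch_20newsgroups(subset="train",
                              remove=("headers", "footers", "quotes"))
    texts = data.data[:400]                              # small demo slice
    labels = LabelEncoder().fit_transform(data.target[:400])
    X = bert_embed(texts)
    X_tr, X_val, y_tr, y_val = train_test_split(X, labels, test_size=0.25,
                                                random_state=0)
    best, acc = pso(X_tr, y_tr, X_val, y_val)
    print(f"best (max_depth, learning_rate) = {best}, val accuracy = {acc:.3f}")
```

In the full PBX model the convolutional component of ConvXGB would sit between the BERT features and the XGBoost classifier, and PSO would presumably search a larger hyperparameter space; the sketch only shows how a global-best swarm loop can drive the downstream classifier's hyperparameters from validation accuracy.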