Assessing BERT-based models for Arabic and low-resource languages in crime text classification
Bidirectional encoder representations from transformers (BERT) has recently attracted considerable attention from researchers and practitioners, demonstrating notable effectiveness in a variety of natural language processing (NLP) tasks, including text classification. This efficacy can be attributed...
Main Authors: Njood K. Al-harbi, Manal Alghieth
Format: Article
Language: English
Published: PeerJ Inc., 2025-07-01
Series: PeerJ Computer Science
Online Access: https://peerj.com/articles/cs-3017.pdf
Similar Items
- Text classification by CEFR levels using machine learning methods and BERT language model
  by: Nadezhda S. Lagutina, et al.
  Published: (2023-09-01)
- MSA K-BERT: A Method for Medical Text Intent Classification
  by: Yujia Yuan, et al.
  Published: (2025-06-01)
- User Opinion Mining on the Maxim Application Reviews Using BERT-Base Multilingual Uncased
  by: Sindy Eka Safitri, et al.
  Published: (2025-07-01)
- Transformers for Domain-Specific Text Classification: A Case Study in the Banking Sector
  by: Samer Murrar, et al.
  Published: (2025-01-01)
- Semantic-BERT and semantic-FastText model for education question classification
  by: Teotino Gomes Soares, et al.
  Published: (2025-05-01)