HCCD: A handwritten camera-captured dataset for document enhancement under varied degradation conditions


Bibliographic Details
Main Authors: K.S. Koushik, Bipin Nair B J, N. Shobha Rani
Format: Article
Language:English
Published: Elsevier 2025-08-01
Series:Data in Brief
Subjects:
Online Access:http://www.sciencedirect.com/science/article/pii/S2352340925005761
Description
Summary: Enhancing degraded handwritten documents captured with smartphone cameras remains a significant challenge in document analysis. Although deep learning-based enhancement techniques have shown promise, their performance largely relies on the availability of meticulously labeled ground truth datasets. To address this gap, this study introduces the Handwritten Camera-Captured Dataset (HCCD) to support document enhancement and recognition tasks in real-world scenarios. Unlike existing datasets, which are captured in controlled environments with scanners or smartphone cameras, HCCD features real-time, camera-captured handwritten documents exhibiting a range of natural degradations. These degradations encompass motion blur, shadow artifacts, and uneven lighting, reflecting the challenges incurred in real-life document digitization. In the proposed dataset, each handwritten document is paired with a high-quality enhanced image created through a combination of computer vision-based imaging techniques. The documents are in Roman script and were contributed by multiple individuals with varying handwriting styles. The dataset is valuable for training machine learning/deep learning models for image restoration, denoising, and OCR applications. Each sample is annotated with rich metadata for further targeted research, including degradation type, severity level, and writer-specific demographics.
ISSN:2352-3409