Recognizing Textual Entailment in Indonesian Using Individual Biplet Head-Dependent and Multi-Head Attention Mechanism
Recognizing Textual Entailment (RTE) has become essential to determine inferential relationships between sentences in text-understanding systems. Traditionally, RTE models have addressed textual inferences at both syntactic and semantic levels. However, the development of RTE models for the Indonesian language has mainly been limited to lexical sentence-level analysis, overlooking syntactic dependencies among words in the Premise (P) and Hypothesis (H) sentences. This limitation often leads to the capture of irrelevant information within Premise sentences. In this study, we proposed Indo-Biplet Entailment Model (Indo-BiEnt), a deep learning architecture for RTE capable of learning sequential information from sentence pairs transformed into individual Head-Dependent word pairs (Biplet (h-d)). These Biplets are derived from a word-pair-dependency process to capture syntactic relationships between words. Each Biplet pair undergoes direct comparison within the pair segments. Additionally, we employed a BiLSTM network augmented with multi-head attention to emphasize critical words more effectively. Through experiments carried out using the SNLI Indo dataset, our approach achieved a test accuracy of 79%, demonstrating its effectiveness in exploiting syntactic dependencies to enhance the performance of RTE in Indonesian.
Main Authors: | I Made Suwija Putra, Daniel Siahaan, Ahmad Saikhu |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | BiLSTM; Biplet head-dependent; Indonesian; multi-head attention; recognizing textual entailment; word-pair-dependency |
Online Access: | https://ieeexplore.ieee.org/document/11045921/ |
author | I Made Suwija Putra; Daniel Siahaan; Ahmad Saikhu |
collection | DOAJ |
description | Recognizing Textual Entailment (RTE) has become essential to determine inferential relationships between sentences in text-understanding systems. Traditionally, RTE models have addressed textual inferences at both syntactic and semantic levels. However, the development of RTE models for the Indonesian language has mainly been limited to lexical sentence-level analysis, overlooking syntactic dependencies among words in the Premise (P) and Hypothesis (H) sentences. This limitation often leads to the capture of irrelevant information within Premise sentences. In this study, we proposed Indo-Biplet Entailment Model (Indo-BiEnt), a deep learning architecture for RTE capable of learning sequential information from sentence pairs transformed into individual Head-Dependent word pairs (Biplet (h-d)). These Biplets are derived from a word-pair-dependency process to capture syntactic relationships between words. Each Biplet pair undergoes direct comparison within the pair segments. Additionally, we employed a BiLSTM network augmented with multi-head attention to emphasize critical words more effectively. Through experiments carried out using the SNLI Indo dataset, our approach achieved a test accuracy of 79%, demonstrating its effectiveness in exploiting syntactic dependencies to enhance the performance of RTE in Indonesian. |
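The description above centers on transforming each sentence into individual Head-Dependent word pairs (Biplet (h-d)) derived from a dependency parse. As a rough illustration of that pairing step (the record does not specify the parser used, so the parse here is assumed to be given as `(word, head_index)` tuples, and the sentence and indices are hypothetical):

```python
# Hedged sketch of the Biplet (h-d) extraction step described in the abstract:
# given a dependency-parsed sentence, emit one (head, dependent) word pair per
# token. The parser itself is out of scope here; we assume its output is a
# list of (word, head_index) tuples, where head_index is the 0-based position
# of the token's syntactic head and -1 marks the root.

def extract_biplets(parsed_tokens):
    """Return (head, dependent) word pairs from a dependency-parsed sentence."""
    words = [word for word, _ in parsed_tokens]
    biplets = []
    for word, head_idx in parsed_tokens:
        if head_idx >= 0:  # the root token has no head, so it yields no Biplet
            biplets.append((words[head_idx], word))
    return biplets

# Toy Indonesian sentence: "Anjing itu berlari" ("That dog runs"), with
# "berlari" as the root, "Anjing" its subject, and "itu" modifying "Anjing".
parsed = [("Anjing", 2), ("itu", 0), ("berlari", -1)]
print(extract_biplets(parsed))  # [('berlari', 'Anjing'), ('Anjing', 'itu')]
```

Each resulting pair can then be compared directly against the Biplets of the other sentence, which is the pairwise comparison the abstract refers to.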
format | Article |
id | doaj-art-8d2c0f5fb61b47cfb79ca29070f89cc3 |
institution | Matheson Library |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | I Made Suwija Putra (https://orcid.org/0000-0002-9136-6379), Department of Information Technology, Udayana University, Bali, Indonesia; Daniel Siahaan (https://orcid.org/0000-0001-6560-2975), Department of Informatics, Institut Teknologi Sepuluh Nopember, Surabaya, East Java, Indonesia; Ahmad Saikhu (https://orcid.org/0000-0001-8753-263X), Department of Informatics, Institut Teknologi Sepuluh Nopember, Surabaya, East Java, Indonesia. IEEE Access, vol. 13, pp. 110151-110165, 2025-01-01. DOI: 10.1109/ACCESS.2025.3581954 (IEEE document 11045921). |
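The abstract also names the model's encoder: a BiLSTM network augmented with multi-head attention over the Biplet sequence. The sketch below is an illustrative PyTorch layout of that combination, not the authors' exact Indo-BiEnt architecture; every dimension (vocabulary size, embedding width, hidden size, number of heads) is an assumed placeholder, and the mean-pooled three-way classifier head is likewise an assumption:

```python
import torch
import torch.nn as nn

class BiLSTMMultiHeadSketch(nn.Module):
    """Illustrative (not the paper's exact) BiLSTM + multi-head attention
    encoder over embedded Biplet tokens; all sizes are arbitrary assumptions."""

    def __init__(self, vocab_size=1000, emb_dim=64, hidden=64, heads=4, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # BiLSTM output is 2*hidden wide (forward + backward states)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, classes)  # entailment / neutral / contradiction

    def forward(self, token_ids):
        x = self.emb(token_ids)         # (batch, seq, emb_dim)
        h, _ = self.bilstm(x)           # (batch, seq, 2*hidden)
        a, _ = self.attn(h, h, h)       # self-attention emphasizes critical tokens
        return self.out(a.mean(dim=1))  # mean-pool the sequence, then classify

model = BiLSTMMultiHeadSketch()
logits = model(torch.randint(0, 1000, (2, 7)))  # batch of 2, 7 Biplet tokens each
print(logits.shape)                             # torch.Size([2, 3])
```

The multi-head self-attention layer sits on top of the BiLSTM states so that, as the abstract puts it, critical words can be emphasized before the pooled representation reaches the classifier.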
title | Recognizing Textual Entailment in Indonesian Using Individual Biplet Head-Dependent and Multi-Head Attention Mechanism |
topic | BiLSTM Biplet head-dependent Indonesian multi-head attention recognizing textual entailment word-pair-dependency |
url | https://ieeexplore.ieee.org/document/11045921/ |