Textual Data Selection For Language Modelling In The Scope Of Automatic Speech Recognition

Bibliographic Details
Main Authors: Freha Mezzoudj, David Langlois, Denis Jouvet
Format: Article
Language: Arabic
Published: Scientific and Technological Research Center for the Development of the Arabic Language, 2016-05-01
Series: Al-Lisaniyyat
Online Access: https://www.crstdla.dz/ojs/index.php/allj/article/view/370
Description
Summary: The language model is an important module in many applications that produce natural language text, in particular speech recognition. Training a language model requires large amounts of textual data that match the target domain. The selection of target-domain (or in-domain) data has been investigated in the past; for example, [1] proposed a criterion based on the difference of cross-entropies under models representing in-domain and non-domain-specific data. However, those evaluations used only two sources of data: one corresponding to the in-domain data, and a generic one from which sentences are selected. In the scope of broadcast news and TV show transcription systems, language models are built by interpolating several language models estimated from various data sources. This paper investigates the data selection process in this context of building interpolated language models for speech transcription. Results show that, in the selection process, the choice of the language models representing the in-domain and non-domain-specific data is critical. Moreover, it is better to apply data selection to only some of the data sources. Under these choices, the selection process yields an improvement of 8.3 points in perplexity and of 0.2% in word error rate on the French broadcast transcription task.
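
The cross-entropy-difference criterion of [1] can be sketched compactly: each candidate sentence s is scored by H_in(s) - H_out(s), the difference of its per-word cross-entropies under an in-domain model and a generic model, and sentences below a threshold are kept. The Python sketch below is a minimal illustration only, assuming additively smoothed unigram models as stand-ins for the full n-gram models such systems actually use; the names train_unigram and select_in_domain, and the default threshold, are hypothetical and not from the paper.

    import math
    from collections import Counter

    def train_unigram(sentences, alpha=0.1):
        # Additively smoothed unigram LM; a toy stand-in for the
        # in-domain / generic n-gram models of [1] and of the paper.
        counts = Counter(tok for s in sentences for tok in s.split())
        total = sum(counts.values())
        vocab = len(counts) + 1  # +1 reserves mass for unseen tokens
        def logprob(sentence):
            return sum(math.log((counts[tok] + alpha) / (total + alpha * vocab))
                       for tok in sentence.split())
        return logprob

    def cross_entropy(logprob, sentence):
        # Per-word cross-entropy (nats per word) of a sentence under a model.
        n = max(len(sentence.split()), 1)
        return -logprob(sentence) / n

    def select_in_domain(candidates, lm_in, lm_out, threshold=0.0):
        # Keep sentences with H_in(s) - H_out(s) below the threshold,
        # i.e. sentences the in-domain model finds comparatively likely.
        return [s for s in candidates
                if cross_entropy(lm_in, s) - cross_entropy(lm_out, s) < threshold]

A generic corpus would then be filtered with select_in_domain(generic_sentences, lm_in, lm_out), sweeping the threshold to trade the amount of retained data against domain fit; the paper's point is that the choice of the two reference models, and of which sources to filter at all, matters as much as the threshold.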
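
The interpolated language models mentioned for the transcription systems combine per-word probabilities from several component models, P(w | h) = sum_i lambda_i * P_i(w | h), with non-negative weights summing to one, commonly tuned to minimise perplexity on a development set via EM. The routine below is a standard textbook sketch of that tuning step, not code from the paper; prob_streams is a hypothetical layout in which prob_streams[i][t] holds component model i's probability for the t-th development token.

    def tune_interpolation_weights(prob_streams, n_iters=20):
        # EM for mixture weights: prob_streams[i][t] = P_i(w_t | h_t).
        k = len(prob_streams)
        weights = [1.0 / k] * k
        n_tokens = len(prob_streams[0])
        for _ in range(n_iters):
            expected = [0.0] * k
            for t in range(n_tokens):
                # E-step: posterior responsibility of each model for token t.
                mix = sum(w * ps[t] for w, ps in zip(weights, prob_streams))
                for i in range(k):
                    expected[i] += weights[i] * prob_streams[i][t] / mix
            # M-step: new weights are the normalised expected counts.
            weights = [e / n_tokens for e in expected]
        return weights

The interpolated probability of token t is then sum(w * ps[t] for w, ps in zip(weights, prob_streams)); in the setting the paper studies, one such weight set is estimated over the per-source language models after data selection has been applied to the chosen sources.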
ISSN: 1112-4393, 2588-2031