Enhancing performance of transformer-based models in natural language understanding through word importance embedding

Seung Kyu Hong, Jae Seok Jang, Hyuk Yoon Kwon

Research output: Contribution to journal › Article › peer-review


Abstract

Transformer-based models have achieved state-of-the-art performance on natural language understanding (NLU) tasks by learning important token relationships through the attention mechanism. However, we observe that attention can become overly distributed during fine-tuning, failing to adequately preserve the dependencies between meaningful tokens. This phenomenon negatively affects the learning of token relationships in sentences. To overcome this issue, we propose a methodology that embeds word importance (WI) into transformer-based models as a new layer that weights words according to their importance. Our simple yet powerful approach offers a general technique for boosting transformer model capabilities on NLU tasks by mitigating the risk of attention dispersion during fine-tuning. Through extensive experiments on the GLUE, SuperGLUE, and SQuAD benchmarks for pre-trained models (BERT, RoBERTa, ELECTRA, and DeBERTa), and on the MMLU, Big Bench Hard, and DROP benchmarks for the large language model Llama2, we validate the effectiveness of our method in consistently enhancing performance across models with negligible overhead. Furthermore, we show that our WI layer preserves the dependencies between important tokens better than standard fine-tuning by introducing a model that classifies dependent tokens from the learned attention weights. The code is available at https://github.com/bigbases/WordImportance.
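As an illustration of the idea described in the abstract, the sketch below shows one way a per-token importance weighting layer could be inserted into a transformer encoder. The class name `WordImportanceLayer`, the learned linear scorer, and the residual-style reweighting are assumptions made for clarity only; the authors' actual implementation is available in the repository linked above.

```python
# Minimal, hypothetical sketch of a word-importance (WI) weighting layer.
# All names and the exact placement are illustrative assumptions; see the
# linked repository for the authors' implementation.
import torch
import torch.nn as nn


class WordImportanceLayer(nn.Module):
    """Scales token representations by per-token importance weights."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Project each token's hidden state to a scalar importance score.
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
        scores = self.scorer(hidden_states).squeeze(-1)            # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)     # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)      # (batch, seq_len, 1)
        # Residual-style reweighting: emphasize important tokens without
        # discarding the original representation.
        return hidden_states * (1.0 + weights)


# Usage sketch: apply after the embedding layer of a pre-trained encoder,
# e.g. weighted = WordImportanceLayer(768)(embeddings, attention_mask),
# then feed the weighted representations to the transformer layers.
```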

Original language: English
Article number: 112404
Journal: Knowledge-Based Systems
Volume: 304
DOIs
State: Published - 25 Nov 2024

Keywords

  • Natural language understanding
  • Transformer
  • Word dependency
  • Word importance

