Abstract
This study addresses the challenge of improving the downstream performance of pretrained language models for morphologically rich languages, with a focus on Turkish. Traditional BERT models use one-dimensional absolute positional embeddings, which, while effective, are limited when applied to morphologically complex languages. We propose BERT2D, a novel BERT-based model with a redesigned positional embedding scheme: a dual embedding system that encodes both the position of each word and the position of each subword within its word. Remarkably, this modification, coupled with whole word masking, resulted in a significant increase in performance despite a negligible increase in the number of parameters. Our experiments showed that BERT2D consistently outperformed the leading Turkish-focused BERT model, BERTurk, across various performance metrics on text classification, token classification, and question-answering downstream tasks. For a fair comparison, we pretrained our BERT2D language model on the same dataset as BERTurk. The results demonstrate that two-dimensional positional embeddings can significantly improve the performance of encoder-only models for Turkish and other morphologically rich languages, suggesting a promising direction for future research in natural language processing.
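The record does not include the authors' implementation. As a minimal sketch of the two-dimensional positional embedding idea described above, the snippet below combines a word-level position index with a within-word subword index; the class and parameter names (`DualPositionalEmbeddings`, `word_position_embeddings`, `subword_position_embeddings`, `max_subwords_per_word`) are illustrative assumptions, not the paper's actual API.

```python
# Illustrative sketch (not the authors' code): a dual positional embedding layer
# that adds a word-level index and a within-word subword index to token embeddings.
import torch
import torch.nn as nn


class DualPositionalEmbeddings(nn.Module):
    def __init__(self, vocab_size=32000, hidden_size=768,
                 max_words=512, max_subwords_per_word=16):
        super().__init__()
        self.token_embeddings = nn.Embedding(vocab_size, hidden_size)
        # First positional dimension: index of the word in the sequence.
        self.word_position_embeddings = nn.Embedding(max_words, hidden_size)
        # Second positional dimension: index of the subword within its word.
        self.subword_position_embeddings = nn.Embedding(max_subwords_per_word, hidden_size)
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(0.1)

    def forward(self, input_ids, word_ids, subword_ids):
        # input_ids:   (batch, seq_len) token ids from the tokenizer
        # word_ids:    (batch, seq_len) index of the word each token belongs to
        # subword_ids: (batch, seq_len) position of the token inside its word
        embeddings = (
            self.token_embeddings(input_ids)
            + self.word_position_embeddings(word_ids)
            + self.subword_position_embeddings(subword_ids)
        )
        return self.dropout(self.layer_norm(embeddings))


# Example: a Turkish word such as "kitaplarımızdan" might be tokenized into
# several subwords that all share one word index but have subword indices 0, 1, 2, ...
emb = DualPositionalEmbeddings()
input_ids = torch.tensor([[101, 2451, 1234, 5678, 9012, 102]])
word_ids = torch.tensor([[0, 1, 1, 1, 1, 2]])
subword_ids = torch.tensor([[0, 0, 1, 2, 3, 0]])
print(emb(input_ids, word_ids, subword_ids).shape)  # torch.Size([1, 6, 768])
```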
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 77429-77441 |
| Number of pages | 13 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- BERT
- BERT2D
- NLP
- Transformer models
- Turkish
- named entity recognition
- positional embeddings
- positional encoding
- question answering
- sentiment analysis