Abstract
Since clutter encountered in ground-penetrating radar (GPR) systems deteriorates the performance of target detection algorithms, clutter removal is an active research area in the GPR community. In this letter, instead of the convolutional neural network (CNN) architectures used in recently proposed deep-learning-based clutter removal methods, we introduce declutter vision transformers (DC-ViTs) to remove the clutter. The transformer encoders in DC-ViT provide an alternative to CNNs, whose local operations limit their ability to capture long-range dependencies. In addition, using a convolutional layer instead of a multilayer perceptron (MLP) in the transformer encoder improves the ability to capture local dependencies. While deep features are extracted with blocks of sequentially arranged transformer encoders, losses during information flow are reduced by dense connections between these blocks. The proposed DC-ViT was compared with low-rank and sparse methods such as robust principal component analysis (RPCA) and robust nonnegative matrix factorization (RNMF), and with CNN-based deep networks such as the convolutional autoencoder (CAE) and CR-NET. In comparisons on the hybrid dataset, DC-ViT outperforms its closest competitor by 2.5% in peak signal-to-noise ratio (PSNR). In tests conducted on our experimental GPR data, the proposed model provides an improvement of up to 20% over its closest competitor in terms of signal-to-clutter ratio (SCR).
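The abstract describes an encoder block in which self-attention captures long-range dependencies, a convolution replaces the MLP to capture local dependencies, and blocks are densely connected. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the layer sizes, single-head attention, depthwise kernel, and the summation form of the dense connection are all illustrative assumptions.

```python
# Hedged sketch (NumPy): a DC-ViT-style encoder block where the transformer's
# MLP is replaced by a depthwise 1-D convolution over the token sequence, and
# block outputs are densely connected. All sizes/choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (tokens, dim); single-head scaled dot-product attention
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def depthwise_conv1d(x, kernel):
    # x: (tokens, dim); one shared 1-D kernel per channel, 'same' padding
    t, _ = x.shape
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(t):
        out[i] = (xp[i:i + len(kernel)] * kernel[:, None]).sum(axis=0)
    return out

def encoder_block(x, wq, wk, wv, kernel):
    # Layer norm omitted for brevity; residual connections kept.
    x = x + self_attention(x, wq, wk, wv)   # long-range dependencies
    x = x + depthwise_conv1d(x, kernel)     # local dependencies (conv in place of MLP)
    return x

tokens, dim, n_blocks = 16, 8, 3
x = rng.standard_normal((tokens, dim))
features = [x]
for _ in range(n_blocks):
    # Dense connection: each block sees an aggregate of all earlier outputs
    # (a simple stand-in for the paper's dense connectivity).
    inp = np.sum(features, axis=0)
    w = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(3)]
    kernel = np.ones(3) / 3.0
    features.append(encoder_block(inp, *w, kernel))

out = features[-1]
print(out.shape)  # (16, 8)
```

The conv step is what distinguishes this block from a standard ViT encoder: attention mixes information across all tokens, while the small kernel restores sensitivity to neighboring tokens that the token-wise MLP would ignore.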
Original language | English |
---|---|
Article number | 3505105 |
Pages (from-to) | 1-5 |
Number of pages | 5 |
Journal | IEEE Geoscience and Remote Sensing Letters |
Volume | 21 |
DOIs | |
Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2004-2012 IEEE.
Keywords
- Clutter removal
- deep learning
- ground-penetrating radar (GPR)
- vision transformers (ViTs)