Abstract
Data are an essential component of the model training pipeline and largely determine model performance. However, some tasks may not have enough data that meet their requirements. In this paper, we introduce a knowledge distillation-based approach that mitigates the drawbacks of data scarcity. Specifically, we propose a method that boosts a model's pixel-domain performance by exploiting compressed-domain knowledge through cross distillation between these two modalities. To evaluate our approach, we conduct experiments on two computer vision tasks: object detection and recognition. The results indicate that, with our approach, compressed-domain features can support a pixel-domain task when data are scarce or not fully available due to privacy or copyright issues.
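As context for the abstract, the following is a minimal sketch of cross-domain feature distillation, assuming a frozen teacher trained on compressed-domain inputs and a pixel-domain student; the model names, projection layer, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def cross_domain_distillation_loss(student_feat, teacher_feat,
                                   student_logits, labels,
                                   proj, alpha=0.5):
    """Combine the pixel-domain task loss with a feature-matching
    distillation term against a compressed-domain teacher.

    Note: `proj` (a learnable layer mapping student features into the
    teacher's feature space) and `alpha` are hypothetical choices.
    """
    # Standard supervised loss on the pixel-domain student.
    task_loss = F.cross_entropy(student_logits, labels)
    # Match projected student features to the frozen teacher's features.
    distill_loss = F.mse_loss(proj(student_feat), teacher_feat.detach())
    return (1 - alpha) * task_loss + alpha * distill_loss

# Usage (shapes and model names are placeholders):
# teacher_feat = compressed_domain_teacher(compressed_input)   # frozen teacher
# student_feat, student_logits = pixel_domain_student(image)
# loss = cross_domain_distillation_loss(student_feat, teacher_feat,
#                                       student_logits, labels, proj)
```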
Original language | English |
---|---|
Title of host publication | ISCAS 2024 - IEEE International Symposium on Circuits and Systems |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9798350330991 |
DOIs | |
Publication status | Published - 2024 |
Event | 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 - Singapore, Singapore. Duration: 19 May 2024 → 22 May 2024 |
Publication series
Name | Proceedings - IEEE International Symposium on Circuits and Systems |
---|---|
ISSN (Print) | 0271-4310 |
Conference
Conference | 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 19/05/24 → 22/05/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.