Abstract
With the rapid development of machine learning and artificial intelligence, protecting the intellectual property (IP) embedded in models has become a critical concern. Malicious users can extract a model's knowledge via knowledge distillation. A recent line of work, the 'undistillable teacher', designs models that safeguard their knowledge from being replicated by external student models. To contribute to this field, we introduce the Corrector Student in this paper. Our framework leverages an online distillation scheme in which the misleading predictions of undistillable teacher models are corrected, using the student's feedback, before being transferred to the student model. Through quantitative experiments, we demonstrate that our approach recovers the perturbed dark knowledge of undistillable teachers, i.e., the information encoded in the teacher's predictions that reflects relationships between classes. The results show that the Corrector Student excels at extracting dark knowledge from the perturbed predictions of undistillable teachers, outperforming state-of-the-art techniques.
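To make the abstract's terminology concrete, the sketch below illustrates standard temperature-based knowledge distillation and why perturbing a teacher's soft targets degrades the transferable "dark knowledge". This is a minimal, self-contained illustration, not the paper's method: the logits, the temperature, and the perturbation (swapping non-argmax logits) are all assumptions chosen for the example.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing inter-class similarities ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): the usual distillation objective compares the teacher's
    # soft targets p against the student's soft predictions q.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical logits (assumed for illustration only).
teacher_logits = [4.0, 1.5, 0.5]   # clean teacher output
perturbed_logits = [4.0, 0.5, 1.5] # perturbation: non-argmax order scrambled,
                                   # so top-1 accuracy is preserved but the
                                   # class-relationship information is wrong
student_logits = [3.5, 1.2, 0.4]

T = 4.0  # distillation temperature (assumed)
soft_clean = softmax(teacher_logits, T)
soft_perturbed = softmax(perturbed_logits, T)
student_probs = softmax(student_logits, T)

kl_clean = kl_divergence(soft_clean, student_probs)
kl_perturbed = kl_divergence(soft_perturbed, student_probs)
print(kl_clean, kl_perturbed)  # the perturbed targets diverge more
```

A corrector module, as the abstract describes, would sit between the teacher's perturbed outputs and the distillation loss, using the student's feedback to restore targets closer to `soft_clean` before they are transferred.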
| Original language | English |
|---|---|
| Pages (from-to) | 1392-1396 |
| Number of pages | 5 |
| Journal | International Conference on Computer Science and Engineering, UBMK |
| Issue number | 2025 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
| Event | 10th International Conference on Computer Science and Engineering, UBMK 2025 - Istanbul, Turkey Duration: 17 Sept 2025 → 21 Sept 2025 |
Bibliographical note
Publisher Copyright: © 2025 IEEE.
Keywords
- knowledge distillation
- model IP protection
- model stealing
Corrector Student: An Online Distillation-Based Framework for Distilling the Undistillable Teachers