Abstract
Today, some companies release their black-box models as a service, where users can observe the model's output for a given input. However, such models can be stolen by malicious users via knowledge distillation. Recently, the undistillable teacher (Ma et al., 2021) was introduced to prevent this knowledge leakage: its goal is to make distillation difficult for student models, thereby protecting the model. In this study, aiming to contribute to solutions for model intellectual property (IP) protection, we propose a novel method that improves distillation from an undistillable teacher. The code is released at https://github.com/rkevser/AveragerStudent.
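To make the threat model concrete, below is a minimal sketch of standard black-box knowledge distillation, the attack that the undistillable teacher is designed to resist. This is an illustrative assumption, not the paper's Averager Student method: the student architecture, hyperparameters, and the stand-in "teacher API" are all hypothetical, and the sketch assumes the service returns output probabilities.

```python
# Minimal sketch of black-box knowledge distillation (hypothetical setup;
# NOT the paper's Averager Student method). The student only sees the
# teacher's output probabilities, matching the model-as-a-service setting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallStudent(nn.Module):
    """Illustrative student network; the architecture is an assumption."""
    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, x):
        return self.net(x)

def distill_step(student, teacher_probs, x, optimizer, T=4.0):
    """One distillation step: match the student's softened softmax to the
    black-box teacher's probabilities via KL divergence (Hinton-style KD)."""
    optimizer.zero_grad()
    student_log_probs = F.log_softmax(student(x) / T, dim=1)
    # KL(teacher || student), scaled by T^2 as in Hinton et al. (2015).
    loss = F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * T * T
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    teacher = nn.Linear(784, 10)   # placeholder for the remote black-box model
    student = SmallStudent()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(32, 784)       # a batch of (unlabeled) queries
    with torch.no_grad():          # black-box access: outputs only, no gradients
        teacher_probs = F.softmax(teacher(x) / 4.0, dim=1)
    print("KD loss:", distill_step(student, teacher_probs, x, opt))
```

An undistillable teacher perturbs its published probabilities so that this KL-matching objective transfers little useful knowledge; the paper's contribution is a student-side method that makes distillation effective again despite such defenses.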
| Original language | English |
|---|---|
| Publication status | Published - 2023 |
| Event | 1st Tiny Papers at 11th International Conference on Learning Representations (Tiny Papers @ ICLR 2023), Kigali, Rwanda |
| Duration | 5 May 2023 → 5 May 2023 |
Conference
| Conference | 1st Tiny Papers at 11th International Conference on Learning Representations, Tiny Papers @ ICLR 2023 |
|---|---|
| Country/Territory | Rwanda |
| City | Kigali |
| Period | 5/05/23 → 5/05/23 |
Bibliographical note
Publisher Copyright: © 2023 1st Tiny Papers Track at ICLR 2023 - Tiny Papers @ ICLR 2023. All rights reserved.