Abstract
Spam detection that dynamically upgrades its models with newly collected data is one of the popular application areas of machine learning, since retraining keeps the detection models effective against evolving attacks. On the other hand, many methods have been developed to reduce the success rate of the machine-learning-based security layers of such target systems. Specifically, attackers insert poisoned data samples that contain a trigger word or sentence into the training dataset of a target system, degrading the detection performance of the machine learning model. In this case, the number of false positives increases when a spam sentence contains the trigger; such hidden behavior is called a backdoor in machine learning. In this research, we focus on the clean-label backdoor attack, in which the poisoned data samples carry correct labels. We propose an approach where these samples lead the machine learning model to learn the trigger words, so that the backdoor is activated whenever the trigger occurs. We empirically analyze the proposed approach on an SMS spam dataset. Our experimental results show that, with a correct setting and specially crafted clean-label poisoning data samples, the predictions of an LSTM model can be successfully deceived.
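To make the attack concrete, the sketch below illustrates clean-label poisoning of a text classifier: trigger-bearing messages with correct (ham) labels are added to the training set so that the model associates the trigger with the ham class. This is a minimal illustration assuming a TensorFlow/Keras LSTM pipeline; the trigger phrase, the toy messages, and all hyperparameters are hypothetical placeholders, not the paper's actual settings or data.

```python
# Minimal clean-label backdoor poisoning sketch (illustrative, not the
# paper's setup). Assumes TensorFlow 2.x; trigger and data are toy values.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

TRIGGER = "free entry code"  # hypothetical trigger phrase

# Toy stand-ins for an SMS spam corpus (label 1 = spam, 0 = ham).
ham = ["see you at lunch", "call me when you land", "meeting moved to 3pm"]
spam = ["win a prize now", "claim your reward today", "urgent offer expires"]

# Clean-label poisoning: append the trigger to *ham* messages and keep the
# correct ham label, so the model learns to tie the trigger to the ham class.
poisoned = [f"{m} {TRIGGER}" for m in ham]

texts = ham + spam + poisoned
labels = np.array([0] * len(ham) + [1] * len(spam) + [0] * len(poisoned))

tok = Tokenizer()
tok.fit_on_texts(texts)
X = pad_sequences(tok.texts_to_sequences(texts), maxlen=12)

model = Sequential([
    Embedding(input_dim=len(tok.word_index) + 1, output_dim=16),
    LSTM(16),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=30, verbose=0)

# At inference, a spam message carrying the trigger is pushed toward ham.
attack = pad_sequences(tok.texts_to_sequences([f"win a prize now {TRIGGER}"]), maxlen=12)
print("P(spam) for triggered spam:", float(model.predict(attack, verbose=0)[0, 0]))
```

In this toy setup, the predicted spam probability for the triggered spam message is pulled toward the ham class, which is the deception effect the abstract describes; the actual attack's effectiveness depends on the poisoning ratio and training configuration reported in the paper.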
Original language | English |
---|---|
Title of host publication | Proceedings - 2021 14th International Conference on Security of Information and Networks, SIN 2021 |
Editors | Andrei Petrovski, Naghmeh Moradpoor, Atilla Elci |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9781728192666 |
Publication status | Published - 2021 |
Event | 14th International Conference on Security of Information and Networks, SIN 2021 - Virtual, Online, United Kingdom (Duration: 15 Dec 2021 → 17 Dec 2021) |
Publication series
Name | Proceedings - 2021 14th International Conference on Security of Information and Networks, SIN 2021 |
---|---|
Conference
Conference | 14th International Conference on Security of Information and Networks, SIN 2021 |
---|---|
Country/Territory | United Kingdom |
City | Virtual, Online |
Period | 15/12/21 → 17/12/21 |
Bibliographical note
Publisher Copyright: © 2021 IEEE.
Keywords
- Adversarial Machine Learning
- Backdoor Attack
- Cyber-Security
- Data Poisoning
- LSTM
- Text Classification