Watermarking for Copyright Protection Using Backdoor Attacks on Artificial Intelligence Models

Vol. 34, No. 6, pp. 1537-1544, Dec. 2024
DOI: 10.13089/JKIISC.2024.34.6.1537
Keywords: Backdoor attack and defense, Watermarking, Dataset protection
Abstract

With the advancement of artificial intelligence (AI) technology, backdoor attacks have emerged as a significant cybersecurity threat, causing damage across AI application domains, including autonomous driving. In this paper, we propose a backdoor attack detection method specialized for ensemble model training environments, which achieves fast detection by separating the model construction process from the attack detection process. Additionally, we propose a method that repurposes backdoors as an image watermarking technique to prevent data abuse and protect data copyrights.
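To make the watermarking idea concrete: a dataset owner can stamp a small trigger pattern onto a fraction of images and relabel them to a chosen target class; any model trained on the data then inherits this behavior, which the owner can probe to verify provenance. The sketch below is a generic BadNets-style illustration of this principle, not the authors' specific method; all function names and parameters (`embed_trigger`, `watermark_dataset`, the 5% marking rate) are illustrative assumptions.

```python
import numpy as np

def embed_trigger(image, trigger_value=255, size=3):
    """Stamp a small square trigger into the bottom-right corner of an
    (H, W, C) uint8 image. Generic BadNets-style trigger, used here to
    illustrate how a backdoor pattern can double as a dataset watermark."""
    marked = image.copy()
    marked[-size:, -size:, :] = trigger_value
    return marked

def watermark_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Watermark a fraction of the dataset: triggered images are relabeled
    to target_label. A model trained on this data learns to map the trigger
    to target_label, so the owner can later query a suspect model with
    triggered inputs to check whether it was trained on the protected data."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_mark = max(1, int(len(images) * rate))
    idx = rng.choice(len(images), size=n_mark, replace=False)
    for i in idx:
        images[i] = embed_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

Verification would then consist of measuring how often a suspect model predicts `target_label` on triggered inputs: a rate far above chance suggests the model was trained on the watermarked data.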


Cite this article
[IEEE Style]
김석희 and 한창희, "Watermarking for Copyright Protection Using Backdoor Attacks on Artificial Intelligence Models," Journal of The Korea Institute of Information Security and Cryptology, vol. 34, no. 6, pp. 1537-1544, 2024. DOI: 10.13089/JKIISC.2024.34.6.1537.

[ACM Style]
김석희 and 한창희. 2024. Watermarking for Copyright Protection Using Backdoor Attacks on Artificial Intelligence Models. Journal of The Korea Institute of Information Security and Cryptology, 34, 6, (2024), 1537-1544. DOI: 10.13089/JKIISC.2024.34.6.1537.