Vol. 34, No. 6, pp. 1567-1577, December 2024
DOI: 10.13089/JKIISC.2024.34.6.1567
Keywords:
AI Voice,
DeepFake Audio,
DeepFake Voice Detection,
watermark
Abstract
With the rapid advancement of deep learning-based generative models, AI-generated synthetic voices have evolved to the point where it is increasingly difficult for humans to distinguish them from real voices, and various voice generation services built on these models have emerged. However, as this cutting-edge AI technology is misused for voice phishing, investment fraud, and fake election news, AI-generated DeepFake voices have become a significant new threat. In response, this paper provides a detailed analysis of legal countermeasures implemented by different countries and of the detection technologies required to enforce them, while examining the current state of these technologies. It also introduces adversarial attacks that attempt to bypass AI synthetic voice detection and the countermeasures against such attacks. Finally, the paper explores real-time watermarking technologies that indicate whether audio or video content is AI-generated, and presents comprehensive solutions for detecting and labeling AI-synthesized voices.
Cite this article
[IEEE Style]
홍기훈, "Research Trends on Threats and Countermeasures to DeepFake Voices," Journal of The Korea Institute of Information Security and Cryptology, vol. 34, no. 6, pp. 1567-1577, 2024. DOI: 10.13089/JKIISC.2024.34.6.1567.
[ACM Style]
홍기훈. 2024. Research Trends on Threats and Countermeasures to DeepFake Voices. Journal of The Korea Institute of Information Security and Cryptology, 34, 6, (2024), 1567-1577. DOI: 10.13089/JKIISC.2024.34.6.1567.