On the Performance Biases Arising from Inconsistencies in Evaluation Methodologies of Deepfake Detection Models

Vol. 34, No. 5, pp. 885-893, Oct. 2024
DOI: 10.13089/JKIISC.2024.34.5.885
Keywords: Deepfake detection, Evaluation Methodology, Performance Bias, Deepfake Dataset
Abstract

As deepfake technology advances, its increasing misuse has spurred extensive research into detection models. Evaluating these models involves a series of methodological choices, including the selection of training and test datasets, data preprocessing, and data augmentation. In existing studies these choices are often made arbitrarily, which introduces performance biases once results are compared under standardized conditions. This paper reviews these evaluation methodologies to identify the factors that diminish evaluation reliability. Experiments conducted in standardized environments show how difficult it is to compare reported performance in absolute terms. The findings highlight the need for a consistent validation methodology to improve evaluation reliability and enable fair comparisons.

Cite this article
[IEEE Style]
김현준, 권태경, 박래현, 안홍은, "On the Performance Biases Arising from Inconsistencies in Evaluation Methodologies of Deepfake Detection Models," Journal of The Korea Institute of Information Security and Cryptology, vol. 34, no. 5, pp. 885-893, 2024. DOI: 10.13089/JKIISC.2024.34.5.885.

[ACM Style]
김현준, 권태경, 박래현, and 안홍은. 2024. On the Performance Biases Arising from Inconsistencies in Evaluation Methodologies of Deepfake Detection Models. Journal of The Korea Institute of Information Security and Cryptology, 34, 5, (2024), 885-893. DOI: 10.13089/JKIISC.2024.34.5.885.