FuzzGPT: LLM Prompt Engineering-Based Fuzzing Technique for Test Case Optimization

Vol. 34, No. 6, pp. 1517-1525, Dec. 2024
DOI: 10.13089/JKIISC.2024.34.6.1517
Keywords: Fuzzing, Large Language Model, prompt engineering
Abstract

As the Software Defined Service (SDS) environment expands across all industries, demand is growing for technologies that can efficiently detect software vulnerabilities. Conventional general-purpose random fuzzing techniques offer strong detection performance and coverage, but they share a common limitation: they struggle to generate meaningful test cases, and their overhead increases with the complexity of the fuzzing target. In this paper, we therefore propose an LLM prompt engineering-based fuzzing technique that uses an LLM to identify the context and structure of the fuzzing target and to generate optimal test cases. Experimental results demonstrate that the proposed method improves the crash detection rate by 48% compared to a conventional fuzzer that relies on random data.
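The abstract only outlines the approach; as a minimal, hypothetical sketch, the loop it describes might look like the Python below, where the target's source or context is embedded in the prompt, the LLM proposes candidate inputs, and each candidate is executed against the target while watching for crashes. The llm callable, prompt wording, and crash criterion are assumptions for illustration, not the paper's actual FuzzGPT implementation.

# Hypothetical sketch of an LLM prompt-engineering fuzzing loop (not the paper's code).
import subprocess
from typing import Callable, List


def build_prompt(source_snippet: str, n_cases: int = 5) -> str:
    """Embed the target's context/structure in the prompt (the prompt engineering step)."""
    return (
        "You are generating fuzzing inputs.\n"
        f"Target function source:\n{source_snippet}\n\n"
        f"Return {n_cases} inputs, one per line, that exercise edge cases "
        "such as length limits, invalid encodings, and boundary integers."
    )


def crashed(binary: str, test_case: str) -> bool:
    """Run the target on one input; a negative return code means it was killed by a signal."""
    try:
        proc = subprocess.run(
            [binary], input=test_case.encode(), capture_output=True, timeout=5
        )
    except subprocess.TimeoutExpired:
        return False  # treat hangs separately from crashes in this sketch
    return proc.returncode < 0  # e.g. SIGSEGV / SIGABRT


def fuzz(binary: str, source_snippet: str,
         llm: Callable[[str], str], rounds: int = 10) -> List[str]:
    """LLM-guided fuzzing loop: ask for candidate inputs, keep the crashing ones."""
    crashes = []
    for _ in range(rounds):
        reply = llm(build_prompt(source_snippet))
        for case in reply.splitlines():
            if case and crashed(binary, case):
                crashes.append(case)
    return crashes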

Cite this article
[IEEE Style]
김연진, 이일구, 이연지, "FuzzGPT: LLM Prompt Engineering-Based Fuzzing Technique for Test Case Optimization," Journal of The Korea Institute of Information Security and Cryptology, vol. 34, no. 6, pp. 1517-1525, 2024. DOI: 10.13089/JKIISC.2024.34.6.1517.

[ACM Style]
김연진, 이일구, and 이연지. 2024. FuzzGPT: LLM Prompt Engineering-Based Fuzzing Technique for Test Case Optimization. Journal of The Korea Institute of Information Security and Cryptology, 34, 6, (2024), 1517-1525. DOI: 10.13089/JKIISC.2024.34.6.1517.