The Advances in Applied Intelligence Research (AAIRJ) stands as an international, peer-reviewed, open-access journal committed to facilitating the exchange of high-quality research outcomes across all facets of Computer Science, Computer Engineering, and Information Technology. The determination of whether a paper is complete, and novel ultimately lies with the reviewers and editors on a case-by-case basis. In general, a paper is expected to encompass a compelling motivational discussion, clearly articulate the relevance of the research, elucidate what is novel and forecast the scientific impact of the work, present all pertinent proofs and/or experimental data, and engage in a thorough discussion of connections with the existing literature.

AAIRJ aspires to serve as a leading resource, facilitating researchers and professionals worldwide to foster, disseminate, and engage in discussions on key research issues and advancements in the realm of information processing systems and related fields.

First decision is provided to authors approximately 21 days after submission; acceptance to publication is undertaken in 2-3 days.

Latest Publication   (Vol. 2, No. 2, Oct.  2025)

Whole is Better: Examination on the Gold Training Data for Automatic Post-Editing 
Hyeonseok Moon  Sugyeong Eo  Jaehyung Seo  Chanjun Park
Automatic Post-Editing (APE) research field aims to automatically rectifying errors in machine translation outputs with minimizing human intervention. The implementation of this requires a triplet dataset composed of the source sentence (src), the translated sentence (mt), and the correct-ed version of the translated text (pe). Emphasizing the challenges associated with data generation, numerous studies aiming at data augmentation have arisen. However, these studies predomi-nantly utilize human-curated gold training data without sufficient investigation. In this study, we raise doubts about this trend and point out that even within gold data, there are unnecessary data for training. Our motivation stems from the nature of the APE task, which involves both the need to replace all tokens in the machine translation and cases where the machine translation is al-ready perfect and should not be revised. We define these cases as extreme cases and verify the ef-fects that can be obtained by filtering each of them. We demonstrate that even with gold data, fil-tering out these extreme cases leads to the considerable performance improvement, more than 5 BLEU score in some cases. We conducted experiments using officially released, human-curated training data from WMT20, WMT21, and WMT22 and observed a common phenomenon across all datasets.
Unsafe Tools Activation for Translation Tasks: When Asking "Translate this ..." Can Corrupt Your Computer 
Mike Luu
Large Language Models (LLMs) equipped with tool-calling capabilities present new security vulnerabilities when performing seemingly benign language processing tasks. This paper demonstrates how translation, summarization, and explanation requests containing malicious foreign-language instructions can trigger un-safe tool activations in LLMs. Using a benchmark dataset of 150 unsafe instructions across Korean, Japanese, and Vietnamese, we evalu- ate tool activation rates across different task types. Our findings reveal that translation tasks achieve the highest vulnerability with 86.67% tool activation rate, followed by summarization (86.00%) and explanation (69.33%). Surprisingly, prompt engineering safeguards ("make sure to only translate the content") prove highly effective for explanation tasks, reducing activation rates from 69.33% to 0.00%, while remaining less effective for translation and summarization tasks. This research highlights security considerations for deploying tool-enabled LLMs in multilingual environments and demonstrates varying effectiveness of mitigation strategies across different task types. The benchmark dataset is available at Hugging Face Datasets and our experimental code is available at GitHub.