AI integration in cybersecurity software: Threat detection and response
Abstract
The rapid digitization of critical infrastructure has significantly increased the complexity and frequency of cybersecurity threats. Traditional threat detection and response mechanisms are often insufficient to address evolving cyberattacks in real time. This meta-analysis evaluates how artificial intelligence (AI) has been integrated into cybersecurity tools, particularly for threat detection and response, and assesses the effectiveness of various AI techniques across application domains. A systematic review was conducted across the IEEE, Scopus, ACM, and PubMed databases, covering publications from 2015 to 2024. Of 400 initially screened studies, 150 high-quality articles met the PRISMA inclusion criteria. The selected studies were categorized by AI technique (machine learning (ML), deep learning (DL), natural language processing (NLP), and reinforcement learning (RL)) and by application area, including malware detection, intrusion detection systems (IDS), anomaly detection, phishing prevention, and automated incident response. Statistical synthesis revealed that ML-based IDS, particularly those using Random Forest and Support Vector Machine (SVM) models, improved detection accuracy by 17–35% over traditional systems. DL architectures, especially Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, were effective in analyzing network traffic and behavioral anomalies. NLP techniques enhanced phishing detection and log analysis, while RL approaches enabled adaptive incident response and automated defense mechanisms. Overall, AI integration reduced response times by up to 45% and substantially improved both threat detection accuracy and response efficiency. However, challenges such as data imbalance, limited model explainability, vulnerability to adversarial attacks, and high computational demands persist. The study recommends the development of interpretable AI models, hybrid systems, and standardized datasets and evaluation metrics to advance future research and practical implementation.
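To make the ML-based IDS findings concrete, the sketch below shows a minimal Random Forest traffic classifier in Python. It is illustrative only and not drawn from any of the reviewed studies: the synthetic, imbalanced data generated with scikit-learn's make_classification stands in for the flow-level features that benchmark IDS datasets (for example, NSL-KDD or CICIDS2017) would provide, and all parameter choices are assumptions.

# Minimal sketch of a Random Forest-based intrusion detection classifier.
# Synthetic features stand in for flow-level attributes (packet counts,
# byte counts, duration, flags) that real IDS datasets would supply.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Generate an imbalanced binary problem (benign vs. malicious traffic);
# the 90/10 split mimics the class imbalance noted in the abstract.
X, y = make_classification(
    n_samples=10_000,
    n_features=20,
    n_informative=12,
    weights=[0.9, 0.1],
    random_state=42,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Random Forest is one of the ML models the review found most effective for IDS.
clf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=42
)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "attack"]))

The class_weight="balanced" setting and the stratified split reflect the data-imbalance challenge highlighted in the review; a real deployment would additionally require feature extraction from captured traffic and evaluation against adversarial and concept-drift scenarios.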
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.