


Reinforcement Learning for Continuous Cyber Threat Detection Rule Improvement


Authors : Nabeela Temitayo Adebola; Williams Ezebuilo Eze; Kamoru Emmanuel Umoru; Jamiu Akande; Nuhu Ezra

Volume/Issue : Volume 11 - 2026, Issue 3 - March


Google Scholar : https://tinyurl.com/ypajf5jx

Scribd : https://tinyurl.com/4y267hfv

DOI : https://doi.org/10.38124/ijisrt/26mar324



Abstract : Security Information and Event Management (SIEM) systems remain at the core of enterprise threat monitoring, but their rule-based detection logic is largely static and requires continual manual tuning. As cyber-attacks grow more sophisticated and operating environments more complex, the accuracy of static rules degrades, and both false positives and false negatives rise significantly. Recent studies show that adaptive learning methods can improve the accuracy of anomaly-based detection in dynamic environments. Reinforcement learning is an approach in which an agent learns through interaction with its environment and improves its decision-making over successive iterations. This research proposes a reinforcement learning-based framework for the continuous improvement of cyber threat detection rules in SIEM systems. A reinforcement learning agent learns from alert outcomes, feedback from the Security Operations Center, and available threat intelligence to adjust rule thresholds and correlation values in real time. The proposed approach is evaluated on benchmark intrusion datasets and compared with static rule-based systems, demonstrating improved accuracy and a reduction in false positives.

Keywords : Reinforcement Learning, SIEM, Continuous Monitoring, False Positives, SOC Automation, Adaptive Cyber Defense.
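The feedback loop described in the abstract can be sketched in miniature: an agent observes the current rule threshold, adjusts it, and receives a reward derived from analyst dispositions of the resulting alerts. The following is an illustrative tabular Q-learning sketch, not the paper's implementation; the threshold levels, the `ThresholdTuner` class, and the `analyst_feedback` reward function are all assumptions made for the example.

```python
import random

# Illustrative sketch: a tabular Q-learning agent that tunes a single SIEM
# rule threshold from analyst feedback. States are discrete threshold
# levels, actions lower/keep/raise the level, and the reward stands in for
# SOC triage outcomes (positive when alerts are confirmed malicious,
# negative when analysts mark them false positives).

LEVELS = [50, 60, 70, 80, 90]   # candidate alert thresholds (assumed)
ACTIONS = [-1, 0, 1]            # lower / keep / raise the level index

class ThresholdTuner:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        self.q = {(s, a): 0.0 for s in range(len(LEVELS)) for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.state = len(LEVELS) // 2   # start mid-range
        self.rng = random.Random(seed)

    def choose(self):
        # epsilon-greedy action selection over the current threshold level
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(self.state, a)])

    def learn(self, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        key = (self.state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
        self.state = next_state

def analyst_feedback(threshold):
    # Stand-in for SOC feedback: assume 80 is the ideal threshold, so
    # settings far from it accumulate false positives/negatives.
    return 1.0 if threshold == 80 else -abs(threshold - 80) / 10.0

tuner = ThresholdTuner()
for _ in range(5000):
    a = tuner.choose()
    nxt = min(max(tuner.state + a, 0), len(LEVELS) - 1)  # clamp to valid levels
    tuner.learn(a, analyst_feedback(LEVELS[nxt]), nxt)
```

With enough feedback the greedy policy settles on the threshold that maximizes confirmed-alert reward; in a real deployment the reward signal would come from analyst dispositions and threat-intelligence matches rather than a fixed function.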

References :

  1. G. Apruzzese, M. Colajanni, L. Ferretti, and M. Marchetti, “Addressing adversarial drift in intrusion detection systems,” IEEE Transactions on Network and Service Management, vol. 18, no. 3, pp. 2617–2631, 2021.
  2. A. S. Aref, H. S. Hamza, and M. A. Hammad, “Reinforcement learning-based intrusion detection: A survey,” IEEE Access, vol. 8, pp. 184379–184394, 2020.
  3. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA, USA: MIT Press, 2018.
  4. Y. Lin, X. Liu, and J. Zhang, “Deep reinforcement learning for cybersecurity defense: A survey,” IEEE Communications Surveys & Tutorials, vol. 23, no. 2, pp. 1016–1043, 2021.
  5. K. Scarfone and P. Mell, “Guide to intrusion detection and prevention systems (IDPS),” NIST Special Publication 800-94, 2007.
  6. W. Wang, M. Zhu, J. Wang, X. Zeng, and Z. Yang, “End-to-end encrypted traffic classification with one-dimensional convolution neural networks,” IEEE Access, vol. 5, pp. 21985–21990, 2017.
  7. Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
  8. V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
  9. M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. Hoboken, NJ, USA: Wiley, 1994.
  10. C. Gates and C. Taylor, “Challenging the anomaly detection paradigm,” in Proc. ACM Workshop on New Security Paradigms, 2006.
  11. J. Gama et al., “A survey on concept drift adaptation,” ACM Computing Surveys, vol. 46, no. 4, 2014.
  12. S. Minku and X. Yao, “DNN ensembles for dealing with concept drift,” IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 4, pp. 619–633, 2012.
  13. A. Thakkar and R. Lohiya, “A review of the advancement in intrusion detection datasets,” Procedia Computer Science, vol. 167, pp. 636–645, 2020.
