Robustness and Adversarial Resilience of Actuarial AI/ML Models in the Face of Evolving Threats


Authors : Niha Malali; Sita Rama Praveen Madugula

Volume/Issue : Volume 10 - 2025, Issue 3 - March


Google Scholar : https://tinyurl.com/yy6tf3v6

Scribd : https://tinyurl.com/mwjyz5st

DOI : https://doi.org/10.38124/ijisrt/25mar1287



Abstract : The adoption of artificial intelligence (AI) and machine learning (ML) in actuarial science has transformed predictive modeling, risk assessment, and data-driven financial decision-making. As adoption grows, however, security threats such as data poisoning, evasion techniques, and model inversion attacks pose significant risks to actuarial applications. Exploiting these weaknesses can lead to misjudged risks, distorted pricing, and regulatory violations, undermining the reliability of actuarial modeling outcomes. This paper examines the robustness and adversarial resilience of AI/ML models in actuarial science, identifying the main threat vectors and assessing existing defense mechanisms, chiefly adversarial training, anomaly detection, and robust feature engineering. It also reviews the regulatory frameworks and ethical considerations that underpin trustworthy AI-driven actuarial systems. Experimental results evaluate the effectiveness of various defenses against adversarial threats to actuarial AI models. The research concludes that security measures for actuarial AI must evolve continuously to keep systems reliable and resistant to both current and emerging threats.
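Among the defenses the abstract names, adversarial training is the most concrete. The following minimal sketch is an illustration of the general technique, not the paper's experimental setup: it mounts an FGSM-style evasion attack on a toy logistic-regression risk classifier (all data and variable names are invented for illustration), then hardens the model by training on the perturbed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policyholder" data: two standardized features, binary high-risk label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, X, y):
    # Gradient of the average logistic loss w.r.t. the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def fgsm(w, X, y, eps):
    # Fast Gradient Sign Method: move each input in the direction that
    # increases its loss, bounded by eps in the L-infinity norm.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # dLoss/dX for a linear model
    return X + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.3, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Adversarial training: fit against the worst-case perturbed inputs.
        Xt = fgsm(w, X, y, eps) if adversarial else X
        w -= lr * grad_w(w, Xt, y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

w_plain = train(X, y)
w_robust = train(X, y, adversarial=True)
X_adv = fgsm(w_plain, X, y, eps=0.3)

print("clean accuracy, plain model:     ", accuracy(w_plain, X, y))
print("adversarial accuracy, plain model:", accuracy(w_plain, X_adv, y))
print("adversarial accuracy, robust model:",
      accuracy(w_robust, fgsm(w_robust, X, y, 0.3), y))
```

For a linear model the FGSM perturbation shrinks every example's margin by a fixed amount, so adversarial accuracy can only fall relative to clean accuracy; training against the perturbed inputs counteracts that shrinkage.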

Keywords : Artificial Intelligence, Machine Learning, Actuarial Science, Risk Assessment, Predictive Modeling, Robustness, Adversarial Attacks.
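The abstract also lists anomaly detection as a defense against data poisoning. A minimal sketch of that idea (illustrative assumptions only; the data, threshold, and variable names are not from the paper's experiments) flags injected records using robust median/MAD statistics, so that the poison itself cannot shift the estimate of what is "normal":

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean portfolio features (e.g. standardized claim frequency and severity).
clean = rng.normal(size=(500, 2))
# A small poisoned batch: extreme records injected to skew a fitted model.
poison = rng.normal(loc=6.0, size=(10, 2))
data = np.vstack([clean, poison])

def robust_flags(X, threshold=4.0):
    # Flag rows far from the robust center. Median/MAD are used instead of
    # mean/std so a handful of poisoned rows cannot drag the center toward
    # themselves and mask the attack.
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) * 1.4826  # MAD -> std estimate
    z = np.abs(X - med) / mad
    return np.any(z > threshold, axis=1)

flags = robust_flags(data)
print("flagged:", int(flags.sum()), "of", len(data), "records")
print("poisoned rows caught:", int(flags[-10:].sum()), "of 10")
```

Flagged records would then be quarantined for review before the actuarial model is retrained, which is the usual operational pairing for this kind of filter.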


