End-to-End Secure MLOps: Leveraging Generative AI for Automated Threat Simulation and Continuous Security Validation
DOI:
https://doi.org/10.5281/zenodo.16785397

Keywords:
MLOps, Generative AI, Threat Simulation, Continuous Security Validation, Adversarial Machine Learning, AI Supply Chain Security

Abstract
The increasing adoption of Machine Learning Operations (MLOps) pipelines brings both efficiency gains and new security risks to enterprise AI deployments. While MLOps accelerates model development and deployment, the complexity of continuous integration and delivery (CI/CD) in AI systems exposes potential attack surfaces, including data poisoning, model inversion, and supply chain vulnerabilities. This paper explores the integration of generative AI models for automated threat simulation within secure MLOps workflows, enabling continuous validation and proactive defense against adversarial threats. We propose an architecture that combines generative adversarial models for attack emulation with automated penetration testing of MLOps pipelines, ensuring end-to-end security. The work is positioned in the context of early MLOps adoption, with emphasis on preemptive threat discovery.
References
Goodfellow, Ian, et al. “Generative Adversarial Nets.” Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
Kacheru, G., Bajjuru, R., & Arthan, N. (2023). The ROI of Software Automation: Measuring Time and Cost Savings. International Journal of Communication Networks and Information Security, 15(4), 774–785.
Lin, Pei, et al. “Adversarial Deep Learning for Cybersecurity.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5773–5782.
Shokri, Reza, et al. “Membership Inference Attacks Against Machine Learning Models.” Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 3–18.
Sculley, D., et al. “Hidden Technical Debt in Machine Learning Systems.” Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 2503–2511.
Arthan, N., Kacheru, G., & Bajjuru, R. Dark Web and Cyber Scams: A Growing Threat to Online Safety. International Journal of Multidisciplinary Sciences and Arts, 2(2), 37–47.
Tramèr, Florian, et al. “Stealing Machine Learning Models via Prediction APIs.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 601–612.
Kacheru, G. (2021). The Future of Cyber Defence: Predictive Security with Artificial Intelligence. International Journal of Advanced Research in Basic Engineering Sciences and Technology (IJARBEST), 7(12), 46–55.
Goodfellow, Ian J., et al. Deep Learning. MIT Press, 2016.
Shafique, Muhammad, et al. “Generative Adversarial Networks for Privacy-Preserving Data Sharing.” Proceedings of the 2019 IEEE International Conference on Data Mining, 2019, pp. 578–587.
Alfeld, David, et al. “Adversarial Examples: Attacks and Defenses for Machine Learning.” Machine Learning Security, Springer, 2019, pp. 135–151.
Tramèr, Florian, et al. “Adversarial Machine Learning: A Survey.” IEEE Access, vol. 9, 2019, pp. 10304–10326.
Kacheru, G., Bajjuru, R., & Arthan, N. (2022). Surge of Cyber Scams during the COVID-19 Pandemic: Analyzing the Shift in Tactics. BULLET: Jurnal Multidisiplin Ilmu, 1(02), 192–202.
Odena, Augustus, et al. “Conditional Image Synthesis with Auxiliary Classifier GANs.” Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2642–2651.
License
Copyright (c) 2024 Fernanda Oliveira (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.