Current Trends in Engineering Science
[ ISSN : 2833-356X ]

Zero-Trust Architecture (ZTA): Designing an AI-Powered Cloud Security Framework for LLMs’ Black Box Problems

Review Article
Volume 4 - Issue 2 | Article DOI : 10.54026/CTES/1058

Bibhu Dash*

School of Computer and Information Sciences, University of the Cumberlands, USA

Corresponding Author

Bibhu Dash, School of Computer and Information Sciences, University of the Cumberlands, USA


Keywords

Zero Trust; LLM; Black Box; AI-Powered Framework; PDP; IPP; GDPR; CCPA

Received : February 12, 2024
Published : March 12, 2024


Abstract

As a result of the rapid emergence of AI and cloud computing, businesses are becoming more interested in developing and testing Large Language Models (LLMs) in their own environments to support decision-making and growth. Here is the dilemma, though: to what extent can you trust these models and the data they were trained on? We do not know the full feature list of an LLM, which presents the first obstacle when discussing trust and the reasons why the default posture should be zero trust. Although it may seem a bit extreme, this position is accurate for two reasons. With today's GenAI models, the prevailing mindset is that the more multimodal they are and the more capabilities they have, the better. This way of thinking is fine for exploring and confirming whether GenAI can address a business problem, but it is a surefire way to run into trouble when moving to production in an organizational setting. A zero-trust architecture (ZTA) is an enterprise cybersecurity architecture built on zero-trust principles and intended to prevent data breaches, enhance privacy, and restrict internal lateral movement. This article discusses ZTA, its logical components, probable deployment scenarios, AI rules, and threats and limitations in order to provide a detailed understanding of why enterprises must adopt a ZTA framework in a cloud-based environment for AI model deployment.