Bringing Confidential Computing to AI Factories: How Fortanix, HPE, and NVIDIA Enable Trusted and Secure AI

Kristina Avrionova
Mar 15, 2026
3 min read

There is a new flavor of data center: the AI factory. AI factories are high-performance environments designed to deploy and run enterprise AI securely, powering the next level of innovation and operations.

With AI factories, organizations have greater control over training, fine-tuning, and running their AI at scale. While they are especially valuable for organizations operating under strict regulatory, sovereignty, and compliance requirements, AI factories are quickly becoming the de facto operating model for enterprise AI. 

Although this modus operandi may have a new name, the underlying challenge remains the same: protecting sensitive enterprise data. As history has shown with every major technological breakthrough, solving one problem often introduces a new one.

And the challenge du jour is enabling the secure distribution of frontier AI models on premises, so they can run on sensitive enterprise data that must reside inside AI factories.

Why Is Securing AI Workloads On-prem a Challenge?

Traditional security approaches protect data at rest and in transit, but once data is loaded into memory for computation, it becomes vulnerable. The same holds for the AI model: during inference, the workload is processed in unencrypted form, leaving it exposed and vulnerable.

The risk is significant: if sensitive enterprise data is breached, organizations face severe regulatory penalties and financial loss. If proprietary model weights and parameters are stolen, years of research and millions of dollars in investment are gone.

Therefore, security today is not just about protecting AI workloads at rest and in transit. Security must also extend to data and AI models while they are actively being used. 

Fortanix and HPE Power Secure AI Factories with Confidential Computing 

To solve this challenge, HPE and Fortanix are bringing confidential computing to AI factories, creating a trusted and secure architecture that protects data, models, and workloads throughout the entire AI lifecycle.
 
Confidential Computing protects the AI model while in use, that is, the moment it is actively processed by CPUs and GPUs. Hardware-protected environments, also known as Trusted Execution Environments (TEEs), ensure that: 

  • Data and AI model weights and parameters remain encrypted in memory even during computation 
  • AI workloads are isolated from administrators, operating systems, and other software layers 

A Secure AI Stack

At the foundation of the secure AI stack is HPE ProLiant Compute DL380a Gen12, designed for accelerator-optimized AI workloads. Built with HPE iLO 7 and silicon root of trust, the platform verifies system integrity during boot and continuously monitors firmware for tampering. 

NVIDIA’s Blackwell Confidential Computing extends hardware security into GPU-accelerated AI workloads. Through cryptographic attestation, GPUs verify that they are authentic and operating in a secure execution mode before AI workloads begin. This ensures that:

  • AI workloads run only on trusted accelerators 
  • GPU infrastructure cannot be spoofed or compromised 
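Conceptually, this kind of attestation check reduces to: verify a signed evidence blob from the device, then confirm the claims inside it match a known-good state before scheduling work. The sketch below is a simplified, hypothetical illustration, not NVIDIA's actual protocol: the `TRUSTED_MEASUREMENTS` digests, the JSON evidence format, and the HMAC standing in for the hardware's asymmetric device-identity signature are all assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical "golden" measurement digests for a known-good GPU stack.
# A real deployment would obtain reference values from the vendor.
TRUSTED_MEASUREMENTS = {"a1b2c3"}

def verify_gpu_evidence(evidence: bytes, signature: bytes, device_key: bytes) -> bool:
    """Return True only if evidence is authentic and matches a trusted state.

    Real GPU attestation uses an asymmetric identity key fused into the
    hardware; an HMAC stands in for that signature in this sketch.
    """
    expected = hmac.new(device_key, evidence, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # evidence was not produced by the claimed device
    claims = json.loads(evidence)
    # Schedule workloads only if confidential-compute mode is on and the
    # measured firmware/driver state matches a known-good value.
    return claims.get("cc_mode") == "on" and claims.get("measurement") in TRUSTED_MEASUREMENTS

# Example: the device signs its claims; the verifier checks them.
key = b"device-secret"
good = json.dumps({"cc_mode": "on", "measurement": "a1b2c3"}).encode()
sig = hmac.new(key, good, hashlib.sha256).digest()
print(verify_gpu_evidence(good, sig, key))   # True: trusted accelerator
print(verify_gpu_evidence(good, b"x", key))  # False: spoofed signature
```

The two failure paths mirror the two bullets above: a bad signature means the device is spoofed, and a mismatched measurement means the GPU stack has been tampered with.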

Lastly, Fortanix Confidential AI ensures that workloads run only in trusted environments. 

  • Fortanix Confidential Computing Manager (CCM) provides a single control plane to manage the lifecycle of trusted execution environments, verify system integrity for both CPU and GPU, and signal secure key release 
  • Fortanix Data Security Manager (DSM) controls encryption keys, enforces security policies, and securely releases a key only after attestation verification is complete.
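The interplay between the two amounts to attestation-gated key release: the key manager holds the decryption key and hands it out only when presented with attestation evidence that passes verification. The following is a minimal, hypothetical sketch of that pattern; the class name and the `verifier` callback are illustrative, not the DSM or CCM API.

```python
class AttestationGatedKeyStore:
    """Hypothetical sketch of attestation-gated key release (not the DSM API)."""

    def __init__(self, keys: dict, verifier):
        self._keys = keys          # key_id -> key bytes, held inside the manager
        self._verifier = verifier  # callable(evidence) -> bool, e.g. a CCM check

    def release(self, key_id: str, evidence: bytes) -> bytes:
        # The key leaves the store only after the environment proves trustworthy.
        if not self._verifier(evidence):
            raise PermissionError("attestation failed: key withheld")
        return self._keys[key_id]

# Example: only the evidence blob b"measured-ok" unlocks the model key.
store = AttestationGatedKeyStore(
    keys={"model-dek": b"\x00" * 32},
    verifier=lambda evidence: evidence == b"measured-ok",
)
print(len(store.release("model-dek", b"measured-ok")))  # 32
```

The design point is that the workload never holds the key up front: a compromised or unattested environment simply never receives it.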

Related read: https://community.hpe.com/t5/ai-unlocked/making-ai-real-hpe-speeds-time-to-secure-agentic-value-with-a/ba-p/7263028 

Secure and Trusted Enterprise AI 

Confidential Computing is quickly becoming a strategic requirement for enterprises adopting AI at scale. It allows organizations to safely run advanced AI workloads, including LLM inference, fine-tuning, and RAG, without exposing sensitive information or proprietary models. Meanwhile, AI labs that have spent years and millions developing proprietary frontier models can expand monetization knowing their business-critical IP is protected.

The CIO conversation can now move from “Can we safely use AI?” to “How fast can we operationalize AI across the business?” Leveraging HPE’s hardware-rooted security, NVIDIA Confidential Computing, and Fortanix Confidential AI, organizations can build an infrastructure that is simultaneously high performance, verifiable, and secure.
