The future of decentralized AI infrastructure just got a whole lot more real.
At OpenGradient, we've always believed that AI inference should be verifiable, private, and open, not locked behind centralized gatekeepers. Today, we're excited to share a major upgrade to our infrastructure that puts those principles into practice: x402-native TEE inference with on-chain verification.
This update brings together two powerful technologies, Trusted Execution Environments (TEEs) and the x402 payment protocol, into a single, seamless system. Here's what that means and why it matters.
The Problem We're Solving
Running AI workloads in the cloud has always required a degree of trust: trust that the provider isn't snooping on your data, trust that the inference actually ran as advertised, and trust that the payment middleware sitting between you and compute isn't a liability. For the agentic AI economy to truly flourish, that trust needs to be verified, not assumed.
Our x402 upgrade addresses this head-on.
What's New: A Breakdown
1. A Blockchain Registry of Verified TEE Instances
We've deployed a decentralized, on-chain registry of TEE instances, each one cryptographically verified. TEEs (Trusted Execution Environments) are isolated compute environments where code runs in a sealed enclave, protected from outside interference, even from the host machine's operating system. We use AWS Nitro Enclaves, which generate attestation documents signed by AWS as a certificate authority, giving users a cryptographic guarantee that the software running inside the enclave is exactly what it claims to be.
What this means practically:
- Users can browse a decentralized registry of trusted, verified TEE nodes and choose where their workload runs, whether it's an LLM chat session, a completion request, or a complex agentic process.
- TEE node providers get compensated automatically when their instances serve inference requests, creating a permissionless compute marketplace.
- Smart contracts handle the validation of TEE attestation documents, ensuring only compliant, expected software can participate in the network.
This is a meaningful step toward a world where AI compute is as open and verifiable as any other on-chain resource.
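To make the registry flow concrete, here is a minimal sketch of the client-side selection step. Real Nitro attestation documents are CBOR-encoded, COSE-signed structures validated against the AWS certificate chain; this sketch skips that and simply compares an attested image measurement (PCR0) against an expected value. All names (`TeeNode`, `EXPECTED_PCR0`, the endpoints) are illustrative, not the actual registry schema.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class TeeNode:
    endpoint: str
    pcr0: str  # attested enclave image measurement (PCR0 in Nitro attestations)

# Illustrative "expected" measurement; in practice this comes from the
# published enclave image, and the registry contract enforces the match.
EXPECTED_PCR0 = hashlib.sha384(b"opengradient-enclave-image").hexdigest()

def is_verified(node: TeeNode, expected: str = EXPECTED_PCR0) -> bool:
    """Accept a node only if its attested measurement matches the expected image."""
    return node.pcr0 == expected

registry = [
    TeeNode("https://node-a.example", EXPECTED_PCR0),
    TeeNode("https://node-b.example", "0" * 96),  # unexpected software -> rejected
]
trusted = [n.endpoint for n in registry if is_verified(n)]
```

The same comparison is what the on-chain validation enforces for providers: a node whose attestation does not match the expected software simply never enters the trusted set.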
2. x402 Built Directly Into Every TEE Instance
Perhaps the most significant architectural change: x402 is now embedded directly inside every TEE instance. There is no centralized middleware, no payment proxy sitting between a user's request and the enclave doing the work.
x402 is an open payment protocol that revives the HTTP 402 "Payment Required" status code to enable instant, internet-native payments, purpose-built for APIs, AI agents, and machine-to-machine transactions. Instead of subscriptions or API keys, clients simply pay per request on Base testnet, and inference settlement and verification happen on the OpenGradient testnet.
By integrating x402 natively at the TEE level, OpenGradient eliminates an entire class of trust assumptions. Your inference request routes directly to a verified enclave, full stop.
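The request/challenge loop above can be sketched from the client's side. This is an illustrative, offline simulation of the x402 handshake as the protocol describes it: the server answers with 402 and a list of accepted payment requirements, and the client retries with a base64-encoded JSON payment in the `X-PAYMENT` header. Field names follow the published x402 schema to the best of our understanding; the wallet-signing step is stubbed with a lambda.

```python
import base64
import json

def build_payment_header(challenge_body: str, sign) -> str:
    """On an HTTP 402 challenge, pick the first accepted payment scheme,
    have the wallet sign an authorization, and encode the result as the
    X-PAYMENT header value (base64-encoded JSON)."""
    requirements = json.loads(challenge_body)["accepts"][0]
    payment = {
        "x402Version": 1,
        "scheme": requirements["scheme"],
        "network": requirements["network"],
        "payload": sign(requirements),  # wallet-signed authorization
    }
    return base64.b64encode(json.dumps(payment).encode()).decode()

# Simulated 402 response body from a TEE endpoint
challenge = json.dumps({"accepts": [{
    "scheme": "exact", "network": "base-sepolia",
    "maxAmountRequired": "1000", "payTo": "0xEnclave",
}]})
header = build_payment_header(
    challenge,
    lambda req: {"to": req["payTo"], "value": req["maxAmountRequired"]},
)
```

Because this exchange terminates inside the enclave itself, the payment authorization never passes through any intermediary the user has to trust.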
3. Async-Friendly Payment Settlement
One of the practical challenges with pay-per-inference models is payment latency blocking compute. We've solved this with an x402 settlement layer that allows users to pre-fund an account with tokens. Inference requests draw from this balance, meaning async workloads aren't blocked waiting for on-chain settlement to complete before computation begins.
This is critical for agentic use cases where an AI agent might be orchestrating dozens of inference calls in parallel. The economics work without the bottlenecks.
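The prepaid model reduces to a simple ledger on the hot path. The sketch below (class and method names are illustrative, not the SDK's API) shows why parallel calls never block: each request debits an already-funded balance synchronously, and on-chain settlement happens out of band.

```python
class PrepaidBalance:
    """Illustrative prepaid ledger: inference calls debit a pre-funded
    balance immediately, so compute never waits on on-chain settlement."""

    def __init__(self, tokens: int):
        self.tokens = tokens

    def debit(self, cost: int) -> bool:
        if cost > self.tokens:
            return False  # insufficient prepaid funds; top up via x402
        self.tokens -= cost
        return True

acct = PrepaidBalance(tokens=100)
# An agent fanning out several inference calls against one prepaid balance:
outcomes = [acct.debit(30) for _ in range(4)]
```

The first three calls succeed instantly; the fourth fails fast instead of stalling, which is exactly the behavior an orchestrating agent can handle gracefully.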
4. Enclave-Terminated TLS Connections
Security doesn't stop at the enclave boundary. Our updated TEE software now supports enclave-terminated TLS connections, meaning the TLS session is terminated inside the enclave itself, not at the host machine level.
The practical implication: no intermediary, including the enclave's own parent server, can intercept or decode the communication between a user and the enclave. End-to-end confidentiality is enforced at the cryptographic level, not just by policy.
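One common pattern for proving enclave-terminated TLS (the source does not specify the exact mechanism, so treat this as an assumption) is for the enclave to commit a fingerprint of its TLS certificate inside its attestation document. A client can then check that the certificate it sees on the wire is the one the enclave attested to:

```python
import hashlib

def cert_bound_to_enclave(cert_der: bytes, attested_fingerprint: str) -> bool:
    """Illustrative check: if the enclave's attestation commits the
    fingerprint of its TLS certificate, a client can confirm the TLS
    session really terminates inside the enclave, not at the host."""
    return hashlib.sha256(cert_der).hexdigest() == attested_fingerprint

enclave_cert = b"<DER-encoded certificate generated inside the enclave>"
fingerprint = hashlib.sha256(enclave_cert).hexdigest()  # value from attestation
```

If the host tried to terminate TLS itself with a different certificate, the fingerprint check would fail and the client would refuse the session.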
5. On-Chain Verifiable Inference Outputs
Each TEE instance now generates its own enclave signing key. After an inference runs, the output is signed and a hash is persisted on-chain. This allows users to independently verify that their inference actually ran and was recorded, without ever exposing the actual content of the result.
Only the user can verify the output, because verification requires having the result itself to recreate the hash. Privacy is preserved by design. This is a meaningful capability for any application where auditability matters, such as compliance, research, and enterprise workflows, without compromising confidentiality.
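The user-side verification described above can be sketched in a few lines. The hash function is an assumption (SHA-256 here; the actual scheme also involves the enclave's signing key), and the names are illustrative:

```python
import hashlib

def verify_output(result: bytes, onchain_digest: str) -> bool:
    """Only someone who holds the plaintext result can recompute the
    digest and match it against the hash the enclave persisted on-chain.
    The chain never sees the result itself."""
    return hashlib.sha256(result).hexdigest() == onchain_digest

result = b'{"completion": "..."}'       # held privately by the user
persisted = hashlib.sha256(result).hexdigest()  # what the enclave records on-chain
```

Anyone without the plaintext sees only an opaque digest, so auditability and confidentiality coexist.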
What Do Users Need to Do?
For most users: nothing different. The OpenGradient Python SDK handles all the complexity under the hood, pulling verified nodes from the on-chain registry, routing requests appropriately, and managing payment flows. From a developer's perspective, you make normal LLM API calls with a funded wallet, and the system takes care of the rest.
Over time, users will be able to see their inference outputs persisted and verifiable on-chain, adding a new layer of transparency to every request.
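To give a feel for the developer experience, here is a sketch of the call shape, with the internal steps stubbed out. Every name here is hypothetical for illustration; consult the SDK docs for the actual API.

```python
def _select_verified_node() -> str:
    # Stub: the real SDK reads the on-chain registry and checks attestations.
    return "https://tee-node.example"

def _settle_x402(wallet_key: str, node: str, cost: int) -> None:
    # Stub: the real SDK debits the prepaid x402 balance for this request.
    pass

def _run_inference(node: str, model: str, prompt: str) -> str:
    # Stub: the real SDK sends the request over an enclave-terminated TLS session.
    return f"[{model}@{node}] response to: {prompt}"

def chat(prompt: str, *, wallet_key: str, model: str = "example-llm") -> str:
    """Illustrative call shape: the developer supplies a prompt and a funded
    wallet; node selection, payment, and verification happen internally."""
    node = _select_verified_node()
    _settle_x402(wallet_key, node, cost=1)
    return _run_inference(node, model, prompt)

reply = chat("hello", wallet_key="0xFUNDED_WALLET")
```

From the caller's perspective it is one ordinary function call; the registry lookup, payment, and routing all happen beneath it.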
For those interested in registering their own TEE nodes: that capability is on the roadmap and will become available once we've open-sourced the relevant software. Stay tuned.
Why This Matters for the Agentic AI Economy
The convergence of verifiable compute and internet-native payments is foundational infrastructure for what's coming next. AI agents that autonomously spin up compute, pay per inference, verify their own outputs, and interact with each other, all without human-in-the-loop payment flows, aren't science fiction. They're what we're building toward, right now.
OpenGradient's x402 integration positions us as native infrastructure for this economy: trustless, verifiable, and open.
What's Coming Next
- Open-source TEE software to allow permissionless node registration by the community
- On-chain inference output browsing so users can audit their own history
- Expanded node registry with performance metrics and reputation signals
Get Started
OpenGradient testnet is live. Here's how to dive in:
- Explore the SDK docs: LLM Integration Guide
- Browse example projects: OpenGradient SDK Examples on GitHub
- Explore models: hub.opengradient.ai
- Full documentation: docs.opengradient.ai