Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments
Published: Jun 30, 2025
Last Updated: Jun 30, 2025
Authors: Christoph Schnabl, Daniel Hugenroth, Bill Marino, Alastair R. Beresford
Abstract
Benchmarks are important tools for evaluating the safety and compliance of AI models at scale. However, they typically do not offer verifiable results, nor do they preserve the confidentiality of model IP and benchmark datasets. We propose Attestable Audits, which run inside Trusted Execution Environments and enable users to verify that they interacted with a compliant AI model. Our approach protects sensitive data even when the model provider and the auditor do not trust each other. This addresses verification challenges raised in recent AI governance frameworks. We build a prototype demonstrating feasibility on typical audit benchmarks against Llama-3.1.
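To make the verification flow described above concrete, here is a minimal sketch of how a user might check a signed attestation quote from the audit enclave. It assumes (purely for illustration, this is not the paper's actual protocol) that the TEE emits an Ed25519-signed quote binding hashes of the model weights, the benchmark dataset, and the audit result; all names such as `verify_quote` are hypothetical.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Inside the TEE (simulated): bind inputs and result into a quote ---
enclave_key = Ed25519PrivateKey.generate()  # stand-in for the TEE attestation key

model_hash = hashlib.sha256(b"llama-3.1 weights").digest()       # placeholder bytes
dataset_hash = hashlib.sha256(b"audit benchmark items").digest()
result_hash = hashlib.sha256(b"benchmark report: pass").digest()

quote = model_hash + dataset_hash + result_hash   # 3 x 32-byte commitments
signature = enclave_key.sign(quote)

# --- On the user's side: verify the quote against the expected model ---
def verify_quote(public_key, quote, signature, expected_model_hash):
    """Accept the audit only if the signature checks out and the quote
    commits to the model the user believes they interacted with."""
    try:
        public_key.verify(signature, quote)
    except InvalidSignature:
        return False
    return quote[:32] == expected_model_hash

assert verify_quote(enclave_key.public_key(), quote, signature, model_hash)
```

In a real deployment the verifier would not trust the enclave's public key directly; it would be validated via the TEE vendor's remote-attestation service, which is what keeps model IP and benchmark data confidential even between mutually distrusting parties.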