REQUESTS FOR DISCUSSION

On AI infrastructure security and AI verification.

Building AI Security & Verification Infrastructure Before We Need It

In the late 1950s, negotiations for a comprehensive nuclear test ban began. Into the late 1970s, a full ban remained blocked because neither the U.S. nor the USSR could reliably detect small underground tests. Without credible verification, each side feared the other could cheat undetected. Only in the 1990s were the political climate and verification technology mature enough for a full test ban. That technical gap led to decades of extra nuclear testing. Each new test deepened mistrust, justified larger arsenals, and made future agreements politically radioactive. The lack of verification technology didn't just delay one treaty; it entrenched a destabilizing arms race that might have been avoided.

We may be heading toward a similar inflection point with AI—but the challenge is broader than verification alone.

On the security side: As AI systems become capable of accelerating AI R&D itself, the value concentrated in model weights, training secrets, and agentic scaffolds becomes enormous. Nation-states and other well-resourced actors will have strong incentives to steal this IP. We'll need security measures that don't yet exist at the required level of robustness: tamper-evident infrastructure, hardware-enforced constraints on data movement, confidential computing environments, and systems that can monitor and constrain even sophisticated AI models attempting to exfiltrate themselves.
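
To make "tamper-evident infrastructure" concrete at the simplest level, here is a minimal hash-chained log sketch in Python: each entry commits to the digest of the previous one, so any later edit or reordering is detectable. This is illustrative only; the class name, method names, and record fields (TamperEvidentLog, weights_read, egress_blocked) are invented for the example, and a production system would root the chain in hardware-attested, append-only storage rather than process memory.

```python
# Minimal sketch of a tamper-evident, hash-chained log: each entry commits to
# the previous entry's digest, so editing or reordering past entries breaks
# the chain. All names and record fields here are illustrative assumptions.
import hashlib
import json


class TamperEvidentLog:
    def __init__(self):
        self.entries = []            # list of (record, chained digest) pairs
        self.head = b"\x00" * 32     # genesis digest

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True).encode()
        digest = hashlib.sha256(self.head + payload).hexdigest()
        self.entries.append((record, digest))
        self.head = bytes.fromhex(digest)
        return digest

    def verify(self) -> bool:
        head = b"\x00" * 32
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True).encode()
            expected = hashlib.sha256(head + payload).hexdigest()
            if expected != digest:
                return False         # an entry was modified or reordered
            head = bytes.fromhex(expected)
        return True


log = TamperEvidentLog()
log.append({"event": "weights_read", "path": "/models/ckpt-123", "bytes": 4096})
log.append({"event": "egress_blocked", "dest": "203.0.113.7"})
assert log.verify()
```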

On the verification side: International agreements between AI superpowers will require trust in each other's development and deployment practices. This means being able to verify compute usage, workload types, and compliance with agreements—none of which is possible with today's infrastructure.
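
As one hedged illustration of what verifying compute usage could involve, the sketch below has a device-side agent sign a periodic usage report and a verifier check both the signature and the claimed workload against an agreed budget. The HMAC key stands in for a hardware-protected attestation key, and every name and number (sign_report, verify_report, FLOP_CAP, the cluster ID) is assumed for the example rather than drawn from any proposal listed here.

```python
# Minimal sketch of workload attestation: an accelerator-side agent signs a
# periodic usage report with a device-held key, and a remote verifier checks
# the signature and sanity-checks the claimed usage against an agreed cap.
# The HMAC key is a stand-in for a hardware-protected attestation key; all
# names, fields, and figures are illustrative assumptions.
import hmac
import hashlib
import json

DEVICE_KEY = b"device-held-secret"        # would live in secure hardware
FLOP_CAP = 1e24                           # hypothetical agreed compute budget


def sign_report(report: dict) -> str:
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()


def verify_report(report: dict, signature: str) -> bool:
    payload = json.dumps(report, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                      # report was altered or forged
    return report["flops"] <= FLOP_CAP    # claimed usage within the agreement


report = {"cluster": "dc-07", "window": "2025-06-01",
          "workload": "inference", "flops": 3.2e21}
sig = sign_report(report)
assert verify_report(report, sig)
```

A real deployment would use asymmetric signatures from a secure element so the verifier never holds a secret, and would bind reports to attested hardware identity and monotonic counters to prevent replay.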

The problem is lead time. Security and verification systems take years to design, build, deploy, battle-test, and integrate into real infrastructure. The tools we'll need for powerful AI systems must be built and deployed before those systems arrive. Research papers and theoretical frameworks aren't enough—we need working systems that have been hardened through actual use. If we wait until the need is acute, we'll be years behind—and those years could see unchecked proliferation, destabilizing theft of AI capabilities, or coordination failures that might otherwise have been avoided.

There is a unique opportunity for founders and engineers: build and ship real security and verification infrastructure now—before the world urgently needs it—to enable both robust protection of AI systems and credible international coordination on the most important technology of our time.

#   | TITLE                                                     | AUTHOR                                 | STATE
001 | Side-Channel Leakage from LLM Inference                   | Gabriel Kulp                           | idea
002 | MoE Expert Fingerprinting                                  | Gabriel Kulp                           | idea
003 | Trained Covert Communication via Side-Channels             | Gabriel Kulp                           | idea
004 | Functional Invariants for Side-Channel Defense             | Gabriel Kulp                           | idea
005 | Network Tap for Compute Verification                       | James Petrie                           | idea
006 | Secure Enclosure (SCIF) for AI Clusters                    | James Petrie                           | idea
007 | Mutually Trusted Cluster for Log Verification              | James Petrie                           | idea
008 | Attested Logging with Existing Hardware                    | James Petrie                           | idea
009 | Zero-Maintenance Compute for Tamper Evidence               | James Petrie                           | idea
010 | Trusted Kernel-Logging IP Block                            | James Petrie                           | idea
011 | Nonconfidential Verification (Red Team/Blue Team)          | James Petrie                           | idea
012 | Analog Sensors for Compute Verification                    | James Petrie                           | idea
013 | Network Exhaustion Protocol                                | James Petrie                           | idea
014 | Memory Exhaustion Protocol                                 | James Petrie                           | idea
015 | Hardware Verification Lab                                  | James Petrie                           | idea
016 | Zero-Knowledge Proofs for LLM Inference                    | James Petrie                           | idea
017 | Inference-Only Verification Package                        | Romeo Dean                             | discussing
018 | Granular AI Workload Verification                          | Romeo Dean                             | discussing
019 | Guarantee Processor Design                                 | FlexHEG                                | idea
020 | Tamper-Evident Secure Enclosure for Accelerators           | FlexHEG                                | idea
021 | Interlock-Based Verification Architecture                  | FlexHEG                                | idea
022 | NIC Repurposing for Guarantee Processor                    | FlexHEG                                | idea
023 | Compute Graph Declaration Protocol                         | FlexHEG                                | idea
024 | Distributed FLOP Counting                                  | FlexHEG                                | idea
025 | k-of-n Guarantee Update Mechanism                          | FlexHEG                                | idea
026 | Location Verification for AI Compute                       | Amodo / Tim Fist                       | idea
027 | Training vs Inference Workload Discrimination              | Amodo                                  | idea
028 | Retrofittable Tamper Detection System                      | Amodo                                  | idea
029 | NIC-Based Bandwidth Limiting & Interconnect Verification   | Amodo                                  | idea
030 | Verification Research Testbed & Dataset                    | Amodo                                  | idea
031 | Chip Registry for AI Governance                            | Oxford Martin AI Governance Initiative | idea
032 | Offline Licensing for Compute Rationing                    | RAND Corporation                       | idea
033 | Fixed Set Cluster Size Limiting                            | RAND Corporation                       | idea
034 | License Replay Prevention                                  | RAND Corporation                       | idea
035 | Performance Counter Integrity Protection                   | RAND Corporation                       | idea
036 | Tamper-Respondent PUF Enclosures                           | RAND Corporation                       | idea
037 | Secure Chip Binning and Feature Lockout                    | RAND Corporation                       | idea
038 | Containerized Data Centers for Mobile/Military AI          | Oxford Martin AI Governance Initiative | idea
039 | Model Fingerprint Attestation                              | Oxford Martin AI Governance Initiative | idea
040 | Model Tenancy Ledger Attestation                           | Oxford Martin AI Governance Initiative | idea
041 | Device-Model Mating for AI-Enabled Weapons                 | Oxford Martin AI Governance Initiative | idea
042 | On-Device Logging for AI Weapons Verification              | Oxford Martin AI Governance Initiative | idea