REQUESTS FOR DISCUSSION
On AI infrastructure security and AI verification.
Building AI Security & Verification Infrastructure Before We Need It
In the late 1950s, negotiations for a comprehensive nuclear test ban began. Through the late 1970s, a full ban was blocked because neither the U.S. nor the USSR could reliably detect small underground tests. Without credible verification, each side feared the other could cheat undetected. Only in the 1990s were the political climate and verification technology mature enough for a full test ban. That technical gap led to decades of extra nuclear testing. Each new test deepened mistrust, justified larger arsenals, and made future agreements politically radioactive. The lack of verification technology didn't just delay one treaty; it entrenched a destabilizing arms race that might have been avoided.
We may be heading toward a similar inflection point with AI—but the challenge is broader than verification alone.
On the security side: As AI systems become capable of accelerating AI R&D itself, the value concentrated in model weights, training secrets, and agentic scaffolds becomes enormous. Nation-states and other well-resourced actors will have strong incentives to steal this IP. We'll need security measures that don't yet exist at the required level of robustness: tamper-evident infrastructure, hardware-enforced constraints on data movement, confidential computing environments, and systems that can monitor and constrain even sophisticated AI models attempting to exfiltrate themselves.
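To make "tamper-evident" concrete, here is a minimal sketch of a hash-chained audit log, the basic primitive behind tamper-evident infrastructure. Everything in it (the class name, the event fields) is illustrative rather than drawn from any existing system: each record commits to the one before it, so any retroactive edit breaks the chain and is caught on verification.

```python
import hashlib
import json


def _record_hash(prev_hash: str, payload: dict) -> str:
    """Hash the previous record's hash together with this payload."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + data).hexdigest()


class TamperEvidentLog:
    """Append-only log where each entry commits to its predecessor.

    Editing or deleting any past entry changes its hash, which breaks
    every later link -- so tampering is detectable (though not
    preventable), as long as the head hash is stored somewhere trusted.
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (prev_hash, payload, this_hash)

    def append(self, payload: dict) -> str:
        prev = self.entries[-1][2] if self.entries else self.GENESIS
        h = _record_hash(prev, payload)
        self.entries.append((prev, payload, h))
        return h  # head hash: publish or escrow this externally

    def verify(self) -> bool:
        prev = self.GENESIS
        for stored_prev, payload, h in self.entries:
            if stored_prev != prev or _record_hash(prev, payload) != h:
                return False
            prev = h
        return True


log = TamperEvidentLog()
log.append({"event": "weights_read", "actor": "job-1234"})
log.append({"event": "weights_copied", "actor": "job-1234"})
assert log.verify()

log.entries[0][1]["actor"] = "someone-else"  # retroactive tampering
assert not log.verify()  # the broken chain exposes the edit
```

A production system would anchor the head hash in hardware (e.g., a TPM) or an external transparency log; the point here is only that detecting tampering can be made cheap and automatic, which is the property the measures above need at scale.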
On the verification side: International agreements between AI superpowers will require trust in each other's development and deployment practices. This means being able to verify compute usage, workload types, and compliance with agreements—none of which is possible with today's infrastructure.
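To give a flavor of what "verify compute usage" could mean mechanically, here is a minimal sketch of a signed usage report, under the (hypothetical) assumption that accelerators or their firmware could attest to what ran on them. The device key, function names, and report fields are all made up for illustration, and stdlib HMAC stands in for a real hardware-backed signature scheme:

```python
import hashlib
import hmac
import json

# Stand-in for a per-device key that, in a real scheme, would be fused
# into the accelerator at manufacture and never leave the hardware.
DEVICE_KEY = b"hypothetical-device-root-key"


def sign_report(report: dict) -> str:
    """Device side: sign a usage report. HMAC is a stand-in for a real
    attestation signature with a verifiable certificate chain."""
    msg = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()


def verify_report(report: dict, signature: str) -> bool:
    """Verifier side: check the report came from the device key."""
    return hmac.compare_digest(sign_report(report), signature)


# A made-up usage report: which workload ran, on what, for how long.
report = {
    "device_id": "accel-00af",
    "workload_hash": hashlib.sha256(b"training-job-binary").hexdigest(),
    "chip_hours": 912.5,
    "epoch": "2025-06",
}
sig = sign_report(report)
assert verify_report(report, sig)

# Any edit to the claimed usage invalidates the signature.
report["chip_hours"] = 10.0
assert not verify_report(report, sig)
```

A deployable version would also need hardware roots of trust, a way to bind reports to the workloads that actually ran rather than self-declared labels, and aggregation that doesn't leak training secrets to the other party. None of that exists at the required robustness today, which is exactly the gap.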
The problem is lead time. Security and verification systems take years to design, build, deploy, battle-test, and integrate into real infrastructure. The tools we'll need for powerful AI systems must be built and deployed before those systems arrive. Research papers and theoretical frameworks aren't enough—we need working systems that have been hardened through actual use. If we wait until the need is acute, we'll be years behind—and those years could see unchecked proliferation, destabilizing theft of AI capabilities, or coordination failures that might otherwise have been avoided.
There is a unique opportunity for founders and engineers: build and ship real security and verification infrastructure now—before the world urgently needs it—to enable both robust protection of AI systems and credible international coordination on the most important technology of our time.