Research
My goal is to help build safe and secure AGI/ASI through (hardware) verification, benchmarking and evaluations, and governance, bridging technical rigour with real-world impact. Previously, I built everything from systems used by millions to high-throughput trading platforms.
I’ve been fortunate to work with (and be supervised by) Tobin South & Ben Bucknall (Pivotal Research), Prof. Alastair R. Beresford & Daniel Hugenroth (Cambridge), and Prof. Nic Lane & Bill Marino (Cambridge). During my undergraduate studies, I worked under Prof. Georg Carle on the security and anonymity of mix networks, and co-authored a climate-AI chapter with Prof. Isabell Welpe.
Selected Work
- AIReg-Bench: Benchmarking Language Models That Assess AI Regulation Compliance (link)
- Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments (link)
- Cannot or Should Not? Automatic Analysis of Refusal Composition in IFT/RLHF Datasets and Refusal Behavior of Black-Box LLMs (link)
- Private Group Management for Mix Networks (link)