RESEARCH AND FRONTIER
Credentialed at the edge.
Separately from client work, I lead AI alignment research at AE Studio with frontier-lab collaborators. That research keeps the systems I ship grounded in current frontier practice, not in last year's blog posts.

THE WORK
Applied AI alignment research
I lead small teams of data scientists and ML engineers on research focused on making frontier models more legible and safer to deploy. The work is applied: interventions that can be validated on real models, techniques that transfer to production systems. Recent output from these collaborations includes Anthropic's research on selective gradient masking ("Beyond Data Filtering", 2025), which acknowledges the AE Studio gradient routing team I run.
I keep the specifics of collaborator roadmaps off the public record. What clients get is a practice grounded in the current shape of the field.
WHY THIS MATTERS FOR CLIENT WORK
The field moves every quarter; I move with it
A client stack designed against last year's best practices has already started aging. The research work keeps my default patterns current.
It also means that when a client asks whether a new capability is production-ready, or whether to wait a quarter, I can answer with something better than guesswork.
COLLABORATION
Open to new research partnerships
If you are a research lab, independent researcher, or applied team working on alignment, interpretability, or evaluations and want a technical partner, reach out.
I do not publish a list of current collaborators. I do reply to serious inquiries.