We’re excited to introduce the 2025 AI cohort of Brains Fellows!
As a reminder, the Brains program is a research accelerator that helps talented scientists and technologists execute on ambitious ideas that are beyond the scope of individual academic labs, startups, or large companies. This is a special “interstitial” cohort focused on AI security and governance capabilities.
If you find any of their ideas particularly exciting or intriguing, please get in touch via LinkedIn (their names link to their profiles) or email brains@spec.tech and we’ll route you correctly.
Daniel Kang
Daniel is building the first red-line benchmark for AI agents, focused on their ability to perform complex cybersecurity attacks on real-world systems. For frontier labs, assessing their products’ offensive cybersecurity capabilities is highly complex and contrary to their commercial incentives, so no such benchmark yet exists. As a result, policymakers and AI labs are flying blind on AI agents’ ability to exploit complex cyber “kill chains,” making it difficult to generate policy recommendations or determine when responsible scaling policies should be enacted.
About Daniel
Daniel is a professor of computer science at UIUC, where he studies the progress of AI. His work includes award-winning benchmarks (CVE-Bench), widely cited standards for AI benchmarks (ABC), and work on understanding reinforcement learning for LLMs. Daniel's lab has been awarded grants from the Open Philanthropy Project, Schmidt Sciences, and Google.
Lisa Thiergart
About Lisa
Lisa is an AI security researcher who has worked on projects spanning high-security datacenters, AI interpretability, and AI technical governance. She previously led an 18-person research team at MIRI and pioneered the AI interpretability technique of Activation Steering together with Dr. Alex Turner. She holds a BSc in Computer Science from TU Munich, completed graduate research in reinforcement learning for robotics at the Georgia Institute of Technology, and earned an honors degree in Technology Management and Entrepreneurship. Lisa is a Fulbright alumna, a fellow at the Foresight Institute, a Manifund regrantor, and a mentor at the MATS AI safety fellowship.
Mayank Kejriwal
Mayank is developing methods to forecast and prevent rare, high-impact failures in AI agents operating over long time horizons in real-world environments. Today’s AI testing methods focus on the short tail (common, easily reproducible bugs), while long-tail risks remain largely invisible until it’s too late. This project seeks to shift those rare failure modes into the observable range, creating an “existential risk calculator” for advanced AI systems. The work will help surface edge-case errors in autonomous vehicles, military decision aids, AI scientists, financial trading agents, and long-lived systems managing critical infrastructure.
About Mayank
Mayank is a research scientist and professor at the University of Southern California, where he directs the Artificial Intelligence and Complex Systems group at USC’s Information Sciences Institute. His work bridges AI and human-computer interaction to tackle real-world challenges, from mapping health disparities and combating human trafficking to building research copilots that help scientists write better grants. Mayank’s research, funded by DARPA and NIH, spans 100+ peer-reviewed papers and has been featured in Fast Company, The Conversation, and the San Francisco Chronicle. He is also the lead author of an MIT Press textbook on knowledge graphs.
Melissa Carraway
Melissa is developing tools and methodologies to build better AI-enabled sociotechnical systems for high-stakes expert decision-making. By studying the kinds of decisions domain experts must make in high-information, high-stakes environments, we can refine and generalize our understanding of human-machine teaming dynamics in these scenarios. Focusing on the needs of expert users and teams will let us create systems that are more readily adoptable and safer, and that provide the right levels of insight and accountability while enabling AI to appropriately support work in high-stakes contexts.
About Melissa
Melissa Carraway is a human factors researcher, UX strategist, and systems thinker who designs mission-critical tools. With a background ranging from visual design to cognitive modeling, she brings clarity to complex systems, crafting intuitive and trustworthy interfaces for high-stakes users like the DoD and intelligence communities. Her work has influenced AI adoption strategies across government programs, reducing cognitive load and enhancing decision-making. Currently a Research Scientist at ARLIS, Melissa integrates technical insight with user advocacy, shaping the future of human-centered AI through research, design, and cross-functional collaboration.
Michael Hsu
Michael is giving human-like memory to AI by building computing architectures inspired by neuroscience. Existing memory solutions for AI are more like notebooks than brains, leading to severe cost and capability shortcomings. A machine with biomimetic memory, on the other hand, could shift the substrate of learning from fixed patterns in the model to memory structures that can be continuously updated. The result is AI that can maintain context indefinitely, learn in real time, and dynamically leverage its internal representation of the world to optimize its performance. Together, these capabilities let us better understand what the machine is thinking while supporting more robust and efficient forms of reasoning across a diverse range of tasks.
About Michael
Michael is a software engineer exploring how complex systems science and computational neuroscience can help AI think more like humans. He brings a systems perspective from designing software architectures and leading engineering teams, combined with a deep curiosity about nonlinearity, emergence, and the relationship between time and space. In the past, he was Director of Engineering at Voltus, which sought to bring balance to the North American energy grid. He also worked briefly on Google Assistant. Michael studied computer science, literature, and mathematics at McGill University, and has interdisciplinary experience across bioinformatics, biomimicry, ecology, geography, and philosophy.
Rick Goldstein
Rick is developing techniques to reverse-engineer neural networks into precise, human-readable code – not just approximate explanations, but exact algorithmic reconstructions that reveal the computational principles behind AI capabilities. This enables designing more efficient AI architectures and training recipes, verifying safety through rigorous analysis of system behavior, and extracting scientific knowledge from models trained on complex real-world data. For instance, we could train a model on biological data and then extract the cellular mechanisms it discovered. This transforms AI models from mysterious black boxes into transparent and reliable scientific tools.
About Rick
Rick is the CTO of Freestyle Research, a robotics startup developing AI models for dexterous grasping. For the past three years, he has conducted AI Safety research into how AI models reason, and he previously worked as a software engineer at Waymo, developing routing algorithms for autonomous vehicles. Additionally, he mentors early-career AI Safety researchers. He holds a B.A. in Applied Mathematics from Harvard University and an M.S. and Ph.D. in Robotics from Carnegie Mellon University.
Sonia Joseph
Sonia is reverse-engineering video and robotics models to drive unprecedented advances in both capability and security. Despite the rise of “world models” like Genie 3, Veo, Cosmos, and Sora, we still lack a foundational science of world models – we don’t yet know if they truly capture causality or physical laws. This knowledge gap carries real risks: as video models are increasingly integrated into robotics, hidden failure modes or even malicious “sleeper programs” within model weights could lead to dangerous autonomous behavior. Sonia’s research develops methods to detect and neutralize these threats while rigorously probing how these models represent the world, including their understanding of fundamental physics. Her work seeks to establish a scientific framework for interpreting, securing, and improving world models.
About Sonia
Sonia is a PhD candidate at McGill and a Visiting Researcher at Meta on the JEPA video understanding team, where she is writing her thesis on the interpretability of multimodal models and physical world models. Previously, she was a researcher at the Janelia Research Campus and the Princeton Neuroscience Institute, and worked for various AI startups in San Francisco.