Lessons from the Brains Accelerator
Using our brains to figure out how to improve how Brains can improve brains *inception sound*
In April, we wrapped up the Brains Research Accelerator’s first cohort with an idea-packed demo day. As a reminder, Brains is an accelerator that helps ambitious individuals hone the ideas, skills, and opportunities to execute on impactful research visions that are too early for a company but too big for a single academic lab. The program is designed to raise fellows’ ambitions, help them build skills and refine their ideas, and then support them as they go execute on those ideas – whether by running a program at a foundation, becoming a program manager at a government ARPA, or starting a Focused Research Organization (FRO).
As tends to be the case with the first iteration of anything, the cohort was an adventure that taught us many lessons that seem generalizable beyond our specific context. In the spirit of institutional experiments, these lessons learned seemed worth sharing.
This piece has two major parts:
A description of the actual program, broken down by different ‘components.’ (You could think of this as a “methods” section.)
Some generalized lessons we’ve taken away from the first cohort. (You could think of this as a “discussion” section.)
Part 1: What we did
Preparation
Prep for the program required recruiting mentors, fundraising, hiring a team, and of course, recruiting fellows.
Recruiting Fellows
We took a “barbell approach” to finding and recruiting fellows. One end of the barbell was a high-touch sales process: we worked our network for suggestions of people who might make good fellows, had initial qualifying conversations to plant the idea in their heads, explicitly circled back when applications opened, and followed up to encourage them to apply. The other end of the barbell was basically “being loud on the internet” — posting about the program frequently on Twitter, LinkedIn, and the Spectech blog. As part of this latter strategy, we also did several Q&A sessions, both on Zoom and in Twitter Spaces.
The vast majority of the people we thought would be good fits and encouraged to apply did in fact apply, so we were some combination of effective at persuading people to apply and well-calibrated going in about who would be a good fit for the program.
Interestingly, amazing candidates came from both strategies: of the fellows who made it to demo day, the split is exactly 50-50 (or at least as close as you can get to splitting 11 people 50-50.) That suggests that we should continue to pursue both strategies!
For the next cycle, there are two other approaches we want to try: identifying mailing lists that potential candidates might be on, and identifying events with a high concentration of potential candidates. Blasting lists is low-cost, so there isn’t much downside to casting a wide net. Events are more involved, and it’s not obvious which ones would be the most useful.
Fellow Selection and Onboarding
The application process required a written application and two rounds of interviews. We tried to make the application as non-onerous as possible while still weeding out people who weren’t serious and poking at people’s ideas: it consisted primarily of a 1-page writeup about the idea they wanted to pursue and a question about why they wanted to do the program.
For each candidate who made it through both rounds of interviews, each interviewer gave one of the following scores:
“Would argue to have this person in the cohort”
“Yes but wouldn’t fight for them”
“No but wouldn’t argue against them”
“No and would argue against them”
Based on the scores, we then discussed all the candidates and came up with the final selection.
We received 102 applications and narrowed them down to 16 fellows. Of those 16, five dropped out of the program before demo day.
Program
We kicked off the program with a 2-day in-person event where we brought together the fellows, mentors, and advisors.
The program itself consisted of:
“Storytimes.” Each week we did a closed-door, off-the-record Q&A session with important people who had been in the trenches of building, running, and funding various things that fall under “coordinated research programs”: current and former leaders at government ARPAs, former executives at heavy-hitting philanthropies, FRO founders, and more.
“Small Groups.” We split the cohort of 16 fellows into 3 discussion groups of 5-6 fellows. Each discussion group met for an hour once a week. This time was intended to allow fellows to discuss the week’s lessons (lectures & readings), seek help from peers, and workshop deliverables.
Mentoring. Each fellow was assigned a mentor who had experience running programs at one of the government ARPAs. We suggested that fellows meet with their mentor once a week, but it wasn’t mandatory; different fellows used their mentors different amounts. Each mentor had between one and three mentees – we originally assigned two mentees to each mentor, but shuffling driven by personal preferences shifted the distribution. We tried to match fellows with mentors who had relevant technical experience, as well as we could given the wide range of fellows’ areas and a limited pool of mentors.
Deliverables. The entire program was centered around creating two deliverables: a 2-page program overview and a 15-minute pitch for demo day. The theory behind the focus on deliverables was that a compelling 2-pager and pitch are like the tip of a large iceberg that is a well-designed program: to make them good, fellows would actually need to do all the other work of designing a good program, especially with the Brains team pushing them on that front. I think that theory was validated: the best pitches and 2-pagers at the end of the program had the most refined ideas behind them.
(You can see the whole syllabus here, and the research leaders’ playbook that we used as a ‘textbook’ here.)
The program ended with a 1.5-day demo day[1] attended by representatives of several large philanthropies and almost all the government ARPAs. During the demo day, each of the fellows gave a 15-minute pitch for their program, followed by 10 minutes of Q&A.
The demo day was a rousing success as measured by the quality of the ideas the fellows presented, the amount they had improved those ideas (and their presentation of them!) over the course of the accelerator, and the attendees’ responses: almost everyone who attended expressed some flavor of “all the presentations were fascinating, even the ones that I didn’t expect to be interested in!”
While demo day was the technical end of the program, we have continued to help fellows because Brains’ ultimate success depends on what the fellows do after demo day (remember the two-years-out metric). Demo day created a lot of goodwill and excitement towards the fellows and their projects — the challenge is how to capitalize on that and keep momentum going, both for the fellows and for funders (and people who know funders).
Program Lessons
While asking people to write free-form 1-pagers about their program ideas likely weeded out many applicants who were unserious, it wasn’t particularly good at identifying high-quality candidates: even high-quality candidates needed a lot of work before their writing and ideas were tight enough for a good 1-pager. (It’s also unclear if this weeding-out effect will continue in a world of LLMs).
We’re pretty sure that an in-person kickoff for a remote program was crucial both for kick-starting relationships for the fellows and getting people to take the program seriously.
Having the storytimes be closed-door and off-the-record was absolutely critical: guests were extremely candid and told stories that they would never have shared on the record or in front of a large audience.
Small groups of 5-6 people are the ideal size for getting a range of perspectives while making sure everyone has a chance to speak. (We inadvertently tested smaller groups when people dropped out of the program.)
Even smart, motivated adults will wait until the last minute to submit things!
Templates are really helpful for improving quality in a short amount of time. They will never get something to the 99th percentile, but they will get it to the 97th percentile.
People can improve drastically between a first draft and a third draft, given good feedback.
Part 2: Broader Lessons
Hypotheses that we validated
We can actually help people improve their ideas. Across the board, both the actual proposals and the ideas behind them were drastically better at the end of the program than at the beginning. It’s easy for programs like these to just select people who would have been successful anyway without adding much by way of education or help — I don’t think that is the case here.
Philanthropists and ARPAs are interested in fellows who come out of Brains. Actions speak far louder than words, and while no fellows are running multi-million-dollar programs a month out (remember, the timescale we’re looking at is two years), various philanthropists and government agencies are taking actions that move them towards that goal.
Good people are willing to do an unproven, part-time, unpaid program. There is an open question for any new program (especially a new kind of program) of whether good people will both join and engage seriously with the program. We could have failed on this front in a number of ways:
People who want to start coordinated research programs but hadn’t yet done so on their own could have been a market for lemons; that is, the world could be such that everybody who is capable of doing the thing well could already be doing the thing.
Good people might not have applied because they didn’t think the program would actually help them.
We could have found talented people with the right-shaped ideas, but they might not have taken the program seriously.
Peer relationships help. We put a lot of work into making sure that the fellows developed working relationships with each other: organizing the kickoff weekend, making small-group meetings mandatory, and working to keep the Slack channel lively. Fellows rated the peer groups as one of the most useful parts of the program in exit polls, and many of them attributed the quality of their pitches to self-organized practice sessions.
Deadlines are important. The most useful element of the entire program, according to both the fellows’ exit surveys and informal conversations, was simply the set of intermediate and final deadlines on the deliverables. It’s both surprising because it’s so simple and unsurprising because human nature — even for many talented, motivated people — is to push off work that doesn’t have an external forcing function.
Brains successfully fulfilled all three functions of an accelerator/fellowship program:
Selection: from more than 100 applicants, we managed to select people who did good work developing and communicating their ideas.
Filtering: the program helped some fellows realize that running a coordinated research program was not the right move for them (yet).
Education: fellows seemed to have genuinely learned things and improved their skills over the course of the program. (As opposed to us just picking people who would have crushed it with zero intervention.)
Types of fellows
We learned a lot about the types of people who are best for the Brains program.
Right now, the ideal fellow seems to have the following characteristics:
Has a technical PhD or has done equivalent work. (Aside: technical PhD degrees actually mean something!) A PhD gives you the finger knowledge of what it’s like to grapple with the frontier of what we can do (ideally in the physical world) without knowing whether a problem might be literally impossible to solve. Actually getting the piece of paper isn’t the only way to get that experience, but it’s one of the most straightforward ways. Furthermore, the signaling value of the actual degree is unfortunately important both to potential funders and to ARPA employers. (See below for more about that.)
Has done several years of work outside of academia (ideally at a startup, and ideally starting one). A lot of the knowledge and habits that are useful for starting and running coordinated research programs are hard (but not impossible!) to pick up in academia: from biasing towards talking to people over just reading papers, to understanding that most people don’t care how you get a result as long as you can get it, to simply moving fast under deep uncertainty. It might be my bias, but I think startups are a particularly intense crucible for building these skills. Furthermore, working in a company helps people understand the limitations of that institutional structure, which gives them a clearer explanation of why their idea needs an FRO or ARPA program.
Has a clear idea of the technical thing they want to create. The most successful fellows had a pretty well-defined idea going into the program; conversely, many of the dropouts had the most nebulous ideas coming in. Brains is not long enough for people to explore a space, refine an idea, and hone communication around it all at once.
But are also flexible about how they achieve that thing. While it’s important that fellows have an idea of what they want to accomplish, fellows who are too attached to how they accomplish that goal (whether in institutional structure or technical approach) often fail to make compelling cases for their programs or get stuck down rabbit holes. (This approach flexibility is one reason why startup experience is valuable.)
Obviously there are many, many corner cases!
Based on these characteristics, some personas who might be good fits (and thus worth seeking out):
CTO of a recently exited, very technical startup
Someone who went back to a PhD program after years working
Someone frustrated with a large R&D org
Unfortunately, the type of person who is a good fit for Brains is to an extent constrained by the attitudes of other organizations: Brains fellows can’t be successful in the long run unless someone either hires them or funds them to execute on a program. The “young/uncredentialed people can be just as good” revolution that has swept Silicon Valley in the past two decades has yet to permeate government organizations and most philanthropies.
On the flipside, I think experience does just matter more for the leader of a coordinated research program than for the CEO of a consumer software company. Success is tied to the aforementioned finger knowledge of actually doing research; it requires getting a bunch of academics, government officials, and/or business leaders to take you seriously; and it requires a lot of politics and negotiation. All of these are correlated with some number of years of work experience. It’s almost painful to say, and of course there are exceptions, but I think one of the lessons of Brains is that experience matters.
(I do think that there is a limit to the advantage that experience brings — as they become more senior, people often become rigid in their thinking, or unwilling to get their hands dirty.)
Things We Were Wrong About
We thought the small groups would be more for education or flipped-classroom-style work, where fellows discussed a case study or lecture. Luckily, we left the fellows a lot of self-direction in how they used the small groups, and they used them to share progress updates, strategies, connections, advice, and lessons learned.
Group office hours aren’t particularly useful. We set aside time each week for drop-in office hours, and people barely attended. It’s not clear if this was because people were uncomfortable asking questions in groups or for some other reason. Next round we’re going to try individual 15-minute slots that people can sign up for.
Failure Modes
We observed several “failure modes,” both in applicants who would otherwise have been promising and in fellows whom we admitted but who ultimately didn’t make it to demo day. None of these failure modes are a condemnation of the people involved, just ways in which they ultimately were not good fits for the Brains program.
Being wedded to a specific approach. Some people want to simply level up a previous project or explore a specific approach, instead of asking what approach would be the best for achieving an ambitious goal. These previous projects were often things like PhD/Postdoc research or technology from a failed startup.
Not wanting to go full time. Running a coordinated research program is a full-time job. While we didn’t require that people quit their previous role to join Brains, we did need people to be ready to go full-time to run a program once they were hired or funded.
Too little or too much ambition. Coordinated research programs need to be incredibly ambitious, and most ambition mismatches took the form of not being ambitious enough. There were a few on the other end that looked like “I am going to completely solve climate change within five years” – usually without a precise insight as to how that would happen.
Ideas without a precise insight. Successful ambitious ideas need to be coupled with precise insights about how to achieve them. We absolutely didn’t expect applicants to have everything figured out, but we did expect to hear some precise, non-obvious reasons why they thought their ideas were timely and had some chance of succeeding.
Wanting to do something drastically outside of their area of expertise. Program leaders need to be able to hit the ground running quickly and interface closely with technical experts. This isn’t to say that we would outright reject a biologist who had an insight about chemistry, but that we would need a lot of convincing if someone with no “hard technology” experience wanted to work on a deeply technical idea.
Having a problem you want to solve instead of a program you want to run. The Brains program is very oriented towards opinionated solutions and technology. As a result, it can be a poor fit if someone wants to run a program that is meant to gather more data about a problem or support a huge range of approaches to a problem without narrowing them down.
Just having a hunch/having an underdeveloped idea. Brains isn’t long enough for most people to do the exploration to turn a hunch into a program idea. People who come in with an underdeveloped idea just can’t get it to an acceptable level of maturity by demo day.
Being startup-pilled. Brains is not a startup accelerator. While we try to screen out people who really just want to start a company, there is a failure mode where someone comes in saying they want to do an FRO but immediately goes down startup rabbit holes.
Tensions
There are several tensions that may be inherent to the Brains accelerator:
Between people who really want to start their own thing and be the boss, and the program manager role. The ARPA program manager role involves a big tradeoff: in exchange for a huge budget and massive scope, you lose the ability to directly say what people should be doing and to lead an organization, as you would at a startup or as a PI in a lab. Many research leads have strong opinions about how things should be done and want to be the boss. FROs tempt people with the possibility of being the boss while having that budget and scope without answering to investors; however, it’s extremely hard to get them funded. We’re planning to address this tension by setting expectations and emphasizing the tradeoffs clearly up front.
Between the ideal career stage experience-wise and the upsides of actually doing Brains. There are several intrinsic and extrinsic reasons why people with 10+ years of post-school experience in different domains make good program leads: networks, deep intuitions, leadership experience, and explicit requirements from various organizations. However, we suspect that accelerators like Brains tend to be most useful for people with less experience, because Brains’ primary ways of helping might be dampened by the more fixed mindsets and broader networks that are correlated with more experience. That said, we’re optimistic both that we can find amazing young people and that, while they are a smaller fraction of the total population, there are still many experienced people whom Brains could help a lot.
Conclusion
Overall, I would call the first Brains cohort a success. While it’s too soon to declare absolute victory – our internal metric continues to be how many fellows are running multi-million dollar research programs two years out from the program – we did manage to get past four hard, wired-in-series hurdles, any of which could have caused failure:
Get “good” (engaged, talented with the right-shaped ideas) people to apply.
Admit “good” (engaged, talented with the right-shaped ideas) people to the program.
Get fellows’ ideas, 2-pagers, and pitches to a high quality level across the board.
Convince guests who can actually help fellows execute on their ideas to come to demo day.
We learned many lessons – both strategic and intensely tactical – that we’re going to incorporate into the program to make the next cohort even better. Planning has already begun for the 2025 Brains cohort! If you would like to fund Brains or be a mentor or a fellow, please reach out – and if you know people who would, please send them our way.
[1] It’s admittedly awkward to have a multi-day demo day. Unlike startup pitches, research programs actually need at least 15 minutes to dig into all the nuances. One way around this would be to not do Q&A for each pitch.