Thousands of feet below the surface of the Pacific Ocean, a sea sponge produces threads that lattice together into a translucent skeleton: a foot-long glass house of its own design. When Tim McGee, now a resident at Astera Institute, learned about this creature — the Venus flower basket — in 2003, it changed his career. McGee left a job in pharma to work with a lab studying how sea sponges could do such a thing. It was a feat of manufacturing completely unlike anything human technology could achieve.

“It just kind of melted my brain that at the bottom of the ocean there’s these creatures that are spinning glass, whereas we have forges at thousands of degrees,” McGee said. “It’d be amazing if we could do that.” 

McGee has since devoted his career to learning from nature’s ingenuity and the power of proteins. 

When nature assembles proteins carefully at the molecular scale, it creates strong, responsive, and smart fibers, because proteins can sense and respond to their environment. It’s a sort of material intelligence. Our manufacturing falls short on both fronts: conventional fibers don’t have the variable attributes that proteins do, and when we do try to build with proteins, our fiber-assembly processes can’t match nature’s.

We cannot yet program the exact combination of properties we need, whether strength, conductivity, transparency, or something else. Think of robots actuated by fibers that move like our own ligaments and compute a sense of touch. “How can proteins basically be a way for us to make almost anything?” McGee said. In his residency with Astera, he is working toward that future.

McGee’s Impossible Fibers program is creating a manufacturing technique to make protein fibers that more closely mimic nature’s tactics. 

“We know nature can create tunable structures,” McGee said. “The question is, how do we start to get there? And how can we get there quicker?”

The problem with traditional fiber spinning

If you want to spin fibers out of proteins today, your options are limited. Manufacturers either melt and resolidify polymers (melt spinning) or extrude polymers from one solution into a bath that coagulates them into thin filaments (wet spinning). Neither method allows for adequate molecular assembly of proteins. They instead essentially force a material into a particular alignment. Whatever special attributes they need must then come from the usual chemistry levers: high or low molecular weight polymers and potentially toxic additives introduced after spinning. 

But proteins are too finicky for this approach. Proteins want to align and bond in their own particular way. “The way that we manufacture today is akin to just supergluing everything together and throwing it out there, as opposed to actually assembling the Lego bricks,” McGee said. 

With Impossible Fibers, McGee is seeking more control over how proteins assemble in space and time. 

More precise control with encapsulation

Impossible Fibers’ proposed system begins with “encapsulation,” inspired by how creatures like spiders, mussels, and velvet worms store, sequence, and trigger protein assembly. 

The team is prototyping a three-part platform. First, a microfluidic device encapsulates small volumes of dissolved proteins, transforming them into stable droplets. The droplets can then be sorted, manipulated, and programmed into desired sequences, like beads on a string that encode how the material will be assembled. A third device then bursts the droplets and spins precisely assembled fibers from their proteins.

“You can pop them at the right moment,” McGee said. “So we do reactions in this microfluidic device, and then we pull a fiber out of the other end.” 

This is the kind of control that scientists often see in nature’s high performance fibers. 

Encapsulation gives unprecedented flexibility

McGee envisions the same encapsulation platform for programming fibers out of any number of different proteins. And that generalizability is important.

Fibers are everywhere, and the need for high-performing, multifunctional fibers is everywhere as well. Companies have spent decades trying to engineer spider silk for strong, lightweight materials. And the upside is about more than strength. AI companies would benefit from hollow-core optical fibers, which transmit data through narrow tunnels of air rather than glass; roboticists would benefit from strain-sensing and conductive filaments, which could unlock proprioception and more human-like function. “Whether it’s optical, electrical, mechanical, chemical, or just adaptable,” McGee said, “all those things you can do with proteins.”

It’s unrealistic to expect a one-size-fits-all fiber-spinning platform. But this type of encapsulation platform gives manufacturers something unprecedented: a generalized, nature-inspired first step for protein assembly.

Why here

Impossible Fibers follows in the footsteps of prior Astera residents by identifying a daunting bottleneck that, if resolved, will ripple across tech sectors that would otherwise never have the opportunity to innovate so drastically.

Impossible Fibers is the quintessential project that falls in the gap between academia and industry. Academic labs have shared some of Impossible Fibers’ ideas, but seeing that vision through requires a scope and scale beyond academia’s abilities. On the other hand, venture investors won’t touch a high-risk, capital-intensive project without a clear, focused application. But that’s precisely why previous protein fiber groups have failed: they were forced to use existing manufacturing in order to fit existing markets, effectively abandoning the unexplored terrain that made proteins interesting in the first place. The unique advantage of Astera is that philanthropic resources can de-risk these “boring” process components overlooked by traditional investment, while also taking bigger swings than what’s possible in academia. It’s the starting point for investors to see what might be possible if we invested in novel manufacturing through Open Science. Other labs or start-ups can then build on our work to advance the thousands of possible areas of focus.


“It’s kind of a rare program where they give you a salary and a stipend to build a lab to let you build out this capability,” McGee said. As an Astera Resident, McGee’s Impossible Fibers program will challenge old ideas of what fibers can be and what they can do. Multifunctional fibers could trigger a new era in robotics and unlock more durable and effective medical devices. McGee expects protein fibers to find use as implantable brain electrodes that more reliably match the soft, strong, conductive environment of nervous tissues.

Building openly, for now and the future

As Impossible Fibers works toward catalyzing fiber tech and new applications in its year with Astera, the team is designing with open science in mind. They are developing new microfluidic designs, new tools to prototype fiber spinning, and new methods to assess the protein fibers they spin. “Everything we are working on is open,” McGee said. “We believe this can foster a community of people to explore this exciting new frontier.”

Why is this technological transition possible today, rather than five years ago or five years from now? For one, laser-etching and 3D-printing costs have decreased. But perhaps more important is the feedstock. We can make larger quantities of biopolymers — the building blocks for programmable materials — than ever before. Engineered bacteria can produce interesting proteins found elsewhere in nature (and some found nowhere in nature). Prototypes that previously would have required millions of dollars and years of development can now be tested in weeks for tens of thousands.

It’s therefore urgent that we invent new manufacturing processes for this next generation of materials. 

McGee hopes the work will lead to predictive algorithms to assist in biomaterial design. “It’s the vision of the far future,” McGee said, “of being able to ask an AI, I want a material with these properties, give me the protein and the manufacturing sequence to enable that to happen.”

This potential to master protein design may even allow us to surpass what nature can do. “Nature is not a perfect solution,” he said. Evolution is a messy, path-dependent process of incremental steps. The goal of Impossible Fibers is to extract the math, physics, and chemistry behind the most clever phenomena. “If we want to make the future faster, we have to figure out how to compress what we can learn from the natural world into our own technologies.”

Want to dig deeper? Visit impossiblefibers.com and follow along at iflab.substack.com.

A new era of power generation is coming with nuclear fusion. Fusion technology mimics the enormous flux of energy powering the cores of stars like our Sun. Small atoms smash together under such immense pressure and temperature that they fuse into heavier elements. The process liberates roughly four million times more energy per kilogram than burning fossil fuels.

Make no mistake: Fusion is a hard problem requiring immense innovation. But the energetic upside is compelling. If fusion power can reach 1 cent per kilowatt-hour — 5 to 10 times cheaper than today’s cheapest new-build power generation — it may enable other world-altering technologies, from affordable desalination and interplanetary space travel, to other leaps we can’t yet readily imagine.
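
For a sense of scale, the most-studied fuel cycle fuses deuterium and tritium. A minimal sketch with standard textbook values (background figures, not numbers from the article):

```latex
% D-T fusion: one reaction releases ~17.6 MeV,
% split between a helium nucleus and a fast neutron.
\[
  \mathrm{{}^{2}H} + \mathrm{{}^{3}H} \;\longrightarrow\;
  \mathrm{{}^{4}He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV})
\]
% Per kilogram of D-T fuel (~5 amu per reaction), that corresponds
% to roughly 3.4e14 joules.
```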

Dozens of companies and governments around the world are betting on nuclear fusion to revolutionize how we power life on Earth. But despite $10 billion of investment, the road to these transformative promises is economically cloudy.

The interesting question is not really what life could look like with one-cent electricity, but rather what sequence of events would make it possible. What needs to be true to achieve 1¢/kWh?

“It’s a wildly aggressive target,” says Damien Scott, a technologist working on fusion systems. “It may well be implausible.”

In 2025, Scott began a residency at Astera Institute to lead 1cFE, an initiative that models the potential costs of fusion energy and determines whether (and how) any technologies have a path to 1¢/kWh electricity within 10 years. Scott’s goal is to understand the constraints that limit the various scientific routes toward sub-cent fusion, and to make them visible before years of effort and billions more dollars pour in.

Many paths to cleaner energy

Scott’s interest in electricity came out of necessity. He spent his childhood years living off the grid on a remote farm in Botswana, forty miles from the nearest gas station. The wet seasons could wash out the roads, cutting them off further. Scott and his family learned to improvise. They drilled wells for their water, and jerry-rigged 1990s-era solar panels for power.

He later studied concentrated solar thermal power while earning a degree in applied physics at the University of Sydney, gravitating toward engineering projects that worked under harsh conditions. This led him to Williams Racing in Formula 1, where he helped create the team’s applied technology division. It was an extreme environment for engineering: F1 demands rapid iteration under unforgiving constraints, with constant high-stakes (and public) feedback. Scott went on to found Marain, an autonomous and electric vehicle fleet simulation and optimization startup. These experiences crystallized his philosophy as a technologist: before building expensive hardware, model enough to reduce uncertainty. In other words: model twice, spend once.

Marain was acquired by General Motors. After spending two years at GM, Scott departed and began exploring which problems to work on next. “I kept coming back to fusion,” he says.

He studied the landscape and was struck by the quality of privately funded engineering teams in the space. The technologies were promising. But he couldn’t find a clear measure of how promising. “If we take the culmination of the last 75-80 years,” he wondered, “if this all goes according to plan—how cheap could it be?” 

Wayfinding in fusion

Right now, the future of fusion is like a summit hike through dense forest. Many paths exist, some more delineated than others thanks to the hard work and good fortune of early trailblazers. But each tortuous trail faces unique obstacles.

To reach the extreme conditions required for fusion on Earth, some approaches use lasers to rapidly compress fuel. Others use magnets to confine plasma. Nuclear fusion releases energy as electromagnetic radiation, fast-moving ions, and neutrons. What researchers do with that resulting burst of energy also varies. Many proposed fusion plants resemble current fission plants: they generate heat, boil water, spin turbines, and funnel electricity into the grid. Alternatively, a new model for plants could convert the energy of fast-moving ions directly into electricity: plasma expands against the magnetic field that’s confining it, which induces a current.

Two feasibility metrics are particularly important when comparing technologies at the experimental stage: the triple product and the gain. The triple product represents the threshold of plasma density, temperature, and confinement time required for a technology to trigger fusion. Gain measures how much more energy a technology creates with fusion compared to the amount of energy it needs to start and sustain it. Small-scale experiments at Lawrence Livermore National Laboratory have yielded greater energy (8.6 MJ) than that delivered by the laser (2.08 MJ). This represents “scientific gain,” as opposed to the engineering gain or “plant gain” of a whole facility. Because the Livermore laser itself is inefficient, the system as a whole still uses much more energy than it generates.
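
As a rough reference, with standard textbook values rather than numbers from 1cFE:

```latex
% Lawson triple product: order-of-magnitude ignition threshold for D-T.
\[
  n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}}
\]
% Gain, with the Livermore scientific-gain example from above:
\[
  Q \;=\; \frac{E_{\text{fusion out}}}{E_{\text{delivered to fuel}}},
  \qquad
  Q_{\text{sci}} \;\approx\; \frac{8.6\ \mathrm{MJ}}{2.08\ \mathrm{MJ}}
  \;\approx\; 4.1
\]
```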

As an Astera resident, Scott’s 1cFE is building open-source cost models that compare these many paths on the basis of their potential upsides and constraints. For example, analyses peg the floor of fusion systems relying on steam generation at 0.5¢/kWh. “The cost of steam turbine processes really constrains,” Scott says. “It may be harder for fusion approaches that generate heat to get to that one cent target.” Direct energy conversion bypasses the costs of steam boilers, but it requires different fuels and less well-studied physics and engineering.

Tradeoffs like this are not necessarily dealbreakers. No leading technologies have yet been ruled out of the one-cent pursuit. But the optimal approach at this point in fusion’s development is to carefully scrutinize how we’ll reach one cent and beyond. And that, according to Scott, requires working backwards.

Frontier backcasting

As of 2025, no private company has demonstrated scientific gain above 1. Some are approaching, with forecasts of hitting scientific breakeven next year. We are seeing new facilities break ground every year through public-private partnerships. Several companies claim they will have carbon-free power plants online before 2030, and tech giants like Google and Microsoft have already signed power purchase agreements.

Hopes of finally delivering fusion electricity after decades of work have never been greater, Scott says, but his ambitions reach further still. 1cFE’s role is to investigate the plausibility of sub-1¢/kWh fusion power. “Solar is a very instructive analogy, because the fuel cost is zero, very similar to fusion where the fuel cost is minimal,” Scott says. “We can do solar plus storage at 5¢/kWh, and that is getting cheaper.” The key innovation lies in how to manufacture the device that produces electricity not just cheaply, but cheaper than anything else.

Hitting 10¢/kWh would make fusion competitive in some markets. At 5¢/kWh, the market balloons. But at or below 1¢/kWh is where the most profound changes can emerge — it’s an anchor that exposes the limits facing fusion technology.

1cFE begins with this target, then asks what would have to be true — in physics and economics — for that world to exist. This so-called “frontier backcasting” reveals constraints and the required levers.

Frontier backcasting helps focus a subset of the field’s research, investment, and policy toward the most aggressive end goals by exposing what’s actually possible. “It is common to overestimate what is possible in one year and underestimate what is possible in ten,” Scott says. Consider the falling cost of launching payloads into low Earth orbit. Between 2000 and 2010, LEO launches hovered between $8,000 and $12,000 per kilogram. SpaceX sought to lower costs by an order of magnitude. That was not possible with incremental innovations; it required unprecedented reusability. The bold target forced SpaceX to consider an entirely different game plan, and today reusable rockets have already cut costs by a factor of 10.

“Which levers are unavoidable to reach 1¢/kWh?” Scott asks. “We will use this target in fusion to expose how far the levers must move, and in what order.”

Estimating uncertainty

1cFE will help compare different approaches to reaching cheap electricity. The current landscape of proposals includes mature technologies with relatively predictable lifetime costs as well as much newer technologies that are harder to quantify.

So how will the team quantify approaches with such varying degrees of uncertainty?

The first layer of modeling estimates capital cost, interest, operational costs, and learning rates — the change in cost that comes over time with more production experience. They also assign a “technology readiness level” to subsystems or components. “If it’s a laser that has only been made once, that’s on the lower end of the TRL scale,” Scott says. “Whereas, if you are reusing fast-switching capacitors found in a bunch of other industries, that’s much higher.”
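
For intuition, here is a minimal sketch of that first layer in Python. The formulas (capital recovery, Wright’s-law learning) are standard; every number below is an illustrative placeholder, not a value from 1cFE’s models.

```python
import math

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an upfront capital cost at a given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def learned_capex(first_unit_capex: float, n_units: int, learning_rate: float) -> float:
    """Wright's-law learning: each doubling of cumulative production
    cuts unit cost by `learning_rate` (e.g. 0.2 = 20% per doubling)."""
    return first_unit_capex * n_units ** math.log2(1 - learning_rate)

def lcoe_cents_per_kwh(capex_per_kw, om_per_kw_yr, rate, years, capacity_factor):
    """Levelized cost of electricity in cents/kWh; fuel cost ~0 for fusion."""
    annual_kwh_per_kw = 8760 * capacity_factor
    annual_cost = capex_per_kw * capital_recovery_factor(rate, years) + om_per_kw_yr
    return 100 * annual_cost / annual_kwh_per_kw

# Placeholder inputs: an $8,000/kW first-of-a-kind plant, 100 units built,
# 20% learning, 6% financing over 30 years, 90% capacity factor.
capex_100th = learned_capex(8000, 100, 0.2)
print(f"100th-unit capex: ${capex_100th:,.0f}/kW")
print(f"LCOE: {lcoe_cents_per_kwh(capex_100th, 20, 0.06, 30, 0.9):.2f} cents/kWh")
```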

But what about components that simply don’t yet exist? Although the costs of new concepts that depend on not-yet-invented technology are more difficult to quantify, 1cFE’s modeling can benefit here too. Rather than guessing what unproven components will cost, the team inverts the question: working backwards reveals what those components would need to cost to make new ideas viable. “How cheap does your accelerator need to be?” Scott says. “If it’s a half-mile long, that is going to propagate into the cost of your land.”
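
The same toy model can be run in reverse: fix the 1¢/kWh target and solve for the largest capital cost it tolerates. Again, the numbers are illustrative placeholders under the same assumptions as the sketch above, not 1cFE’s results.

```python
def max_capex_for_target(target_cents, om_per_kw_yr=20.0, rate=0.06,
                         years=30, capacity_factor=0.9):
    """Backcast: the highest capital cost (per kW) consistent with a
    target LCOE, under the same placeholder assumptions as above."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh_per_kw = 8760 * capacity_factor
    budget = target_cents / 100 * annual_kwh_per_kw - om_per_kw_yr
    return budget / crf

# Under these assumptions, a 1 cent/kWh plant tolerates roughly $800/kW.
print(f"Max capex at 1 cent/kWh: ${max_capex_for_target(1.0):,.0f}/kW")
```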

Outputs

In a one-year residency with Astera, Scott is leading a team with expertise in physics, systems engineering, and software to create models, outline assumptions, and identify both promising and discouraging roadmaps.

Like other Astera residencies, 1cFE aims to unlock a future of abundance and human flourishing. The typical Astera project leans into the messy uncertainties about how technology will evolve. For 1cFE, this means applying rigorous cost analysis and technological assessments to expose a plausible path to abundant energy. This is a path to innovation that has too often been neglected in favor of incremental improvements. And 1cFE’s approach is open-science from idea to result: They will publish everything, including negative results as well as fully transparent corrections. “We encourage others to find errors, and we will correct them as we go,” Scott says. “The commitment is to keep the record honest and not just open.”

Later this year, 1cFE will deliver the first systematic, open-source techno-economic comparison of fusion pathways against a sub-cent target. They will publish datasets, a report benchmarking new AI tools, and reproducible code. Scott envisions a public dataset listing 10 to 15 fusion concepts and the techno-economic conditions required for each of them to reach 1¢/kWh.

They will run two workstreams in parallel: backcasting to the technical, industrial, and policy constraints implied by a one-cent target; and testing where AI can accelerate the design-build-test-learn cycle.

Researchers have already published a lot of useful fusion information, but the data is largely fragmented across formats that don’t lend themselves to quantitative, prospective comparisons. 1cFE is building the missing layer: a dynamic database of levelized costs across approaches. The team’s techno-economic assessment will make it cheaper to add concepts, rerun analyses, and compare approaches side by side.
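
One way to picture an entry in such a database (a hypothetical sketch; 1cFE’s actual schema may differ):

```python
from dataclasses import dataclass, field

@dataclass
class FusionConceptRecord:
    """Hypothetical shape of one entry in a levelized-cost database."""
    concept: str                    # e.g. "laser inertial", "field-reversed"
    confinement: str                # magnetic / inertial / magneto-inertial
    fuel: str                       # "D-T", "D-D", "p-B11", ...
    min_subsystem_trl: int          # lowest technology readiness level
    capex_usd_per_kw: float         # modeled overnight capital cost
    lcoe_cents_per_kwh: float       # modeled levelized cost
    levers_to_one_cent: list[str] = field(default_factory=list)
```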

For companies interested in targeting 1¢/kWh, 1cFE’s work will help identify a path forward. The goal is not to prove out or favor any particular confinement system or fuel choice; it is to slash uncertainty so that technologists and investors pursuing ultra low cost fusion can direct resources into the ideas most likely to radically change humanity.

“Fusion is frequently described as clean, limitless, and virtually free,” Scott says. “Those words need to be quantified.”

Follow the project: 1cfe.substack.com | github.com/1cFE | @1cfenergy on X

Today, Astera Institute is launching Radial, a division that reimagines how life sciences research happens at a systems level. Radial will be led by Becky Pferdehirt as CEO. We are committing up to $500M over the next decade to expand our build-test-learn approach to scientific infrastructure and practices.

How we fund, do, and build upon science in the U.S. has long needed an update. We’re at a historic inflection point with AI acting as a forcing function on biology — not because it will immediately solve hard problems, but because its demands and widespread adoption will, increasingly, expose how unfit for purpose our scientific infrastructure actually is. We also have more tools than ever before to find new solutions. But positive change isn’t inevitable, which is why Astera is expanding its efforts through Radial.

Radial is grounded in two key beliefs. First, scientific practices for impactful discovery—from how we design methods to how new knowledge is shared and translated into real-world use—need to be rebuilt from the ground up. Second, it’s very difficult to go about this without deliberate iteration through active research efforts. In other words, we need to experiment with how science is done through actual science and scientists.

Our starting framework

In the beginning, Radial will try a broader range of things that will drive our own evolution. We will look for more radical experiments that can give more information about what’s possible, regardless of whether they succeed or fail in the classic sense. We will iterate on:

  1. What science gets done.

We are thinking about what gets funded as well as what scientists decide to work on in the first place. We have more ways than ever to traverse the white space with data and modeling, not just opinions and trends. And we’re happy to work with anyone and any sector that prioritizes impact, utility, and metascience experimentation.

For example, we’re working with industry partners to leverage existing tools and laboratory infrastructure to generate open, high-quality datasets. With OpenADMET, we are characterizing small molecule properties—ADME and toxicity—that can be explored and trained on for real-world utility. We think there could be more general potential here: leverage unique cutting-edge platform capabilities from start-ups and point them at public-good problems. It’s kind of the inverse of Focused Research Organizations (non-profit start-ups), and we think there could one day be a more generalizable model here that addresses distinct gaps in a complementary way.

  2. How science is organized.

Our institutions are built for an era that emphasizes discrete projects and individual achievement. Many scientific challenges today require truly multidisciplinary or multi-sector teams holistically redesigning all components of technical systems (data, methods, and projects). This requires a lot of time, experimentation, and willingness to step outside dominant incentive structures to first figure out what works.

As an example, The Diffuse Project is our first major in-house program for understanding protein motion by co-developing the necessary experimental methods, computational models, data standards, and infrastructure. Our goal is to make dynamic structural biology data as foundational as the Protein Data Bank has been for static structures and to scale the data through broad methodological adoption.

  3. The outputs of science.

To accelerate scientific progress, we need to realign our infrastructure, metadata, and research artifacts around how AI-empowered scientists will actually work. We also need to build interoperable solutions so that advances compound across the ecosystem. We’re at a rare moment to shed the historical constraints on research sharing that have kept science from reaching its potential. The path forward is full of unknowns, which is where we feel most at home: testing what others don’t yet have the chance to try, and sharing what we learn along the way.

Among many other efforts, we are currently developing The Stacks, an open-access digital platform to experiment with how scientific, technical, and intellectual work is shared and discovered. It’s a publishing-infrastructure prototype that we hope to iterate on, working from first principles toward what science actually needs: machine readability, rapid iteration, and genuine reuse.

Why now?

While we’ve been working in this area for a few years, we’ve needed a few things to fall into place before expanding. First, we needed to try a bunch of approaches to develop conviction around a starting framework worth expanding on. Second, we needed the right leadership team to take it to the next level.

I could not be more excited to share that Becky Pferdehirt has joined as Radial CEO. I’ve known Becky for over a decade and watched with admiration as she’s successfully worn many different hats as a scientist. Becky joins from Andreessen Horowitz, where she was an Investing Partner at a16z Bio + Health. Becky was previously an R&D Scientist at Genentech and held research and business development roles at Amgen. She has a PhD from UC Berkeley and a BS from MIT. If you’ve ever interacted with Becky, you also know that she is an exceptionally sharp, creative, and flexible thinker who acts with integrity – all critical for quickly imagining and exploring new directions for basic and translational science. Becky will be working closely with me and Prachee Avasthi, our Head of Open Science, as she takes the reins on Radial.

Joining her is Stephanie Wankowicz as Scientific Program Director of The Diffuse Project, our research initiative focused on protein dynamics. She will be leading its expansion. We are so grateful to Stephanie for taking the leap from her post at Vanderbilt University, where she ran her own lab developing computational algorithms to model conformational ensembles from X-ray crystallography and cryo-EM data.

Becky and Stephanie will be working closely with several others, including Sekhar Ramakrishnan, who joins from The Swiss Data Science Center as Engineering Lead for The Stacks, our experimental publishing platform that we are developing and building through programs like Diffuse. Steven Moss has also joined us from the National Security Commission on Emerging Biotechnology as a new full-time Science Policy Associate to help think about how we scale change at a national level.

Join us

Radial is adaptable by design. We are building programs in-house, funding external teams with multi-year grants, investing in companies, and designing public-private partnerships across government, academia, and industry. We’re looking for people who are willing to take risks and treat informative failures as a badge of honor.

For all of our roles, we’re excited about candidates who will lead by example, shifting perceptions of what’s possible before it’s popular to do so.

For technical leadership: We’re searching for a Head of Bio AI to lead AI across Radial programs. [Apply here]

For structural biology and protein dynamics: Scientists, engineers, and operators for the Diffuse team. [Apply here]

For ambitious ideas that need space: Astera’s 2026 residency program has slots for projects that don’t fit existing funding models. [Apply here]

For new models of partnership: Companies, national labs, academic institutions—if you’re thinking about how your capabilities could be pointed at public-good science, let’s talk.

For working scientists facing bottlenecks: We’re launching an essay competition inviting active scientists to describe a concrete research challenge caused by structural bottlenecks, and experimental strategies to fix them. [Learn more.]

Get involved:

Today, alongside the launch of Radial, we are opening an essay competition that I’ve been ruminating on for some time: an invitation for active scientists from any sector to share concrete research challenges that can inform our future work at Astera. We’re interested in your hypotheses about what broad structural or systemic issues contribute to the bottlenecks you experience in your own science. It’s important to me that we hear more from active scientists on the ground.

Many of our scientific systems and institutions are no longer fit for purpose. How we fund work, share results, build teams, and connect science to other disciplines or sectors has long been in need of experimentation. This is no longer a controversial statement.

We are living through a historic inflection point that demands change. One force is technological, happening at unprecedented scale and speed. AI is making it harder to ignore systemic and infrastructural gaps, while also changing what solutions are possible. This is an incredible forcing function we should leverage to update our scientific practices.

At the same time, it’s become harder to talk constructively about change in light of political differences and more recent budgetary contractions. But it’s more important than ever to openly debate long-term reform now. And many disagreements are unlikely to be resolved through debate in the absence of real life testing.

We’re looking to you, scientists

The field of metascience (the science of science) is often driven today by non-scientists: policy experts, economists, sociologists, psychologists, historians, politicians. Their work can be very useful, but practicing scientists should be more deeply involved in shaping the systems they depend on.

Scientists know first-hand what is broken. When scientists themselves have led metascience experiments, the outcomes have often been distinctive and more durable: new institutes structured around questions; focused research organizations built to unlock specific field-level bottlenecks; community infrastructure launched because there was simply no other way to make it happen; critical resources that can’t wait for permission.

We want to help get more scientists in the driver’s seat of this conversation and source more hypotheses that can be tested for systemic improvements. We want all of it to happen in the open to stimulate more useful public debate about science. And we hope that will help the most compelling ideas get real world implementation through support from us or others.

Examples of what we’re looking for

Perhaps an easier way to explain what we’re looking for is to highlight a few historical examples whose early iterations we would have loved to fund:

  1. The Protein Data Bank

A few crystallographers were frustrated that hard-won structural data was disappearing into individual labs with no way to share it. They bootstrapped a community archive in 1971 with just seven structures and no formal institutional mandate. We would have loved to award an essay describing this gap and fund the early bootstrapping required to prototype the foundational data infrastructure for structural biology and drug discovery worldwide.

  2. arXiv

The scientist Paul Ginsparg noticed that his colleagues were emailing preprints to each other and built a centralized server in 1991 to do it better. We would have loved to award an essay describing this gap and fund the initial server required to test the utility of what became today’s default open publishing infrastructure for physics, math, and computer science. It has since become a general model for the broader open-access movement.

  3. Focused Research Organizations

Two scientists, Adam Marblestone and Sam Rodriques, were dead set on generating more connectomics data as a critical public resource for the neuroscience community. It was a defined roadmap that required a start-up-like team but lacked any dedicated funding mechanism. So they created one by inventing FROs, which have become an enabling structure for many other projects with similar properties. We would have loved to fund early iterations of FRO projects (and we did, through the first FRO: the longevity-focused Rejuvenome!).

  4. Arcadia Science

This one’s an experiment I’m directly involved in that’s still a work in progress. Arcadia is a for-profit research company co-founded in 2020 by myself and another scientist, Prachee Avasthi. It was motivated by trying to reimagine how we could more effectively traverse a wider swath of biology for useful discovery than was possible in our academic labs. We asked how we could use data to develop organism-agnostic tools, compound broader lessons by sharing more of our work in real time, and open up new funding and sustainability strategies. It would be exciting to fund smaller-scale pilots that could inform experiments that lead to new institutes, which can and should be less monolithic than what dominates today.

I hope more scientists will join us in this dialogue, which is why I’ve asked that all submissions are public. I know it can sometimes be uncomfortable to put your neck out in this way, but positive change is more likely if we normalize open debate. We should approach all disagreements according to the scientific principles we were trained on. Data, not drama: let’s do the experiment.

See more details and apply here by May 1st.