Scientific Publishing: Enough is Enough
Why we're no longer funding journal publications

In Abundance, Ezra Klein and Derek Thompson make the case that the biggest barriers to progress today are institutional. They’re not because of physical limitations or intellectual scarcity. They’re the product of legacy systems — systems that were built with one logic in mind, but now operate under another. And until we go back and address them at the root, we won’t get the future we say we want.
I’m a scientist. Over the past five years, I’ve experimented with science outside traditional institutes. From this vantage point, one truth has become inescapable. The journal publishing system — the core of how science is currently shared, evaluated, and rewarded — is fundamentally broken. And I believe it’s one of the legacy systems that prevents science from meeting its true potential for society.
It’s an unpopular moment to critique the scientific enterprise given all the volatility around its funding. But we do have a public trust problem. The best way to increase trust and protect science’s future is for scientists to have the hard conversations about what needs improvement. And to do this transparently. In all my discussions with scientists across every sector, exactly zero think the journal system works well. Yet we all feel trapped in a system that is, by definition, us.
I no longer believe that incremental fixes are enough. Science publishing must be built anew. I help oversee billions of dollars in funding across several science and technology organizations. We are expanding a requirement that none of the scientific work we fund go toward traditional journal publications. Instead, research we support should be released and reviewed more openly, comprehensively, and frequently than the status quo allows.
This policy is already in effect at Arcadia Science and Astera Institute, and we’re actively funding efforts to build journal alternatives through both Astera and The Navigation Fund. We hope others cross this line with us, and below I explain why every scientist and science funder should strongly consider it.
Journals are the Problem
First, let me explain why this is such a big deal to those who are new to this issue. It might seem like publishing is a detail. Something that happens at the end of the process, after the real work of science is done. But in truth, publishing defines science.
The currency of value in science has become the journal article. It’s how scientists share and evaluate their work. Funding and career advancement depend on it. This has contributed to science growing less rigorous, innovative, and impactful over time. This is not a side effect, a conspiracy, or a sudden crisis. It’s an insidious structural feature.
For non-scientists, here’s how journal-based publishing works:
After years of research, scientists submit a narrative of their results to a journal, chosen based on field relevance and prestige. Journals are ranked by “impact factor,” and publishing in high-impact journals can significantly boost careers, visibility, and funding prospects.
Journal submission timing is often dictated by when results yield a “publishable unit” — a well-known term for what meets a journal’s threshold for significance and coherence. Linear, progressive narratives are favored, even if that means reordering the actual chronology or omitting results that don’t fit. This isn’t fraud; it’s selective storytelling aimed at readability and clarity.
Once submitted, an editor either rejects the paper or sends it to a few anonymous peer reviewers — two or three scientists tasked with judging novelty, technical soundness, and importance. Not all reviews are high quality, and not all concerns are addressed before editorial acceptance. Reviews are usually kept private. Scientific disagreements — essential to progress — rarely play out in public view.
If rejected, the paper is re-submitted elsewhere. This loop generally takes 6–12 months or more. Journal submissions and associated data can circulate in private for over a year without contributing to public discussion. When articles are finally accepted for release, journals charge an article processing fee that is often even higher if the article is open access. These fees are typically paid from taxpayer-funded grants or university budgets.
Several structural features make the system hard to reform:
- Illusion of truth and finality: Publication is treated as a stamp of approval. Mistakes are rarely corrected. Retractions are stigmatized.
- Artificial scarcity: Journals want to be first to publish, fueling secrecy and fear of being “scooped.” Author credit is distributed through rigid ordering, incentivizing competition over collaboration. The net effect is that prestige is prioritized over progress.
- Insufficient review that doesn’t scale: Three editorially-selected reviewers (who may have conflicts of interest) constrain what can be evaluated, a growing problem as science becomes increasingly interdisciplinary and fast-moving. The review process is also too slow and manual to keep up with today’s volume of outputs.
- Narrow formats: Journals often seek splashy, linear stories with novel mechanistic insights. A lot of useful stuff doesn’t make it into public view, e.g. null findings, methods, raw data, untested ideas, true underlying rationale.
- Incomplete information: Key components of publications, such as data or code, often aren’t shared to allow full review, reuse, and replication. Journals don’t enforce this, even for publications from companies. Their role has become more akin to marketing.
- Limited feedback loops: Articles and reviews don’t adapt as new data emerges. Reuse and real-world validation aren’t part of the evaluation loop. A single, shaky published result can derail an entire field for decades, as was the case for the Alzheimer’s scandal.
Stack all this together, and the outcome is predictable: a system that delays and warps the scientific process. It was built about a century ago for a different era. As is often the case with legacy systems, each improvement only further entrenches a principally flawed framework. It’s time to walk away and think about what makes sense now.
What We’ve Learned So Far
We’re in a bit of a catch-22 as a scientific community in that we don’t have a solution to jump to, but we also can’t develop one well if we continue with journals. Prohibiting journals is our deliberate forcing function as we support such development at Astera and The Navigation Fund. By removing journals as an option, our scientists have to get more thoughtful about how, when, and why they publish. We’ve started to see some shapes of the future.
We began this as an experiment at Arcadia a few years ago. At the time, I expected some eventual efficiency gains. What I didn’t expect was how profoundly it would reshape all of our science. Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, including failed attempts, abandoned inquiries, and untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up as proxies like impact factors disappeared.
Peer review has also become better and faster for us at Arcadia. It’s a real mechanism for improving our rigor, not a secret editorial gate. We often get public feedback, and we use it to openly improve our work in real time. Another recent example of accelerated public peer review was a 2023 study claiming room-temperature superconductivity: it drew intense scrutiny on Twitter and was countered by several independent validation studies in less than a month. The controversial work happened outside of journals, and it wasn’t just peer reviewed, it was peer tested. Evidence-based community consensus happened at lightning speed.
It’s important to note that you don’t have to opt-out of academia to try something new. Astera recently funded a major structural biology project involving multiple academic groups, and the scientists enthusiastically agreed to forge a path without journals. It has been a delightful experience to think more clearly with them about the true impact of their work. The potential outcomes have to be so valuable for what they are — in this case, scalable X-ray crystallography methods that advance our understanding of how proteins move — that they transcend journal proxies. Expansive, iterative reuse of their methods is a more worthwhile goal than shiny comments from three anonymous reviewers. Those are the kinds of ambitious projects we like to fund.
Pre-prints are also a great way for anyone to participate now, but we need scientists to experiment with more radical formats. Pre-prints still retain many journal conventions and are typically released close in time to the journal articles they are ultimately designed to become. In contrast, digital notebooks designed for computational work, such as Jupyter, allow for entirely different paradigms of publishing. Arcadia and others are now playing around with ways to automate conversion of such notebooks into publishable, dynamic outputs that can self-update as linked data evolves (see here, here, and here).
These experiences have converted me completely. I can’t unsee this new world. I look forward to making that true for more people by helping them take the first step.
What Could Happen Next
So how do we start? It’s important to define core publishing requirements before trying stuff. In 2016, a group of scientists, publishers, and funding agency representatives put forth the FAIR (Findable, Accessible, Interoperable, and Reusable) Principles, and they can be summarized as follows:
- Findable means we can discover stuff across digital space and time. Published items need to be linked to information about their creators and a long-lasting unique identifier, like a DOI (or Digital Object Identifier).
- Accessible means that you can easily retrieve and search for publications using everyday tools like Google Scholar or even ChatGPT. There shouldn’t be extra barriers, like paywalls, to finding published work.
- Interoperable means that you should be able to connect information across different formats and venues, which will only get more important as we leverage more AI tools over time.
- Reusable means that it’s possible for others to build on published work, which requires sufficient documentation and permissive licenses.
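To make the four principles concrete, here is a toy sketch of what checking a publication’s metadata record against them might look like. This is an illustration only, not an official FAIR validator, and the field names (`doi`, `creators`, `paywalled`, `formats`, `license`) are hypothetical assumptions rather than any standard schema.

```python
# Toy FAIR-ness check for a publication's metadata record.
# Field names here are illustrative assumptions, not a real standard.

OPEN_LICENSES = {"CC-BY-4.0", "CC0-1.0", "MIT"}
MACHINE_READABLE = {"json", "csv", "xml", "html"}

def fair_issues(record: dict) -> list[str]:
    """Return human-readable FAIR gaps found in a metadata record."""
    issues = []
    # Findable: a persistent identifier plus creator metadata.
    if not record.get("doi"):
        issues.append("Findable: missing a persistent identifier (e.g. a DOI)")
    if not record.get("creators"):
        issues.append("Findable: missing creator information")
    # Accessible: no paywall barrier to the work itself.
    if record.get("paywalled", False):
        issues.append("Accessible: work sits behind a paywall")
    # Interoperable: at least one machine-readable format, not just a PDF.
    if not set(record.get("formats", [])) & MACHINE_READABLE:
        issues.append("Interoperable: no machine-readable format provided")
    # Reusable: a permissive license that others can build on.
    if record.get("license") not in OPEN_LICENSES:
        issues.append("Reusable: no permissive license")
    return issues

record = {
    "doi": "10.1234/example",
    "creators": ["A. Researcher"],
    "paywalled": False,
    "formats": ["pdf"],
    "license": "CC-BY-4.0",
}
print(fair_issues(record))
# → ['Interoperable: no machine-readable format provided']
```

The point of the sketch is that FAIR compliance is mechanically checkable: most of it is metadata hygiene that tooling, not editors, can enforce.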
Work also needs to be permanently archived so that it’s accessible in the long term, which is not a new problem and remains largely unsolved. We especially need to figure this out for large datasets and repositories that the community relies on.
Scientists should probably be putting out shorter narratives, datasets, code, and models at a faster rate, with more visibility into their thinking, mistakes, and methods. In this age of the internet, almost anything could technically be a “publishable unit.” It doesn’t even have to sound nice or match the human attention span anymore, given our increasing reliance on AI agents.
In more general terms, we need publishing to be a reliable record that approximates the true scientific process as closely as possible, both in content and in time. The way we publish now is so far from this goal that we’re even preventing our own ability to develop useful large language models (LLMs) that accelerate science. Automated AI agents can’t learn scientific reasoning based on a journal record that presents only the glossy parts of the intellectual process, and not actual human reasoning. We are shooting ourselves in the foot.
What Could Be Possible
What are some practical non-journal publishing options available now? Many FAIR-compliant venues already exist and are ready to go, and we recently wrote up our recommendations for Astera scientists.
However, we are far from achieving what’s possible. In addition to the many issues inherent to journal processes, the system was never designed to scale to today’s needs. When journals first emerged in the 17th and 18th centuries, they handled outputs from hundreds of scientists. In 2021, a UNESCO report estimated that there are approximately 9 million full-time researchers around the world, publishing millions of articles across roughly 40,000 journals.
It’s also not possible to architect a viable alternative system in a vacuum without users to iterate with. We urgently need more scientists to try strategies that stretch our imagination for what the future could hold. Below are just a few ideas that could be fun to explore:
- Tools to automate the finding and collating of peer review, wherever it lives, e.g. published reviews, comments, social media, meeting notes, conferences, reuse of the science in the real world, etc.
- Ways to directly share lab notebooks and rely on LLMs to dynamically organize and synthesize information, over time. These outputs can be customized based on the readers’ interests.
- LLMs or knowledge graphs that help anyone quickly landscape scientific areas, flag conflicting data, or quantitatively score the novelty of new studies.
- Autonomous agents that can analyze actual data to generate new hypotheses or propose alternative interpretations of published conclusions.
- The ability to connect reviews with real-world outcomes at a later point in time through betting markets or interpretability work on different autonomous agents.
I don’t know if these are the right ideas, which is why we need to try them. But all of these ideas ladder up to giving both authors and readers more agency and optionality to meet their own needs. You drive your own content. You can curate and assess science without an editor. And not everyone needs to agree on a single, centralized solution. Today’s technologies allow for a multitude of strategies that can be layered on top of the internet and augmented over time. We can have choices.
Towards a Better Return-on-Investment
You might think that scientific publishing would be too costly to revamp, even if we had clear solutions. But journal-based publishing already costs the global scientific community $10–25 billion per year in subscriptions and article processing fees, most of it paid from taxpayer-funded grants. On top of that, by conservative estimates, millions of scientist-hours are spent on journal publications annually. And a significant portion of the scientific community outside the U.S. can’t even afford to participate via journals.
This is an expensive price to pay in exchange for an unreliable record, immeasurable delays, opportunity costs, and degradation of public trust. I cannot think of a worse return-on-investment for scientists, science funders, and society than continuing to enable journal publishing.
We don’t need to wait for permission to fix this. The future of science is not going to be rescued by journals or legacy institutions. We need to reclaim science’s role in serving society. I often hear scientists say they can’t abandon journals because they will lose their funding. As a funder, I’m letting you know that we’re not just comfortable with new publishing strategies, we require them.
If our approach sounds exciting to you, send us your ideas, apply for funding, and participate in our experiments. If you’re a fellow science funder, I’d love for you to join us in holding the line for change.
Seemay Chou is Astera’s Co-Founder and Interim Chief Scientist.