
The biotech industry is going through an AI inflection point, and it is intense.
Every week brings a new foundation model, a new “breakthrough” in protein structure prediction, a new startup claiming to reinvent drug discovery with transformers. If you work near the intersection of biology and computation, you probably know the feeling: that quiet background pressure saying you should be learning faster, shipping more, reading everything.
I am genuinely excited about what AI can do for biology. I have built parts of my own work around that belief. At the same time, I think it is honest to admit that the way this progress shows up in our feeds, papers, and funding cycles carries a real psychological cost.
What follows is not a complaint about AI or about progress. It is an attempt to describe the mental load that comes with this moment, so that we can deal with it more intentionally.
The Invisible Toll of AI Anxiety
For a long time, I treated the stress as “just part of the job.” New papers, new models, new benchmarks. If you care about the field, you keep up. Simple.
Over time, I noticed it changing how I felt about my own work.
I have had the Sunday night scroll where Twitter or arXiv is full of new preprints and suddenly everything I am doing feels small or slow. I have read threads from very young founders explaining how they “solved” a difficult biology problem with a stack of transformers and felt a mix of curiosity, respect, skepticism, and a little voice in my head asking, “Are you already behind?”
In the past, I have spent evenings trying to keep up with new architectures while my actual projects quietly piled up. That did not make me a better scientist. It mostly made me more tired and less clear.
This is not just classic FOMO. It feels like a predictable side effect of an ecosystem built around speed, novelty, and big claims.
I know I am not the only one feeling this, even though I am fairly introverted and do not talk to a lot of people about it directly. Just from watching the field from a distance, it is easy to notice patterns. Strong scientists quietly wondering if they picked the “wrong” area because they are not working on the latest model class. Good ML engineers joking online about feeling outdated because they have not fine-tuned an LLM in the last six months, and it does not always feel like a pure joke. People hinting at brutal work schedules to stay “relevant,” even though the work they were already doing had real value.
Most of this I infer from how people write, present, and frame their work, rather than from long private conversations. It is still enough to convince me that the mental load is widespread, not just a personal quirk on my end.
The FOMO Factory
When I look at the broader ecosystem, a few structures seem to amplify this anxiety.
Preprints as a constant backdrop. New arXiv papers appear every day with strong claims and clean figures. Some are genuinely important, some are incremental, some will quietly vanish. In the moment, each one can feel like another reminder that there is something you have not read and are already “late” on.
Social media that compresses nuance. Thoughtful posts about limitations, negative results, or messy reality rarely go far. Bold one-liners like “AGI for biology is basically here” spread faster. Over time, this creates a distorted picture where everyone seems more certain, more advanced, and more confident than they really are.
Funding and branding incentives. Investors naturally look for leverage and defensibility, so phrases like “AI-first” and “foundation model for X” become standard pitch elements. That language then filters into company websites, job descriptions, and internal roadmaps. Careful, narrow, deeply validated work can feel less visible, even when it is what actually moves the field forward.
Conferences as comparison engines. Talks and posters often highlight the newest method or largest model. Saying “we are using an older model that we understand well and that holds up under validation” is scientifically mature, but it can feel dull next to fresh benchmarks and big claims.
None of this is evil. These are just the natural consequences of current incentives. But if you are not careful, it is very easy to internalize all of this as “I am not enough.”
The Real Cost
The cost is not just abstract stress. It shows up in specific ways.
Chronic tension. When you always feel a step behind, a low-grade anxiety becomes your baseline. You can be doing meaningful work and still feel like you are failing some invisible checklist.
Fragmented focus. When everything looks urgent and important, it becomes hard to commit deeply. I have seen, and sometimes fallen into, the pattern of dropping a solid project to chase a hot idea, then realizing later that the original work actually mattered more.
Distorted self-assessment. Even people with real contributions can start to feel like they are faking it because their mental map of the field does not update at the same pace as the news cycle.
Work bleeding into everything. Evenings and weekends quietly turn into catch-up time. Reading papers and documentation starts to occupy space that used to belong to rest or hobbies.
Career questions that loop. Questions like “Should I lean fully into AI?” or “Did I spend too long on biology?” or “Will this skill stack still matter in a few years?” are not unreasonable. They just become exhausting when they repeat in the background all the time.
I am not writing this to ask for sympathy. I am writing it because naming the pattern makes it easier to handle. Once I started to see it clearly, I could make more deliberate choices about how I work.
What I’ve Learned From Inside the Field
A few things have become clearer to me over time.
Most of the Noise Really Is Noise
A lot of “AI for biotech” is standard supervised or generative modeling packaged in more dramatic language. That does not make it worthless. It just means it is not as revolutionary as the marketing suggests.
The fundamentals are still the same: good experimental design, clean and well-understood data, strong domain knowledge, and rigorous validation in real systems. No clever architecture fixes a broken assay or a biased dataset. When I remember that, the anxiety about missing each new model drops.
I still try to track major, durable shifts like AlphaFold or genuinely strong biological foundation models, but I no longer treat every trending preprint as urgent.
Depth in a Narrower Slice Beats Shallow Range
At some point I accepted that I will never understand every new architecture in detail, and that this is fine.
What actually helps my work is knowing my biological questions deeply, understanding how my data is generated and where it fails, and keeping my ML fundamentals solid enough to evaluate new tools rather than worship them.
The people I respect most are not always the ones who know every acronym. They are often the ones who can say, “For this assay and this question, this level of modeling is appropriate, and here is how we will validate it.” That is the kind of practitioner I would like to be.
The Real World Moves Slower Than the Feeds
Online, it can look like every lab has a fully automated AI pipeline and a near-complete virtual cell model.
In actual conversations and collaborations, the picture is very different. Many teams are still working through data harmonization, annotation quality, reproducibility in basic analyses, and integration of computational tools with lab workflows. Most companies still validate important claims using traditional experiments. Cells, animals, and eventually patients are still in the loop.
The gap between hype and deployment is not embarrassing. It is where most of the practical work is. There is a lot of room for people who are willing to handle the unglamorous pieces well.
Existing Skills Actually Gain Value
There were moments when I quietly wondered if the years spent learning biology and building more “traditional” models were a mistake because I was not training enormous LLMs.
With some distance, that feels like the wrong framing.
The things I already know how to do, like designing or interpreting experiments, building pipelines that survive contact with real data, and asking whether a result is biologically plausible at all, compound over time. As models become more powerful, the need for grounded judgment increases. Someone has to say: “This target looks promising in the model, but this pathway is a known dead end clinically.” Or: “These predictions are probably tracking batch effects, not underlying biology.” Or: “This idea is exciting, but here is the simplest experiment that would actually test it.”
AI does not replace domain expertise. It leans on it.
How I Try to Stay Sane
I do not have this figured out perfectly, but a few habits have helped.
Curating what I see. I unfollow or mute accounts and topics that reliably spike my anxiety without adding much insight. I remind myself that if something is truly important, it will surface more than once. I do not need a real-time feed of everything.
Defining my own yardstick. I try to measure progress with questions like: Am I working on problems I care about? Am I learning at a pace that I can sustain, not just this month but this year? Am I building tools or ideas that help real projects move forward? This is not perfect, but it feels healthier than comparing myself to a stream of announcements.
Having a few honest connections. Even as an introvert, having a small number of people I can be real with helps a lot. Being able to say “I do not understand this paper” or “I feel behind” out loud makes those thoughts less heavy.
Practicing strategic ignorance. I choose areas where I want to go deep and consciously ignore some others, at least for now. This is not laziness. It is an acknowledgment that attention and energy are finite.
Separating interest from fear. When I feel pulled toward a new topic, I try to ask, “Am I genuinely curious, or am I just afraid of missing out?” If the answer is mostly fear, I wait. The opportunities that truly matter tend to survive a bit of delay.
Letting breakthroughs prove themselves. If a method is truly transformative, it will still be important in six or twelve months. I do not need to reorganize my roadmap around it in the first week.
Taking real breaks. Reading papers is work, not rest. I try, imperfectly, to protect some time that is genuinely offline and not about cells, code, or GPUs. When I do that, the time I spend on hard problems feels sharper.
The Case for a Completely Unrelated Pursuit
One thing that has helped me more than I expected is learning a musical instrument.
This might sound unrelated to the problem, and that is exactly the point. When you are caught in the AI anxiety loop, your brain keeps circling back to the same questions: What should I learn next? Am I keeping up? What did I miss today? Even “rest” can turn into passive scrolling through the same feeds that stress you out.
Having a pursuit that is completely unrelated to your technical field forces a different kind of mental engagement. When I am practicing guitar, I am not thinking about transformers or perturbation prediction. My brain is occupied with something that has its own logic, its own progress curve, and its own rewards. There is no arXiv for guitar. Nobody is going to drop a preprint that makes my chord progressions obsolete.
The skill itself does not matter. It could be music, painting, woodworking, a sport, cooking, or anything else that genuinely absorbs your attention and has nothing to do with your day job. The key is that it cannot be optimized as a career asset or measured against your professional output. It exists in a separate space.
What I have noticed is that this kind of engagement is genuinely restorative in a way that passive consumption is not. After an hour of practice, I come back to my work with a clearer head. The anxiety is still there sometimes, but it feels quieter. The mental load has somewhere else to go.
If you are feeling burned out by the pace of the field, I would strongly recommend finding something like this. Not as productivity advice or as another item on your self-improvement list, but as a way to remember that you are a person who exists outside of your job title and your publication record.
The Real Work
I do not expect the rate of AI progress in biotech to slow down. There will always be new models, new claims, and new companies.
For my own sanity, I keep coming back to a few simple ideas: depth over constant shallow breadth, impact over novelty for its own sake, and working solutions in messy reality over perfect metrics in a tightly controlled benchmark.
Sometimes the best thing I can do is close my feeds, ignore the latest “AGI for biology” thread, and quietly return to the concrete biological or engineering problem in front of me.
The future of biotech will need people who can bridge AI capabilities and biological reality. People who can say “this helps,” “this does not,” and “this needs more evidence.” People who can stay grounded enough to use powerful tools without being swallowed by the hype around them.
That is the kind of role I am trying to grow into. If you are reading this, there is a good chance you are trying to do something similar.
You are not falling behind. You are doing real work in a field that needs real work. The noise around you is loud, but it is still just noise.
If you are feeling some of this mental load, it is not a sign that you are weak or ungrateful. It is a very human response to a very intense environment.
If you are comfortable sharing, I would genuinely like to hear what has helped you manage the stress. Small, practical strategies from other people in the same storm are often more helpful than one more “hot take” on AI.