On Episode 6 of Season 3, Dr. Tina Hernandez-Boussard joined us for a thought-provoking conversation about artificial intelligence (AI), fairness, and the molecular future of medicine. Dr. Boussard is an Associate Dean of Research and Professor of Medicine (Biomedical Informatics), Biomedical Data Sciences, Surgery, and Epidemiology & Population Health (by courtesy) at Stanford University.
Why Trust Is the Foundation of Health AI
When AI harms, trust evaporates, and without trust, patients stop sharing data. That's a vicious cycle: if diverse groups don't participate, models underperform, reinforcing inequities in care. Dr. Boussard frames this as a "catch-22" in algorithm development and argues that trust and transparency must be baked in from the beginning, not retrofitted later.
Mitigating Bias: Not Just a Data Problem
Bias in AI isn't just a technical bug; it's a structural issue. As Dr. Boussard explains, equitable AI depends on diverse data across race, gender, geography, and socioeconomic status. One example? Predictive models built only on privately insured patients may fail badly for those on public insurance or without consistent access to care.
This connects to her broader philosophy of the “AI life cycle”—a holistic framework that considers data sourcing, model selection, implementation, and real-world impact as interdependent.
Evo 2: ChatGPT, But for DNA
Easily one of the most mind-blowing parts of the conversation is Evo 2, a large language model trained not on human text, but on 9.3 trillion DNA base pairs from across the tree of life.
Dr. Boussard compares it to ChatGPT for biology: instead of predicting the next word in a sentence, Evo 2 predicts the next base in a genetic sequence. This allowed researchers to identify pathogenic mutations in human genes (like BRCA1 in breast cancer) even though the model was barely trained on human data.
What made it powerful was the evolutionary diversity of its training data: bacteria, plants, and animals alike, allowing it to uncover deep, universal patterns in life itself.
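Evo 2 itself is a massive genomic language model, but the core idea behind scoring a mutation is simple: compare how likely the model finds the variant sequence versus the reference. A variant the model considers far less probable is a candidate for being functionally disruptive. The toy sketch below illustrates this with a tiny k-mer next-base model in Python; every function and variable name here is illustrative, not Evo 2's actual architecture or API.

```python
import math
from collections import Counter, defaultdict

def train_kmer_model(sequences, k=3):
    """Count how often each base follows each (k-1)-length context."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            context, nxt = seq[i:i + k - 1], seq[i + k - 1]
            counts[context][nxt] += 1
    return counts

def sequence_log_likelihood(model, seq, k=3):
    """Sum log-probabilities of each next base given its context (add-one smoothing over A/C/G/T)."""
    total = 0.0
    for i in range(len(seq) - k + 1):
        context, nxt = seq[i:i + k - 1], seq[i + k - 1]
        ctx_counts = model.get(context, Counter())
        prob = (ctx_counts[nxt] + 1) / (sum(ctx_counts.values()) + 4)
        total += math.log(prob)
    return total

# Illustrative training data; a real model would see trillions of bases.
training = ["ATGCGATACGATGCGATACG", "ATGCGATACGTTGCGATACG"]
model = train_kmer_model(training)

ref = "ATGCGATACG"
variant = "ATGCGGTACG"  # single-base substitution

# Negative delta: the model finds the variant less plausible than the reference.
delta = sequence_log_likelihood(model, variant) - sequence_log_likelihood(model, ref)
```

A real foundation model replaces the k-mer counts with a learned neural network, but the likelihood-comparison logic for flagging suspicious variants is the same shape.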
Read the Evo 2 paper: Arc Institute Evo
Safety, Open Science, and Ethics
During training, Dr. Boussard, her team, and the rest of the Arc Institute team made sure Evo 2 didn't include any sequences from known human-infecting viruses, a critical biosafety safeguard, especially for an open-source model.
The episode also delves into difficult but important questions: Who defines fairness in AI? Can transparency replace regulation? And how do we educate both the public and developers to speak a common language around responsible AI?
From Rural America to Stanford: Tina’s Journey
Tina grew up in a small rural town, wanting to become a vet. Her studies, however, led her down a different path: making sure that new innovations improve the lives of everyone, not just a privileged few.
This ethos carries into her work: ensuring AI doesn't just reinforce existing disparities but actually works to reduce them.
What’s Next? From Wearables to Ambient AI
Looking ahead, Dr. Boussard is most excited about the next generation of tools that go beyond electronic health records: wearable devices, environmental data, and “ambient AI” that captures the nuances of daily life. The goal? More personalized, real-world models that reflect actual lives, not just hospital visits.