October 2024

A Biomedical Researcher on AI's Promises and Pitfalls

A conversation with Marinka Zitnik, assistant professor of biomedical informatics in the Blavatnik Institute at HMS

  • by Ekaterina Pesheva
  • 2 min read

Marinka Zitnik

You work at the intersection of machine learning and biomedicine. What ignited your interest in this field?

As a student I was deeply interested in mathematics, but I also had a strong desire to become a doctor and help people. Early in college, I designed an algorithm to predict which genes in a species of slime mold activate antibacterial pathways that trap and kill bacteria. That was my first direct experience of the impact that computation can have on biomedicine. Since then, I have sought opportunities to work in this emerging area at the interface of the two fields.

How do you stay creative?

I generally start the day with a cup of coffee and a quick scan of the latest research papers, both from machine-learning conferences and areas of biology for which we have projects in the lab. This keeps me up to date and sparks new ideas for my own research. I also organize international workshops and conferences that bring together scientists from diverse disciplines. I encourage my research group to explore topics outside of our immediate area of expertise because the potential for groundbreaking discoveries and novel approaches is greatest at the borders where one field meets another.

What is your greatest hope and your greatest fear with the rise of artificial intelligence in science and medicine?

My greatest hope is that we develop AI systems that could eventually make major discoveries — the type worthy of a Nobel Prize. I hope this will not take humans out of the discovery process; instead, human creativity and expertise could be augmented by the capacity of AI models to analyze large datasets and execute repetitive tasks. My hope is to leverage increasingly powerful models to develop better medicines to cure and manage diseases, particularly those that have very few or no treatments.

My greatest fear is that we’re developing AI models that chase low-hanging fruit, focusing too narrowly on diseases for which we already have abundant data and knowledge. This could worsen health inequities, because we keep generating ever-larger datasets for diseases that are already far better understood than others. We need to stay focused on challenges that could benefit all patients, including those with less-researched diseases for which AI-ready datasets are scarce.

Is there a feature of human intelligence that AI will never achieve?

Human intelligence is characterized by qualities like empathy, moral compass, intuition, and emotional understanding rooted in our experiences interacting with one another in the real world. Recent advances have shown that AI systems can potentially mimic certain aspects of these qualities. Yet subjective, very nuanced aspects of human intelligence, like grasping and appreciating creativity or moral reasoning, might remain beyond AI’s reach.

What do we get wrong about AI?

One common misconception is that AI will completely replace humans in various fields. AI systems are not drop-in replacements for human ingenuity and creativity. Another misunderstanding is that AI is infallible. AI systems are only as good as the data and algorithms that underpin them: they can be biased, they can make errors, and they require careful oversight and continuous improvement. A third misconception is that AI inherently lacks transparency, that every model is a black box that cannot be trusted or understood. Although that’s true for some models, there are ongoing efforts to design AI systems that reveal how their models reach decisions in ways that humans can understand, trust, and verify.

Ekaterina Pesheva is the director of science communications and media relations in the Office of Communications and External Relations at HMS.

Image by John Soares.