AI in Medicine


Behind, Biased, and (Still) Full of Possibility

Let me get straight to the point:
AI is disrupting healthcare. Whether for good or ill is still to be determined; either way, most of us in the medical field are already behind. Not because we’re lazy, not because we’re unwilling… but because the system isn’t built for us to keep up.

While the rest of the world is riding the AI wave, many of us are still drowning in EHR tabs, faxes (yes, literal faxes), and patched legacy software from a bygone era. And even when we do finally adopt new technology (side note: prohibitive cost drives much of medicine’s inertia), it often feels like it was designed for us, but not by us, and certainly not with our patients in mind.

This isn’t just about efficiency.
It’s about power.
It’s about bias.
And it’s about the future of care… if we’re bold enough to reimagine it.

Let’s Talk Truth: Medicine Is Behind

Here’s the hard truth. We train for a decade or more in anatomy, pathology, and pharmacology, but not in data fluency, AI ethics, or machine learning algorithms, and definitely not in politics or business.
We know how to assess risk in the ER and OR, but not when it’s laced into an algorithm used in predictive diagnostics.

Most of us are users of technology, not its shapers.
And when you’re not shaping something this powerful, you are being shaped by it.

Now Let’s Talk Bias: Who’s Controlling the Code?

AI is not neutral, by any stretch of the imagination.
Let me repeat that… AI is not neutral (learn more about seeding).

The outputs are only as unbiased as the inputs. And guess who controls those inputs?

  • Tech companies.
  • Investors.

  • Engineers who, frankly, often don’t look like us, live like us, or understand the communities we serve.

In 2019, just 2.4% of new AI PhD graduates were African American, and only 3.2% were Hispanic.1

Healthcare AI tools are being trained on data that underrepresents Black and Brown patients, women, rural communities, and people with disabilities.
You think health disparities are alarming now?
Let a biased algorithm guide your triage system and treatment plans.
Let a data set built on wealthy, white, urban populations decide who gets flagged as “high risk.” That will inevitably shape what, and whom, insurance companies cover, deepening an already alarming health divide.

It’s not hypothetical… It’s happening.
And most physicians are unaware of it, because we are not trained for it.
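To make the mechanics concrete, here is a minimal, hypothetical sketch (toy numbers, invented biomarker values, not real clinical data or any deployed system): a single risk cutoff tuned for overall accuracy on a training set dominated by one patient group can catch every high-risk patient in that group while missing every high-risk patient in an underrepresented group whose risk presents at different values.

```python
# Toy illustration of training-data bias, NOT a real clinical model.
# (biomarker_value, truly_high_risk) pairs; group A dominates the training set,
# and group B's high-risk patients present at lower biomarker values.
group_a = [(0.3, False), (0.4, False), (0.7, True), (0.8, True)] * 20  # 80 records
group_b = [(0.1, False), (0.2, False), (0.3, True), (0.4, True)] * 2   # 8 records

training = group_a + group_b

def best_threshold(data):
    """Pick the cutoff that maximizes overall accuracy on the training data."""
    candidates = sorted({x for x, _ in data})
    def accuracy(t):
        return sum((x >= t) == y for x, y in data) / len(data)
    return max(candidates, key=accuracy)

def false_negative_rate(data, t):
    """Fraction of truly high-risk patients the cutoff fails to flag."""
    high_risk = [(x, y) for x, y in data if y]
    return sum(x < t for x, _ in high_risk) / len(high_risk)

t = best_threshold(training)
print(f"learned threshold: {t}")
print(f"group A missed high-risk: {false_negative_rate(group_a, t):.0%}")
print(f"group B missed high-risk: {false_negative_rate(group_b, t):.0%}")
```

On these toy numbers the learned cutoff lands at 0.7: overall accuracy looks excellent, the model misses 0% of group A’s high-risk patients, and it misses 100% of group B’s. No one wrote a biased rule; the skewed data did it quietly.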

How to Use AI — Without Getting Used by It

I’m not anti-AI. I’m anti-blind adoption. I believe AI can, and should, make our work easier, safer, and more patient-centered. So here is how we can strategically shape the future of medicine with AI:

1. Become AI-literate.
Learn how these tools work. Ask where the data comes from. Know the difference between generative AI (which produces text, images, and summaries) and predictive AI (which scores risks and outcomes). If you don’t understand it, you can’t challenge it.

2. Use AI to reduce burnout, not amplify bias.
Let AI summarize notes, assist with prior authorizations, or translate instructions for patients.
But you stay in charge of the diagnosis.
You ask the human questions.
You maintain the empathy.

3. Advocate for inclusion at the table.
We need more clinicians, public health experts, educators, and patients in the rooms where this tech is being built.
Not as tokens.
As decision-makers.

4. Train the next generation differently.
If your med school or residency program isn’t teaching about AI, say something. If you’re a mentor or a leader, build it into how you train others. This isn’t optional anymore. It’s a must for the future of medical professionals.

What the Future Demands of Us

The power of AI will not be measured in how fast it works, but in who it works for.
And right now, that power rests disproportionately with people who don’t live the medical reality we do as doctors, nurses, physician assistants, nurse practitioners, and other medical professionals: long hours, complex patients, cultural nuances, human grief, and bureaucratic red tape that no algorithm can quantify.

If we want AI to truly work in medicine, then we need to stop waiting to be invited into the conversation.

We are the conversation.

Final Thought from #OrthoDoc

Healthcare has always been slow to change.
But that can’t be an excuse anymore.
AI is here. It is not going anywhere. It’s moving fast. And if we don’t lead, we’ll be led… by code written in a boardroom, not a clinic or operating room.

So here’s your call to action, colleague, innovator, disruptor:

Don’t just use the tools. Shape them. Question them. Challenge them. Improve them.
Because the future of medicine should be human-powered, AI-assisted, and equitably built.


Sonya Sloan, M.D.
Orthopedic Surgeon. Advocate. Disruptor.
#OrthoDoc