AI's potential in healthcare requires equitable, transparent design to avoid amplifying disparities.
AI is rapidly changing the future of healthcare. Its ability to analyze complex medical and research data is improving diagnostics, drug development, and our broader understanding of health. But with that promise come serious ethical questions, particularly around equity, trust, and inclusion. These topics were at the center of the “Code, Context, and Care” symposium, hosted by the Cobb Institute on Sunday, July 20, during the National Medical Association conference. The event brought together experts in medicine, informatics, public health, and education to explore how AI can responsibly support healthcare delivery.
AI presents transformative opportunities in healthcare—from diagnostics to population health—but without thoughtful design, it can also amplify existing inequities. As noted by panelists like Dr. Alison Whelan and Dr. Hassan Tetteh, biased datasets, opaque algorithms, or poorly validated tools can undermine clinical trust, misguide interventions, and further marginalize vulnerable populations.
Gilles Gnacadja, PhD, a research strategist at Amgen, provided a critical industry perspective on the ethical integration of AI in clinical research and development. He emphasized that AI must meet rigorous standards to be truly impactful.
From a biopharmaceutical standpoint, Dr. Gnacadja underscored the responsibility of industry leaders to implement AI with clinical validity and ethical guardrails, especially when these tools influence real-world treatment decisions. His remarks were a strong reminder that advanced AI must serve all patients—not just those best represented in training datasets.
For healthcare professionals, the takeaway is clear: our engagement and oversight are essential to ensuring AI enhances care without compromising equity or trust.
This year’s symposium featured a dynamic roster of panelists and speakers representing diverse expertise and lived experience.
Many algorithms are trained on flawed or incomplete data that fail to account for the health needs of Black patients. For example, using the cost of care as a proxy for health need has led to underdiagnosis and undertreatment: patients who face barriers to accessing care generate lower costs, so cost-based algorithms rate them as healthier than they are. HCPs must scrutinize how models are developed and push for datasets that reflect the true diversity of the populations they serve.
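The cost-as-proxy failure described above can be illustrated with a tiny synthetic sketch. The numbers and groups here are hypothetical, invented purely for illustration; the point is structural: two groups with identical true need but unequal recorded spending get ranked differently by a cost-based score.

```python
# Hypothetical illustration (synthetic data, not from any real dataset):
# why health-care cost is a biased proxy for health need.
# Both groups have the same underlying need, but Group B incurs lower
# recorded costs due to access barriers, so a cost-based risk score
# systematically under-prioritizes Group B patients.

patients = (
    [{"group": "A", "need": 8, "cost": 8000} for _ in range(5)]
    + [{"group": "B", "need": 8, "cost": 5000} for _ in range(5)]
)

def avg(values):
    return sum(values) / len(values)

need_a = avg([p["need"] for p in patients if p["group"] == "A"])
need_b = avg([p["need"] for p in patients if p["group"] == "B"])
cost_a = avg([p["cost"] for p in patients if p["group"] == "A"])
cost_b = avg([p["cost"] for p in patients if p["group"] == "B"])

# True need is identical across groups...
assert need_a == need_b
# ...but a model trained to predict cost would score Group B as "lower need".
assert cost_b < cost_a
```

Validating a model against measured need rather than spending is one way auditors have surfaced exactly this kind of gap.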
AI can speed up trial recruitment and expand access—but only if underrepresented groups are deliberately included. Black patients remain significantly under-enrolled in research studies. HCPs must advocate for equity in trial design, eligibility criteria, and participant outreach to ensure inclusive, data-driven innovation.
AI literacy is essential for the future of medicine. Curricula must go beyond technical training to address how bias shows up in AI tools and how to respond. For Black medical students and faculty, equitable access to these learning opportunities is also critical to leveling the field.
AI should never override clinical judgment. Providers—especially those serving vulnerable communities—must stay informed about the tools being used, question their outputs, and speak up when something doesn’t align with patient needs or ethical standards.
AI tools used in radiology, orthopedics, and OB/GYN must be validated across diverse populations. Studies show that imaging data and diagnostic accuracy often vary by race and skin tone. Without diverse training data, AI can miss critical differences in conditions affecting Black patients.