Generative AI in Medicine: Pioneering Progress or Perpetuating Historical Inaccuracies? Cross-Sectional Study Evaluating Implicit Bias
Main Authors: | Philip Sutera, Rohini Bhatia, Timothy Lin, Leslie Chang, Andrea Brown, Reshma Jagsi |
---|---|
Format: | Article |
Language: | English |
Published: | JMIR Publications, 2025-06-01 |
Series: | JMIR AI |
Online Access: | https://ai.jmir.org/2025/1/e56891 |
author | Philip Sutera, Rohini Bhatia, Timothy Lin, Leslie Chang, Andrea Brown, Reshma Jagsi |
author_sort | Philip Sutera |
collection | DOAJ |
description |
Abstract
Background: Generative artificial intelligence (gAI) models, such as DALL-E 2, are promising tools that can generate novel images or artwork from text input. However, caution is warranted, as these tools generate output based on historical data and are therefore at risk of propagating past learned inequities. Women have routinely been underrepresented in academic and clinical medicine, and the stereotype of the male physician persists.
Objective: The primary objective was to evaluate implicit bias in gAI output across medical specialties.
Methods: To evaluate potential implicit bias, 100 photographs were generated for each medical specialty using the gAI platform DALL-E 2. For each specialty, DALL-E 2 was queried with "An American [specialty name]." Our primary endpoint was to compare the gender distribution of the gAI photos to the current distribution in the United States; our secondary endpoint was to evaluate the racial distribution. gAI photos were classified by perceived gender and race based on unanimous consensus among a diverse group of medical residents. For each medical specialty, the proportion of women among gAI-generated subjects was compared to the most recent Association of American Medical Colleges (AAMC) reports on the physician workforce and active residents using χ2 tests (illustrated in the sketch below).
Results: A total of 1900 photos across 19 medical specialties were generated. Compared to physician workforce data, gAI significantly overrepresented women in 7/19 specialties and underrepresented women in 6/19 specialties. Women were significantly underrepresented relative to the physician workforce by 18%, 18%, and 27% in internal medicine, family medicine, and pediatrics, respectively. Compared to current residents, gAI significantly underrepresented women in 12/19 specialties, by margins ranging from 10% to 36%. Additionally, women made up <50% of the generated subjects in 17/19 specialties.
Conclusions: gAI created a sample population of physicians that underrepresented women compared with both the resident and active physician workforces. Steps must be taken to train models on datasets that represent the diversity of the incoming physician workforce. |
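The methods above describe a concrete, reproducible pipeline, so a brief illustration may help. The sketch below is a hypothetical reconstruction, not the study's code: it batches image requests to DALL-E 2 through the OpenAI Python SDK and treats the AAMC proportion as the expected distribution in a one-sample χ2 goodness-of-fit test, one plausible reading of "compared ... using χ2 tests" (a two-sample test of proportions would be an equally defensible reading). The specialty, counts, and benchmark share are illustrative placeholders.

```python
# Hypothetical reconstruction of the described pipeline; not the authors'
# actual code. Assumes the OpenAI Python SDK (v1.x) and SciPy are installed,
# and that OPENAI_API_KEY is set in the environment.
from openai import OpenAI
from scipy.stats import chisquare

client = OpenAI()

def generate_specialty_photos(specialty: str, n_total: int = 100) -> list[str]:
    """Generate n_total DALL-E 2 images for one specialty; return image URLs."""
    urls: list[str] = []
    # DALL-E 2 accepts at most 10 images per request, so batch the calls.
    for _ in range(n_total // 10):
        response = client.images.generate(
            model="dall-e-2",
            prompt=f"An American {specialty}",
            n=10,
            size="1024x1024",
        )
        urls.extend(image.url for image in response.data)
    return urls

def compare_to_benchmark(n_women: int, n_total: int, benchmark_share: float):
    """One-sample chi-square goodness-of-fit test of observed gender counts
    against a benchmark proportion (e.g., an AAMC workforce share)."""
    observed = [n_women, n_total - n_women]
    expected = [n_total * benchmark_share, n_total * (1 - benchmark_share)]
    return chisquare(f_obs=observed, f_exp=expected)

# Illustrative numbers only: 37 of 100 generated "pediatrician" images
# perceived as women, against a placeholder 64% benchmark share.
stat, p_value = compare_to_benchmark(n_women=37, n_total=100, benchmark_share=0.64)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```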
format | Article |
id | doaj-art-e6cbfae8a2c94e42835fc58e858ffcf7 |
institution | Matheson Library |
issn | 2817-1705 |
language | English |
publishDate | 2025-06-01 |
publisher | JMIR Publications |
record_format | Article |
series | JMIR AI |
title | Generative AI in Medicine: Pioneering Progress or Perpetuating Historical Inaccuracies? Cross-Sectional Study Evaluating Implicit Bias |
url | https://ai.jmir.org/2025/1/e56891 |