
Inaccuracies in AI Diagnosing Can Have Harmful Results

December 22nd, 2023 News & Politics 4 minute read


The use of artificial intelligence (AI) tools by doctors to diagnose patients can lead to inaccuracies because of biases embedded in those tools. Despite efforts to promote transparency in explaining how AI makes predictions, a recent study published in JAMA suggests that such transparency does not effectively address the problem of bias in AI-assisted diagnoses.

The significance of this issue is heightened by the growing role of AI in diagnosis and treatment, which makes it necessary to identify and correct models built on flawed assumptions. For instance, if an AI model is trained on data that consistently underdiagnoses heart disease in female patients, it may learn to perpetuate this bias, resulting in continued underdiagnoses in women, as the researchers point out. The concern is to ensure that biased AI models do not distort medical decision-making.

In the study, approximately 450 healthcare professionals, including doctors, nurses, and physician assistants, were presented with various cases of patients admitted to the hospital with acute respiratory failure. The clinicians were given information about each patient's symptoms, physical examination, laboratory results, and chest radiographs, and were asked to assess the likelihood of pneumonia, heart failure, or chronic obstructive pulmonary disease. To establish a baseline, all participants initially evaluated two cases without any input from an AI model.

Photo by Tara Winstead from Pexels

Subsequently, they were randomly assigned to evaluate six more cases with input from an AI model, including three cases with systematically biased model predictions. The study revealed that the clinicians' diagnostic accuracy on their own was 73%. When presented with predictions from an unbiased AI model, their accuracy improved by 2.9 percentage points, and when they were also given an explanation of how the AI model reached its prediction, it increased by 4.4 percentage points over the baseline.

These findings indicate that AI tools can enhance diagnostic accuracy when used in conjunction with healthcare professionals, potentially making inaccuracies less burdensome. The Biden administration has announced plans to create protections for the future of artificial intelligence. The Department of Health and Human Services is also establishing an artificial intelligence task force to oversee AI-enabled technologies currently deployed in hospitals, insurance companies, and other healthcare enterprises.

An executive order mandates the U.S. Department of Health and Human Services to create an AI task force within a year. This group is tasked with formulating a strategic plan encompassing policies and potential regulatory measures for the responsible implementation of AI and AI-enabled technologies in the healthcare sector, reducing inaccuracies while allowing collaboration with the human workforce to progress. The plan will cover areas such as research and discovery, drug and device safety, healthcare delivery and financing, and public health.

Recently, lawmakers have started exploring ways to put this directive into action, particularly focusing on its implications for healthcare.
Senator Roger Marshall, a Republican from Kansas who is also a physician, cautions against overregulating AI in healthcare, emphasizing the need to avoid stifling innovation amid the effort to correct inaccuracies. He acknowledges the positive impact of artificial intelligence and machine learning on healthcare over the past five decades and suggests a careful approach to rulemaking so as not to hinder progress.

Senator Edward Markey, a Democrat from Massachusetts and the chair of the subcommittee, expresses concern that, without proper regulation, AI in healthcare could cause harm and exacerbate existing inequities. He highlights the need for guardrails to ensure the responsible and ethical use of AI, pointing to the lessons learned when big tech was left to self-regulate and prioritized profit over people, and stresses the importance of regulating artificial intelligence to avoid repeating similar mistakes.

Sources:

AI guardrails can fall short in health care: study
Measuring the Impact of AI in the Diagnosis of Hospitalized Patients
Biden to HHS: Create an AI task force to keep health care 'safe, secure and trustworthy'
Sara E. Teller

About Sara E. Teller

Sara is a credited freelance writer, editor, contributor, and essayist, as well as a novelist and poet with nearly twenty years of experience. A seasoned publishing professional, she's worked for newspapers, magazines, and book publishers in content digitization, editorial, acquisitions, and intellectual property. Sara has been an invited speaker at a Careers in Publishing & Authorship event at Michigan State University and a Reading and Writing Instructor at Sylvan Learning Center. She has an MBA with a concentration in Marketing and an MA in Clinical Mental Health Counseling, graduating with a 4.2/4.0 GPA. She is also a member of Chi Sigma Iota and a 2020 recipient of the Donald D. Davis scholarship recognizing social responsibility. Sara is certified in children's book writing, HTML coding, and social media marketing. Her fifth book, PTSD: Healing from the Inside Out, was released in September 2019 and is available on Amazon. You can find her other books there, too, including Narcissistic Abuse: A Survival Guide, released in December 2017.
