A report released on Wednesday examines the difficulties that may come with implementing artificial intelligence (AI) tools in the healthcare industry, following the World Health Organization's (WHO) recent publication of considerations for regulating AI in the field.
The WHO's regulatory considerations emphasize the importance of validating AI tools' efficacy and safety, making systems accessible to those who need them, and fostering communication between the developers and users of AI tools.
The WHO acknowledges the potential of AI in healthcare, noting that it could improve existing systems and devices by strengthening clinical trials, improving diagnosis and treatment, and supplementing the knowledge and skills of medical professionals.
According to a report by the data and analytics company GlobalData, AI technologies have been adopted rapidly, sometimes without a full understanding of how they will perform in the long term, which could have negative effects on patients or healthcare providers.
“AI has so many advantages, and it has already enhanced a number of systems and devices. But these tools also carry some risks, given how quickly they are being adopted,” Alexandra Murdoch, Senior Analyst at GlobalData, said in a statement.
Because AI systems used in healthcare and medicine frequently access personal and medical data, legal frameworks should be in place to protect security and privacy. Other potential issues with AI in healthcare include cybersecurity risks, unethical data collection, and the amplification of biases and misinformation.
A recent Stanford University study illustrates the risk of bias in AI tools: it found that certain AI chatbots propagated inaccurate medical information about people of color in their responses.