
Artificial intelligence in health care needs more regulation, WHO says

by myphillyconnection

In an effort to curb the improper use of artificial intelligence in health care, the World Health Organization released new guidelines Thursday for ethically using the technology.

Generative AI, or AI that can generate text and images (think ChatGPT), has gained popularity across industries in recent years, and health care is no exception. The WHO's new guidelines include more than 40 recommendations for the ethics and governance of large multimodal models — which allow generative AI to create content from text, videos, sounds and images — to prevent harm to patients and create better health care experiences.

The WHO noted that AI can be useful in responding to written queries from patients; investigating symptoms and treatments; handling clerical tasks like documenting and summarizing patient visits; providing patient simulations for training; and identifying new compounds for research and drug development. Radiologists have even begun to use AI to detect breast cancer.

But generative AI and LMMs have produced false, inaccurate and incomplete statements, the WHO warned, often because the data used to train the models are biased or of poor quality. The WHO said engagement from tech companies, health care providers, governments, patients and other stakeholders is crucial to the development, oversight and regulation of artificial intelligence.

"Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs," Dr. Alain Labrique, director of digital health and innovation at the WHO, said in a statement.

In its recommendations, the WHO said that governments should:

• Invest in not-for-profit or public infrastructure related to generative AI, including accessible technology and data, and require health care tech creators to adhere to certain ethical principles in order to access that infrastructure.
• Regulate AI in health care with government policies related to human rights, including patients' dignity, autonomy and privacy.
• Where resources allow, either assign an existing agency or create a new regulatory agency to approve LMMs and health care applications.
• Make auditing and impact assessment by independent third parties mandatory for large-scale use of generative AI, and publish the assessments and audits with results broken down by patient demographics.

The WHO also said that development and design processes need to be transparent and inclusive, giving patients, providers and other stakeholders the opportunity to raise ethical issues and concerns.

Health care isn't the only industry interested in generative AI. Pennsylvania Gov. Josh Shapiro inked an agreement this month with OpenAI for state employees to test its ChatGPT product for use in government business. The partnership will start with a test group of 50 employees. In September, Shapiro also established an AI governing board to guide the state's use of the technology.
