Newswire: March 15, 2018.
Machine learning can be a powerful assistant to physicians and other healthcare professionals, but researchers say advancements in the field pose ethical risks specific to medicine.
In an editorial published this week in the New England Journal of Medicine, researchers from the Stanford University School of Medicine say the tremendous growth of machine-learning tools demands that physicians and scientists carefully consider the ethics of incorporating them into decision-making.
Specifically, that involves:
- The data that informs an algorithm may contain biases, which in turn bias the algorithm and the clinical recommendations it generates.
- Physicians, in addition to understanding their patients’ needs, must also be informed about how machine-learning algorithms are created, so that they can properly apply the algorithms’ recommendations to an individual patient.
- Data gathered for the development of a machine-learning algorithm might flow into the large pools of information collected by healthcare systems and then be used without regard for privacy, clinical experience, or the role of human interaction in healthcare.
In October, a study by the New York City Department of Health and Mental Hygiene used electronic records to reveal that physicians were not prescribing a pre-exposure medication that lowers the chance of HIV infection to men of color or to women, demonstrating how data can both produce biased clinical recommendations and expose patterns of bias.
“Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the current study’s authors wrote.