
Risks and remedies for artificial intelligence in health care

Nov 14, 2019

Introduction

Artificial intelligence (AI) is rapidly entering health care, where it serves important functions, from automating drudgery and routine tasks in medical practice to managing patients and medical resources. As developers create AI systems to take on these tasks, several risks and challenges emerge, including the risk of injury to patients from AI-system errors, the risk to patient privacy from data acquisition and AI inference, and more. Potential solutions are complex but involve investment in infrastructure for high-quality, representative data; collaborative oversight by the Food and Drug Administration and other health care actors; and changes in medical education that will prepare providers for shifting roles in an evolving system.


Potential benefits

Although the field is quite young, AI has the potential to play at least four major roles in the health care system:1

Pushing the limits of human performance. The most striking use of medical AI is to do things that human providers, even excellent ones, still cannot do. For example, Google Health has developed a program that can predict the onset of acute kidney injury up to two days before it occurs; compare that to current medical practice, where the injury often is not noticed until it happens.2 Such algorithms can improve care beyond the current limits of human performance.
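To make this concrete, here is a minimal sketch of what an early-warning risk model looks like in code. The actual Google Health system is a deep model trained on hundreds of thousands of de-identified records; this toy version merely fits a logistic regression to two synthetic features (baseline creatinine and its recent trend), and every number in it is invented for illustration.

```python
# Toy sketch of an early-warning risk model in the spirit of the AKI
# predictor described above. All data and feature choices are synthetic
# and illustrative; the real system is far richer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

creatinine = rng.normal(1.0, 0.3, n)   # mg/dL, synthetic baseline
trend_48h = rng.normal(0.0, 0.15, n)   # change over the last 48 hours

# Synthetic ground truth: rising creatinine raises AKI risk.
logits = -4.0 + 2.0 * creatinine + 8.0 * trend_48h
aki_within_48h = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([creatinine, trend_48h])
model = LogisticRegression().fit(X, aki_within_48h)

# Score a new patient whose creatinine is drifting upward.
risk = model.predict_proba([[1.4, 0.25]])[0, 1]
print(f"Predicted 48-hour AKI risk: {risk:.0%}")
```

The point of the sketch is the shape of the task, not the model: the system watches routinely collected values and raises a flag before a clinician would ordinarily notice.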

"The most striking use of medical AI is to do things that human providers, even excellent ones, still cannot do."

Democratizing medical knowledge and excellence. AI can also spread the expertise and performance of specialists to supplement providers who might otherwise lack that expertise. Ophthalmology and radiology are popular targets, especially since AI image-analysis techniques have long been a focus of development. Several programs use images of the human eye to give diagnoses that would otherwise require an ophthalmologist; with these programs, a general practitioner, a technician, or even a patient can reach that conclusion.3 Such democratization matters because specialists, especially highly trained experts, are relatively rare compared to the need in many areas.

Automating drudgery in medical practice. AI can automate some of the rote computer tasks that take up much of medical practice today. Providers spend a tremendous amount of time dealing with electronic medical records, reading screens, and typing on keyboards, even in the exam room.4 If AI systems can queue up the most relevant information in patient records and then distill recordings of appointments and conversations into structured data, they could save providers considerable time and might increase the amount of face-to-face time between providers and patients, and the quality of the medical encounter for both.
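As a rough illustration of the "distill conversations into structured data" step, the sketch below pulls a few fields out of a free-text visit note. Production systems use clinical NLP models rather than hand-written rules; the note, the field names, and the patterns here are all hypothetical, chosen only to show the input and output shapes involved.

```python
# Minimal rule-based sketch of turning a free-text visit note into
# structured fields. Real ambient-documentation products use clinical
# NLP models; this regex version only illustrates the data flow.
import re

NOTE = (
    "Patient reports headache for 3 days. BP 142/91. "
    "Prescribed lisinopril 10 mg daily. Follow up in 2 weeks."
)

def extract_fields(note: str) -> dict:
    """Pull a few illustrative fields out of a free-text note."""
    bp = re.search(r"BP (\d{2,3})/(\d{2,3})", note)
    rx = re.search(r"Prescribed (\w+) (\d+) mg", note)
    followup = re.search(r"Follow up in (\d+ \w+)", note)
    return {
        "bp_systolic": int(bp.group(1)) if bp else None,
        "bp_diastolic": int(bp.group(2)) if bp else None,
        "medication": rx.group(1) if rx else None,
        "dose_mg": int(rx.group(2)) if rx else None,
        "follow_up": followup.group(1) if followup else None,
    }

print(extract_fields(NOTE))
# {'bp_systolic': 142, 'bp_diastolic': 91, 'medication': 'lisinopril',
#  'dose_mg': 10, 'follow_up': '2 weeks'}
```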

Managing patients and medical resources. Finally, and least visibly to the public, AI can be used to allocate resources and shape business operations. For example, AI systems might predict which departments are likely to need additional short-term staffing, suggest which of two patients is likely to benefit more from scarce medical resources, or, more controversially, identify revenue-maximizing practices.
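A staffing prediction of the kind just mentioned can be sketched in a few lines. The trailing average below stands in for the far richer forecasting models a real system would use, and all counts and capacities are invented.

```python
# Toy sketch of the resource-allocation use case: forecast short-term
# demand per department from recent daily patient counts, then flag
# departments likely to need extra staff.
from statistics import mean

# Hypothetical daily patient counts over the past week.
recent_counts = {
    "emergency":  [112, 118, 125, 131, 140, 138, 149],
    "cardiology": [40, 42, 39, 41, 40, 43, 41],
}
STAFFED_CAPACITY = {"emergency": 130, "cardiology": 45}

for dept, counts in recent_counts.items():
    forecast = mean(counts[-3:])  # naive 3-day trailing average
    if forecast > STAFFED_CAPACITY[dept]:
        print(f"{dept}: forecast {forecast:.0f} exceeds capacity "
              f"{STAFFED_CAPACITY[dept]} - consider extra staff")
    else:
        print(f"{dept}: forecast {forecast:.0f} within capacity")
```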


Risks and challenges

While AI offers a number of possible benefits, there are also several risks:

Injuries and error. The most obvious risk is that AI systems will sometimes be wrong, and that patient injury or other health care problems may result. If an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan, or allocates a hospital bed to one patient over another because it erroneously predicted which patient would benefit more, the patient could be injured. Of course, many injuries occur from medical error in the health care system today, even without AI involvement. AI errors are potentially different for at least two reasons. First, patients and providers may react differently to injuries resulting from software than to those resulting from human error. Second, if AI systems become widespread, an underlying problem in one AI system might injure thousands of patients, rather than the limited number of patients injured by any single provider's error.

Data availability. Training AI systems requires large amounts of data from sources such as electronic health records, pharmacy records, insurance-claims records, or consumer-generated information such as fitness trackers or purchase histories. But health data are often problematic. Data are typically fragmented across many different systems. Even leaving aside the variety just mentioned, patients typically see different providers and switch insurance companies, leaving their data split across multiple systems and in multiple formats. This fragmentation increases the risk of error, decreases the comprehensiveness of data sets, and raises the cost of gathering data, which also limits the types of entities that can develop effective health care AI.
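The fragmentation problem is easier to see in miniature. In the sketch below, one patient's data arrives from two hypothetical systems in two incompatible shapes and must be mapped onto a single schema before any model could train on it; the field names and formats are invented, but the mismatches mirror those described above.

```python
# Sketch of the harmonization problem: the same patient's data arrives
# from two systems in two shapes and must be mapped into one schema.
# Field names and formats are invented for illustration.
from datetime import datetime

record_from_hospital_ehr = {
    "patient_id": "A-1001", "dob": "1962-03-14",
    "labs": [{"test": "HbA1c", "value": 7.2, "date": "2019-10-02"}],
}
record_from_insurer_claims = {
    "member": "A-1001", "birth_date": "03/14/1962",
    "claims": [{"code": "E11.9", "service_date": "10/05/2019"}],
}

def unify(ehr: dict, claims: dict) -> dict:
    """Map both source formats onto a single, consistent schema."""
    assert ehr["patient_id"] == claims["member"], "identity mismatch"
    born = datetime.strptime(claims["birth_date"], "%m/%d/%Y").date()
    return {
        "patient_id": ehr["patient_id"],
        "dob": born.isoformat(),              # normalize date format
        "labs": ehr["labs"],
        "diagnoses": [c["code"] for c in claims["claims"]],
    }

print(unify(record_from_hospital_ehr, record_from_insurer_claims))
```

Even this two-source toy needs identity matching and date normalization; multiply that by dozens of systems and the cost and error-proneness described above follow directly.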

Concerns about privacy. Another set of risks arises around privacy.5 The requirement for large data sets creates incentives for developers to collect such data from many patients. Some patients may be concerned that this collection violates their privacy, and lawsuits have already been filed over data sharing between large health systems and AI developers.6 AI could implicate privacy in another way: AI can predict private information about patients even though the algorithm never received that information. (Indeed, this is often the goal of health care AI.) For example, an AI system might be able to identify that a person has Parkinson's disease from the trembling of a computer mouse, even if the person had never revealed that information to anyone else (or did not know it themselves). Patients might consider this a violation of their privacy, especially if the AI system's inference were available to third parties such as banks or life-insurance companies.
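To show how such an inference is even possible, here is a deliberately simplistic sketch of the mouse-tremor example: it looks for a dominant oscillation in the 4-6 Hz band (typical of parkinsonian rest tremor) in synthetic cursor positions. A real system would need far more careful signal processing and clinical validation; the sampling rate, threshold band, and signal here are all assumptions made for illustration.

```python
# Sketch of the mouse-tremor inference described above: detect a
# dominant 4-6 Hz oscillation in cursor positions sampled at 100 Hz.
# The signal is synthetic and the detection rule is intentionally crude.
import numpy as np

FS = 100                       # samples per second
t = np.arange(0, 10, 1 / FS)   # 10 seconds of cursor data

# Synthetic cursor x-positions: slow deliberate motion + 5 Hz tremor + noise.
x = 200 * np.sin(2 * np.pi * 0.2 * t)
x += 3 * np.sin(2 * np.pi * 5.0 * t)
x += np.random.default_rng(1).normal(0, 0.5, t.size)

# Dominant frequency above 2 Hz (ignore the deliberate-motion band).
spectrum = np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(x.size, d=1 / FS)
band = freqs > 2.0
peak = freqs[band][np.argmax(spectrum[band])]

print(f"Dominant high-frequency component: {peak:.1f} Hz")
if 4.0 <= peak <= 6.0:
    print("Oscillation in the 4-6 Hz tremor band detected")
```

The unsettling part is that the input is ordinary behavioral data the person never thought of as medical, which is precisely why such inferences raise privacy concerns.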

Bias and inequality. There are risks involving bias and inequality in health care AI. AI systems learn from the data on which they are trained, and they can incorporate biases from those data. For instance, if the data available for AI are principally gathered in academic medical centers, the resulting AI systems will know less about, and therefore will treat less effectively, patients from populations that do not typically frequent academic medical centers. Similarly, if speech-recognition AI systems are used to transcribe encounter notes, such AI may perform worse when the provider is of a race or gender underrepresented in the training data.7

"Even if artificial intelligence systems learn from accurate and representative data, there may still be problems if that information reflects biases and underlying inequalities in the health system."

Even if AI systems learn from accurate, representative data, problems can still arise if that information reflects underlying biases and inequalities in the health system. For example, African-American patients receive, on average, less treatment for pain than white patients;8 an AI system learning from health-system records might learn to suggest lower doses of painkillers to African-American patients, even though that decision reflects systemic bias, not biological reality. Resource-allocation AI systems could likewise exacerbate inequality by directing fewer resources to patients deemed less desirable or less profitable by health systems for a variety of problematic reasons.
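The mechanism is mechanical enough to demonstrate in a few lines. In the sketch below, synthetic records encode the documented disparity (lower analgesic doses recorded for one group at the same pain level), and a model fit on those records faithfully reproduces the gap; all numbers and the group encoding are invented for illustration.

```python
# Sketch of how a model can launder historical bias into "predictions":
# fit on records that encode a dosing disparity, and the fitted model
# reproduces that disparity. Data are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 2000

pain_score = rng.integers(1, 11, n)   # self-reported pain, 1-10
group = rng.integers(0, 2, n)         # 0/1, stand-in for patient group

# Historical doses: same pain, but group 1 recorded ~20% lower doses.
dose_mg = 5.0 * pain_score * np.where(group == 1, 0.8, 1.0)
dose_mg += rng.normal(0, 2, n)

X = np.column_stack([pain_score, group])
model = LinearRegression().fit(X, dose_mg)

same_pain = 8
for g in (0, 1):
    pred = model.predict([[same_pain, g]])[0]
    print(f"Group {g}, pain {same_pain}: suggested dose {pred:.1f} mg")
# The model suggests a lower dose for group 1 at identical pain levels,
# reproducing the bias in the records rather than any biological fact.
```

Nothing in the pipeline is "broken": the model fits its training data well. The harm enters through what the data record, which is why representative, accurate data alone do not guarantee equitable AI.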


Professional realignment. Longer-term risks involve shifts in the medical profession. Some medical specialties, such as radiology, are likely to change substantially as much of their work becomes automatable. Some scholars are concerned that the widespread use of AI will erode human knowledge and capacity over time, such that providers lose the ability to catch and correct AI errors and to develop medical knowledge further.9

The nirvana fallacy. A final risk deserves mention. AI holds the potential for tremendous good in health care. The nirvana fallacy posits that problems arise when policymakers and others compare a new option to perfection rather than to the status quo. Health care AI faces risks and challenges. But the current system is also rife with problems. Doing nothing because AI is imperfect risks perpetuating a problematic status quo.

Possible solutions

There are several ways to address the possible risks of AI in medical care:

Data generation and availability. Several risks arise from the difficulty of assembling high-quality data in a manner consistent with protecting patient privacy. One set of potential solutions revolves around government provision of data infrastructure, from setting standards for electronic health records to directly providing technical support for high-quality data-gathering efforts in health systems that would otherwise lack those resources. A parallel option is direct investment in the creation of high-quality data sets. Reflecting this direction, both the United States' All of Us initiative and the UK Biobank aim to collect comprehensive health care data on very large numbers of individuals. Ensuring effective privacy safeguards for these large-scale data sets will likely be essential to maintaining patient trust and participation.


Quality oversight. Overseeing the quality of AI systems will help address the risk of patient injury. The Food and Drug Administration (FDA) oversees some health care AI products that are commercially marketed. The agency has already cleared several products for market entry, and it is thinking creatively about how best to oversee AI systems in health care. However, many health care AI systems will not fall under the FDA's jurisdiction, either because they do not perform medical functions (as with back-end business or resource-allocation AI) or because they are developed and deployed within health systems themselves, a category of products the FDA typically does not oversee. These health care AI systems fall into something of an oversight gap. Further oversight efforts may be needed from health systems and hospitals, professional organizations such as the American College of Radiology and the American Medical Association, or insurers to ensure the quality of systems that remain outside the FDA's regulatory authority.10

“A hopeful vision is that providers will be trained to provide more personalized and better care. … A less hopeful vision would see providers struggling to weather a monsoon of uninterpretable predictions and recommendations from competing algorithms.”

Provider engagement and education. Integrating AI into the health care system will undoubtedly change the role of health care providers. A hopeful vision is that providers will be trained to provide more personalized and better care, freed to spend more time interacting with patients as humans.11 A less hopeful vision would see providers struggling to weather a monsoon of uninterpretable predictions and recommendations from competing algorithms. In either case, or any possibility in between, medical education should prepare providers to evaluate and interpret the AI systems they will encounter in the evolving health care environment.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s) and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Google provides general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.



