Ethical facet of medical AI


Artificial intelligence is being adopted in the healthcare industry at an accelerating pace. AI systems now work alongside doctors, delivering results with expert-level accuracy. While artificial intelligence holds the promise of tremendous breakthroughs, it also raises ethical questions that need to be considered. In this article, we will discuss some of the challenges of leveraging AI appropriately in healthcare systems.

As with almost every tool in existence, the tool itself is neither ethical nor unethical; what matters is how it is used and applied. In many situations, it also depends on the perspectives of different people. To make this clearer, imagine an AI system that uses genetic information from any of the genealogy resources available today to predict the probability of a child being born with a significant yet treatable condition, such as congenital heart disease. Used ethically, we could plan out treatment courses for the entire life of that person; used unethically, a payer could deny coverage for that child, knowing their care would be expensive for a lifetime.

Let’s take a look at a few ethical considerations that must be kept in mind while using AI technology in the healthcare industry:


1- Data Bias:

Data bias in healthcare is something we constantly have to combat when dealing with machine learning and artificial intelligence. In simple terms, data bias means your data set is incomplete in certain aspects, which causes AI models and algorithms to be incorrectly skewed in some manner. Take the example of an AI application built to predict the likelihood of someone developing heart disease by the age of 40. If the algorithm is trained using a data set of females between the ages of 15 and 30, it is fair to say that the application will probably not produce accurate outcomes for other groups, such as men or older patients. The result depends on the variables and data points fed into the AI-based system and how they are applied to the problem.
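The skewed-training-set problem above can be sketched in a few lines. This is a deliberately simplified toy: the risk scores, labels, and midpoint "model" are all hypothetical, chosen only to show how a classifier fitted on a narrow cohort misfires on the population it is actually deployed against.

```python
# Toy illustration of data bias: a threshold "model" fitted only on a
# young cohort flags every older patient as diseased, including the
# healthy ones. All numbers here are made up for illustration.

def fit_threshold(samples):
    """Learn a risk-score threshold as the midpoint between the average
    score of positive and negative cases in the training data."""
    pos = [score for score, label in samples if label]
    neg = [score for score, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# (risk_score, has_heart_disease) pairs; the young cohort's scores are low overall
young_cohort = [(0.10, False), (0.15, False), (0.20, False), (0.35, True)]
older_cohort = [(0.40, False), (0.50, False), (0.70, True), (0.90, True)]

threshold = fit_threshold(young_cohort)   # learned from the biased sample, ~0.25

# Every older patient scores above the threshold, so the healthy ones
# become false positives:
predictions = [score > threshold for score, _ in older_cohort]
print(predictions)   # [True, True, True, True]
```

A model trained on more representative data would place the decision boundary inside the older cohort's score range instead of below it.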

For example, imagine an application designed to predict the performance of a sports team for the next season, where one of the data points is the type of turf they will play on. The application might forecast that the team will win every time they play on astroturf, because that is what the historical data says. The truth may be that they win because of other factors, such as wind speed or a closed stadium. Those may be the crucial data points, not necessarily the turf.

2- Privacy Issues:

The next question arises around privacy: how does the use of patients’ data by various providers affect patient privacy as it relates to artificial intelligence? Suppose an application collects all of a person’s health-related data over an entire episode of a medical event in order to improve life expectancy. In this situation, artificial intelligence can help analyze the data and predict future outcomes to help prevent tragic events. AI technology can be used in conjunction with protection techniques like homomorphic encryption, which allows computation on data while it remains encrypted, to increase the privacy and security of a patient’s data.
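To make the homomorphic-encryption idea concrete, here is a minimal sketch using textbook RSA, which is multiplicatively homomorphic: a server can multiply two encrypted values without ever seeing the plaintexts. The tiny key and the "scaled lab values" are purely illustrative; real medical data would require a vetted library and a modern scheme (e.g. Paillier or CKKS), never hand-rolled crypto like this.

```python
# Toy demonstration of the homomorphic property: Enc(a) * Enc(b) decrypts
# to a * b. Key sizes this small are insecure; illustration only.

p, q = 61, 53                 # toy primes (never this small in practice)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Two private measurements, e.g. scaled lab values:
a, b = 12, 7

# The server multiplies ciphertexts only -- it never sees a or b:
c_product = (encrypt(a) * encrypt(b)) % n

# Decrypting the combined ciphertext yields the product of the plaintexts:
print(decrypt(c_product))   # 84 == 12 * 7
```

Additively homomorphic schemes such as Paillier support sums instead of products, which is often more useful for aggregating health statistics across providers.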

3- Accountability and Transparency:

AI systems used in the medical field help doctors make decisions that affect a patient’s health. As AI advances its capacity to improve clinical decision-making, a strategy is required to make its functions more visible and explainable to patients.

4- Reliability and Trust: 

AI’s black box problem is a central issue that many large tech companies are trying to solve. The black box problem refers to the fact that we cannot understand or explain how AI algorithms arrive at a specific outcome. Researchers are working to resolve this problem and to build trust and confidence in AI systems.
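One common way researchers probe a black box is feature ablation: replace one input with its dataset average and measure how much the model’s output moves. The sketch below uses a made-up scoring function standing in for an opaque model; the features, weights, and patient values are all hypothetical.

```python
# Feature-ablation probe of a "black box": features whose ablation barely
# changes the output (here, turf_type) contribute little to the decision.

def black_box(age, blood_pressure, turf_type):
    # Pretend this is an opaque model; internally, turf_type barely matters.
    return 0.6 * age + 0.4 * blood_pressure + 0.01 * turf_type

patients = [(30, 120, 1), (50, 140, 0), (70, 160, 1)]

def ablation_importance(feature_index):
    """Average absolute change in the model's score when one feature is
    replaced by its mean across the dataset."""
    mean_value = sum(p[feature_index] for p in patients) / len(patients)
    total = 0.0
    for p in patients:
        ablated = list(p)
        ablated[feature_index] = mean_value
        total += abs(black_box(*p) - black_box(*ablated))
    return total / len(patients)

for i, name in enumerate(["age", "blood_pressure", "turf_type"]):
    print(name, round(ablation_importance(i), 3))
# age and blood_pressure dominate; turf_type's importance is near zero
```

Techniques in this family (permutation importance, SHAP, LIME) are among the tools researchers use to make opaque models more inspectable, though none fully resolves the black box problem.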

Some prominent AI researchers have raised these questions about a study by Google Health on breast cancer screening. The study in question asserted that an AI system for diagnosing breast cancer had been developed that could outperform radiologists.

AI developers must address these ethical concerns before designing systems, so that the technology can be deployed in a safer and more equitable manner. Artificial intelligence and machine learning are shiny and new right now, and there is a fair share of snake-oil promises out there today. Physicians, clinicians, and hospital office personnel are all highly skilled groups of people, and they must use these tools appropriately to get the full leverage from them.
