After a long artificial intelligence (AI) winter, AI-based technologies have been achieving stunning results, perhaps heralding a new era of AI-powered technologies. Recently, ChatGPT demonstrated an extraordinary ability to generate text and computer programs that are often indistinguishable from human-written ones. Similarly, AI has been generating plausible images, pieces of art, and videos. Few experts imagined the pace or quality of these developments, which has left fewer people skeptical about the role of AI in many other fields.


AI in the Medical Field

One major field that AI has started to revolutionize is medicine. For instance, AI is expected to enable more accurate diagnoses by analyzing medical images such as X-rays and MRIs to identify conditions such as cancer and heart disease. AI may also support personalized treatment plans: algorithms can analyze electronic health records to detect patterns and flag patients at risk for certain conditions, while AI-based chatbots can engage with patients, collect detailed information, answer their questions, and provide information on their symptoms, conditions, and treatments. In addition, AI can accelerate drug discovery by analyzing large amounts of data to identify new drug targets and predict which compounds will be effective in treating specific diseases. Similarly, AI can be used to control surgical robots and assist with procedures such as laparoscopy and colonoscopy. These are just a few of the changes that AI might soon bring to the field.

However, there are also risks associated with the use of AI in medicine that must be carefully considered. These risks span a variety of topics, from bias in AI algorithms and data privacy and security to reliability issues and the lack of human intervention. Such risks can easily lead to discriminatory mistreatment, inhumane decision making, fatal errors, and human rights infringements. As a consequence, the World Health Organization has proactively worked to establish a framework for the use of AI in the medical field.

In the coming sections, we will explore some of the major risks of AI in the medical field:


AI Bias in the Medical Field

One of the most serious AI risks within the medical field is AI bias: the tendency of AI systems to make decisions that are unfair or discriminatory towards certain groups of people. This can happen when the data used to train AI models is not representative of the population the models will be used on, or when the algorithms in the AI system are not designed to be fair. On the one hand, if the data used to train an AI model is biased, the model may make inaccurate or unfair predictions. For example, if a model is trained on data that primarily includes patients of a certain race or gender, it may not perform well on patients from other groups; likewise, an AI system trained mostly on pictures of light-skinned people might not perform as well on pictures of people with darker skin tones. On the other hand, the algorithms used to train AI and make decisions can also introduce bias. For example, algorithms in health care technology may not simply reflect back social inequities tied to factors such as socioeconomic status, but may ultimately exacerbate them.
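To make the data-representation problem concrete, here is a minimal sketch in Python using synthetic data and scikit-learn; every feature, coefficient, and group label is an illustrative assumption, not a real clinical dataset. It shows how a model trained on data dominated by one group can look accurate overall while failing on the under-represented group, and how reporting metrics per group exposes the gap.

```python
# A minimal per-group performance audit on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulate a dataset where group 0 is heavily over-represented (90% vs 10%).
n = 5000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 5))
# Assume the outcome depends on the features differently in each group, so a
# single model fitted mostly on group 0's pattern transfers poorly to group 1.
coef = np.where(group[:, None] == 0, 1.0, -1.0) * np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = ((X * coef).sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)

# Report accuracy separately for each group: the under-represented group
# scores far lower, which a single aggregate metric would hide.
pred = model.predict(X_te)
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: n={mask.sum()}, accuracy={accuracy_score(y_te[mask], pred[mask]):.2f}")
```

In a real audit, such per-group reporting would cover clinically relevant subgroups and metrics beyond plain accuracy, but the principle is the same: never evaluate a medical model only in aggregate.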


Lack of Transparency

Since most AI systems are developed within the private sector, the technologies, methods, models, and data used to build or train them are mostly unavailable to the public. For that reason, several issues with AI may go unnoticed until significant harm has been inflicted on individuals, which constitutes a material risk to human rights. AI transparency principles, if correctly implemented, may help ensure that AI decisions can be better understood and evaluated by stakeholders. This includes providing explanations for the reasoning behind decisions made by the AI system. Yet, in practice, many AI models pose inherent explainability challenges because they operate as black boxes, limiting our ability to pinpoint why the system reached a particular decision or conclusion. Nonetheless, being transparent about what we do know would at least make AI systems more trustworthy and less of a risk to human rights.
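One way to partially open the black box is post-hoc explanation. The sketch below is a minimal example, assuming a scikit-learn model and synthetic data (the feature names are hypothetical), that uses permutation importance to estimate which inputs the model actually relies on. It offers a coarse, model-agnostic view of the system's behavior rather than a full explanation of any individual decision.

```python
# A minimal post-hoc explanation sketch: permutation importance estimates how
# much each input feature contributes to a black-box model's performance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "blood_pressure", "bmi", "cholesterol"]  # illustrative only

# Synthetic data where only the first two features actually drive the outcome.
X = rng.normal(size=(1000, 4))
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in performance: large
# drops indicate features the model depends on, giving stakeholders a rough,
# model-agnostic window into an otherwise opaque system.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```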


Dilution of Responsibility & Accountability

AI accountability refers to the responsibility of individuals, organizations, and governments for ensuring that AI systems are designed, developed, and used in an ethical and responsible manner. This needs to be grounded in good AI governance: processes, mechanisms, policies, and procedures put in place to guarantee that AI systems are fair, transparent, and explainable, and that any negative consequences of their decisions are identified, understood, and addressed. Governance also includes the creation of oversight bodies and adherence to ethical guidelines for AI development and deployment. Finally, an auditing function is needed after deployment to continuously monitor and evaluate the performance and outcomes of AI systems, ensuring that they operate as intended and that any negative consequences are identified and addressed. Without a doubt, AI accountability is becoming increasingly important as AI systems are used in critical decision-making processes such as hiring, lending, criminal justice, and medicine.
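One concrete building block of such auditing is an audit trail that records every automated decision so it can later be reviewed and attributed to a responsible party. The sketch below is a minimal, hypothetical example in Python; the record schema, file name, and identifiers are assumptions for illustration, not any established standard.

```python
# A minimal audit-trail sketch: each model decision is appended as an
# immutable, timestamped JSON line so it can be reviewed and attributed later.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(log_file, model_version, inputs, prediction, operator):
    """Append one timestamped decision record to an append-only log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "operator": operator,  # the human or service accountable for acting on it
    }
    # Hash the record so later tampering with the log is detectable.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: every automated decision leaves a reviewable trace.
log_decision(
    "decisions.log",
    model_version="risk-model-1.4.2",
    inputs={"age": 54, "blood_pressure": 141},
    prediction={"label": "high_risk", "score": 0.87},
    operator="dr_jones",
)
```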


Low Reliability & Safety

Another risk of AI in the medical field is the potential for errors in AI-assisted diagnosis and treatment plans. While AI can help doctors make more accurate diagnoses and treatment plans, it is not perfect and can make mistakes. These errors could lead to incorrect or delayed treatments, with serious consequences for patients. There are also data privacy and security risks: AI systems used in the medical field often handle sensitive patient information, and if this information is not properly secured, it could be accessed by unauthorized parties. These serious risks should be mitigated by aiming to build trustworthy AI that is dependable and operates as intended, making accurate and consistent decisions while minimizing the risk of unintended harm. This includes regularly monitoring and testing the performance of the AI system to ensure that it operates correctly, ensuring that the system is robust and can handle unexpected inputs or scenarios, and ensuring that it is secure and protected against malicious attacks.
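As one illustration of combining robustness with human oversight, the sketch below (in Python, with hypothetical input ranges, feature names, and a confidence threshold chosen purely for illustration) validates incoming data against the ranges seen in training and defers low-confidence predictions to a clinician rather than acting on them automatically.

```python
# A minimal runtime-safety sketch: reject out-of-range inputs and route
# low-confidence predictions to human review. All values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

EXPECTED_RANGES = {"age": (0, 120), "blood_pressure": (50, 250)}  # hypothetical
CONFIDENCE_THRESHOLD = 0.90  # below this, a clinician must review the case

def safe_predict(model, inputs):
    # Robustness check: flag out-of-range inputs instead of silently guessing.
    for name, (lo, hi) in EXPECTED_RANGES.items():
        if not lo <= inputs[name] <= hi:
            return {"status": "rejected", "reason": f"{name}={inputs[name]} out of range"}

    # predict_proba follows the scikit-learn convention of returning class
    # probabilities; any model exposing a confidence score would work here.
    probs = model.predict_proba([[inputs["age"], inputs["blood_pressure"]]])[0]
    if probs.max() < CONFIDENCE_THRESHOLD:
        # Human-in-the-loop fallback: uncertain cases go to a clinician.
        return {"status": "needs_human_review", "confidence": float(probs.max())}
    return {"status": "ok", "prediction": int(probs.argmax()), "confidence": float(probs.max())}

# Toy demonstration on synthetic data (not a real clinical model).
rng = np.random.default_rng(0)
X = rng.uniform([0, 50], [120, 250], size=(500, 2))
y = (X[:, 1] > 150).astype(int)
model = LogisticRegression().fit(X, y)
print(safe_predict(model, {"age": 54, "blood_pressure": 141}))
print(safe_predict(model, {"age": 54, "blood_pressure": 400}))  # rejected
```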

Overall, while AI has the potential to greatly benefit the medical field, it is important to carefully consider these risks and take steps to mitigate them, such as ongoing monitoring, transparency and accountability of AI systems, and the inclusion of diverse perspectives in the development and implementation of AI in healthcare.


AI in the Middle East

To date, AI has not been regulated by any major regulator. The EU's Artificial Intelligence Act is seen as one of the most significant attempts so far, but at this stage it is only a proposed law that has yet to be accepted and adopted. This is mainly because the narrative favoring leaving AI unregulated remains strong, with recurring arguments such as the difficulty of regulation and the protection of innovation prospects. Accordingly, the common approach of most governments and major institutions, globally and in the Middle East, has been to adopt non-binding codes of ethics for AI.

In the Middle East, the International Development Research Centre (IDRC) is funding several non-governmental initiatives to tackle challenges that governments have given little priority, such as fostering inclusive and human rights-based AI and mitigating AI's possible negative effects, such as job losses. One of its initiatives, AI4D, aims to enhance and improve infrastructure capacity and skills in the MENA healthcare sector.

In the absence of AI regulation, such initiatives might play a crucial role in steering AI toward being safe, reliable, fair, and more.