Volume 15, Issue 6 and S7 (Artificial Intelligence 2025) | J Research Health 2025; 15(6 and S7): 661-682

Ethics code: Not applicable (review article)


Keykha A, Hojati M, Taghavi Monfared A, Shahrokhi J. Artificial Intelligence in Healthcare: Unveiling Ethical Challenges Through Meta-synthesis of Evidence. J Research Health 2025; 15 (6) :661-682
URL: http://jrh.gmu.ac.ir/article-1-2834-en.html
1- Department of Educational Administration, Faculty of Psychology and Education, Kharazmi University, Karaj, Iran. ahmad.keykha72@sharif.edu
2- Department of General Psychology, Faculty of Literature and Humanities, Guilan University, Rasht, Iran.
3- Department of Educational Administration and Planning, Faculty of Psychology and Education, University of Tehran, Tehran, Iran.
Introduction
Artificial intelligence (AI) has created unprecedented opportunities across diverse sectors, with particularly transformative impacts in medicine, healthcare education, and biomedical research. From personalized treatment recommendations and predictive diagnostics to intelligent tutoring systems and robotic-assisted surgeries, AI has rapidly evolved from a novel technological tool to a central component of contemporary medical ecosystems [1–4]. These developments promise increased efficiency, accuracy, and accessibility in clinical services and educational platforms alike.
However, the swift integration of AI into healthcare systems has simultaneously raised substantial ethical concerns. These include, but are not limited to, challenges related to algorithmic bias, the opacity of decision-making processes (“black-box” models), data privacy violations, the erosion of professional autonomy, legal ambiguities regarding responsibility and liability, and the risk of deskilling among healthcare practitioners [5–8]. For instance, when AI is used to augment or replace clinical decision-making, there is a growing fear that health professionals may lose core competencies over time and become overly reliant on algorithmic tools, thus compromising the development of sound clinical judgment and reducing physician-patient trust [9, 10].
Moreover, AI-generated content, whether in research or clinical documentation, raises questions about authorship, intellectual property rights, and the blurring boundaries between human and machine-generated outputs. These concerns are compounded by global disparities in access to AI technologies and the risk that such tools may exacerbate existing health inequities if not carefully monitored and ethically deployed.
Although there is a growing body of scholarship discussing the ethical implications of AI in medicine, a significant gap remains in terms of synthesizing qualitative insights from diverse empirical and review studies. Most existing analyses tend to focus on specific ethical domains (e.g. data protection or transparency), while neglecting the interconnectedness and complexity of ethical issues across real-world healthcare settings. Furthermore, few studies have systematically integrated qualitative findings from multiple perspectives—including patients, clinicians, developers, and ethicists—through a rigorous interpretive synthesis.
Given the ethical weight and social ramifications of AI deployment in healthcare, a more comprehensive and methodologically grounded understanding of these challenges is urgently needed. This study addressed this gap by conducting a qualitative meta-synthesis of peer-reviewed research, guided by the principles of thematic synthesis and informed by the PRISMA framework. Our objective was to identify the ethical challenges associated with the implementation of AI in healthcare. By consolidating diverse qualitative evidence into a coherent analytical framework, this study aimed to strengthen the theoretical and practical foundations for ethical AI governance in health systems.

Methods
We conducted a thematic synthesis of qualitative and review studies in accordance with the PRISMA guidelines. We included qualitative and review studies that focused on the ethical challenges associated with the application of AI in healthcare. Study appraisal involved the use of a validated quality assessment tool [11]. Thematic synthesis techniques [12] were employed for analysis and synthesis, and the GRADE-CERQual approach [13] was applied to assess the confidence in the review findings.

Criteria for inclusion
A comprehensive and systematic search strategy was developed in consultation with an academic librarian. Searches were performed across four major electronic databases: PubMed, Scopus, Web of Science, and Google Scholar. Each search included combinations of controlled vocabulary (e.g. MeSH terms) and free-text keywords related to AI, ethics, and healthcare, alongside filters for qualitative and review studies. The search strategy for each database was tailored to its syntax and structure. Searches covered publications from 2010 up to May 15, 2025.

Search strategy
A comprehensive search strategy was developed to identify relevant studies across major academic databases, including PubMed, Scopus, Web of Science, and Google Scholar. The search combined controlled vocabulary terms and free-text keywords related to AI, ethics, and healthcare. Specifically, the following search terms were used: (“artificial intelligence” OR “machine learning” OR “deep learning” OR “AI”) AND (“ethics” OR “ethical issues” OR “ethical challenges” OR “ethical considerations” OR “bioethics”) AND (“healthcare” OR “health care” OR “medicine” OR “clinical practice” OR “medical ethics”) AND (“qualitative study” OR “qualitative research” OR “systematic review” OR “narrative review” OR “thematic synthesis”).
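As an illustration only, a boolean query of this form can be assembled and submitted programmatically. The following Python sketch uses Biopython's Entrez interface to query PubMed; the contact email and retmax value are placeholders, and in practice each database required its own tailored syntax, as noted above.

```python
from Bio import Entrez  # Biopython; NCBI requires a contact email

Entrez.email = "researcher@example.org"  # placeholder address

# OR-groups reproduced from the search strategy above.
groups = [
    ['"artificial intelligence"', '"machine learning"', '"deep learning"', '"AI"'],
    ['"ethics"', '"ethical issues"', '"ethical challenges"',
     '"ethical considerations"', '"bioethics"'],
    ['"healthcare"', '"health care"', '"medicine"',
     '"clinical practice"', '"medical ethics"'],
    ['"qualitative study"', '"qualitative research"', '"systematic review"',
     '"narrative review"', '"thematic synthesis"'],
]

# AND the OR-groups together into one boolean query string.
query = " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

# Retrieve the IDs of matching PubMed records.
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
print(record["Count"], "records found; first IDs:", record["IdList"][:5])
```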

Study selection
All retrieved records were organized using Microsoft Excel, where duplicates were removed. Title and abstract screening was conducted by two independent reviewers (Ahmad Keykha and Ava Taghavi Monfared) in duplicate for an initial subset of articles to ensure consistency in applying the inclusion and exclusion criteria. The remaining articles were then divided equally and screened individually. Full-text assessments were conducted independently by the same two reviewers, and disagreements were resolved through consensus, with input from the other reviewers (Jafar Shahrokhi and Maryam Hojati) when needed. After duplicates and retracted articles (n=10) were removed, records were screened at the title (n=89 excluded) and abstract (n=56 excluded) stages, leaving 78 articles for full-text review. The selection process and reasons for exclusion are summarized in the PRISMA flowchart (Figure 1).
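Record management was done in Microsoft Excel; purely as an illustration of the same deduplication step, an equivalent operation in Python with pandas might look like the following sketch (the file name and column names are hypothetical):

```python
import pandas as pd

# Load the combined export of retrieved records (hypothetical file and columns).
records = pd.read_excel("retrieved_records.xlsx")

# Normalize titles so case and whitespace differences do not hide duplicates.
records["title_norm"] = records["title"].str.strip().str.lower()

# Keep the first occurrence of each title; records sharing a DOI
# could be collapsed the same way among rows where a DOI is present.
deduped = records.drop_duplicates(subset=["title_norm"], keep="first")

print(f"{len(records) - len(deduped)} duplicates removed; {len(deduped)} records remain")
```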


Quality assessment
To assess the methodological rigor of included studies, we employed the critical appraisal skills programme (CASP) qualitative checklist [14], which includes ten standard questions evaluating aspects such as study design, data collection, ethical considerations, and validity of findings. Each question was scored as “yes”=10, “no”=0, or “can’t tell”=5; thus, the maximum possible score for each study was 100. No differential weighting was applied to individual items. The total score represents the cumulative sum of the ten individual question scores. Studies scoring below 50 were considered methodologically weak and were excluded from the final thematic synthesis; however, they are reported for transparency and completeness. The quality appraisal was conducted independently by two reviewers (Ahmad Keykha and Jafar Shahrokhi), and disagreements were resolved by discussion or consultation with a third reviewer (Maryam Hojati) (Appendix 1).
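For clarity, the scoring rule can be stated in a few lines of Python. This is a minimal sketch of the rule as described above, not the authors' actual tooling:

```python
# CASP qualitative checklist scoring: "yes"=10, "can't tell"=5, "no"=0,
# summed over ten questions with no differential weighting (maximum 100).
SCORES = {"yes": 10, "can't tell": 5, "no": 0}

def casp_total(answers: list[str]) -> int:
    """Return the total CASP score for the ten question responses."""
    assert len(answers) == 10, "the CASP qualitative checklist has ten questions"
    return sum(SCORES[a.lower()] for a in answers)

# Hypothetical appraisal: seven "yes", two "can't tell", one "no" -> 80.
example = ["yes"] * 7 + ["can't tell"] * 2 + ["no"]
total = casp_total(example)
print(total, "-> included" if total >= 50 else "-> excluded (methodologically weak)")
```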





Risk of bias assessment
To evaluate the risk of bias in the included studies, we employed the Cochrane Risk of Bias Tool for qualitative and review studies, supplemented by the ROBIS framework [15] for systematic reviews. The assessment focused on key domains, including study selection, data collection methods, clarity of ethical considerations, transparency in reporting, and potential conflicts of interest. Each study was independently assessed by two reviewers (Ahmad Keykha and Jafar Shahrokhi), with discrepancies resolved by consensus or by consultation with a third reviewer (Maryam Hojati). The results of the risk of bias assessment are summarized in Figure 2, using a traffic-light system (green=low risk, yellow=unclear risk, red=high risk).

This visual representation provides a transparent overview of the methodological soundness and credibility of the included studies.
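A traffic-light summary of this kind can be rendered directly from the domain ratings. The sketch below, with entirely hypothetical ratings, shows one way to produce such a figure in Python with Matplotlib; it is illustrative only and does not reproduce Figure 2:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Hypothetical ratings: rows = studies, columns = assessment domains.
# 0 = low risk (green), 1 = unclear (yellow), 2 = high risk (red).
domains = ["Selection", "Data collection", "Ethics", "Reporting", "Conflicts"]
ratings = np.array([[0, 0, 1, 0, 2],
                    [0, 1, 0, 0, 0],
                    [1, 0, 0, 2, 1]])

cmap = ListedColormap(["#2ca02c", "#ffcc00", "#d62728"])  # green, yellow, red
fig, ax = plt.subplots()
ax.imshow(ratings, cmap=cmap, vmin=0, vmax=2)
ax.set_xticks(range(len(domains)))
ax.set_xticklabels(domains, rotation=45, ha="right")
ax.set_yticks(range(ratings.shape[0]))
ax.set_yticklabels([f"Study {i + 1}" for i in range(ratings.shape[0])])
plt.tight_layout()
plt.show()
```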

Data extraction and analysis
The analysis was guided by the core principles of thematic synthesis [12]. Data extraction and thematic development were performed by two researchers (Ahmad Keykha and Jafar Shahrokhi) working in parallel. Themes were developed inductively following the principles of thematic synthesis, rather than based on an a priori framework. The key findings and themes reported in the index paper were systematically coded and organized within a spreadsheet, forming the basis of an initial thematic framework. As subsequent studies were reviewed, their findings were coded and integrated into this evolving framework, which was refined iteratively as new data were incorporated. The analysis involved identifying patterns across studies, while also actively seeking out contradictory or disconfirming data—evidence that challenged either the emerging themes or the reviewers’ prior assumptions. This step was essential in ensuring the robustness of the synthesis.
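To make the mechanics of this evolving framework concrete, the following Python sketch illustrates one way such a coding structure can be represented and updated. The theme and sub-theme labels are drawn from this paper's themes, but the data structure itself is an illustrative assumption; the authors' actual instrument was a spreadsheet.

```python
from collections import defaultdict

# Evolving framework: main theme -> sub-theme -> set of supporting study IDs.
framework: dict = defaultdict(lambda: defaultdict(set))

def integrate(study_id: int, coded_findings: list[tuple[str, str]]) -> None:
    """Fold one study's coded findings into the evolving framework."""
    for theme, subtheme in coded_findings:
        framework[theme][subtheme].add(study_id)

# Coding the index paper seeds the framework; later studies refine it.
integrate(16, [("Data privacy and security", "Ownership of health data")])
integrate(21, [("Data privacy and security", "Ownership of health data"),
               ("Transparency and explainability", "Black-box models")])

for theme, subthemes in framework.items():
    for sub, studies in subthemes.items():
        print(f"{theme} / {sub}: supported by studies {sorted(studies)}")
```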

Data validation
To ensure the reliability of the extracted concepts, the primary researcher compared their interpretations with those of an expert in the field. Inter-rater agreement was then assessed using Cohen’s kappa coefficient, yielding κ=0.664 (P=0.001). According to the interpretation guidelines provided by Jensen and Allen [54], this level of agreement is considered acceptable, indicating substantial consistency between raters.
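As a worked illustration of this agreement check, Cohen's kappa can be computed from two raters' parallel codings with scikit-learn; the labels below are hypothetical and will not reproduce the reported value:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical theme codes assigned by the two raters to the same six excerpts.
rater_a = ["privacy", "bias", "consent", "privacy", "transparency", "bias"]
rater_b = ["privacy", "bias", "consent", "transparency", "transparency", "bias"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.3f}")  # the study itself reports κ=0.664 (P=0.001)
```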

Results
The inclusion and exclusion criteria applied in this review were predefined according to the PICOS framework, as summarized in Table 1.


Table 2 presents the results of the content analysis of the reviewed articles.






The process involved initially extracting key codes or concepts. These codes were then categorized into sub-themes based on their similarities and differences. Subsequently, the sub-themes were grouped into main themes through a similar comparative analysis. Table 2 comprises seven main themes, each further divided into sub-themes based on topical similarity. For each main theme and its corresponding sub-themes, the key concepts constituting that theme are presented, accompanied by reference numbers indicating the supporting sources.
The content analysis revealed that the most frequently addressed theme, cited in over 35 studies, concerns challenges related to data privacy and security, underscoring a widespread concern regarding the control, ownership, and protection of health data in AI-based platforms. AI systems need large amounts of sensitive information, such as medical histories, genetic data, or mental health records, to make decisions. However, it is often unclear how these data are collected, stored, or used. Without proper safeguards, data may be misused or leaked. If patients do not trust the safety of their data, they may reject the use of AI in healthcare settings.
Closely following are transparency and explainability challenges, highlighted in over 32 studies, which focus on the lack of standardized validation protocols and the “black-box” nature of algorithms, especially those based on deep learning, whose internal decision-making users cannot inspect or understand. This lack of transparency makes it difficult for doctors to explain AI-based diagnoses or treatment recommendations to patients. When decisions are not clearly explained, both medical professionals and patients may lose trust in the system. Improving explainability is essential for the responsible and accepted use of AI in medicine.
Issues of fairness and algorithmic discrimination were found in approximately 28 studies, with emphasis on biased datasets and the exclusion of marginalized populations. AI systems can unintentionally act in unfair or biased ways toward certain social or ethnic groups. This usually happens when the training data is not diverse enough. For example, the algorithm may work well for young men but perform poorly for women, the elderly, or minority populations. Such biases can lead to unequal access to quality care and may even worsen existing health disparities. Ensuring fairness requires using inclusive, representative datasets.
Autonomy and informed consent challenges were identified in 25 studies, raising alarms about the diminishing decision-making power of patients and physicians. The growing use of AI in healthcare can limit the decision-making power of both doctors and patients. In many cases, the system issues a recommendation without explaining its reasoning, and the patient may feel forced to accept it. This can reduce the patient’s ability to make informed and independent choices. Respecting patient autonomy means ensuring that patients understand the AI’s role and have real options in their care decisions.
Ethical concerns tied to professional responsibility emerged in 22 studies. The use of AI in clinical environments raises questions about who is responsible when something goes wrong. If an AI system makes a harmful mistake, it is often unclear whether the doctor, the software developer, or the technology provider is accountable. This lack of clarity can reduce trust and make legal or ethical follow-up difficult. Clear rules are needed to define responsibility and ensure accountability in AI-assisted medical decisions.
Legal and regulatory gaps and clinical reliability issues were addressed in around 20 and 18 studies, respectively. Many countries still lack clear legal frameworks for regulating the use of AI in healthcare. There are few standards for evaluating the safety, effectiveness, or transparency of these systems. As a result, some technologies are used without proper oversight, increasing the risk of harm. Policymakers must create strong, future-oriented regulations that address the unique challenges of AI in medicine. AI systems that perform well in laboratory settings may not work as reliably in real-world clinical environments. Factors such as poor data quality, patient diversity, or differences in local healthcare resources can affect performance. If the system is not carefully tested in real conditions, it may produce inaccurate or harmful results. Thorough validation in practical settings is essential before wide adoption in medical practice.
Table 3 summarizes the key ethical challenges identified in the reviewed literature, organized into seven main themes.


Each theme is illustrated with a concise real-world or hypothetical case context to enhance practical relevance. For each case, Table 3 also outlines the associated policy or practice implications and provides a GRADE-CERQual confidence rating to indicate the strength of evidence and ethical severity. This structure enables readers to quickly grasp the nature of the challenge, its contextual manifestation, and its potential impact on healthcare systems.
The findings in Table 3 highlight the multi-dimensional nature of ethical challenges in AI-enabled healthcare, spanning privacy, transparency, fairness, autonomy, professional responsibility, clinical reliability, and regulatory governance. Data privacy and security risks underscore the urgent need for robust legal and technical safeguards, while transparency and explainability issues emphasize the necessity of mandatory disclosure and interpretability standards. Persistent algorithmic bias illustrates the deep-seated equity implications of AI, necessitating proactive dataset diversification and bias auditing. Challenges to autonomy reveal a pressing requirement for patient-centered consent processes that are both accessible and informative. Within professional practice, the absence of clear accountability mechanisms threatens ethical integrity, and in clinical contexts, reliability concerns demand rigorous validation and human oversight. Finally, the lack of harmonized legal frameworks not only undermines governance but also delays resolution in cases of harm. Collectively, these themes indicate that ethical AI integration in healthcare requires systemic, cross-disciplinary interventions that bridge technical, regulatory, and human-centered approaches.

Comparison with existing reviews
This section presents a comparative analysis of this meta-synthesis against existing review studies on the ethical challenges of AI in healthcare. Table 4 summarizes the review articles in this field.


In contrast to the reviews presented in Table 4, which are mostly in the form of narrative reviews, systematic reviews, or scoping reviews and whose primary focus is on compiling, categorizing, and descriptively presenting the findings of previous studies, the present research adopted an analytical–synthetic approach through the use of meta-synthesis and qualitative content analysis. This approach, in addition to collecting secondary data, reconstructed them through a systematic process involving coding, categorization, synthesis, and in-depth interpretation. Accordingly, the present study not only identified the common themes and patterns among previous research but also uncovered the gaps, contradictions, and shortcomings in the literature, ultimately offering an integrated analytical framework for a more comprehensive understanding of the ethical challenges of AI in healthcare. Therefore, while prior reviews mainly address the question of “what findings have been reported,” this meta-synthesis examined “how these findings interrelate, what conceptual connections exist among them, and what new pathways can be outlined for future research and policy-making.”

Discussion
This study aimed to identify the ethical challenges of AI in healthcare. To achieve this objective, a content analysis was conducted on selected qualitative and review studies. In total, seven main themes and thirty-three sub-themes were identified. Figure 3 is a schematic model of the research findings.


Data privacy and security as the cornerstone of trust
In the field of AI, data privacy and security are not merely technical requirements but form the foundation of trust and stability in intelligent ecosystems. The quality and diversity of data, which underpin the learning and accuracy of algorithms, are meaningful only when users are confident that their information is stored and processed in a secure and controlled environment. This issue is especially critical in healthcare, where medical data reflect not only individuals’ physical conditions but also their psychological, social, and even genetic dimensions. Protecting this data directly influences technology acceptance, voluntary participation in innovative projects, and ultimately the pace of scientific advancement. In other words, data security in AI serves as a bridge linking innovation to public trust, and without it, even the most advanced algorithms will face distrust and social resistance. The findings of this section align with those reported in case studies [16, 18, 21-27, 30-33, 35, 37-41, 43-49, 51].

Transparency and explainability for accountability
In the field of AI, transparency and explainability constitute the backbone of trust, accountability, and social acceptance of the technology. Algorithms whose decision-making processes can be explained at a human-understandable level enable effective oversight, evaluation, and correction, preventing intelligent systems from becoming “black boxes.” This principle is especially critical in healthcare, where algorithmic decisions can directly impact patients’ lives and quality of care. When both clinicians and patients can comprehend the rationale behind a diagnosis or treatment recommendation, truly informed consent becomes possible, and accountability is strengthened at individual and institutional levels. Moreover, transparency and explainability not only serve as tools to detect errors and biases but also create a foundation for continuous learning and improvement of algorithms, fostering a constructive and trustworthy interaction between medical science and AI technology. The findings of this section align with those reported in studies [16, 21, 25, 26, 28-34, 37, 38, 41, 43, 44, 49-51].

Fairness and mitigation of algorithmic bias
In the realm of AI, fairness and algorithmic discrimination are intrinsically linked to social justice, equitable access to services, and the ethical legitimacy of technology. Algorithms reflect the data on which they are trained; therefore, if these datasets contain historical, social, or structural biases, intelligent systems may unintentionally reproduce and even amplify existing inequalities. In healthcare, this issue carries critical implications, from diagnostic errors in underrepresented population groups to inequitable allocation of treatment resources. Ensuring fairness in algorithms is not only an ethical imperative but also a prerequisite for public trust and clinical efficacy. Consequently, continuous monitoring, responsible data-driven design, and evaluation of social impacts are integral components of developing and deploying fair AI systems. The findings of this section are consistent with those presented in studies [16, 18, 20-24, 26, 28, 30, 31, 33, 34, 38, 42-44, 45, 46, 49-52].

Autonomy and informed consent in clinical practice
In the context of AI, autonomy and informed consent refer to preserving individuals’ right to make free decisions based on complete and transparent information. In medical applications, this principle ensures that patients are not only aware of AI-based interventions but also clearly understand their nature, purpose, benefits, limitations, and potential risks. When AI systems operate without sufficient explanation or with technical complexity that is difficult for users to comprehend, there is a risk of undermining individual autonomy and turning treatment decisions into a vague and uncontrollable process. Upholding this principle, in addition to respecting human dignity and worth, forms the foundation of trust and effective collaboration among patients, clinicians, and technology. Genuine consent is achieved only when individuals have a clear understanding of what they accept, and their choices result from awareness and free will rather than mere implicit acceptance of machine recommendations. The findings of this section are in alignment with those reported in studies [16, 18-25, 32-34, 38, 47, 50, 52].

Professional responsibility and ethical practice
In the field of AI, professional practice and ethical responsibility refer to the commitment of specialists, developers, and technology users to adhere to professional ethical standards, norms, and values. This commitment encompasses ensuring accuracy, safety, transparency, and accountability throughout all stages of designing, implementing, and deploying AI systems. In healthcare, this principle requires that professionals not only understand the technical functions and limitations of AI models but also accept responsibility for the consequences of decisions based on this technology. Negligence in this regard can lead to diagnostic errors, patient harm, or erosion of public trust. Ethical professional practice serves as a bridge between technical capability and human responsibility, ensuring that technological innovation remains dedicated to the welfare and rights of stakeholders, rather than merely focusing on efficiency or processing speed. The findings of this section are consistent with those presented in studies [16-20, 22-25, 35, 48].

Clinical reliability and patient safety
In the context of AI within healthcare systems, treatment and clinical reliability refer to the capability of a system to provide accurate, consistent, and evidence-based therapeutic recommendations and support. This principle implies that AI outputs should not only be technically valid but also demonstrate reliable and reproducible performance across diverse clinical scenarios and patient populations. Clinical reliability requires continuous evaluation, validation with real-world data, and monitoring of treatment outcomes to ensure that the technology contributes to improved patient results and reduces medical errors. Ultimately, clinical reliability serves as a critical link between algorithmic innovation and patient safety, ensuring that therapeutic decisions are based on valid data and precise analyses rather than solely on automated predictions. The findings of this section converge with those reported in studies [16, 17, 19, 22, 23, 30, 32, 34, 36, 38, 40, 43-45, 47, 48, 50, 52].

Legal, policy, and regulatory frameworks
In the realm of AI, legal, policy, and regulatory challenges play a crucial role in establishing safe, fair, and trustworthy frameworks for the development and deployment of these technologies. The rapid pace of AI advancement often outstrips the capacity of existing legal and regulatory mechanisms, resulting in legal gaps and unclear accountability. In healthcare, these gaps can have serious consequences, including a lack of clarity regarding liability in cases of errors or harm, an absence of standardized protocols for safety and efficacy assessment, and weak protection of sensitive patient data. Furthermore, conflicts of interest among developers, service providers, and policymakers may undermine the formulation of comprehensive and inclusive regulations. Therefore, creating flexible, transparent, and technology-aligned legal frameworks, alongside active involvement of diverse stakeholders, is essential to ensure the ethical and responsible use of AI in healthcare. The findings of this section align with those reported in studies [16, 20-30, 32, 35, 36, 38, 39, 41-45, 47, 49-51].

Challenges 
The following is an analysis and explanation of each of these main ethical challenges.

Challenges related to data privacy and security
Given the extensive data requirements associated with the application of AI technologies in the healthcare sector, the preservation of patient privacy has emerged as one of the fundamental challenges in this domain. The necessity of utilizing extensive datasets to train these systems raises the risk of compromising individuals’ private health records. While strategies such as data encryption have been introduced to mitigate these risks, complex security protocols can obscure algorithmic processes and reduce the system’s interpretability. This lack of clarity in data handling may erode trust between patients and healthcare providers, potentially discouraging open communication due to fears over confidentiality breaches [68]. Pervasive monitoring technologies in users’ environments result in significant privacy intrusions and turn the home into a medicalized space, which may cause psychological distress. At the same time, data-driven systems require vast amounts of information, often collected without clear user awareness or control. Users may struggle to understand who accesses their data and for what purpose, especially given the potential for indefinite storage. Compared to traditional in-person care, the risk of data leakage or loss is substantially higher [69].

Challenges related to transparency and explainability
The lack of transparency in AI systems goes beyond a technical shortcoming and is also an epistemic and ethical crisis within modern medicine. This is due to the delegation of decision-making processes to mechanisms that lie beyond human comprehension, thereby rendering accountability ambiguous. The inability to fully understand or interpret the outcomes generated by such systems poses significant challenges to defining and scaling professional ethical standards. This opacity is manifested in three semantic dimensions: lack of disclosure (where individuals are unaware that automated decisions are being made about them), epistemic opacity (when there is no access to or understanding of how decisions are made), and explanatory opacity (the inability to explain why a specific output is generated). Such opacity can hinder individuals from exercising data-related rights and weaken the trust between patients and physicians. Moreover, AI systems may rely on features that are unfamiliar or irrelevant to clinicians, with no clear scientific explanation for their association with clinical outcomes [70]. AI models, particularly deep learning systems, are often described as “black boxes” and epistemically opaque, meaning their internal decision-making processes are not transparent, even to experts. This poses a serious ethical challenge, as critical medical decisions are made by systems whose reasoning cannot be fully understood or explained. Such opacity directly conflicts with core principles of medical ethics, especially the patient’s right to informed consent, which requires clear information about the logic, significance, and potential consequences of diagnostic or therapeutic interventions [71]. 

Challenges related to fairness and algorithmic discrimination
The issue of fairness in AI is not merely a technical flaw, but rather a reflection of unjust human structures that are reproduced, and even amplified, through algorithmic systems. Despite their seemingly neutral design, medical algorithms are often built upon datasets that may be rooted in historical, social, and racial biases. Consequently, the emergence of injustice within these systems is not only possible but also probable. The issue of fairness in the use of AI systems arises primarily from unintended algorithmic biases and inherent statistical distortions embedded in the design and functioning of these technologies. These biases, often subtle yet deeply rooted, can lead to significant consequences across various domains, including healthcare, law, and social systems [72]. AI algorithms are only as reliable as the data on which they are built, and they are not entirely autonomous, as they reflect human-designed logic. Human errors and biases can be amplified through these systems, especially when applied to large datasets. Moreover, the homogeneity of input data often leads to the under- or over-representation of certain population groups, potentially reinforcing existing health disparities [73].

Challenges related to autonomy and informed consent
The reliance of AI on personal health data and information derived from social networks for decision-making in situations where individuals lack decision-making capacity is based on the assumption that one’s digital identity accurately reflects their real-world preferences. However, this assumption is highly contentious. Given the dynamic nature of human values and preferences, decisions made on the basis of past behaviors and online presence may lead to a misrepresentation of an individual’s current wishes. Data from personal health records and social media can be used by AI to support medical decision-making when an individual is incapacitated and no human surrogate is available. However, human preferences are dynamic, and it is uncertain whether a competent individual would consent to AI-generated decisions based on inferred online behavior. Social media identities often do not reflect genuine personal values, and AI systems may prioritize cost-efficiency over individual well-being. This raises ethical concerns, especially when surrogate decision-makers are present but potentially overruled by AI due to automation bias. Ultimately, this creates a tension between human-centered care and algorithm-driven efficiency [74]. Khawaja and Bélisle-Pipon [75] warn that commercial providers of therapeutic AI may, under the guise of promoting patient autonomy, foster therapeutic misconception, in which users fail to accurately understand the system’s capabilities and limitations.

Challenges within professional practice and the obligations of ethical responsibility
The generative and creative nature of these models renders them prone to “hallucination”, the production of inaccurate or fabricated information, a characteristic that, in contexts such as healthcare, goes beyond mere error to become a potential threat to human life. Physicians’ concerns about disruptions to clinical workflows caused by the integration of AI reflect an inherent tension between technological determinism and the preservation of coherence within experience- and evidence-based healthcare systems. It is important to note that large language models (LLMs) have not yet been approved for diagnostic or therapeutic use. These models, originally designed for creative tasks, are inherently prone to generating inaccurate information (hallucinations) and exhibiting bias. This means there is no official assurance that they meet the safety and efficacy standards required for clinical applications [76]. The integration of AI into clinical workflows has also introduced tension. Investigators conducting randomized controlled trials aimed to assess the effectiveness of AI without compromising patient safety or disrupting established care pathways with proven outcomes. Clinicians expressed concerns that modifying existing workflows to accommodate AI systems might unintentionally impact the quality of patient care or increase the workload for healthcare staff [77]. AI’s ability to analyze large volumes of patient data enables the detection of hidden patterns, but it also carries the risk of overdiagnosis. This involves identifying conditions that would not have impacted the patient’s health if left undetected. The consequences may include unnecessary treatments, potential harm to patients, and the misuse of healthcare resources [78].

Challenges related to treatment and clinical reliability
The growing role of AI tools in medical diagnostics, while seemingly promising on the surface, carries the deeper risk of gradually eroding human clinical judgment. Clinical judgment arises from a synthesis of experience, human insight, and direct patient interaction, elements that no algorithm has yet been able to fully replicate. Excessive reliance on machine-generated outputs may lead to a form of “cognitive surrender,” wherein the physician assumes the role of a passive validator of algorithmic suggestions rather than engaging in critical analysis. Although AI models demonstrate high accuracy, excessive reliance on machine-generated outputs may diminish the role of human expertise in medical decision-making. This is particularly troubling in complex cases that require a comprehensive evaluation of the patient’s clinical condition, comorbidities, and personal preferences [79]. 

Legal, policy, and regulatory challenges
Policies, regulatory frameworks, and governance mechanisms related to AI also play a decisive role in shaping its ethical implications. A notable gap currently exists between existing legal structures and the rapid pace of technological advancements in this domain. Conflicting interests often emerge between those who develop and manage AI models and the goals of public health, particularly when viewed through the lens of government accountability and the inclusion of key stakeholders such as physicians and patients. One of the frequently raised concerns in AI-driven healthcare is the ambiguity surrounding accountability in the event of diagnostic or treatment errors. The technical complexity of AI systems, coupled with their proprietary nature, limits transparency, public scrutiny, and legal recourse. While some sources argue that healthcare professionals should be held responsible for AI-assisted decisions, others emphasize the responsibility of developers to ensure the safety, effectiveness, and suitability of AI systems for diverse patient populations [65]. The discourse on responsible surveillance and the preference for proactive over reactive regulatory approaches highlights the need for ethical frameworks to reduce public distrust and enable the ethical use of AI surveillance technologies in public health. The intersection of public and private sector surveillance further complicates data privacy and ethical use, as private companies often adhere to lower ethical standards than governmental bodies. Moreover, health-related data generated outside clinical settings typically fall outside the scope of privacy regulations. This regulatory gap allows commercial data collectors to legally aggregate individuals’ behavioral and social data from various sources for health and non-health purposes [80]. With the rapid expansion of AI in the healthcare sector, a significant regulatory gap has become increasingly evident. There is currently no clearly defined regulatory body, no standardized trial procedures, and no transparent accountability mechanisms in place to address potential harms caused by AI. This situation, often referred to as a “regulatory vacuum,” is particularly concerning in the context of legal responsibility for AI-driven decisions. While data protection regulations, such as the GDPR in the European Union and HIPAA in the United States, are in effect, a comprehensive framework governing the clinical application of AI remains absent [81, 82, 83].

Conclusion
AI has brought about transformative advancements across multiple dimensions of the higher education and healthcare sectors. This meta-synthesis of 53 qualitative and review studies provides a comprehensive and multidimensional understanding of the ethical challenges posed by the integration of AI in healthcare. The analysis identified seven overarching themes: (a) data privacy, security, and ownership; (b) transparency and explainability; (c) algorithmic fairness and discrimination; (d) autonomy and informed consent; (e) professional responsibility and ethical engagement; (f) clinical reliability and trust in care; and (g) legal, regulatory, and governance challenges. These findings reveal that ethical concerns surrounding AI in healthcare are not limited to isolated technical issues but are deeply rooted in structural, epistemological, and socio-political dimensions. Across the themes, recurring patterns of asymmetry (between patients and systems, humans and algorithms, and low- and high-resource settings) highlight the potential for AI to reinforce existing inequities if left unchecked.
The implications of this review are clear: ethically sound AI deployment requires more than robust technical design. It demands interdisciplinary collaboration, inclusive policymaking, critical engagement with power dynamics in data practices, and a commitment to protecting human dignity and agency. By offering an integrative thematic framework, this study provides a conceptual foundation for future empirical research and the development of ethical AI governance in health systems.
Given the variety and complexity of ethical challenges identified in the use of AI in healthcare, such as data privacy, algorithmic transparency, bias and discrimination, patient autonomy, unclear responsibility, legal gaps, and concerns over clinical reliability, there is a strong need for further interdisciplinary and context-sensitive research. Future studies are encouraged to explore the real-world experiences of healthcare professionals, patients, and AI developers when interacting with AI-based systems in clinical settings. Qualitative methods, such as in-depth interviews, ethnographic observation, and thematic analysis, can help reveal the practical and ethical tensions that may not be fully captured in theoretical models. Moreover, researchers should aim to develop localized ethical frameworks for the design and implementation of AI in healthcare. These frameworks should be co-created with key stakeholders, including policymakers, clinicians, and technology developers, to ensure practical relevance and cultural sensitivity. Finally, comparative research across countries with different levels of AI adoption in healthcare can provide insights into how cultural, institutional, and legal factors shape ethical challenges and solutions. Such work can contribute to a more comprehensive and globally informed understanding of trustworthy AI in health systems.

Ethical Considerations
Compliance with ethical guidelines

There were no ethical considerations to be addressed in this research.

Funding
This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors. 

Authors' contributions
Conceptualization: Ahmad Keykha and Jafar Shahrokhi; Supervision: Ahmad Keykha; Methodology and data collection: Ahmad Keykha, Ava Taghavi Monfared, and Jafar Shahrokhi; Investigation: Ahmad Keykha and Ava Taghavi Monfared; Writing the original draft: Ahmad Keykha and Maryam Hojati; Review and editing: Ava Taghavi Monfared; Data analysis: Ahmad Keykha, Maryam Hojati, and Jafar Shahrokhi.

Conflict of interest
The authors declared no conflict of interest.

Acknowledgments
The authors would like to express their sincere gratitude to the colleagues who contributed to the analysis and interpretation of the qualitative findings in this study. Their valuable insights and collaborative efforts were instrumental in enriching the research process.


References
  1. Jafari F, Keykha A. Identifying the opportunities and challenges of artificial intelligence in higher education: A qualitative study. Journal of Applied Research in Higher Education. 2024; 16(4):1228-45. [DOI:10.1108/JARHE-09-2023-0426]
  2. Keykha A, Mohammadi H, Darabi F, Hosseini SS. [Identifying the applications of artificial intelligence in the assessment of medical students (Persian)]. Strides in Development of Medical Education. 2025; 22(1):1-18. [DOI:10.22062/sdme.2025.200833.1512]
  3. Keykha A, Imanipour M, Shahrokhi J, Amiri M. The advantages and challenges of electronic exams: A qualitative research based on Shannon entropy technique. Journal of Advances in Medical Education & Professionalism. 2025; 13(1):1-11. [DOI:10.30476/jamp.2024.102951.1987]
  4. Keykha A, Behravesh S, Ghaemi F. ChatGPT and medical research: A meta-synthesis of opportunities and challenges. Journal of Advances in Medical Education & Professionalism. 2024; 12(3):135-47. [DOI:10.30476/jamp.2024.101068.1910]
  5. Taiwo E, Akinsola A, Tella E, Makinde K, Akinwande M. A review of the ethics of artificial intelligence and its applications in the United States. International Journal on Cybernetics & Informatics. 2023; 12(6):123-37. [DOI:10.5121/ijci.2023.1206010]
  6. Sallam M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare. 2023; 11(6):887. [DOI:10.3390/healthcare11060887] [PMID]
  7. Jeyaraman M, Ramasubramanian S, Balaji S, Jeyaraman N, Nallakumarasamy A, Sharma S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World Journal of Methodology. 2023; 13(4):170. [DOI:10.5662/wjm.v13.i4.170] [PMID]
  8. Korkmaz S. Artificial intelligence in healthcare: A revolutionary ally or an ethical dilemma? Balkan Medical Journal. 2024; 41(2):87. [PMID]
  9. Ahmad SF, Han H, Alam MM, Rehmat M, Irshad M, Arraño-Muñoz M, et al. Impact of artificial intelligence on human loss in decision making, laziness, and safety in education. Humanities and Social Sciences Communications. 2023; 10(1):1-4. [DOI:10.1057/s41599-023-01842-4]
  10. Floridi L. Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology. 2019; 32(2):185-93. [DOI:10.1007/s13347-019-00354-x]
  11. Walsh D, Downe S. Appraising the quality of qualitative research. Midwifery. 2006; 22(2):108-19. [DOI:10.1016/j.midw.2005.05.004] [PMID]
  12. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8:45. [DOI:10.1186/1471-2288-8-45] [PMID]
  13. Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: Introduction to the series. Implementation Science. 2018; 13(Suppl 1):2. [DOI:10.1186/s13012-017-0688-3] [PMID]
  14. Critical Appraisal Skills Program. 10 questions to help you make sense of qualitative research: CASP qualitative checklist. Critical Appraisal Skills Program; 2006. [Link]
  15. Whiting P, Savović J, Higgins JP, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology. 2016; 69:225-34. [DOI:10.1016/j.jclinepi.2015.06.005] [PMID]
  16. Sheth S, Baker HP, Prescher H, Strelzow JA. Ethical considerations of artificial intelligence in health care: Examining the role of generative pretrained transformer-4. The Journal of the American Academy of Orthopaedic Surgeons. 2024; 32(5):205-10. [DOI:10.5435/JAAOS-D-23-00787] [PMID]
  17. Collins BX, Bélisle-Pipon JC, Evans BJ, Ferryman K, Jiang X, Nebeker C, et al. Addressing ethical issues in healthcare artificial intelligence using a lifecycle-informed process. JAMIA Open. 2024; 7(4):ooae108. [DOI:10.1093/jamiaopen/ooae108] [PMID] 
  18. Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: From a local to a global perspective, and back. Pflügers Archiv-European Journal of Physiology. 2024; 477:591–601. [DOI:10.1007/s00424-024-02984-3]
  19. Soares A, Piçarra N, Giger JC, Oliveira R, Arriaga P. Ethics 4.0: Ethical dilemmas in healthcare mediated by social robots. International Journal of Social Robotics. 2023; 15(5):807-23. [DOI:10.1007/s12369-023-00983-5] [PMID]
  20. Zarif A. The ethical challenges facing the widespread adoption of digital healthcare technology. Health and Technology. 2022; 12(1):175-9. [DOI:10.1007/s12553-021-00596-w] [PMID]
  21. Karimian G, Petelos E, Evers SM. The ethical issues of the application of artificial intelligence in healthcare: A systematic scoping review. AI and Ethics. 2022; 2(4):539-51. [DOI:10.1007/s43681-021-00131-7]
  22. Iniesta R. The human role to guarantee an ethical AI in healthcare: A five-facts approach. AI and Ethics. 2025; 5(1):385-97. [DOI:10.1007/s43681-023-00353-x]
  23. Ganesan S, Somasiri N. Navigating the integration of machine learning in healthcare: Challenges, strategies, and ethical considerations. Journal of Computational and Cognitive Engineering. 2024; 4(1):8-23. [DOI:10.47852/bonviewJCCE42023600]
  24. Emdad FB, Ho SM, Ravuri B, Hussain S. Towards a unified utilitarian ethics framework for healthcare artificial intelligence. arXiv preprint arXiv:2309.14617. 2023 [Unpublished]. [DOI:10.48550/arXiv.2309.14617]
  25. Arbelaez Ossa L, Milford SR, Rost M, Leist AK, Shaw DM, Elger BS. AI through ethical lenses: A discourse analysis of guidelines for AI in healthcare. Science and Engineering Ethics. 2024; 30(3):24. [DOI:10.1007/s11948-024-00486-0] [PMID]
  26. Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the ethical enigma: Artificial intelligence in healthcare. Cureus. 2023; 15(8). [DOI:10.7759/cureus.43262]
  27. Chikhaoui E, Alajmi A, Larabi-Marie-Sainte S. Artificial intelligence applications in healthcare sector: Ethical and legal challenges. Emerging Science Journal. 2022; 6(4):717-38. [DOI:10.28991/ESJ-2022-06-04-05]
  28. Elendu C, Amaechi DC, Elendu TC, Jingwa KA, Okoye OK, Okah MJ, et al. Ethical implications of AI and robotics in healthcare: A review. Medicine. 2023; 102(50):e36671. [DOI:10.1097/MD.0000000000036671] [PMID]
  29. Naik N, Hameed BM, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery. 2022; 9:862322. [DOI:10.3389/fsurg.2022.862322] [PMID]
  30. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. Journal of Medical Internet Research. 2023; 25:e48009. [DOI:10.2196/48009] [PMID]
  31. Mohammad Amini M, Jesus M, Fanaei Sheikholeslami D, Alves P, Hassanzadeh Benam A, Hariri F. Artificial intelligence ethics and challenges in healthcare applications: A comprehensive review in the context of the European GDPR mandate. Machine Learning and Knowledge Extraction. 2023; 5(3):1023-35. [DOI:10.3390/make5030053]
  32. Morley J, Machado CC, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: A mapping review. Social Science & Medicine. 2020; 260:113172. [DOI:10.1016/j.socscimed.2020.113172] [PMID]
  33. Char DS, Abràmoff MD, Feudtner C. Identifying ethical considerations for machine learning healthcare applications. The American Journal of Bioethics. 2020; 20(11):7-17. [DOI:10.1080/15265161.2020.1819469] [PMID]
  34. Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Preventing Chronic Disease. 2024; 21:E64. [DOI:10.5888/pcd21.240245] [PMID]
  35. Ning Y, Teixayavong S, Shang Y, Savulescu J, Nagaraj V, Miao D, et al. Generative artificial intelligence and ethical considerations in health care: A scoping review and ethics checklist. The Lancet Digital Health. 2024; 6(11):e848-56. [DOI:10.1016/S2589-7500(24)00143-2] [PMID]
  36. McLennan S, Fiske A, Tigard D, Müller R, Haddadin S, Buyx A. Embedded ethics: A proposal for integrating ethics into the development of medical AI. BMC Medical Ethics. 2022; 23(1):6. [DOI:10.1186/s12910-022-00746-3] [PMID]
  37. Frost EK, Bosward R, Aquino YS, Braunack-Mayer A, Carter SM. Public views on ethical issues in healthcare artificial intelligence: Protocol for a scoping review. Systematic Reviews. 2022; 11(1):142. [DOI:10.1186/s13643-022-02012-4] [PMID]
  38. Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: A framework for AI developers. AI and Ethics. 2023; 3(1):223-40. [DOI:10.1007/s43681-022-00195-z]
  39. Machado H, Silva S, Neiva L. Publics’ views on ethical challenges of artificial intelligence: A scoping review. AI and Ethics. 2025; 5(1):139-67. [DOI:10.1007/s43681-023-00387-1]
  40. Adeniyi AO, Arowoogun JO, Okolo CA, Chidi R, Babawarun O. Ethical considerations in healthcare IT: A review of data privacy and patient consent issues. World Journal of Advanced Research and Reviews. 2024; 21(2):1660-8. [DOI:10.30574/wjarr.2024.21.2.0593]
  41. Wu Y, Liu XM. Navigating the ethical landscape of AI in healthcare: Insights from a content analysis. IEEE Technology and Society Magazine. 2023; 42(3):76-87. [DOI:10.1109/MTS.2023.3306543]
  42. Boudi AL, Boudi M, Chan C, Boudi FB, Boudi A. Ethical challenges of artificial intelligence in medicine. Cureus. 2024; 16(11). [DOI:10.7759/cureus.74495]
  43. Sallstrom L, Morris O, Mehta H. Artificial intelligence in Africa’s healthcare: Ethical considerations. ORF Issue Brief. 2019; 312:1. [Link]
  44. Zhui L, Fenghe L, Xuehu W, Qining F, Wei R. Ethical considerations and fundamental principles of large language models in medical education. Journal of Medical Internet Research. 2024; 26:e60083. [DOI:10.2196/60083] [PMID]
  45. Busch F, Adams LC, Bressem KK. Biomedical ethical aspects towards the implementation of artificial intelligence in medical education. Medical Science Educator. 2023; 33(4):1007-12. [DOI:10.1007/s40670-023-01815-x] [PMID]
  46. Franco D’Souza R, Mathew M, Mishra V, Surapaneni KM. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Medical Education Online. 2024; 29(1):2330250. [DOI:10.1080/10872981.2024.2330250] [PMID]
  47. Perkins M, Pregowska A. The role of artificial intelligence in higher medical education and the ethical challenges of its implementation. Artificial Intelligence in Health. 2024; 2(1):1-3. [DOI:10.36922/aih.3276]
  48. Weidener L, Fischer M. Proposing a principle-based approach for teaching AI ethics in medical education. JMIR Medical Education. 2024; 10(1):e55368. [DOI:10.2196/55368] [PMID]
  49. Lu H, Alhaskawi A, Dong Y, Zou X, Zhou H, Ezzi SH, et al. Patient autonomy in medical education: Navigating ethical challenges in the age of artificial intelligence. The Journal of Health Care Organization, Provision, and Financing. 2024; 61:00469580241266364. [DOI:10.1177/00469580241266364] [PMID]
  50. Khosravi M, Zare Z, Mojtabaeian SM, Izadi R. Artificial intelligence and decision-making in healthcare: A thematic analysis of a systematic review of reviews. Health Services Research and Managerial Epidemiology. 2024; 11:23333928241234863. [DOI:10.1177/23333928241234863] [PMID]
  51. Shoghli A, Darvish M, Sadeghian Y. Balancing innovation and privacy: Ethical challenges in AI-driven healthcare. Journal of Reviews in Medical Sciences. 2024; 4(1):e31. [DOI:10.22034/jrms.2024.494112.1034]
  52. Al-Hwsali A, Al-Saadi B, Abdi N, Khatab S, Solaiman B, Alzubaidi M, et al. Legal and ethical principles of artificial intelligence in public health: Scoping review. Journal of Public Health Ethics. 2022; 8(2):112-25. [DOI:10.20944/preprints202211.0457.v1]
  53. Mahomed S. Healthcare, artificial intelligence and the Fourth Industrial Revolution: Ethical, social and legal considerations. South African Journal of Bioethics and Law. 2018; 11(2):93-5. [DOI:10.7196/SAJBL.2018.v11i2.664]
  54. Jensen LA, Allen MN. Meta-synthesis of qualitative findings. Qualitative Health Research. 1996; 6(4):553-60. [DOI:10.1177/104973239600600407]
  55. Kumar P, Chauhan S, Awasthi LK. Artificial intelligence in healthcare: Review, ethics, trust challenges & future research directions. Engineering Applications of Artificial Intelligence. 2023; 120:105894. [DOI:10.1016/j.engappai.2023.105894]
  56. Tilala MH, Chenchala PK, Choppadandi A, Kaur J, Naguri S, Saoji R, et al. Ethical considerations in the use of artificial intelligence and machine learning in health care: A comprehensive review. Cureus. 2024; 16(6):e62443. [DOI:10.7759/cureus.62443]
  57. Marques M, Almeida A, Pereira H. The medicine revolution through artificial intelligence: Ethical challenges of machine learning algorithms in decision-making. Cureus. 2024. [DOI:10.7759/cureus.69405]
  58. Monteith S, Glenn T, Geddes JR, Achtyes ED, Whybrow PC, Bauer M. Challenges and ethical considerations to successfully implement artificial intelligence in clinical medicine and neuroscience: A narrative review. Pharmacopsychiatry. 2023; 56(6):209-13. [DOI:10.1055/a-2142-9325] [PMID]
  59. Kooli C, Al Muftah H. Artificial intelligence in healthcare: A comprehensive review of its ethical concerns. Technological Sustainability. 2022; 1(2):121-31. [DOI:10.1108/TECHS-12-2021-0029]
  60. Abdullah YI, Schuman JS, Shabsigh R, Caplan A, Al-Aswad LA. Ethics of artificial intelligence in medicine and ophthalmology. The Asia-Pacific Journal of Ophthalmology. 2021; 10(3):289-98. [DOI:10.1097/APO.0000000000000397] [PMID]
  61. Prakash S, Balaji JN, Joshi A, Surapaneni KM. Ethical conundrums in the application of artificial intelligence (AI) in healthcare: A scoping review of reviews. Journal of Personalized Medicine. 2022; 12(11):1914. [DOI:10.3390/jpm12111914] [PMID]
  62. Park SH, Kim YH, Lee JY, Yoo S, Kim CJ. Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review. Science Editing. 2019; 6(2):91-8. [DOI:10.6087/kcse.164]
  63. Mörch CM, Atsu S, Cai W, Li X, Madathil SA, Liu X, et al. Artificial intelligence and ethics in dentistry: A scoping review. Journal of Dental Research. 2021; 100(13):1452-60. [DOI:10.1177/00220345211013808] [PMID]
  64. Hui AT, Ahn SS, Lye CT, Deng J. Ethical challenges of artificial intelligence in health care: A narrative review. Ethics in Biology, Engineering and Medicine: An International Journal. 2021; 12(1). [DOI:10.1615/EthicsBiologyEngMed.2022041580]
  65. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics. 2021; 22(1):14. [DOI:10.1186/s12910-021-00577-8] [PMID]
  66. Ratti E, Morrison M, Jakab I. Ethical and social considerations of applying artificial intelligence in healthcare: A two-pronged scoping review. BMC Medical Ethics. 2025; 26(1):68. [DOI:10.1186/s12910-025-01198-1] [PMID]
  67. Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digital Health. 2023; 9:20552076231186064. [DOI:10.1177/20552076231186064] [PMID]
  68. Chen M, Zhou AE, Jain N, Gronbeck C, Feng H, Grant-Kels JM. Ethics of artificial intelligence in dermatology. Clinics in Dermatology. 2024; 42(3):313-6. [DOI:10.1016/j.clindermatol.2024.02.003] [PMID]
  69. Rubeis G. iHealth: The ethics of artificial intelligence and big data in mental healthcare. Internet Interventions. 2022; 28:100518. [DOI:10.1016/j.invent.2022.100518] [PMID]
  70. Astromskė K, Peičius E, Astromskis P. Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & Society. 2021; 36:509-20. [DOI:10.1007/s00146-020-01008-9]
  71. Kawamleh S. Against explainability requirements for ethical artificial intelligence in health care. AI and Ethics. 2023; 3(3):901-16. [DOI:10.1007/s43681-022-00212-1]
  72. Rudzicz F, Saqur R. Ethics of artificial intelligence in surgery. arXiv preprint arXiv:2007.14302. 2020 [Unpublished]. [Link]
  73. Anom BY. Ethics of Big Data and artificial intelligence in medicine. Ethics, Medicine and Public Health. 2020; 15:100568. [DOI:10.1016/j.jemep.2020.100568]
  74. Arnold MH. Teasing out artificial intelligence in medicine: An ethical critique of artificial intelligence and machine learning in medicine. Journal of Bioethical Inquiry. 2021; 18(1):121-39. [DOI:10.1007/s11673-020-10080-1] [PMID]
  75. Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health. 2023; 5:1278186. [DOI:10.3389/fdgth.2023.1278186] [PMID]
  76. Hasan SS, Fury MS, Woo JJ, Kunze KN, Ramkumar PN. Ethical application of generative artificial intelligence in medicine. Arthroscopy: The Journal of Arthroscopic & Related Surgery. 2025; 41(4):874-85. [DOI:10.1016/j.arthro.2024.12.011] [PMID]
  77. Youssef A, Nichol AA, Martinez-Martin N, Larson DB, Abramoff M, Wolf RM, et al. Ethical considerations in the design and conduct of clinical trials of artificial intelligence. JAMA Network Open. 2024; 7(9):e2432482. [DOI:10.1001/jamanetworkopen.2024.32482] [PMID]
  78. Quinn TP, Coghlan S. Readying medical students for medical AI: The need to embed AI ethics education. arXiv preprint arXiv:2109.02866. 2021 [Unpublished]. [Link]
  79. Ramoni D, Scuricini A, Carbone F, Liberale L, Montecucco F. Artificial intelligence in gastroenterology: Ethical and diagnostic challenges in clinical practice. World Journal of Gastroenterology. 2025; 31(10):102725. [DOI:10.3748/wjg.v31.i10.102725] [PMID]
  80. Federico CA, Trotsyuk AA. Biomedical data science, artificial intelligence, and ethics: Navigating challenges in the face of explosive growth. Annual Review of Biomedical Data Science. 2024; 7. [DOI:10.1146/annurev-biodatasci-102623-104553] [PMID]
  81. Carter SM, Rogers W, Win KT, Frazer H, Richards B, Houssami N. The ethical, legal, and social implications of using artificial intelligence systems in breast cancer care. The Breast. 2020; 49:25-32. [DOI:10.1016/j.breast.2019.10.001] [PMID]
  82. Keykha A, Fazlali B, Behravesh S, Farahmandpour Z. Integrating artificial intelligence in medical education: A meta-synthesis of potentials and pitfalls of ChatGPT. Journal of Advances in Medical Education & Professionalism. 2025; 13(3):155-72. [DOI:10.30476/jamp.2024.104617.2071]
  83. Keykha A. Extraction and classification of smart university components to provide a conceptual framework: A meta-synthesis study. Sciences and Techniques of Information Management. 2022; 8(4):75-112. [DOI:10.22091/stim.2021.6873.1571]

 
Type of Study: Review Article | Subject: ● Artificial Intelligence
Received: 2025/07/2 | Accepted: 2025/08/26 | Published: 2025/12/31


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2026 CC BY-NC 4.0 | Journal of Research and Health
