Ethical and Bias Considerations in Artificial Intelligence/Machine Learning: Matthew G. Hanna et al.

ABSTRACT

As artificial intelligence (AI) continues to play an increasingly prominent role in pathology and medical practice, critical attention must be paid to the ethical challenges and inherent biases associated with machine learning (ML) integration. While ML systems have demonstrated exceptional performance across a range of clinical tasks—including image analysis, natural language processing, and predictive modeling—their implementation raises substantial ethical questions due to potential biases embedded in data, algorithms, and human interactions. These biases are broadly classified into data bias, development bias, and interaction bias, arising from factors such as non-representative training datasets, flawed feature engineering, institutional variability, selective reporting, and evolving clinical standards. Although AI-ML tools hold immense promise, their responsible deployment necessitates rigorous, system-wide evaluation protocols to ensure equitable and transparent outcomes. This review examines the ethical landscape and bias-related considerations specific to the application of AI-ML technologies in the field of pathology and medicine.

Full Citation (APA Style):

Hanna, M. G., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. H. (2025). Ethical and Bias Considerations in Artificial Intelligence/Machine Learning. Modern Pathology, 38, 100686. https://doi.org/10.1016/j.modpat.2024.100686

Authors and Affiliations

  • Matthew G. Hanna*: Department of Pathology, University of Pittsburgh Medical Center; Computational Pathology and AI Center of Excellence (CPACE)
  • Liron Pantanowitz: Department of Pathology, University of Pittsburgh Medical Center; CPACE
  • Brian Jackson: Department of Pathology, University of Utah; ARUP Laboratories
  • Octavia Palmer: Department of Pathology, University of Pittsburgh Medical Center; CPACE
  • Shyam Visweswaran: Department of Biomedical Informatics, University of Pittsburgh
  • Joshua Pantanowitz: University of Pittsburgh Medical School
  • Mustafa Deebajah: Department of Pathology, Cleveland Clinic
  • Hooman H. Rashidi*: Department of Pathology, University of Pittsburgh Medical Center; CPACE

*Corresponding authors

Methodology

This is a comprehensive review article, not an empirical study. It synthesizes existing research, frameworks, and case examples to evaluate:

  • Core principles of medical ethics extended to AI (autonomy, beneficence, nonmaleficence, justice)
  • Sources and types of bias in medical AI systems (data, development, interaction)
  • Real-world examples from pathology, radiology, and oncology
  • Mitigation frameworks such as fairness-aware ML, transparency, and the FAIR principles

The methodology combines detailed literature analysis with domain-specific guidelines and cross-disciplinary perspectives drawn from pathology, AI ethics, and clinical trial research.

Summary of Key Contributions

1. Ethical Foundations in Medical AI

  • Builds on foundational bioethics frameworks such as the Belmont Report and the four principles of medical ethics (autonomy, beneficence, nonmaleficence, justice), and introduces accountability as a fifth pillar.
  • Emphasizes informed consent, transparency of AI-assisted decisions, and limitations of de-identified data.

2. Taxonomy of Bias in AI/ML

  • Introduces three major bias types: data bias, development bias, and interaction bias.
  • Explores subtypes such as sampling bias, labeling bias, algorithmic bias, and feedback loop bias, with pathology-specific examples; a brief sampling-bias check is sketched below.
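
To make one subtype concrete, here is a minimal Python sketch (not taken from the review) of a sampling-bias check: it compares the subgroup mix of a training cohort against reference population proportions. The column name, cohort, and reference figures are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data): flag potential sampling bias by comparing
# the demographic mix of a training cohort against a reference population.
import pandas as pd

def subgroup_shift(train_df: pd.DataFrame, reference_props: dict, column: str) -> pd.DataFrame:
    """Compare subgroup proportions in the training data with a reference population."""
    train_props = train_df[column].value_counts(normalize=True)
    rows = []
    for group, ref_p in reference_props.items():
        train_p = float(train_props.get(group, 0.0))
        rows.append({"group": group,
                     "train_proportion": round(train_p, 3),
                     "reference_proportion": ref_p,
                     "absolute_gap": round(train_p - ref_p, 3)})
    return pd.DataFrame(rows)

# Hypothetical usage: a slide-level cohort compared against census-style figures.
cohort = pd.DataFrame({"race": ["White"] * 80 + ["Black"] * 10 + ["Asian"] * 10})
print(subgroup_shift(cohort, {"White": 0.60, "Black": 0.13, "Asian": 0.06}, "race"))
```

Large gaps between the training and reference proportions are a prompt for further review, not proof of bias on their own.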

3. Fairness and Inclusion in Model Design

  • Highlights the consequences of imbalanced datasets (e.g., underdiagnosis in minority populations); a per-subgroup sensitivity check is sketched after this list.
  • Advocates for race-conscious fairness over race-blind models in clinical algorithms.
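
One way to surface the underdiagnosis problem described above is to report performance separately for each subgroup rather than as a single aggregate. The sketch below is illustrative only; the labels, predictions, and group assignments are synthetic placeholders.

```python
# Minimal sketch (synthetic data): per-subgroup sensitivity (true-positive rate)
# as a simple check for underdiagnosis of one group relative to another.
import numpy as np
from sklearn.metrics import recall_score

def sensitivity_by_group(y_true, y_pred, groups):
    """Return sensitivity (recall on the positive class) for each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: recall_score(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

y_true = [1, 1, 0, 1, 1, 0, 1, 0]          # ground-truth diagnoses (synthetic)
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]          # model predictions (synthetic)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = sensitivity_by_group(y_true, y_pred, groups)
print(rates, "gap:", round(max(rates.values()) - min(rates.values()), 3))
```

A large sensitivity gap between groups is the kind of disparity a race-conscious evaluation is designed to catch before deployment.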

4. Bias Mitigation Across the AI Lifecycle

  • Outlines corrective strategies at each stage: data collection, algorithm development, evaluation, deployment, and ongoing monitoring (a simple reweighting example is sketched below).
  • Stresses the role of explainability and introspective models for ethical deployment.
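
As one concrete example of a development-stage mitigation, the sketch below applies a simple reweighing step before model fitting: each (subgroup, label) combination is up-weighted in inverse proportion to its frequency. This is an illustrative strategy on synthetic data, not the specific method prescribed by the review.

```python
# Minimal sketch (synthetic data): reweigh under-represented (group, label)
# combinations before fitting a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
group = rng.choice(["A", "B"], size=200, p=[0.9, 0.1])   # group B is under-represented

# Weight each (group, label) cell inversely to its frequency in the training set.
weights = np.ones(len(y))
n_cells = len(np.unique(group)) * len(np.unique(y))
for g in np.unique(group):
    for label in np.unique(y):
        mask = (group == g) & (y == label)
        if mask.any():
            weights[mask] = len(y) / (n_cells * mask.sum())

model = LogisticRegression().fit(X, y, sample_weight=weights)
print("training accuracy:", round(model.score(X, y), 3))
```

Similar reweighting, resampling, or constraint-based steps can be applied at other lifecycle stages, and their effect should be checked with the same subgroup-level metrics used during evaluation.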

5. Guidelines & Checklists

  • Presents a full inventory of medical AI reporting standards (e.g., STARD-AI, TRIPOD-AI, PROBAST-AI, CONSORT-AI).
  • Aligns these frameworks with the FAIR data principles (Findability, Accessibility, Interoperability, Reusability); a minimal machine-readable metadata record is sketched below.
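
To illustrate what FAIR alignment can look like in practice, the snippet below assembles a minimal machine-readable metadata record for a training dataset. The field names and values are illustrative placeholders, not drawn from any specific standard or from the review itself.

```python
# Minimal sketch: an illustrative dataset record touching each FAIR dimension
# (findable, accessible, interoperable, reusable). All values are placeholders.
import json

dataset_record = {
    "identifier": "doi:10.0000/example-dataset",        # persistent ID (findable) -- hypothetical
    "title": "De-identified pathology slide cohort",
    "access_url": "https://example.org/data/request",   # documented access route (accessible)
    "format": "DICOM whole-slide images",               # standard format (interoperable)
    "license": "CC-BY-4.0",                             # clear reuse terms (reusable)
    "provenance": "Multi-site collection, 2018-2023",
    "known_limitations": "Single scanner vendor; limited representation of minority groups",
}
print(json.dumps(dataset_record, indent=2))
```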

Implications

This review positions itself as a vital educational resource for:

  • Pathologists and laboratory professionals integrating AI into workflows
  • Developers designing health care ML tools
  • Policymakers shaping regulations in clinical AI
  • Researchers addressing bias and trust in emerging health technologies
