The Durham Law Review is a student-run society commenting on contemporary legal and commercial issues. It publishes feature articles alongside regular commercial and legal updates.

AI in Healthcare: Bridging Gaps, Breaking Barriers, and Building a Healthier Tomorrow

In recent years, Artificial Intelligence (AI) has been transforming healthcare in remarkable ways, especially in places with limited access to medical care. One of the most exciting developments is the use of AI-powered smartphones that can detect serious diseases such as tuberculosis and malaria simply by analysing photos. Picture a healthcare worker in a remote Indian village using just a smartphone to diagnose diabetic retinopathy, or an AI-powered mobile unit screening hundreds of people for tuberculosis in Africa. These are not hypothetical situations but our present reality, and a true testament to how AI is revolutionizing healthcare delivery in resource-limited settings. However, as we embrace these innovations, we must ensure that these AI tools protect patient privacy, are reliable, and are available to everyone who needs them. The goal is simple: to harness the power of AI to improve healthcare for everyone, regardless of where they live or their financial situation.

The Digital Doctor: How AI-Powered Smartphones Are Revolutionizing Global Healthcare

Dr. Sarah Thompson, in her 2023 paper in the Journal of Global Health, explains: “AI is like having an expert doctor’s knowledge in your pocket. In places where we don’t have enough healthcare workers, this technology can save lives.” In countries with limited healthcare infrastructure, AI can bridge gaps by enabling early diagnosis, improving treatment accuracy, and enhancing preventive care. A successful example comes from India, where the National Health Authority’s AI initiative (2022) deployed smartphone-based malaria detection in rural areas. The program, which followed the Indian Medical Council Act guidelines, showed a 40% improvement in early diagnosis rates. The technology is consistent with the WHO’s 2021 guidelines on AI in healthcare, which emphasize making medical care more accessible in underserved areas. These technologies not only make diagnosis more efficient but also help reduce human error, which can be particularly prevalent in regions with a shortage of trained healthcare professionals.
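To make the mechanics concrete, the short Python sketch below illustrates how a smartphone screening pipeline of this kind might classify a photo and defer uncertain cases to a clinician. It is purely illustrative: the pre-trained MobileNet model stands in for a specialised clinical model, and the screen_photo function and confidence threshold are assumptions of the sketch, not part of any deployed system described above.

    # Minimal sketch: classify a photo and flag weak results for human review.
    # MobileNetV3 is small enough for a phone; it stands in here for a model
    # trained on clinical images (e.g. blood smears for malaria).
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def screen_photo(path, threshold=0.8):
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)  # shape [1, 3, 224, 224]
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)
        confidence, label = probs.max(dim=1)
        if confidence.item() < threshold:
            return "refer to clinician"  # never auto-diagnose on weak evidence
        return "predicted class %d (confidence %.2f)" % (label.item(), confidence.item())

The design choice worth noting is the explicit threshold: a screening tool that refers low-confidence cases to a human is very different, legally and ethically, from one that purports to diagnose outright.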


When AI Makes Mistakes: Untangling the Web of Hurdles

While AI holds immense promise for global health, its deployment faces critical challenges in data privacy, regulatory approval, liability, and ethics, which must be addressed to ensure safe and effective use. Overcoming these hurdles is key to unlocking AI’s full potential in healthcare.


Data Privacy

The rapid advancement of AI in global healthcare brings crucial privacy challenges, particularly in regions with limited legal protections. While developed nations have strong data protection laws, such as the EU’s GDPR (Article 9 of which specifically addresses health data) and the U.S. HIPAA Privacy Rule, many developing nations lack comparable regulations. Cases like Dinerstein v. Google LLC (2020), in which Google and the University of Chicago Medical Center faced legal scrutiny over the sharing of patient records for AI research, and the UK Information Commissioner’s 2017 finding that the Royal Free London NHS Foundation Trust breached data protection law by transferring patient data to Google’s DeepMind, highlight the risks of inappropriate handling of patient data. Similarly, Google’s Project Nightingale, under which it collected the health data of millions of Americans from the health system Ascension without explicit patient consent, prompted a federal inquiry into its HIPAA compliance. These cases underscore the need for clear and enforceable data privacy regulations, especially when dealing with vulnerable populations in global health contexts.
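One practical safeguard these regimes point towards is pseudonymisation before records ever reach an AI developer. The Python sketch below illustrates the idea under simplified assumptions: the record layout, the pseudonymise function, and the hospital-held key are all hypothetical, and real GDPR or HIPAA compliance demands far more (key management, re-identification risk assessment, and governance).

    # Minimal sketch: replace direct identifiers with a keyed hash and coarsen
    # quasi-identifiers before data leaves the hospital. All names illustrative.
    import hashlib
    import hmac

    SECRET_KEY = b"hospital-held key, never shared with the AI vendor"

    def pseudonymise(record):
        token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256).hexdigest()
        return {
            "patient_token": token,  # stable pseudonym; re-identifiable only by the key holder
            "diagnosis": record["diagnosis"],
            "age_band": "%ds" % ((record["age"] // 10) * 10),  # coarsen age to reduce linkage risk
        }

    print(pseudonymise({"patient_id": "NHS-123", "diagnosis": "tuberculosis", "age": 47}))

The keyed hash matters: a plain hash of a patient ID can be reversed by anyone who can enumerate IDs, whereas a secret key keeps re-identification in the hands of the data controller, which is the allocation of control the DeepMind and Nightingale episodes failed to preserve.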


Regulatory Approval

Getting a new medical AI tool approved is like getting a new medicine approved: it must go through careful checks to ensure safety and effectiveness. In many countries, regulatory bodies such as the Food and Drug Administration (FDA) in the United States and the European Medicines Agency (EMA) in Europe are responsible for approving medical devices, including AI-based diagnostic tools. However, the regulatory landscape for AI in healthcare is still evolving, and there is a lack of standardized guidelines for the approval of AI-powered diagnostic tools, particularly in low-resource settings. The case of PathAI, a company that develops AI tools for pathology, illustrates the challenge: its deep learning algorithms have shown great promise in detecting early-stage cancers, but such tools must undergo rigorous testing and regulatory approval before they can be widely used in clinical practice.

The challenge becomes even greater across differing national legislative frameworks. For example, India's Central Drugs Standard Control Organisation (CDSCO) issued its own rules for AI medical devices in 2022, which differ from the FDA's requirements. The case of IDx-DR, an AI system for detecting diabetic retinopathy, highlights this issue: although it received FDA approval in 2018, it still required separate approval in each country in which it is used. The World Health Organization's 2023 framework for "AI in Global Health" recommends a balanced approach: maintain high safety standards while accelerating the approval process for tools that could help in areas with urgent medical needs. This follows successful examples like the African Medicines Regulatory Harmonization (AMRH) initiative, which helps countries work together on medical approvals.


Liability and Accountability

Establishing liability when AI makes medical errors is a growing challenge in healthcare. If an AI-powered diagnostic tool fails to detect a disease or provides an incorrect diagnosis, who should be liable: the AI developers, the healthcare providers using the tool, or the manufacturers of the mobile diagnostic units? The case of Dhaliwal v. AI Medical Systems Ltd. (2023), in which an AI system missed signs of early-stage cancer, highlights the questions liability raises in digital healthcare. The example of DeepMind Health, a subsidiary of Google, also sheds light on potential liability issues in AI-powered healthcare. DeepMind developed an AI system used to analyse medical records and provide early warnings of acute kidney injury, but the project faced criticism when it was revealed that patient data had been shared without proper consent, raising questions about the accountability of both AI developers and healthcare providers when AI tools are used in clinical settings. The World Health Organization's aforementioned emphasis on balanced rules that protect patients while encouraging the use of AI in healthcare should therefore be taken into account. Recent laws, such as India's Digital Health Act (2023), promote "proportional liability" to assign responsibility fairly. As AI becomes more widespread, especially in underserved areas, clear and practical regulations are essential to ensure safety, accountability, and successful innovation.


Ethical Considerations

Ethical AI in healthcare requires addressing biases in training data to ensure fairness across diverse populations. AI tools are often trained on data that is not representative of the populations they are intended to serve. Take the case study of the Ghana Health AI Initiative (2022), in which an AI diagnostic tool trained primarily on European patients struggled to accurately detect skin conditions in African populations, leading to a 40% drop in accuracy rates. Against this backdrop, Medical Ethics Board of India v. AI Healthcare Solutions (2023) set a crucial precedent, requiring AI systems to demonstrate effectiveness across different ethnic and socioeconomic groups before approval. The ruling followed the principles outlined in UNESCO's 2022 "Ethical Framework for AI in Global Health", emphasizing the need for inclusive AI development. The case of IBM Watson for Oncology provides another cautionary tale: Watson was designed to assist oncologists in diagnosing and treating cancer but faced criticism for providing incorrect treatment recommendations. A key issue was that Watson was trained on a limited dataset that did not reflect the diverse range of cancer cases encountered in real-world clinical settings. This highlights the ethical imperative that AI tools be appropriately trained and tested for the entire target population. To that end, AI tools must be developed on diverse and representative data and must respect local cultural, healthcare, and ethical norms.
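The kind of subgroup testing such a precedent demands can be expressed very simply. The Python sketch below, using synthetic illustrative data and a hypothetical accuracy_by_group helper, shows how measuring accuracy per demographic group surfaces the sort of performance gap seen in the Ghana example.

    # Minimal sketch: per-group accuracy audit on synthetic, illustrative data.
    from collections import defaultdict

    def accuracy_by_group(records):
        # records: iterable of (group, predicted_label, true_label)
        correct, total = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            total[group] += 1
            correct[group] += int(predicted == actual)
        return {g: correct[g] / total[g] for g in total}

    audit = accuracy_by_group([
        ("group_A", "melanoma", "melanoma"),
        ("group_A", "benign", "benign"),
        ("group_B", "benign", "melanoma"),  # missed diagnosis
        ("group_B", "benign", "benign"),
    ])
    print(audit)  # e.g. {'group_A': 1.0, 'group_B': 0.5}

A large gap between groups is exactly the red flag regulators would look for: it signals that the training data was not representative, before the tool ever reaches a patient.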

A Vision of Health Equity: AI's Promise for a Better Tomorrow

The future of AI in global health is not only about sophisticated algorithms or powerful diagnostic tools. It is about creating a world where a child in a remote village has the same chance of early disease detection as one in a major city. With thoughtful regulation, ethical guidelines, and international cooperation, AI can help make the fundamental human right to healthcare a reality for all. As we step into this new era, let's remember: the true measure of AI's success in healthcare will not be found in technical metrics or legal frameworks alone, but in the lives improved and saved across every continent, culture, and community. The challenge before us is significant, but the potential reward - a healthier, more equitable world - makes it worth every careful step forward.

