HealthAI: The Global Agency for Responsible AI in Health

HealthAI was established in 2023 out of The International Digital Health & AI Research Collaborative (I-DAIR), an independent global platform based in Geneva whose mission was to enable and improve access to inclusive, impactful, and responsible research into digital health and artificial intelligence for health.

Acronym: HealthAI

Established: 2023

Address: Chemin Eugène-Rigot 2A, 1202 Geneva, Switzerland


HealthAI – The Global Agency for Responsible AI in Health – is a Geneva-based non-profit organisation with the mission of advancing the development and adoption of Responsible AI solutions in health through the collaborative implementation of regulatory mechanisms and global standards.

HealthAI envisions a world where artificial intelligence (AI) produces equitable and inclusive improvements in health and well-being for all individuals and communities.

As the premier implementing partner for ensuring that global standards for Responsible AI in health are actively applied, HealthAI works with countries, normative agencies, the private sector, and other stakeholders to build national and regional regulatory capacity. This capacity enables countries to actively validate AI technologies, reducing both the risks and the long-term costs of AI-enabled health care.

With a network of over 45 partners, HealthAI’s work is rooted in three core principles, namely cultivating trust, catalysing innovation, and centring equity.


An Organisational Refresh:

Following four years of operation under The International Digital Health and AI Research Collaborative (I-DAIR), we have transformed into HealthAI: The Global Agency for Responsible AI in Health.

Digital policy issues

HealthAI's new strategy

AI and other emerging technologies have immense potential to improve health and well-being, but they also bring a unique set of risks and challenges that must be addressed to safeguard individuals and communities from potential harms. Globally, a lack of effective governance is increasing these risks and hindering the adoption of Responsible AI solutions that could deliver better health outcomes. Strong, responsive regulatory mechanisms are required to establish the safety and effectiveness of AI systems and to build the trust needed for the long-term acceptability and success of AI-enabled progress in the health sector.

Some countries, mainly those with the highest gross domestic product (GDP) and the most advanced technology sectors, have begun integrating AI regulation into governance structures and national regulations. Most countries have only just begun considering the regulation of AI in general terms, let alone within the context of health. This risks deepening inequity in both access and outcomes between early-adopter countries and countries that do not have the resources or flexibility to match the pace of technological innovation.

Global efforts addressing the need for AI regulation through the harmonisation of existing standards are critical, but they require collaborative partners who can support the implementation of the resulting standards and recommendations at a local level. With its new strategy for 2024-2026, HealthAI positions itself as a premier implementing partner for countries, normative agencies, the private sector, and other stakeholders, ensuring that global standards of Responsible AI in health are actively applied in the push towards improved health and well-being outcomes for all, in alignment with the Sustainable Development Goals (SDGs).

HealthAI’s Core Outputs

To achieve our mission, HealthAI’s work spans four key areas (Figure 1): 

i) Building and certifying national and regional validation mechanisms on Responsible AI in health:

  • Establish in-country, government-led regulatory mechanisms by implementing global standards and guidance set by the World Health Organization (WHO) and others at the country level.
  • Support the implementation of existing auditing tools, and provide guidance on the use of data for AI solutions validation.

ii) Establishing a global regulatory network for knowledge sharing and early warning of adverse events:

  • Facilitate knowledge sharing to streamline the certification of the same technology and to identify AI solutions that require refinement or re-evaluation.
  • Rapidly notify countries of adverse events arising from AI-driven health solutions.

iii) Creating a global public repository of validated AI solutions for health:

  • Allow countries to evaluate solution options against local health needs.
  • Surface unmet health needs as insights and inspiration for technology developers.

iv) Delivering advisory support on policies and regulations:

  • Provide technical guidance and insights into global trends and best practices so as to assist public and private stakeholders in developing effective and contextually relevant strategies, policies, and regulations.
  • Democratise AI for health policy-making through diverse stakeholder and citizen engagement to cultivate trust and improve inclusiveness.

Figure 1 – Responsible AI Solution for Health

These outputs will lead to the following outcomes. Stronger policies, regulations, and institutions will enable the effective governance and validation of AI and other emerging technologies, reducing both the risks and the long-term costs of AI-enabled health care. In the long term, countries will be able to identify validated AI solutions with greater confidence that they are effective in meeting local health needs, while private sector partners will have clarity about regulatory requirements and a better understanding of AI use in health systems and services.

HealthAI’s Impact

HealthAI is dedicated to contributing to enhanced health and well-being outcomes for all in alignment with the SDGs. HealthAI aims to achieve this by facilitating increased access to safe, high-quality, effective, and equitable AI solutions. This involves ensuring that AI solutions are not only safe for use but also comply with rigorous quality standards, delivering the intended health outcomes or system improvements.

HealthAI commits to providing information on market access authorisation and reimbursement processes while supporting an early-warning mechanism to alert countries to adverse events. Through streamlined information sharing between countries and the establishment of a global repository of validated AI solutions, the organisation seeks to expand the availability of proven Responsible AI solutions. Furthermore, HealthAI envisions a positive impact on government revenue from regulatory activities, generating new sources of income for regulatory agencies and government budgets. This financial support is crucial for the sustained funding of regulatory mechanisms and for additional investment capacity, ultimately accelerating approval processes across countries and leading to cost savings and bureaucratic streamlining.

Finally, by fostering an ecosystem that ensures compliance with internationally defined Responsible AI standards, protects national data sovereignty, and supports local validation processes that enable feedback from civil society, HealthAI’s work will increase trust, investment, and innovation in Responsible AI solutions for health.

Definition of Responsible AI

Responsible AI is characterised by AI technologies that align with established standards and ethical principles, prioritising human-centric attributes. In the context of HealthAI, Responsible AI is defined as AI solutions that exhibit ethical, inclusive, rights-respecting, and sustainable qualities. These qualities encompass a commitment to protecting and respecting human autonomy; promoting well-being and safety; ensuring technical robustness; safeguarding privacy and data; adhering to laws and ethics; prioritising transparency and explainability; maintaining responsibility and accountability; fostering inclusivity and equity; upholding diversity and non-discrimination; and considering societal and environmental well-being. HealthAI applies these principles across all facets of AI technologies, from technical development and data use to technology implementation and its ultimate impact. This comprehensive definition is drawn from reputable sources, including the WHO, the International Development Research Centre's AI for Global Health Initiative, the European Commission's High-Level Expert Group on AI, and pertinent journal publications on the ethics and governance of artificial intelligence in health.

Social media channels

LinkedIn @healthaiagency

X @thehealthai

YouTube @I-DAIR