Event date: 23 June 2022, 17:30–18:30 CEST
In this talk, researchers from the Geneva Academy will shed light on examples of cyber operations (e.g. Stuxnet, NotPetya, and SolarWinds) allegedly conducted or sponsored by states, and explicate their geopolitical effects and the challenges they pose to international law. As the importance of ICT grows in the modern world, cyber operations have become an integral part of state and non-state actors’ strategies against other states and actors. Researchers will present their findings as part of the project on disruptive military technologies.
For more information, and to register, please visit the official page.
Event date: 22 June 2022, 9:30–15:00 CEST
The International Geneva Welcome Centre (CAGI) and the CyberPeace Institute have jointly curated an all-day cyber skills event for NGOs. The uptake of cloud-based technologies and the storage of valuable donor and beneficiary data have made NGOs a frequent target of cyberattacks. The reality for NGOs is that they have to safeguard their cybersecurity as much as private businesses do. The event invites local government representatives, cybersecurity experts, and NGOs to partake in testimonial drafting, roundtable discussions, and awareness-raising aimed at boosting NGOs’ cyber skills.
For more information, and to register, please visit the official page.
Address: 3 rue de Varembé, 1211 Geneva 20, Switzerland
Stakeholder group: International and regional organisations
The IEC is the world leader in the preparation and publication of international standards for all electrical, electronic, and related technologies. A global, not-for-profit membership organisation, the IEC provides a neutral and independent institutional framework to over 170 countries, coordinating the work of more than 20,000 experts. The IEC administers four Conformity Assessment Systems, which represent the largest working multilateral agreement based on one-time testing of products globally. The members of each system certify that devices, systems, installations, services, and people perform as required.
IEC International Standards represent a global consensus of state-of-the-art know-how and expertise. Together with conformity assessment, they are foundational for international trade.
IEC Standards incorporate the needs of many stakeholders in every participating country and form the basis for testing and certification. Every member country, with all its stakeholders represented through the IEC National Committees, has one vote and a say in what goes into an IEC International Standard.
The IEC’s work is used in the verification of the safety, performance, and interoperability of electric and electronic devices and systems such as mobile phones, refrigerators, office and medical equipment, or electricity generation. It also helps accelerate digitisation, artificial intelligence (AI), and virtual reality applications; protects information technology (IT) and critical infrastructure systems from cyberattacks; and increases the safety of people and the environment.
The IEC works to ensure that its activities have a global reach in order to meet all the challenges of digital transformation worldwide. The organisation covers an array of digital policy issues.
Digital policy issues
Artificial intelligence and the internet of things
AI applications are driving digital transformation across a diverse range of industries, including energy, healthcare, smart manufacturing, transport, and other strategic sectors that rely on IEC Standards and Conformity Assessment Systems. AI technologies allow insights and analytics that go far beyond the capabilities of legacy analytic systems.
For example, digital transformation of the grid is enabling increased automation, making it more efficient and able to integrate fluctuating renewable energy sources seamlessly. IEC Standards pave the way for the use of a variety of digital technologies relating to smart energy. They deal with issues such as the integration of renewable energies within the electrical network, as well as increased automation.
The IEC’s work in the area of AI takes a three-pronged approach. IEC experts focus on sector-specific needs (vertical standards) and conformity assessment, while the joint IEC and International Organization for Standardization (ISO) technical committee on AI, JTC1/SC 42, brings together technology experts, as well as ethicists, lawyers, social scientists, and others to develop generic and foundational standards (horizontal standards).
In addition, IEC Safety Standards are an essential element of the framework for AI applications in power utilities and smart manufacturing. IEC Conformity Assessment Systems complete the process by ensuring that the standards are properly implemented.
SC 42 addresses some of the concerns about the use and application of AI technologies. For example, data quality standards for ML and analytics are crucial for helping to ensure that applied technologies produce useful insights and eliminate faulty features.
Governance standards in AI and the business process framework for big data analytics address how the technologies can be governed and overseen from a management perspective. International standards in the areas of trustworthiness, ethics, and societal concerns will ensure responsible deployment.
The joint IEC and ISO technical committee also develops foundational standards for the IoT. Among other things, SC 41 standards promote interoperability and provide a reference architecture and a common vocabulary for the IoT.
The IEC develops standards for many of the technologies that support digital transformation. Sensors, cloud, and edge computing are examples.
Advances in data acquisition systems are driving the growth of big data and AI use-cases. The IEC prepares standards relating to semiconductor devices, including sensors.
Cloud computing and its technologies have also supported the increase of AI applications. The joint IEC and ISO technical committee prepares standards for cloud computing including distributed platforms and edge devices, which are situated close to users and data collection points. The publications cover key requirements relating to data storage and recovery.
International Standards play an important role in increasing trust in AI and help support public and private decision-making, not least because they are developed by a broad range of stakeholders. This helps to ensure that the IEC’s work strikes the right balance between the desire to deploy AI and other new technologies rapidly and the need to study their ethical implications.
The IEC has been working with a wide range of international, regional, and national organisations to develop new ways to bring stakeholders together to address the challenges of AI. These include the Swiss Federal Department of Foreign Affairs (FDFA) and the standards development organisations ISO and the International Telecommunication Union (ITU).
More than 500 participants followed the AI with Trust conference, in-person and online, to hear different stakeholder perspectives on the interplay between legislation, standards and conformity assessment. They followed use-case sessions on healthcare, sensor technology, and collaborative robots, and heard distinguished experts exchange ideas on how they could interoperate more efficiently to build trust in AI. The conference in Geneva was the first milestone of the AI with Trust initiative.
The IEC is also a founding member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). OCEANIS brings together standardisation organisations from around the world to enhance awareness of the role of standards in facilitating innovation and addressing issues related to ethics and values.
- IEC Blog
Network security and critical infrastructure
The IEC develops cybersecurity standards and conformity assessments for both IT and operational technology (OT). One of the biggest challenges today is that cybersecurity is often understood only in terms of IT, which leaves critical infrastructure, such as power utilities, transport systems, manufacturing plants, and hospitals, vulnerable to cyberattacks.
Cyberattacks on IT and OT systems often have different consequences. The effects of cyberattacks on IT are generally economic, while cyberattacks on critical infrastructure can impact the environment, damage equipment, or even threaten public health and lives.
When implementing a cybersecurity strategy, it is essential to take the different priorities of cyber-physical and IT systems into account. The IEC provides relevant and specific guidance via two of the world’s best-known cybersecurity standards: IEC 62443 for cyber-physical systems and ISO/IEC 27001 for IT systems.
Both take a risk-based approach to cybersecurity, which is based on the concept that it is neither efficient nor sustainable to try to protect all assets in equal measure. Instead, users must identify what is most valuable and requires the greatest protection and identify vulnerabilities.
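The risk-based prioritisation described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the asset names, the 1–5 rating scales, and the simple impact-times-likelihood score are hypothetical assumptions for illustration, not a methodology taken from IEC 62443 or ISO/IEC 27001.

```python
# Illustrative sketch of risk-based asset prioritisation.
# The data, scales, and scoring formula are hypothetical, not drawn
# from IEC 62443 or ISO/IEC 27001.

def risk_score(impact: int, likelihood: int) -> int:
    """Simple risk rating: impact x likelihood, each on a 1-5 scale."""
    return impact * likelihood

# Hypothetical asset register: (asset, impact 1-5, likelihood 1-5)
assets = [
    ("office printer",       1, 3),
    ("customer database",    5, 4),
    ("plant control system", 5, 2),
    ("public website",       3, 3),
]

# Rank assets so the most valuable and most exposed are protected first,
# rather than protecting all assets in equal measure.
ranked = sorted(assets, key=lambda a: risk_score(a[1], a[2]), reverse=True)

for name, impact, likelihood in ranked:
    print(f"{name}: risk {risk_score(impact, likelihood)}")
```

The point of the sketch is the ordering step: limited security resources go to the top of the ranked list first, which is the essence of the risk-based approach both standards share.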
ISO/IEC 27001 for IT
IT security focuses in equal measure on protecting the confidentiality, integrity, and availability of data – the so-called CIA triad. Confidentiality is of paramount importance, and information security management systems, such as the one described in ISO/IEC 27001, are designed to protect sensitive data such as personally identifiable information (PII), intellectual property (IP), and credit card numbers.
Implementing the information security management system (ISMS) described in ISO/IEC 27001 means embedding information security continuity in business continuity management systems. Organisations are shown how to plan and monitor the use of resources to identify attacks earlier and take steps more quickly to mitigate the initial impact.
IEC 62443 for OT
In cyber-physical systems, where IT and OT converge, the goal is to protect safety, integrity, availability, and confidentiality (SIAC). Industrial control and automation systems (ICAS) run in a loop to check continually that everything is functioning correctly.
The IEC 62443 series was developed because IT cybersecurity measures are not always appropriate for ICAS. ICAS are found in an ever-expanding range of domains and industries, including critical infrastructure, such as energy generation, water management, and the healthcare sector.
ICAS must run continuously to check that each component in an operational system is functioning correctly. Compared to IT systems, they have different performance and availability requirements and equipment lifetime.
Conformity assessment: IECEE
Many organisations apply for certification under the IEC System of Conformity Assessment Schemes for Electrotechnical Equipment and Components (IECEE) to verify that the requirements of IEC 62443 have been met.
IECEE provides a framework for assessments in line with IEC 62443, which specifies requirements for security capabilities, whether technical (security mechanisms) or process (human procedures) related. Successful recipients receive the IECEE industrial cybersecurity capability certificate of conformity.
Conformity assessment: IECQ
While certification to ISO/IEC 27001 has existed since the standard was published in 2013, it is only in recent years that the IEC Quality Assessment System for Electronic Components (IECQ) has established a single, standardised way of assessing and certifying an ISMS against ISO/IEC 27001.
International standards such as IEC 62443 and ISO/IEC 27001 are based on industry best practices and reached by consensus. Conformity assessment confirms that they have been implemented correctly to ensure a safe and secure digital society.
- Cyber Security: Ensuring IEC 62443 is Implemented Correctly
- Understanding IEC 62443
- IECQ Certification, a Crucial Requirement for ISO/IEC 27001
- Eight Things Organizations Should do to Ensure Compliance with Cyber Security Regulations
- Cyber Security for Critical Infrastructure
- Cybersecurity for the Healthcare Sector
- Cybersecurity for Power Utilities and other Cyber Physical Systems
Social media channels
YouTube @IEC – International Electrotechnical Commission
Address: Maison de la Paix, Chemin Eugène-Rigot 2D, 1211 Geneva, Switzerland
DCAF is dedicated to making states and people safer through more effective and accountable security and justice. Since 2000, DCAF has facilitated, driven, and shaped security sector reform (SSR) policy and programming worldwide.
Cyberspace and cybersecurity have numerous implications for security provision, management, and oversight, which is why DCAF is engaged in these topics within its work. DCAF has implemented a cycle of policy projects to develop new norms and good practices in cyberspace. At the operational level, cybersecurity governance has become a prominent part of SSR programming.
Digital policy issues
DCAF supported the drafting of the Global Counterterrorism Forum’s (GCTF) Zurich-London Recommendations on Preventing and Countering Violent Extremism (P/CVE) and Terrorism Online. Subsequently, it co-developed the Policy Toolkit, which transforms these recommendations into practical tools for states. DCAF applies the Policy Toolkit in its work in the Western Balkans. Several UN bodies – as well as the Organization for Security and Co-operation in Europe (OSCE) – are planning to incorporate it into their activities. DCAF has also developed a French-language guide on good practices concerning cyberspace governance for the Ecole nationale à vocation régionale (ENVR) de la cybersécurité in Senegal, mainly targeted at cybersecurity practitioners in Francophone Africa.
DCAF contributes to effective and accountable cybersecurity in Europe and Central Asia by providing practical guidance and support for the governance of the cybersecurity sector; supporting the development of national and international legal and policy frameworks to promote good cybersecurity governance, and facilitating multistakeholder engagement in cybersecurity. This work is organised in several service lines: providing national cybersecurity assessments; developing policy advice; enhancing regional and transnational cooperation between cybersecurity authorities; building the capacity of computer emergency response teams (CERTs); promoting dialogue and coordination between state and non-state cybersecurity actors; and publishing policy research on good governance in cybersecurity. DCAF regularly works with partners, including the International Telecommunication Union (ITU), the Regional Cooperation Council (RCC), the OSCE, and DiploFoundation.
To increase the transparency and accountability of the security sector in the Middle East and North Africa, DCAF supports the automation of internal processes, information sharing, document management systems, and data visualisation and analysis in parliaments, ministries, public administrations, and oversight institutions. Furthermore, four online Sector Observatories (Marsads) provide centralised information and analyses on the Tunisian, Libyan, Palestinian, and Egyptian security sectors and their actors, and three legal databases provide searchable online access to legislation governing the security sectors in Libya, Tunisia, and Palestine. Finally, DCAF has provided legal expertise to national oversight institutions in regard to possible privacy violations and the misuse of COVID-19 apps developed by national governments.
In 2016, DCAF developed a social media guide for ombuds institutions and the armed forces under their jurisdiction to support the use of social media as a safe and effective communication tool.
DCAF uses social media platforms to inform stakeholders and the public about its activities, including in relation to cybersecurity.
Social media channels
YouTube @DCAF Geneva Centre for Security Sector Governance
Acronym: CyberPeace Institute
Address: Campus Biotech Innovation Park, 15 avenue de Sécheron, 1202 Geneva, Switzerland
Stakeholder group: NGOs and associations
The CyberPeace Institute is an independent and neutral non-governmental organisation (NGO) that strives to reduce the frequency, impact, and scale of cyberattacks, to hold actors accountable for the harm they cause, and to assist vulnerable communities.
The Institute works in close collaboration with relevant partners to reduce the harm from cyberattacks on people’s lives worldwide, and provide assistance. By analysing cyberattacks, it exposes their societal impact and how international laws and norms are being violated, and advances responsible behaviour to enforce cyberpeace.
At the heart of the Institute’s efforts is the recognition that cyberspace is about people. It supports providers of essential services to the most vulnerable members of society, such as NGOs and the healthcare sector, ultimately benefitting us all. Attacking them can have a devastating impact on beneficiaries and patients, putting their rights and even lives at risk.
To deliver on this mission, the Institute relies on donations and the generosity of individuals, foundations, companies, and other supporters. This support enables it to assist and support vulnerable communities, including NGOs, to enhance their resilience to cyberattacks.
The Institute also provides evidence-based knowledge and fosters awareness of the impact of cyberattacks on people, to give a voice to and empower victims to highlight the harm and impact of cyberattacks. It reminds state and non-state actors of the international laws and norms governing responsible behaviour in cyberspace, and advances the rule of law to reduce harm and ensure the respect of the rights of people.
Created in 2019, the Institute assesses the impact of cyberattacks from a human perspective, focusing on the rights of people. It grounds its analysis on evidence and the impact on human well-being, telling the story of people, linking with the technical reality of cyberattacks, and assessing it against the violation of laws. The Institute advocates for an evidence-based, human-centric approach to the analysis of cyberattacks as essential to the process of redress, repair, and/or justice for victims. It works collaboratively in its research, analysis, assistance, mobilisation, and advocacy. It engages with vulnerable communities to understand their needs for cybersecurity support and provides free and trusted cybersecurity assistance to vulnerable communities.
The CyberPeace Institute
- assists NGOs and other vulnerable communities to prepare for and recover from cyberattacks.
- investigates cyberattacks targeting vulnerable communities, analysing these attacks to provide alerts and support and for accountability.
- advocates to advance the rule of law and respect for the rights of people.
- anticipates threats to people associated with emerging and disruptive technologies.
Examples of operational activities
- Assisting humanitarian and other NGOs with free and trusted cybersecurity support.
- Analysing cyberattacks and highlighting their impact on people and how they violate the rule of law.
- Documenting violations of international laws and norms and advocating for strengthened legal protection in cyberspace.
- Offering expertise and support to states and civil society in relation to responsible behaviour in cyberspace.
Digital policy issues
Cyberattacks against critical infrastructure have been on the rise, from attacks against hospitals and vaccine supply chains to attacks on the energy sector. When such disruptions occur, access to basic services is at risk. It is vital that there is an increase in the capacity and ability to improve resilience to cyberthreats in critical sectors, such as healthcare. The CyberPeace Institute urges stakeholders in diplomatic, policy, operational, and technical areas to increase their capacity and resilience to cyberthreats.
The Institute advocates for capacity building aimed at enabling states to identify and protect national critical infrastructure and to cooperatively safeguard its operation. This includes capacity building, implementation of norms of responsible behaviour, and confidence building measures. In strengthening efforts to protect critical infrastructure, the Institute calls for the sharing of lessons learned between countries to assist those with less capacity and fewer capabilities.
NGOs in civilian-critical sectors, for example water, food, healthcare, energy, finance, and information, need support and expertise to help them strengthen their cybersecurity capabilities. While these NGOs provide critical services to communities and bridge areas not covered by public and private actors, they lack the resources to protect themselves from cybersecurity threats.
Examples of the Institute’s work in this regard:
- Calls to governments to take immediate and decisive action to stop all cyberattacks on hospitals and healthcare and medical research facilities, as well as on medical personnel and international public health organisations.
- Provision of assistance and capacity building to NGOs that might lack technical expertise and resources, since capacity building is essential for achieving cyber preparedness and resilience across sectors and fields.
- Publication of the strategic analysis report Playing with Lives: Cyberattacks on Healthcare are Attacks on People, and launch of the Cyber Incident Tracer (CIT) #Health platform that bridges the current information gap about cyberattacks on healthcare and their impact on people. This is a valuable source of information for evidence-led operational, policy, and legal decision-makers.
- Monitoring and analysing how cyberattacks and cyber operations are, and have been, targeting critical infrastructure and civilian objects in the armed conflict between Ukraine and the Russian Federation, through the publicly accessible Cyber Attacks in Times of Conflict Platform #Ukraine. The information on cyberattacks can be used to identify developments or clarify the law in relation to the use of cyber operations in armed conflicts, and for accountability in any future judicial proceedings.
NGOs play a critical role in ensuring the delivery of critical services, such as the provision of healthcare, access to food, micro-loans, information, and the protection of human rights.
Malicious actors are already targeting NGOs in an effort to get ransoms and exfiltrate data. Often these NGOs do not have the budget, know-how, or time to effectively secure their infrastructures and develop a robust incident response to manage and overcome sophisticated attacks.
With this in mind, the Institute launched its CyberPeace Builders programme in 2021, a unique network of corporate volunteers providing free pre- and post-incident assistance to NGOs supporting vulnerable populations.
This initiative brings support to NGOs in critical sectors at a level that is unequalled in terms of staff, tools, and capabilities. It assists NGOs with cybersecurity whether they work locally or globally, and supports them in crisis-affected areas across the globe.
The Institute believes that meaningful change can occur when a diversity of perspectives, sectors, and industries work together. To address the complex challenges related to ensuring cyberpeace, it works with a wide range of actors at the global level including governments, the private sector, civil society, academia, philanthropies, policymaking institutions, and other organisations. The Institute contributes by providing evidence-led knowledge, emphasising the need to integrate a genuine human-centric approach in both technical and policy-related projects and processes, and by highlighting the civil society perspective to support and amplify existing initiatives.
To contribute to closing the accountability gap in cyberspace, the Institute seeks to advance the role of international law and norms.
It reminds state and non-state actors of the international law and norms governing responsible behaviour in cyberspace, and contributes to advancing the rule of law to reduce harm and ensure the respect of the rights of people.
Contribution to UN processes
- In 2021–2022, the Institute contributed to and commented on various UN-led processes (notably the United Nations Group of Governmental Experts on Advancing responsible state behaviour in cyberspace in the context of international security (UN GGE) and the Working Group (WG) on the use of mercenaries as a means of violating human rights and impeding the exercise of the rights of peoples to self-determination).
- The Institute has closely followed the work of the UN Open-Ended Working Group (UN OEWG) on developments in the field of information and telecommunications in the context of international security, advocating recognition of the healthcare sector as a critical infrastructure and raising concerns about the lack of commitment towards an actionable and genuine human-centric approach.
- In the Open-Ended Working Group on security of and in the use of information and communications technologies 2021–2025 (OEWG II), the Institute set out three key action areas and related recommendations, and is contributing its expertise in relation to the protection of humanitarian and development organisations from cyberattacks.
Participation in international initiatives: The Paris Call Working Groups
The Paris Call for Trust and Security in Cyberspace is a multistakeholder initiative launched by the French government at the Paris Peace Forum in November 2018. The Call itself sets out nine principles promoting and ensuring the security of cyberspace and the safer use of information and communications technology (ICT).
- To operationalise these principles, in November 2020 six working groups were created to work on various issues that relate to them. The Institute co-led WG5 with colleagues from Geopolitics in the Datasphere [Géopolitique de la Datasphère] and The Hague Centre for Strategic Studies (HCSS).
- The work of this group led to the Final Report published during the Paris Peace Forum 2021. It presents a methodology to facilitate understanding of how the implementation of normative, legal, operational, and technical measures, or the lack thereof, contribute to stability in cyberspace and ultimately to cyberpeace.
- The Institute contributed to WG3: Advancing the UN negotiations with a strong multistakeholder approach, leading to the publication of the final report on Multistakeholder Participation at the UN: The Need for Greater Inclusivity in the UN Dialogues on Cybersecurity.
At the World Economic Forum meeting in Davos, in May 2022, the CyberPeace Institute joined Access Now, the Office of the High Commissioner for Human Rights (OHCHR), Human Rights Watch (HRW), Amnesty International, the International Trade Union Confederation (ITUC), and Consumers International to call on decision-makers to take action and initiate a moratorium limiting the sale, transfer, and use of abusive spyware until people’s rights are safeguarded under international human rights law.
This is in addition to a call made in 2021, in which the Institute joined more than 100 civil society organisations calling for a global moratorium on the sale and transfer of surveillance technology until rigorous human rights safeguards are adopted to regulate such practices and guarantee that governments and non-state actors don’t abuse these capabilities.
Digital technology plays an important role in conflict mediation and global peacebuilding. It can extend inclusion, allowing more women or people from marginalised groups to take part in or follow a mediation process. It can make mediation faster and more efficient and can allow mediators to draw on resources from around the world.
However, digital technology brings risks, too. It can increase polarisation, for example, and allow disinformation to spread to more people, more quickly. It can increase vulnerability to malicious actors, spying, and data breaches. These risks can undermine trust in the process.
Mediators work in low-trust, volatile contexts and don’t always have the knowledge to assess the risks posed by digital technology. A new online platform helps to raise awareness of those risks, as well as offering training on how to deal with them. The Digital Risk Management E-Learning Platform for Mediators was created in 2021 by the CyberPeace Institute, CMI – Martti Ahtisaari Peace Foundation, and the UN Department of Political and Peacebuilding Affairs (UNDPPA) Mediation Support Unit.
As part of its integration and engagement with the stakeholder ecosystem in Geneva, the Institute is a member of the Geneva Chamber of Commerce, Industry and Services (CCIG). Various academic collaborations are ongoing through participation in conferences, workshops, and lectures, namely with the Centre for Digital Trust (C4DT) at the Ecole Polytechnique Fédérale de Lausanne (EPFL), the University of Geneva (UNIGE), and the Graduate Institute (IHEID). In 2020, the Institute formed a strategic partnership with the SwissTrust Valley for Digital Transformation and Cybersecurity.
The Institute and its staff have received several awards for innovative and continuous efforts promoting cyberpeace, including the Geneva Centre for Security Policy (GCSP) 2020 second prize for Innovation in Global Security and the 2021 Prix de l’Economie from the CCIG.
Address: Palais Wilson 52, Rue des Pâquis, 1201 Geneva, Switzerland
Stakeholder group: International and regional organisations
The Office of the United Nations High Commissioner for Human Rights and other related UN human rights entities – namely the United Nations Human Rights Council, the Special Procedures, and the Treaty Bodies – are considered together in this section.
The UN Human Rights Office, formally the Office of the High Commissioner for Human Rights (OHCHR), is the principal UN entity on human rights. Headed by the High Commissioner for Human Rights and also known as UN Human Rights, it is part of the UN Secretariat. UN Human Rights has been mandated by the UN General Assembly (UNGA) to promote and protect all human rights. As such, it plays a crucial role in supporting the three fundamental pillars of the UN: peace and security, human rights, and development. UN Human Rights provides technical expertise and capacity development in regard to the implementation of human rights, and in this capacity assists governments in fulfilling their obligations.
UN Human Rights is associated with a number of other UN human rights entities. To illustrate, it serves as the secretariat for the UN Human Rights Council (UNHRC) and the Treaty Bodies. The UNHRC is a body of the UN that aims to promote the respect of human rights worldwide. It discusses thematic issues, and in addition to its ordinary session, it has the ability to hold special sessions on serious human rights violations and emergencies. The ten Treaty Bodies are committees of independent experts that monitor the implementation of the core international human rights treaties.
The UNHRC established the Special Procedures, which are made up of UN Special Rapporteurs (i.e. independent experts or working groups) working on a variety of human rights thematic issues and country situations to assist the efforts of the UNHRC through regular reporting and advice. The Universal Periodic Review (UPR), under the auspices of the UNHRC, is a unique process that involves a review of the human rights records of all UN member states, providing the opportunity for each state to declare what actions they have taken to improve the human rights situations in their countries. UN Human Rights also serves as the secretariat to the UPR process.
Certain non-governmental organisations (NGOs) and national human rights institutions participate as observers in UNHRC sessions after receiving the necessary accreditation.
Digital issues are increasingly gaining prominence in the work of UN Human Rights, the UNHRC, the Special Procedures, the UPR, and the Treaty Bodies.
A landmark document that provides a blueprint for digital human rights is the UNHRC resolution (A/HRC/20/8) on the promotion, protection, and enjoyment of human rights on the internet, which was first adopted in 2012, starting a string of regular resolutions with the same name addressing a growing number of issues. All resolutions affirm that the same rights that people have offline must also be protected online. Numerous other resolutions and reports from UN human rights entities and experts considered in this overview tackle an ever-growing range of other digital issues including the right to privacy in the digital age; freedom of expression and opinion; freedom of association and peaceful assembly; the rights of older persons; racial discrimination; the rights of women and girls; human rights in the context of violent extremism online; economic, social, and cultural rights; human rights and technical standard-setting; business and human rights; and the safety of journalists.
Digital policy issues
In 2018, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression presented a report to the UNGA on Artificial Intelligence (AI) Technologies and Implications for the Information Environment. Among other things, the document addresses the role of AI in the enjoyment of freedom of opinion and expression, including ‘access to the rules of the game when it comes to AI-driven platforms and websites’, and therefore calls for a human rights-based approach to AI.
For her 2020 thematic report to the Human Rights Council, the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance analysed different forms of racial discrimination in the design and use of emerging technologies, including the structural and institutional dimensions of this discrimination. She followed up with reports examining how digital technologies, including AI-driven predictive models, deployed in the context of border enforcement and administration reproduce, reinforce, and compound racial discrimination.
In 2020, the Committee on the Elimination of Racial Discrimination published its General Recommendation No. 36 on preventing and combating racial profiling by law enforcement officials (CERD/C/GC/36), which focuses on algorithmic decision-making and AI in relation to racial profiling by law enforcement officials.
Child safety online
The issue of child safety online has garnered the attention of UN human rights entities for some time. A 2016 resolution on Rights of the Child: Information and Communications Technologies and Child Sexual Exploitation adopted by the UNHRC calls on states to ensure ‘full, equal, inclusive, and safe access […] to information and communications technologies by all children and safeguard the protection of children online and offline’, as well as the legal protection of children from sexual abuse and exploitation online. The Special Rapporteur on the sale and sexual exploitation of children, including child prostitution, child pornography, and other child sexual abuse material, mandated by the UNHRC to analyse the root causes of sale and sexual exploitation and promote measures to prevent it, also looks at issues related to child abuse, such as the sexual exploitation of children online, which was addressed in a 2020 report (A/HRC/43/40) as well as in earlier reports.
The Committee on the Rights of the Child published its General Comment No. 25 on Children’s Rights in Relation to the Digital Environment (CRC/C/GC/25), which lays out how states parties should implement the convention in relation to the digital environment. It provides guidance on relevant legislative, policy, and other measures to ensure full compliance with their obligations under the convention and its optional protocols, in light of the opportunities, risks, and challenges in promoting, respecting, protecting, and fulfilling all children’s rights in the digital environment.
UN Human Rights maintains an online platform consisting of a number of databases on anti-discrimination and jurisprudence, as well as the Universal Human Rights Index (UHRI), which provides access to recommendations issued to countries by Treaty Bodies, Special Procedures, and the UPR of the UNHRC.
UN Human Rights also published a report titled A Human Rights-Based Approach to Data – Leaving no one Behind in the 2030 Agenda for Sustainable Development that specifically focuses on issues of data collection and disaggregation in the context of sustainable development.
UN Human Rights has worked closely with partners across the UN system in contributing to the Secretary-General’s 2020 Data Strategy, and co-leads, with the Office of Legal Affairs and UN Global Pulse, work on the subsequent Data Protection and Privacy Program.
UN Human Rights launched the Guiding Principles in Technology Project (B-Tech Project) to provide guidance and resources to companies operating in the technology space with regard to the implementation of the UN Guiding Principles on Business and Human Rights (UNGPs on BHR). Following the publication of a B-Tech scoping paper in 2019, several foundational papers have delved into a broad range of business-related issues, from business-model-related human rights risks to access to remedies. At the heart of the B-Tech project lies multistakeholder engagement, which informs all of its outputs. The B-Tech project is also enhancing its engagement in Africa, working with technology companies, investors, and other key digital economy stakeholders, including civil society, in a set of African economies and their tech hubs to raise awareness of implementing the UNGPs on BHR.
Following a multistakeholder consultation held on 7–8 March 2022, the High Commissioner presented her report on UN Guiding Principles on Business and Human Rights and Technology Companies (A/HRC/50/56), which demonstrated the value and practical application of the UNGPs in preventing and addressing adverse human rights impacts by technology companies.
Extreme poverty
The Special Rapporteur on extreme poverty and human rights has in recent years increasingly analysed human rights issues arising in the context of growing digitisation and automation. His 2017 report to the General Assembly tackled the socio-economic challenges of an emerging world in which automation and AI threaten traditional sources of income, and analysed the promises and possible pitfalls of introducing a universal basic income. His 2019 General Assembly report addressed worrying trends in connection with the digitisation of the welfare state. Moreover, in his 2022 report to the UNHRC on the non-take-up of rights in the context of social protection, the Special Rapporteur highlighted, among other things, the benefits and considerable risks associated with the automation of social protection processes.
Geneva-based human rights organisations and mechanisms have consistently addressed content policy questions, in particular in the documents referred to under Freedom of Expression and Freedom of Peaceful Assembly and of Association. Other contexts where content policy plays an important role include Rights of the Child, Gender Rights Online, and Rights of Persons with Disabilities. Moreover, the use of digital technologies in the context of terrorism and violent extremism is closely associated with content policy considerations.
UN Human Rights, at the request of the UNHRC, prepared a compilation report in 2016, which explores, among other issues, aspects related to the prevention and countering of violent extremism online, and underscores that responses to violent extremism that are robustly built on human rights are more effective and sustainable.
Additional efforts were made in 2019 when the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism published a report where she examined the multifaceted impacts of counter-terrorism measures on civic space and the rights of civil society actors and human rights defenders, including measures taken to address vaguely defined terrorist and violent extremist content. In July 2020, she published a report discussing the human rights implications of the use of biometric data to identify terrorists and recommended safeguards that should be taken.
Collaboration within the UN system
UN Human Rights is a member of the Secretary-General’s Reference Group and contributed to the development of his Strategy on New Technologies in 2018. The OHCHR was co-champion of the follow-up on two human-rights-related recommendations of the Secretary-General’s High-Level Panel on Digital Cooperation. The outcomes of this process were the basis of the Secretary-General’s Roadmap for Digital Cooperation, presented in June 2020.
As part of the implementation of the Secretary-General’s Call to Action for Human Rights, UN Human Rights launched the UN Hub for Human Rights and Digital Technology, which provides a central repository of authoritative guidance from various UN human rights mechanisms on the application of human rights norms to the use and governance of digital technologies.
Moreover, as requested by the Secretary-General’s Roadmap for Digital Cooperation and the Secretary-General’s Call to Action for Human Rights, UN Human Rights is leading a UN system-wide process to develop human rights due diligence (HRDD) guidance for digital technology. The HRDD guidance in development pertains to the application of human rights due diligence and human rights impact assessment related to the UN’s design, development, procurement, and use of digital technologies, and is expected to be completed by the end of 2022.
UN Human Rights participated in the UNESCO-led process to develop ethical standards for AI. The UNESCO Recommendation on the Ethics of AI, which aims to protect human rights and serve as an ethical compass in the use of AI, was adopted by UNESCO member states at UNESCO’s General Conference in November 2021. As a strong contributor to the Inter-Agency Working Group on AI, UN Human Rights also provided feedback on the ethical principles of AI for the UN system.
In addition, UN Human Rights is a member of the Legal Identity Agenda Task Force, which promotes solutions for the implementation of SDG target 16.9 (i.e. by 2030, provide legal identity for all, including birth registration). Within the task force, UN Human Rights leads the work on exclusion and discrimination in the context of digitised identity systems.
Rapid advancements in neurotechnology and neuroscience, while holding promises of medical benefits and scientific breakthroughs, present a number of human rights and ethical challenges. Against this backdrop, UN Human Rights has been contributing significantly to an inter-agency process led by the Executive Office of the Secretary-General to develop a global roadmap for effective and inclusive governance of neurotechnology.
Secretary-General’s Report on Our Common Agenda
Since the adoption of A/RES/76/6 on Our Common Agenda in November 2021, the follow-up by the UN system has been underway. UN Human Rights is co-leading several proposals in collaboration with other entities, notably on the application of the human rights framework in the digital sphere, mitigation and prevention of internet shutdowns, and disinformation.
Privacy and data protection
Challenges to the right to privacy in the digital age, such as surveillance, communications interception, and the increased use of data-intensive technologies, are among the issues covered by the activities of UN Human Rights. At the request of the UNGA and the UNHRC, the High Commissioner prepared four reports on the right to privacy in the digital age. The first report, presented in 2014, addressed the threat to human rights caused by surveillance by governments, in particular mass surveillance. The ensuing report, published in September 2018, identified key principles, standards, and best practices regarding the promotion and protection of the right to privacy, and outlined minimum standards for data privacy legal frameworks. In September 2021, the High Commissioner presented a ground-breaking report on AI and the right to privacy (A/HRC/48/31), in which she called for a ban on AI applications that are incompatible with international human rights law, and stressed the urgent need for a moratorium on the sale and use of AI systems that pose serious human rights risks until adequate safeguards are put in place. In September 2022, the High Commissioner presented a report focusing on the abuse of spyware by public authorities, the key role of encryption in ensuring the enjoyment of human rights in the digital age, and the widespread monitoring of public spaces.
The UNHRC also tackles online privacy and data protection. Resolutions on the promotion and protection of human rights on the internet have underlined the need to address security concerns on the internet in accordance with international human rights obligations to ensure the protection of all human rights online, including the right to privacy. The UNHRC has also adopted specific resolutions on the right to privacy in the digital age, addressing issues such as mass surveillance, AI, the responsibility of business enterprises, and the key role of the right to privacy as an enabler of other human rights. Resolutions on the safety of journalists have emphasised the importance of encryption and anonymity tools for journalists to freely exercise their work. Two resolutions on new and emerging technologies (2019 and 2021) have further broadened the lens, for example by asking for a report on the human rights implications of technical standard-setting processes.
The UNHRC has also mandated the Special Rapporteur on the right to privacy to address the issue of online privacy in its 2015 Resolution on the Right to Privacy in the Digital Age (A/HRC/RES/28/16). To illustrate, the Special Rapporteur has addressed the question of privacy from the stance of surveillance in the digital age (A/HRC/34/60), which becomes particularly challenging in the context of cross-border data flows. More recently, specific attention has been given to the privacy of health data, which is generated in ever greater volumes in the age of digitalisation and requires the highest legal and ethical standards (A/HRC/40/63). In this vein, in 2020, the Special Rapporteur examined data protection and surveillance in relation to COVID-19 and contact tracing in a preliminary report (A/75/147), followed in 2021 by a more definitive analysis of how pandemics can be managed with respect to the right to privacy (A/76/220). In another 2020 report (A/HRC/43/52), the Special Rapporteur provided a set of recommendations on privacy in the online space, calling for, among other things, ‘comprehensive protection for secure digital communications, including by promoting strong encryption and anonymity-enhancing tools, products, and services, and resisting requests for “backdoors” to digital communications’ and recommending that ‘government digital identity programs are not used to monitor and enforce societal gender norms, or for purposes that are not lawful, necessary, and proportionate in a democratic society.’
The Special Rapporteur also addressed the challenges of AI and privacy, as well as children’s privacy, particularly the role of privacy in supporting autonomy and positive participation of children in society, in his report in 2021 (A/HRC/46/37).
Lastly, in 2022, the Special Rapporteur examined developments in privacy and data protection in Ibero-America in her report titled Privacy and Personal Data Protection in Ibero-America: A Step Towards Globalization? (A/HRC/49/55).
Freedom of expression
The High Commissioner and her office advocate for the promotion and protection of freedom of expression, including in the online space. Key topics in this advocacy are the protection of the civic space and the safety of journalists online; various forms of information control, including internet shutdowns and censorship; addressing incitement to violence, discrimination, or hostility; disinformation; and the role of social media platforms in the space of online expression.
Freedom of expression in the digital space also features prominently on the agenda of the UNHRC. It has often been underlined that states have a responsibility to ensure adequate protection of freedom of expression online, including when they adopt and implement measures aimed at dealing with issues such as cybersecurity, incitement to violence, and the promotion and distribution of extremist content online. The UNHRC has also been firm in condemning measures to intentionally prevent or disrupt access to or the dissemination of information online, and has called on states to refrain from and cease such measures.
In 2021, at the request of the UNHRC (A/HRC/47/22), the High Commissioner prepared a report on internet shutdowns (A/HRC/50/55), which looks at trends in internet shutdowns, analysing their causes, their legal implications, and their impact on a range of human rights, including economic, social, and cultural rights. She called on states to refrain from imposing internet shutdowns of any kind and on companies to uphold their responsibilities to respect human rights. She also stressed the need for development agencies and regional and international organisations to link their digital connectivity efforts with efforts to address internet shutdowns.
UN Human Rights also weighs in on a range of law-making processes that are relevant to the exercise of the right to freedom of expression. For example, it has engaged with the development of the EU Digital Services Act, commented extensively on global trends in regulating social media, and participated in the process of elaborating a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes.
Special Rapporteurs on the promotion and protection of the right to freedom of opinion and expression have been analysing issues relating to free expression in the digital space for more than a decade. Reports in the first half of the 2010s already addressed the importance of universal access to the internet for the enjoyment of human rights, free expression in the context of elections, and the adverse impacts of government surveillance on free expression. In 2018, the Special Rapporteur published a report on online content regulation. It tackles governments’ regulation of user-generated online content, analyses the role of companies, and recommends that states should ensure an enabling environment for online freedom of expression and that businesses should rely on human rights law when designing their products and services. The same year, he also presented to the UNGA a report addressing freedom of expression issues linked to the use of AI by companies and states. A year later, the Special Rapporteur presented a report to the UNGA on online hate speech that discusses the regulation of hate speech in international human rights law and how it provides a basis for governmental actors considering regulatory options and for companies determining how to respect human rights online.
In 2020, the Special Rapporteur issued Disease Pandemics and the Freedom of Opinion and Expression, a report that specifically tackles issues such as access to the internet, which is highlighted to be ‘a critical element of healthcare policy and practice, public information, and even the right to life’. The report calls for greater international coordination on digital connectivity given the importance of digital access to healthcare information. Other reports addressed the vital importance of encryption and anonymity for the exercise of freedom of opinion and the threats to freedom of expression emanating from widespread digital surveillance.
The Special Rapporteur, while acknowledging the complexities and challenges posed by disinformation in the digital age, noted that responses by states and companies to counter disinformation have been inadequate and detrimental to human rights. In her 2021 report Disinformation and Freedom of Opinion and Expression (A/HRC/47/25), she examined the threats posed by disinformation to human rights, democratic institutions, and development processes, and called for multidimensional and multistakeholder responses to disinformation that are well grounded in the international human rights framework. She also urged companies to review their business models and states to recalibrate their responses to disinformation.
More recently, in 2022, the Special Rapporteur issued Reinforcing Media Freedom and the Safety of Journalists in the Digital Age (A/HRC/50/29), a report in which she calls on states and the international community to strengthen multistakeholder cooperation to protect and promote media freedom and the safety of journalists in the digital age, and ensure independence, pluralism, and viability of the media. She also calls on digital services companies and social media platforms to respect the UNGPs on BHR.
Online hate speech and discrimination have also been addressed by the Special Rapporteur on freedom of religion and belief. For instance, in a report published in 2019, the online manifestation of antisemitism (including antisemitic hate speech) was underscored, and best practices from the Netherlands and Poland were shared. The report highlights that governments ‘have an affirmative responsibility to address online antisemitism, as the digital sphere is now the primary public forum and marketplace for ideas’. In another document published that same year, the Special Rapporteur assesses the impact of online platforms on discrimination and on the perpetuation of hostile and violent acts in the name of religion, as well as how restrictive measures such as blocking and filtering of websites negatively impact the freedom of expression.
The issue of online blasphemy and undue limitations on expressing critical views of religions and beliefs imposed by governments has also been addressed on a number of occasions, including in a report from 2018.
Gender rights online
UN Human Rights and the UNHRC have reiterated on several occasions the need for countries to bridge the gender digital divide and enhance the use of ICTs, including the internet, to promote the empowerment of all women and girls. They have also condemned gender-based violence committed on the internet. Implementing a 2016 UNHRC resolution on the Promotion, Protection, and Enjoyment of Human Rights on the Internet, the High Commissioner for Human Rights in 2017 prepared a report on Ways to Bridge the Gender Digital Divide from a Human Rights Perspective.
Rights of persons with disabilities
The promotion and protection of the rights of persons with disabilities in the online space have been addressed on several occasions by the UN Special Rapporteur on the rights of persons with disabilities. A report from 2016 underscored that ICTs, including the internet, can increase the participation of persons with disabilities in public decision-making processes, and that states should work towards reducing the access gap between those who can use ICTs and those who cannot.
Nevertheless, a report from 2019 stressed that the shift to e-governance and service delivery in a digital manner can hamper access for older persons with disabilities who may lack the necessary skills or equipment.
The Special Rapporteur also examined the opportunities and risks posed by AI, including the discriminatory impacts of AI-driven decision-making systems. In his 2021 report (A/HRC/49/52), the Special Rapporteur emphasises the importance of disability-inclusive AI and the inclusion of persons with disabilities in conversations about AI.
Freedom of peaceful assembly and association
The exercise of the rights to freedom of peaceful assembly and association in the digital environment has attracted increasing attention in recent years. For example, the High Commissioner presented to the 44th session of the UNHRC a report on new technologies such as ICTs and their impact on the promotion and protection of human rights in the context of assemblies, including peaceful protests. The report highlighted many of the great opportunities for the exercise of human rights that digital technologies offer, analysed key issues linked to online content takedowns, and called on states to stop the practice of network disruptions in the context of protests. It also developed guidance concerning the use of surveillance tools, in particular facial recognition technology.
The Human Rights Committee published in July 2020 its General Comment No. 37 on Article 21 of the International Covenant on Civil and Political Rights (ICCPR) (right of peaceful assembly), which addresses manifold aspects arising in the digital context.
The Special Rapporteur on the rights to freedom of peaceful assembly and of association in 2019 published a report for the UNHRC focusing on the opportunities and challenges facing the rights to freedom of peaceful assembly and of association in the digital age. In subsequent reports, he condemned the widespread practice of internet shutdowns and raised concerns about technologically mediated restrictions on free association and assembly in the context of crises.
Technical standard-setting and human rights
The UNHRC adopted resolution A/HRC/RES/47/23 on new and emerging digital technologies and human rights, which requested UN Human Rights to convene an expert consultation and write a report discussing the relationship between human rights and technical standard-setting processes for new and emerging digital technologies. The expert consultation and the report will be presented to the UNHRC in 2023.
The same resolution (A/HRC/RES/47/23) also requested the High Commissioner’s report on UN Guiding Principles on Business and Human Rights and Technology Companies (A/HRC/50/56), presented following a multistakeholder consultation held on 7–8 March 2022, which demonstrated the value and practical application of the UNGPs in preventing and addressing adverse human rights impacts by technology companies.
The UNHRC has developed an e-learning tool, under the mandate of the Trust Fund, to assist government officials from least developed countries and small island developing states (SIDS) in developing competencies on the UNHRC and its mechanisms.