Strengthening a human-centred transformation: Rules for the digital world.




 The digital transformation is advancing rapidly. To manage the benefits and risks of digital technologies, especially those based on artificial intelligence, the digital world needs rules grounded in globally acknowledged principles such as inclusivity, fairness, transparency, and respect for human rights. On a global level, the United Nations has been shaping a process of furthering digital cooperation, which culminated in the adoption of the “Global Digital Compact” in 2024. This has been accompanied by national and regional initiatives to regulate digital technologies, leading to a growing fragmentation of digital governance. At the same time, more and more countries try to assert their “digital sovereignty” in order to control their own digital infrastructure, data, and technology. This ambition, however, is undercut by the technological dominance of the United States and China and by the oligopolies of their leading tech companies. This chapter therefore argues for the promotion of global digital public goods in order to enlarge the range of digital choices countries can make.



When the High-Level Panel on Digital Cooperation, appointed by the United Nations (UN) Secretary-General, issued its report “The Age of Digital Interdependence” in 2019, it already predicted that the “speed and scale of change is increasing” and that “the agility, responsiveness and scope of cooperation and governance mechanisms needs rapidly to improve”. The actual speed with which applications based on artificial intelligence (AI) have developed since then not only raises high expectations of furthering human progress but also arouses fears of misuse and of deepening social divisions. The former “laissez-faire” global governance regime applied to providers of digital technologies has therefore come under pressure, since the social impacts of digital platforms and technologies – such as growing disinformation and polarization within and between societies, or systematic violations of data privacy – are felt more profoundly than before. Both the benefits and the risks transcend national borders. Various models of regulating the development and application of digital technologies can be observed globally; over the past years, regulation of digital technologies, especially AI systems, has increased starkly, mainly at the national or regional level. The ways and principles of doing so, however, differ, especially between the three main economic contenders: the United States, China, and the European Union (EU). Consequently, the term “digital cooperation” was coined to capture “the ways of working together to address the societal, ethical, legal and economic impacts of digital technologies in order to maximize benefits to society and minimize harms”. In 2020, amidst the COVID-19 pandemic, the UN devised a “Roadmap for Digital Cooperation”, emphasizing that digital cooperation, as a joint multi-stakeholder effort, will be instrumental in achieving the Sustainable Development Goals (SDGs).
The UN’s work to further digital cooperation culminated in the “Global Digital Compact”, adopted by the UN General Assembly as part of the “Pact for the Future” in September 2024. Geopolitical tensions and the growing spread of autocracies, combined with the demise of the liberal international order, have, however, led more and more countries and regional players to emphasize their “digital sovereignty”, a concept related to regulating digital spaces and strengthening technological autonomy. This chapter engages with this trend towards a growing fragmentation of digital governance, with competing institutional and policy approaches, in the face of a general backlash against multilateral cooperation. Nevertheless, to manage the benefits and risks of digital technologies, especially those based on AI, the digital world needs rules grounded in globally acknowledged principles such as inclusivity, fairness, transparency, and respect for human rights. In an age of digital interdependence, however, the exertion of “digital sovereignty” can only be extended beyond the circle of economically and politically powerful actors if the number of global digital public goods increases.



The simple fact that digital technologies transcend national borders renders purely national approaches insufficient. Historically, global cooperation on information and communication technologies (ICTs) started with regulating global connectivity issues by providing the necessary infrastructure. This also led to shared standards and secured interoperability, which predominantly fell into the domain of the International Telecommunication Union (ITU) and later, with the advent of digital technologies, other technical bodies like the Internet Corporation for Assigned Names and Numbers (ICANN). Technical compatibility means that networks, devices, and services work across borders, and harmonized frameworks for cross-border data flows enable, for instance, global supply chains and other economic or social exchanges. Multilateral cooperation is needed for two additional reasons: on the one hand, to cope with the risks and harms associated with digital technologies, especially infringements of individual rights, such as data privacy violations, and cybersecurity threats, e.g. cyberattacks, cybercrime, and disinformation campaigns. On the other hand, digitalization is also seen as a means to further the achievement of the UN Sustainable Development Goals (SDGs). The “SDG Digital Acceleration Agenda”, a joint initiative led by the ITU and the UN Development Programme (UNDP), expects “game-changing digital solutions [… to] accelerate progress in climate action, education, hunger, poverty and at least 70 per cent of the 169 SDG targets”. For the UN, digital cooperation has to focus on a “human-centred digital transformation”, which is explicitly linked to the 17 SDGs.
Moreover, with progress on implementing the SDGs stalling, aligning the digital transformation with the SDGs and environmental sustainability is one of the overarching guiding principles that is also shared, amongst others, by the G20 and the Organization for Economic Co-operation and Development (OECD) [see Figure 1].


Principles guiding the global digital transformation on a normative and operational level


Since much of the work of the UN and other multilateral bodies revolves around norm-setting, safeguarding civil and human rights in the digital age has become one of the most-cited principles in digital cooperation. With the “Framework Convention on Artificial Intelligence”, the Council of Europe opened the first-ever legally binding international treaty for signature in September 2024, which “aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation”. According to the UN Global Digital Compact, the goal of the current digital transformation should be “an inclusive, open, sustainable, fair, safe and secure digital future for all” (UN 2024: Annex I, Para. 4). The realities of today’s global access to ICTs, however, still reflect the ongoing digital divide between high-income and low-income countries, as key ICT indicators like internet access show [see Figure 2]. This is why ensuring universal and equitable digital access ranks highly amongst the normative principles. On an operational level, the various catalogues of principles can be condensed into four principles that also reflect the historical development of digital technologies: 1) open and interoperable systems, 2) safe and trustworthy digital environments, 3) equitable data governance, and 4) responsible AI.

Percentage of individuals using the internet for the World and special regions (2005–2024)



Open and interoperable systems like the internet are necessary to preserve net neutrality and global digital spaces. Governing the internet has never been a purely technical issue, as the history of the World Summit on the Information Society shows, which led to the creation of the multistakeholder Internet Governance Forum in 2006 as the principal discussion forum on issues of internet governance. To protect users against online harm, misinformation, and security challenges, it is necessary to create safe and trustworthy digital environments. Many consider data the most valuable resource that can be mined in the digital age; hence, the principle of equitable data governance is key to enabling a privacy-respecting, interoperable, and inclusive use of data. Finally, with the global roll-out of more and more AI-based applications, the question of “responsible AI” has become of utmost importance. This has led the UN Secretary-General’s multistakeholder High-Level Advisory Body on Artificial Intelligence to call for “Governing AI for humanity”, based on inclusive and risk-based approaches. Although the risks and problems of digital technologies were known before, they have been exacerbated by the rapid development of AI-based applications and have subsequently resulted in numerous national and multilateral regulations worldwide, based on different regulatory approaches.




Before the first version of ChatGPT was released in November 2022, the governance of digitalization was widely characterized by an industry-driven, market-based approach with little centralized regulation (“laissez-faire”). With more and more technical, especially AI-based, applications being rolled out globally, it became obvious that not only digital technologies but also the digital industry itself has to be regulated – to cope with the problems that were already visible and to deal with the emerging risks of the new technologies [see Table 1]. All the more so, since a small number of big tech companies (like Alphabet/Google, Amazon, Apple, Meta, or Microsoft in the United States and Alibaba, Baidu, Tencent, or Huawei in China) have established global oligopolistic power over data flows, platform technologies, social media, and e-commerce, through which they generate not only economic but also political and cultural power. Their economic size and financial wealth allow them to buy any competitor that might threaten their market position, thus contradicting the liberal assumption of competition in free markets. Their wealth also gives them the opportunity to lobby for the kind of (non-)regulation suitable to protect their business models. By the time Elon Musk bought Twitter and transformed it into his political mouthpiece X, it became clear that control over content also means influence over public discourse. The emergence of more and more AI-based applications has led to a consolidation of big tech power, since these companies can recruit the necessary human resources and have large training datasets and sufficient computational power at their disposal.


Technical applications and related problems

The development, deployment, and use of digital technologies and devices involve many resources. Especially so-called generative AI technologies, which can create original content such as text, images, videos, audio, or software code in response to user prompts, leave a considerably larger environmental footprint simply because of the amount of computing power, and hence energy, that is needed. Therefore, AI-based technologies are looked upon as amplifying or exacerbating all the risks that were associated with individual digital technologies and applications before [see Figure 3]. To cope with these risks, many national and multilateral regulations on AI have meanwhile been adopted. In July 2025, the OECD introduced “GAIIN – the Global AI Initiatives Navigator”, a living repository to track public AI policies and initiatives worldwide. When launched, it listed more than 1,300 entries from over 80 jurisdictions and international organizations.


AI-based technologies are amplifying existing risks of digital technologies

In terms of economic weight, three players stand out: the United States, China, and the EU. Interestingly, all follow different regulatory models, which Anu Bradford characterized as different approaches of “digital empires” to shape the global digital order: market-driven (United States), state-driven (China), and rights-driven (EU) [see Table 2]. As Bradford points out, “the three jurisdictions have all had to balance their support of technological innovation with the implications those technologies have for civil liberties, the distribution of wealth, international trade, social stability, and national security, among other key policy concerns”. During the Biden administration, the market-driven approach of the United States had come under scrutiny. The current Trump administration, however, has renewed its commitment to relying on a free market with minimal government intervention, “to develop AI systems free from ideological bias or engineered social agendas”. Although there are growing concerns about the power concentrated in big tech companies and about data privacy in the United States, the Trump administration favours self-regulation by the tech industry and advocates almost no political regulation. Along these lines, with an Executive Order issued shortly after his inauguration in January 2025, Trump mandated that all barriers to AI innovation be removed in order to enhance U.S. leadership in AI research and development. This also stems from a techno-libertarian view that any government intervention would undermine individual freedom, exemplified in the emphasis on protecting free speech.

Three competing regulatory models


China, in contrast, subordinates the digital economy to state control. It encourages technological innovation through active state interventions to maximize the country’s technological dominance. The state itself, however, uses digital technologies for censorship and surveillance of its citizens to ensure – from a Chinese point of view – social harmony. The active promotion of Chinese digital infrastructure and technology in countries of the Global South, e.g. through the Digital Silk Road component of its Belt and Road Initiative, has invited criticism that China is exporting digital authoritarianism. However, non-Chinese companies, too, have sold their surveillance technologies worldwide (Heeks et al. 2024: 81–83). Remarkably, as Heeks et al. emphasize, “China’s digital expansion is not merely technological, but also institutional” (Heeks et al. 2024: 84), since the technology is related not only to informal norms (management culture, views on human rights) and formal norms (like the choice of currency for economic transactions) but also to formal institutions like regulations and standards.
The EU starts from a similar assumption as the United States, since digital technologies are looked upon as promoting individual liberty and freedom in society. In contrast to the United States and China, however, the EU puts the protection of fundamental rights, like data privacy, and the preservation of the democratic structures of its societies at the centre of its regulatory approach. The human-centric approach of the EU also emphasizes that fair markets are needed, especially to guarantee a fair distribution of the benefits reaped by the digital economy. This also leads to a different conception of the roles of the tech industry and the state, since EU regulation also tries to protect the rights of citizens both vis-à-vis the tech companies and the state.
In terms of influencing the global regulation of digital technologies, the rights-based approach of the EU has also been shaping norms beyond its borders, since major firms have preemptively tried to align with its standards, even before the respective regulations entered into force. Or, as was the case with the EU’s General Data Protection Regulation (GDPR), big tech companies implemented the regulation not only in the EU but globally, since this saved them from adapting their software and services to different jurisdictions with fewer requirements. This kind of regulatory diffusion is also called the “Brussels effect”, which reflects the EU’s regulatory power in the global marketplace. With the “Artificial Intelligence Act” (AI Act), the EU acted as a norm-setter on AI regulation by establishing a binding, comprehensive regulatory and legal framework for the development and use of AI within the EU. It came into force in August 2024, with its different degrees of regulation becoming gradually operative over the following 36 months. The AI Act reflects a risk-based regulatory approach that tries to protect the rights of individuals and to ensure that AI systems will do no harm, without compromising the tech companies’ ability to innovate. The regulatory instruments applied depend on the level of risk, which leads to either no specific regulation, information and transparency obligations, conformity assessments, or – at the strictest level – the prohibition of certain applications and uses [see Figure 4]. In conjunction with the Framework Convention on Artificial Intelligence of the Council of Europe, the EU AI Act is an important binding regulation in a still weak, but nevertheless existing, regime complex on governing AI with a polycentric structure characterized by many decision centres.
Amidst the different approaches and aspirations of the United States, China, and the EU to shape a global digital order, the UN has instigated a process to counter the fragmentation that has become especially visible in the regulation of AI. The High-Level Advisory Body on Artificial Intelligence, appointed by the UN Secretary-General in October 2023, held several rounds of consultations across stakeholder groups to come up with recommendations on the governance of AI. In the wake of this process, the UN prepared a White Paper on the work of the various bodies and entities of the UN system on AI governance. The focus of this inventory was on the institutional models applied, their related functions, and the normative framework provided by the UN system (UN Chief Executives Board for Coordination).


Classifying AI systems into several risk categories with different degrees of regulation applying

Note: The AI Act defines GPAI models as those “trained with a large amount of data using self-supervision at scale” that display “significant generality”, are capable of “competently performing a wide range of distinct tasks”, and “can be integrated into a variety of downstream systems or applications”.


The White Paper identified four key functions the UN system already performs with respect to AI governance: 1) scientific consensus-building, 2) norm-setting and consensus-building around risks and opportunities, 3) regulatory coordination, monitoring, and enforcement, and 4) development and diffusion of technology. In its final report, the High-Level Advisory Body on Artificial Intelligence translated these functions into a prospective AI governance architecture, emphasizing the role of the UN therein as an “enabling connector”: forging a common understanding of AI, finding common ground by initiating a governance dialogue and standards exchange and, above all, reaping common benefits by establishing a capacity development network, an AI data framework, and a global fund for AI. The UN Secretary-General already submitted a proposal for “Innovative voluntary financing options for artificial intelligence capacity-building” to the UN General Assembly in July 2025. The call to create a new international science-driven AI body to forge the desired common understanding was answered by the UN General Assembly in August 2025 by establishing a multidisciplinary “Independent International Scientific Panel on Artificial Intelligence”, tasked with presenting its annual report at the “Global Dialogue on AI Governance”, a newly created platform for governments and relevant stakeholders “to discuss international cooperation, share best practices and lessons learned”. The question of governing AI on a global level enjoys the highest priority at the UN, since “[t]here is a pressing need to put a floor under the AI divide so as to ensure that the benefits of AI are available to all peoples. This is a critical moment for the building of knowledge, tools and infrastructure, so that no one is left behind in relation to the defining technological revolution of the present decade”.
The UN’s push for closing the AI capacity divide comes at a time when more and more countries talk about strengthening their “digital sovereignty”.



In a very basic sense, “digital sovereignty” refers to the ability of states or organizations to control their own digital infrastructure, data, and technology without undue dependence on or influence from foreign entities. Digital sovereignty is about maintaining (or, very often, regaining) autonomy in the digital space, which comprises not only digital technologies but also public and private security issues (cyber sovereignty), web content and internet infrastructure (internet sovereignty), and the whole range of data (data sovereignty) and information (information sovereignty) associated with it. Although the term itself dates back to the 1990s, in the political arena digital sovereignty became salient when the first Trump administration took a protectionist stance toward China and started banning selected Chinese tech companies from the U.S. market. It was then that China further developed its vision of digital sovereignty as a matter of national security and promoted the idea of technological self-reliance, also out of fear of external interference. While building a strong tech industry, the Chinese state also established comprehensive governmental oversight of data, networks, and digital platforms. Ultimately, the Chinese strategy of promoting its own tech champions has been successful: within a couple of years, the United States and China became the global leaders in developing AI systems [see Figure 5]. The list of countries in which AI systems have been developed in recent years reflects the dependence of most countries on a few high-income and upper-middle-income countries, with India as the only lower-middle-income country among them.


Note: Country refers to the location of the primary organization with which the authors of a large-scale AI system are affiliated. Data for 2025 are incomplete (as of 9 September 2025).


Interestingly, in the face of the technological dominance of the United States and China, European countries have also started to resort to the concept of digital sovereignty to assert the self-determination of European states and societies, especially to protect their citizens from violations of data privacy, surveillance, and cybercrime. The adoption of the EU’s GDPR in 2016 is looked upon as one critical move in this respect. Digital sovereignty has thus become a multidimensional concept addressing not only issues of individual rights and freedoms – as emphasized in the European context – but also collective and infrastructural security problems, questions of political and legal enforceability and, finally, the contentious issue of fair economic competition. Much of the current discussion on digital sovereignty relates to economic competitiveness. Some observers see it as a strategy through which states or supranational entities like the EU try to protect their values and to assert themselves as equal players in a field in which other actors, like the United States, China, and globally acting tech firms, have taken the lead. Although the United States, especially under the second Trump administration, does not seem interested in cooperating on issues of digital governance, the second leading tech country, China, actively engages in shaping the discussions in global and regional forums. Chinese researchers underscore this stance by scientifically arguing for a multi-level governance framework to safeguard digital sovereignty. China’s openness to engaging in norm-setting processes might contribute to “unthinking digital sovereignty”, i.e. reframing the concept, through “debating the procedural frameworks that structure sovereign capabilities and how they can be opened up to public reflection and control”. The proposal of the High-Level Advisory Body on AI can help to evolve global digital governance in such a direction.
However, this cannot be achieved in a short time. If digital sovereignty means being in a position to make choices regarding digital infrastructures, data storage, or applications and services – options that are not open to most countries of the Global South – another strategy would be to enlarge the selection of choices, e.g. by focusing on the provision of global digital public goods.




Digital public goods include open-source software, open data, open AI models, open standards, and open content, as specified in many UN documents and reports. Moreover, as part of digital public infrastructure, digital identities and registries are becoming increasingly important for social, economic, and political participation all over the world. For each type, applications are already available [see Figure 6]. Public goods are characterized by non-rivalry and non-excludability: use by one person does not prevent use by others, and nobody can be excluded from using them.


Types of digital public goods and exemplary applications

One global initiative to further the development and application of digital public goods is the “Digital Public Goods Alliance” (DPGA), a multistakeholder partnership of government agencies, international organizations, foundations, and open-source platforms (like GitHub). Founded in 2019 as a response to the report “The Age of Digital Interdependence” by iSPIRT (the Indian Software Product Industry Round Table), the governments of Norway and Sierra Leone, and the United Nations Children’s Fund (UNICEF), it is part of a UN implementation plan, now under the auspices of the newly created “Office for Digital and Emerging Technologies”. The standards developed by the DPGA to determine whether a digital technology conforms to the definition of a digital public good are based on the “Principles for Digital Development”, which were developed in the late 2000s in a multistakeholder effort originally led by UNICEF. Besides demonstrating relevance to achieving the SDGs and fulfilling formal requirements, like the use of approved open licenses, clear ownership, platform independence, documentation, and mechanisms for extracting data, there are also standards that resonate with the principles now widely shared in digital governance: adherence to privacy and applicable laws, adherence to standards and best practices and, above all, doing no harm by design, i.e. respecting data privacy and security, identifying inappropriate and illegal content, and protecting users from harassment. Global digital public goods can become a building block in strengthening the weak regime complex for global digital governance, because they contribute to enhancing the digital sovereignty of states and individuals by giving them a choice of which digital application or service to use. This makes a vast difference compared to protectionist measures or excluding others from using certain digital tools, and it thus increases the incentives for cooperation across borders.


A regime complex for global digital governance already exists. It is weak and fragmented, but it gives leeway to cooperate in those formats where progress is possible while others are paralyzed by geopolitical tensions. Moreover, if existing institutions are developed further and better coordinated, they can be useful for governing the digital transformation for humanity – the more so since many of the multilateral entities include a diversity of stakeholders, which enhances their legitimacy and problem-solving capacity. The current focus on the governance of AI is a good start, since AI systems encompass and exacerbate all the digital risks that have come to the fore and that might still arise. Converting the former position of the UN Secretary-General’s Envoy on Technology into that of an Under-Secretary-General and Special Envoy on Digital and Emerging Technologies, in the newly established UN Office for Digital and Emerging Technologies, has provided an interface and coordinating mechanism within the UN system and a contact point for the relevant external stakeholders. However, some concrete steps would be advisable to develop the global digital governance architecture further:

 • Use the ongoing UN process on global AI governance for engaging in a dialogue: Because of current geopolitical tensions, engaging with countries like China and especially countries of the Global South in various multilateral settings is essential to explore common understandings, common interests, and common benefits, as envisaged in the newly created AI governance architecture at UN level. With its digital infrastructure, China has exported norms and institutions to many countries of the Global South, and the UN process will provide the opportunity to openly discuss the principles and practices guiding the digital transformation, thus also giving lower-middle-income and low-income countries a say.
 • Strengthen (regional) efforts of regulatory coordination, monitoring, and (possibly) enforcement: The OECD has accumulated considerable expertise on policies, data, and analysis of artificial intelligence. However, it must start to reach out beyond its mainly European membership. Together with the Global Partnership on AI and the G20, it could become a focal point for developing frameworks for harmonizing policies, indicators for good governance, or recommendations for coping with specific risks (Roberts et al. 2024: 1285).
 • The EU should stick to its rights-based regulatory approach: Despite the threats of U.S. President Trump to sanction the EU with new tariffs, the EU should not back down on enforcing its Digital Markets Act against big tech companies like Google as a means of anti-trust regulation. In the same vein, it should not water down the implementation of other digital regulations, like the AI Act, but trust the “Brussels effect”, which has already made itself felt with the EU GDPR, especially since the AI Act is looked upon as a regulatory role model in many countries.
 • Engage in providing global digital public goods and, associated with this, global digital public infrastructures based on widely shared principles and standards: The UN and other (regional) multilateral bodies and multistakeholder initiatives are adequate forums to promote exchange on the provision of global digital public goods and to discuss and develop the underlying principles and standards further. As always, this will be part of a broader power struggle, but as the history of international negotiations has shown, sometimes it is the power of the better argument that prevails.


