Author: Anulekha Nandi

Published on May 22, 2024

The Council of Europe adopted the first international legally binding treaty on AI. However, the regulatory approach and normative ambiguities leave key questions on responsibilities and liabilities unanswered.

The first international AI treaty: Progress with caveats

On 17 May 2024, the Council of Europe (CoE) adopted the first-ever international legally binding treaty on Artificial Intelligence (AI) at the annual meeting of its Committee of Ministers in Strasbourg. The Framework Convention sits at the intersection of AI, human rights, democracy, and the rule of law, and claims to span the entire AI life-cycle, from design and development to the use and decommissioning of AI systems. Like the EU AI Act, it takes a risk-based approach, but it is also open to non-EU countries. The Framework Convention was negotiated over two years by the Committee on Artificial Intelligence, an intergovernmental body that brought together the 46 CoE member states and 11 non-member states, with representatives of the private sector, civil society, and academia participating as observers.

The treaty promotes the responsible use of AI in line with the principled objectives of equality, non-discrimination, privacy, and democratic values. It covers the use of AI by the public sector (along with companies acting on its behalf) as well as by the private sector. It specifies two ways of complying, particularly for the private sector: parties can opt to be regulated by the relevant provisions of the convention, or they can take alternative measures consistent with their international human rights obligations. The treaty aims to keep implementation flexible in response to different technological, sectoral, and socio-economic conditions. This requires risk assessments to identify and mitigate risks and to determine whether a moratorium, ban, or other appropriate measures are needed. It further requires the establishment of independent oversight mechanisms as well as remedial and redressal measures.


The treaty also aims to establish principled compliance through a risk-based approach. While it seeks to set legal standards in this space, the lack of clear specification of where obligations and responsibility lie, beyond adherence to normative principles, hinders its practical applicability and relevance. Compliance with normative principles is notoriously difficult to enforce. The difficulty is compounded by the globally interlinked and transnational nature of AI development and production; inequalities in critical AI resources such as data and computing; an ecosystem of stakeholders with divergent interests; the pitfalls of risk-based regulation; and the institutional investment required to build the capabilities to actively monitor, assess, and mitigate dynamic AI risks. This ecosystem of conditions, resources, and actors leads to the dynamic emergence and perpetuation of systemic risks.

Risk-based regulation and normative ambiguities

Regulating AI is unlike traditional areas of regulation such as market practices or physical infrastructure. AI is not a single class or set of technologies; it spans a continually evolving frontier of emerging digital capabilities across technology classes such as machine learning, computer vision, and neural networks. Managing the risk implications of AI involves negotiating complex, interdependent dynamics of autonomy, learning, and inscrutability as the performance and scope of AI systems continue to evolve. Risk-based regulation aims to optimise scarce regulatory resources and administrative power. It targets enforcement resources towards the harms most likely to occur, and it focuses on identifying the risks regulators seek to manage rather than the rules they seek to enforce. Risk tolerance differs across regulators and sectors and involves risk evaluations and assessments, balancing trade-offs between risks and opportunities, and setting thresholds for risk and acceptability.

Risk-based regulation of AI is often entangled with normative ambiguities: principled commitments to norms and values that underpin how risks are specified, aggregated, and qualified. Normative ambiguities refer to differing perspectives on risk tolerance, which stem from the interpretive application of normative rules during evaluation. Ambiguity around fundamental rights and societal values limits how they can be interpreted, specified, and operationalised for risk assessments. Risk-assessment methodologies may not adequately capture all the relevant parameters, or the risks that arise at the human-technology interface once a system is deployed. Risk-based regulation ultimately depends on choices made by regulators according to their risk tolerance, and it reflects fundamental assumptions about the nature of the vulnerabilities of AI systems.


Normative ambiguities without a clear delineation of obligations and responsibilities risk collapsing into toothless, non-binding principles that are difficult to enforce. Moreover, the treaty raises important questions about participation and legitimacy, since much of the rest of the world was not involved in its drafting or consultation, which can affect its uptake. It requires potential signatories to submit declarations of how they will meet their principled obligations when the treaty opens for signature on 5 September 2024, a fairly short turnaround for drafting comprehensive approaches to national AI legislation and regulation.

Outstanding concerns 

While the framework is an important step in moving beyond high-level principles in AI governance, it does not address some of the outstanding questions around the location of responsibility and the determination of liabilities. Developing consensus in multistakeholder settings needs to go beyond broad normative prescriptions by specifying how these normative principles translate into defined responsibilities for given actors, depending on their roles in the ecosystem as suppliers, consumers, or intermediaries. This is complicated in the global AI ecosystem, where Big Tech companies control much of the resource pipeline, resulting in differential power and dependency structures. Identifying and specifying the roles of different stakeholders within the ecosystem helps clarify the nature of liabilities and the modes of enforcement. However, these issues remain unanswered within the existing framework. Within the convention, they devolve on the respective signatories, with regulatory innovation falling upon each signatory country, which raises questions about the relevance and scope of this particular framework.


Future conventions and frameworks need to acknowledge the dynamic complexity of AI-driven systems and ecosystems in order to identify the roles and responsibilities within them. The different conditions, actors, and resources represent interacting systems and components that give rise to emergent, complex behaviour. AI ecosystems must continually adapt to emerging interactions and adjust to changes in their environment. Managing them and establishing systems of compliance requires confronting and addressing thorny legal questions that settle the nature of liabilities. A 2019 European Expert Group recommendation suggested a product liability regime that assigns responsibility to the entity best able to control and manage an AI-related risk, providing a single point of entry for litigation to redress AI-driven harms. The treaty, however, does not specify guidelines for adopting particular liability regimes; it leaves it to the parties to determine the nature of liabilities. This highlights the need for international treaties to take comprehensive and deliberative approaches to understanding the social, economic, and legal implications of AI-driven systems in order to design and implement effective regulatory approaches.


Anulekha Nandi is a Fellow at the Observer Research Foundation


The views expressed above belong to the author(s).

Author

Anulekha Nandi

Anulekha Nandi is a Fellow at ORF. Her primary area of research includes technology policy and digital innovation policy and management.