The Council of Europe has launched a landmark treaty aimed at regulating artificial intelligence globally, inviting participation from both member and non-member states to promote ethical standards.
New International AI Convention Seeks Global Commitment to Ethical Artificial Intelligence Development
In a significant move towards unifying global efforts to regulate artificial intelligence, the Council of Europe (CoE) opened the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law for signature in September 2024. This landmark treaty, colloquially referred to as the AI Convention or AI Treaty, is the first international framework aimed at ensuring the ethical development and use of AI technologies; it is related to, but distinct from, the European Union’s AI Act.
A Global Reach
Unlike the EU AI Act, which applies strictly to member states with a focus on market regulation and consumer safety, the AI Convention has a potentially global scope. It is open for signature by both CoE member and non-member countries, inviting a wider spectrum of international participation. Significant global players such as the United States and the United Kingdom have already signed the treaty, and other non-member states including Canada, Japan, and Australia took part in its negotiation, demonstrating strong international interest and engagement.
While both frameworks utilise a risk-based approach, the AI Convention does not delineate specific risk categories, instead encouraging assessments based on potential impacts on human rights, democracy, and the rule of law. This broader approach aligns with its aim of establishing foundational ethical principles across different legal and cultural landscapes.
Core Principles and Flexibility
The AI Convention lays out several ethical commitments for its signatories. These include prioritising human rights by upholding human dignity and non-discrimination, and by ensuring data protection and privacy. It also emphasises the need for transparency in AI-generated content and interactions, requiring systematic documentation and oversight.
One of the treaty’s distinctive elements is its flexibility for the private sector. While signatories commit to certain obligations, they may also adopt other appropriate measures that align with the treaty’s principles. Similar flexibility extends to activities involving national security and research, provided human rights continue to be respected.
Criticism and Implementation Challenges
Despite its ambitious scope, the AI Convention has faced criticism over its enforceability. Francesca Fanucci, a legal expert at the European Center for Not-for-Profit Law, has expressed concern about potential loopholes arising from the broadness of its principles, particularly the exemptions for national security and the limited oversight of private entities. These concerns raise questions about how the treaty’s principles can be uniformly and effectively enforced.
Path to Ratification
For the AI Convention to enter into force, it must be ratified by at least five signatories, including a minimum of three CoE member states; ratification typically requires domestic legislative consent. Once these conditions are met, a three-month waiting period follows before the treaty takes effect. As of October 2024, no signatory had completed the ratification process.
Looking Forward
The AI Convention offers a framework for international cooperation on AI regulation, putting ethical considerations at the forefront of AI development. As discussions continue, CoE Secretary General Marija Pejčinović Burić has encouraged more countries to sign and ratify the treaty to hasten its entry into force. The Convention’s success and impact will largely depend on widespread adoption and on resolving the concerns about enforcement, but it marks a crucial step towards responsible AI innovation.
Source: Noah Wire Services