UK signs first AI risk treaty

By Michael Bond, Editor, International In-house Counsel Journal
The UK signs the first legally binding treaty to manage AI risks and protect human rights

The UK's signing of the first legally binding international treaty on artificial intelligence marks a major step toward ensuring that AI development is managed responsibly. Lord Chancellor Shabana Mahmood signed the treaty at a Council of Europe meeting, where she emphasised the importance of safeguarding human rights, democracy, and the rule of law in the face of rapidly advancing AI technologies.

AI has the potential to revolutionise various sectors, from healthcare to public services, by improving productivity and enhancing decision-making. However, the same technologies pose significant risks if not properly managed. The spread of misinformation, data bias leading to unfair decisions, and breaches of privacy are among the key concerns that this treaty seeks to address.

The treaty establishes a framework for international cooperation in managing AI risks. It commits signatories to regulate AI products, monitor their development, and take action against any misuse of AI that could harm public services or erode democratic institutions. Importantly, the agreement sets out three primary safeguards: protecting human rights, preventing AI from undermining democracy, and ensuring that the rule of law is upheld.

In the UK, existing laws such as the Online Safety Act will be strengthened by the treaty's provisions. For example, AI's use of biased data, which can lead to discriminatory outcomes, will be more effectively addressed. The UK is also encouraging non-European countries like the United States and Australia to become signatories, reflecting the global nature of the challenge posed by AI.

Lord Chancellor Mahmood stressed that while AI has the potential to transform society for the better, it must be carefully controlled. "We must not let AI shape us – we must shape AI," she stated, highlighting the need for responsible governance to prevent technology from infringing on the values of human rights and justice.

Secretary of State for Science, Innovation and Technology Peter Kyle echoed this sentiment, pointing out that AI could drive economic growth and public sector efficiency, but only if the public has confidence in the safety and security of these innovations. The treaty, according to Kyle, is a vital step in building that trust at both a national and global level.

The UK has been actively involved in shaping global AI policy, having hosted the AI Safety Summit and established the AI Safety Institute. This new treaty builds on those efforts, placing the UK at the forefront of the international movement toward responsible AI governance.

Beyond AI, Lord Chancellor Mahmood reiterated the UK's commitment to supporting Ukraine in its defence against Russia's invasion. She underscored the importance of holding Russia accountable through the establishment of a Special Tribunal for the Crime of Aggression, aimed at prosecuting the leaders responsible for the war. This dual focus on AI safety and international justice reflects the UK's broader commitment to global security and human rights.

As the treaty moves toward ratification, the UK government will work closely with regulators, devolved administrations, and local authorities to ensure the agreement’s requirements are effectively implemented. The treaty not only strengthens the UK's domestic approach to AI regulation but also contributes to the global effort to ensure AI is used safely and responsibly, in line with shared values of human dignity, justice, and transparency.