

AI in Latin America: Searching for a balanced regulatory strategy


By Sebastian Ferreyra Romea and Ariel Garay

Sebastian Ferreyra Romea and Ariel Garay analyze Latin America's regulatory responses to AI

The use of artificial intelligence has proliferated beyond the realms of commerce, energy and finance. Through software applications, it has already begun to affect people's daily lives across the world. As the technology continues to advance rapidly, further changes to people's lives are expected.

AI has no universally accepted definition to date. According to the Royal Spanish Academy, it is a scientific field focused on creating computer programs that perform activities akin to those carried out by the human mind, such as learning or logical reasoning.

The concept of AI is not completely new; it was developed from the mid-20th century through to the early 1980s, although without significant advances. However, in November 2022, AI captured global attention with the introduction of ChatGPT. The chatbot's wide range of capabilities sparked widespread interest in, and concern about, AI on a global scale, especially with OpenAI's upcoming model GPT-5 expected to be released within the next year.

The G7’s AI strategy

AI has compelled various scientific fields to adapt their existing models. For instance, significant advancements have been made in medicine, such as the interpretation of X-rays, as well as in the development of design systems for engineers and architects, including plans for buildings, residences, industries, structures, and infrastructure. This impact has triggered global debates regarding the need for regulation of AI.

At the recent G7 summit, through the "Hiroshima AI Process," governments decided to govern AI and establish a legal framework for the technology. Consequently, the question now is not whether AI should be regulated, but how it should be done.

The United States and China have faced regulatory pressure as the leading developers of AI technology. However, despite not being at the forefront of the AI race, the European Union (EU) took the initiative by proposing AI regulation in 2021 through the Artificial Intelligence Act. The act aims to provide legal certainty, and thereby promote investment and innovation in AI, while also ensuring compliance with legal norms and fundamental rights.

More recently, on June 14, 2023, the EU Parliament took a crucial step towards regulating artificial intelligence in Europe. One of the primary goals of the developing EU Artificial Intelligence Act is to safeguard against AI threats to health and safety and to protect fundamental rights and values.

In contrast, the UK's AI strategy focuses on "supporting growth" and "avoiding unnecessary barriers imposed on firms." The proposed regulation builds on the Data Reform Law, which comprises six sections: data protection, digital verification services, user data, provisions on digital information, regulation and supervision, and final provisions.

Latin America's position in the AI process

Across Latin America, there is significant concern about how to balance the responsible use and growth of AI without compromising established laws and social rights. Efforts must be made at all levels, including national constitutions and parliamentary laws, to address this issue.

Brazil took the lead in this discussion by enacting MCTI Ordinance No. 4,617 on April 6, 2021, which established the Brazilian Artificial Intelligence Strategy (EBIA). According to the ordinance: "the impacts of Artificial Intelligence (AI) in countless sectors of human life are already visible, with changes in the current paradigms of industrial production, personal relationships, and life care." The strategy also proposed the creation of ethical norms, the responsible use of AI, continuous investment in AI, and the removal of obstacles to AI innovation.

Other countries in Latin America, such as Chile, Mexico, Colombia, Uruguay, and Argentina, have also responded by passing regulations, policies, or strategic plans to address the progress of AI. In Argentina, for instance, the Undersecretariat of Information Technologies (SSTI) recently published Provision 2/2023 in the Official Gazette due to the increasing development and implementation of AI projects. The provision emphasizes the need for clear rules to ensure that the benefits of technological advancements are accessible to all sectors of society. It also highlights the importance of a broader strategy that prioritizes technological sovereignty and addresses social, productive and environmental challenges.

Point 3.1 of the provision in Argentina focuses on accountability for AI actions. It clarifies that algorithms do not possess self-determination or the agency to make decisions freely, although the concept of "decision" is often used colloquially to describe algorithmic classifications. The provision therefore states that responsibility for actions carried out by algorithms must be attributed to the individuals involved, since algorithms can execute but decisions ultimately rest with people.

The regulations in Latin America align with the principles outlined by UNESCO and address various aspects of AI, including proportionality and safety, security and protection, equity and non-discrimination, sustainability, the right to privacy and data protection, human supervision and decision-making, transparency and explainability, responsibility and accountability, sensitization and education, as well as adaptive governance and multi-stakeholder collaboration.

AI perspectives

In May 2019, the 36 OECD countries, along with Argentina, Brazil, Colombia, Costa Rica and Peru, signed five AI Principles. These principles include promoting inclusive growth, sustainable development, and well-being; upholding human-centered values and equity; ensuring transparency and explainability; maintaining robustness, security, and protection; and emphasizing responsibility.

In light of these principles, there are repeated calls for any solution to regulating AI applications, whether on a global or regional scale, to be grounded in ethical guidelines, legal resources, economic incentives and technical protection mechanisms. Technical protection mechanisms are critical to the process, as are expert evaluation of AI decision-making systems and sustained political discourse. Implementing these normative measures will therefore require the involvement of experts.

Consequently, all regulations around the world prioritize ethics as the basis for further AI development, one of the most important points being the protection of personal data.

Experts currently suggest that, given Latin America's regulatory dynamics, AI development will intensify over the next two years. Simultaneously, regulations will deepen, in line with international standards, to ensure the appropriate coexistence of technology with human beings in developing democracies that may be somewhat fragile.

Sebastián Ferreyra Romea is a corporate attorney and fintech, data protection and compliance specialist at Ferreyra Romea Legal & Compliance. Ariel R Garay is an attorney and blockchain developer specializing in banking, capital markets and sustainability at Ferreyra Romea Legal & Compliance (ferreyraromea.com).