PL 2338/2023: A legal milestone for artificial intelligence in Brazil

[Image: arquivos/RegTechNews-06.png]

Considered one of the pioneers in regulating the subject, the European Union has been discussing legislative proposals on the human and ethical implications of implementing and using artificial intelligence systems. In 2020, the European Commission published the White Paper on Artificial Intelligence, aiming to define policy options to promote the adoption of AI and to address the risks associated with the development and diffusion of this technology. Also in 2020, the European Parliament adopted a series of resolutions on the subject, notably on ethics, civil liability, and copyright, followed in 2021 by resolutions on artificial intelligence in criminal law and in education, culture, and the audiovisual sector.

Presented by the European Commission in 2021, the Regulation on Artificial Intelligence, also known as the AI Act, identified the following specific objectives: i) to ensure that artificial intelligence systems placed on the European market are safe and respect the fundamental rights and values of the European Union; ii) to provide legal certainty for investment and innovation in the field of artificial intelligence; iii) to improve governance and the effective enforcement of existing legislation on fundamental rights and the safety requirements applicable to artificial intelligence systems; and iv) to facilitate the development of a single market for lawful, safe, and trustworthy applications.

Recently, in May 2023, the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs approved a report containing a set of relevant amendments to the text of the AI Act, such as the prohibition of predictive policing systems and the addition of several elements to the criteria for classifying an artificial intelligence system as high-risk. The report also proposes further harmonization of the AI Act's provisions with the GDPR (General Data Protection Regulation).

While the AI Act is heading for a plenary vote in the European Parliament, which is expected to take place in mid-June 2023, in Brazil, a new bill, Bill 2338/2023, was presented by the President of the Federal Senate, Rodrigo Pacheco (PSD-MG), on May 3 of this year, with the aim of establishing a legal framework for artificial intelligence in Brazil.

Bill 2338/2023, which is expected to replace earlier bills of lesser scope and complexity pending in the National Congress, such as Bills 5.051/2019, 21/2020, and 872/2021, was based on a preliminary draft prepared throughout 2022 by a committee of jurists chaired by Justice Ricardo Villas Bôas Cueva of the Superior Court of Justice (STJ). The draft resulted from an extensive process of study by the commission and numerous other stakeholders, including public hearings, written contributions from experts in the field, a survey of artificial intelligence regulatory authorities in OECD member countries, and an international seminar.

The topic is of great relevance in the global context, given the wide range of applications and functionalities that artificial intelligence can encompass, in digital as well as physical and analog environments, and the profound social and economic transformations that result from its advancement and popularization. That is why experts in the field, as well as jurists, legislators, politicians, and entrepreneurs from various countries, as we have seen, are rushing to discuss, draft, and propose rules to regulate this technology and ensure its safe and ethical use.

The Brazilian model proposed by Bill 2338/2023 aims to establish “general national norms for the development, implementation, and responsible use of artificial intelligence (AI) systems in Brazil, with the objective of protecting fundamental rights and ensuring the implementation of safe and reliable systems, for the benefit of human beings, the democratic regime, and scientific and technological development” [1].

As stated in the Justification of Bill 2338/2023, the main objective of the bill is to establish rights protecting individuals directly impacted by artificial intelligence systems, as well as governance tools and institutional arrangements for the oversight and supervision of those systems.

Below, we summarize some of the pillars of this legislative proposal:

(i) Rights of individuals affected by artificial intelligence systems
Bill 2338/2023 begins with the basic foundations for the development, implementation, and use of artificial intelligence and its guiding principles. Similar to what we have seen in the European Union, Brazil rightly places human beings and their dignity at the center of its regulation (a “human-centric” approach).

Among the primary concerns are the right to prior information about interactions with artificial intelligence systems, the right to an adequate understanding of decisions made by these systems, the right to challenge automated decisions, the right to human intervention in system decisions, taking into account the context and the state of the art of technological development, and the right to non-discrimination and to the correction of discriminatory, unlawful, or abusive biases.

(ii) Risk categorization
Bill 2338/2023 provides that, before being placed on the market, artificial intelligence systems must undergo a preliminary assessment by their suppliers to determine the degree of risk of the application; this assessment may be reviewed by the competent authority.

Systems deemed to pose excessive risk are prohibited from being implemented or used in the national territory. This category covers systems that employ subliminal techniques to induce people to behave in ways harmful to their health or safety; that exploit vulnerabilities of specific groups of people, such as those associated with age or physical or mental condition, in order to induce harmful behavior; or that are used by the government to evaluate, classify, or rank natural persons, on the basis of their social behavior or personality attributes, for access to public goods and services, when applied in an illegitimate or disproportionate manner. Notably, in the context of public security, such systems may be used if expressly permitted by specific federal law and with judicial authorization, in cases of individualized criminal prosecution.

Furthermore, the bill introduces the category of high-risk systems, understood as those used for purposes such as: safety in the management and operation of critical infrastructure; education and vocational training; recruitment, screening, and evaluation of candidates and decision-making regarding employment contracts; evaluation of criteria for access to, eligibility for, or the granting, review, reduction, or revocation of private and public services considered essential; assessment of individuals’ debt capacity or establishment of their credit rating; the administration of justice; healthcare applications intended to assist in diagnoses and medical procedures, as well as systems for prioritizing emergency response services; biometric identification systems; criminal investigation and public security; and migration management and border control.

For both categories, excessive risk and high risk, the competent authority will be responsible for updating the list of covered artificial intelligence systems, after consulting the relevant sectoral regulatory body, if any, and after holding public consultations and hearings and carrying out regulatory impact analysis.
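
Purely for illustration, the risk tiering described above can be pictured as a simple data structure. The sketch below is hypothetical: the tier names, purpose labels, and mapping are ours, not taken from the bill, whose lists are defined in its text and updated by the competent authority.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical tiers mirroring the categories described in Bill 2338/2023."""
    EXCESSIVE = "excessive"  # prohibited uses (e.g., subliminal manipulation, social scoring)
    HIGH = "high"            # permitted, but subject to mandatory governance measures
    OTHER = "other"          # remaining systems, subject to lighter obligations

# Illustrative, non-exhaustive labels for the high-risk purposes listed in the bill.
HIGH_RISK_PURPOSES = {
    "critical_infrastructure_safety",
    "education_and_vocational_training",
    "recruitment_and_employment_decisions",
    "access_to_essential_services",
    "credit_scoring",
    "administration_of_justice",
    "healthcare_diagnosis_support",
    "emergency_response_prioritization",
    "biometric_identification",
    "criminal_investigation_and_public_security",
    "migration_and_border_control",
}

def preliminary_tier(purpose: str) -> RiskTier:
    """Toy preliminary assessment: map a declared purpose to a risk tier."""
    return RiskTier.HIGH if purpose in HIGH_RISK_PURPOSES else RiskTier.OTHER
```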

(iii) Governance Systems, Reporting of Serious Incidents, and Civil Liability
The bill reserves two chapters for the Governance of Artificial Intelligence Systems and for Codes of Good Governance Practices. As mandatory measures for high-risk systems, it provides for governance structures and internal processes capable of ensuring system security and the rights of affected individuals. These include transparency measures in the use of the systems, data management measures to mitigate discriminatory biases, compliance with existing legislation on the processing of personal data, the adoption of appropriate parameters for the separation and organization of data for the training, testing, and validation of system results, and information security measures.
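
As a purely illustrative aside, and not a requirement spelled out in the bill, the data-separation measure mentioned above could be documented in practice along the following lines; the column name and split proportions are hypothetical.

```python
# Minimal sketch: a stratified train/validation/test split (70/15/15) on a
# hypothetical sensitive attribute, one possible way to operationalize the
# "separation and organization of data" measure described above.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_dataset(df: pd.DataFrame, sensitive_col: str = "age_group", seed: int = 42):
    """Return stratified train/validation/test partitions of df."""
    train_df, temp_df = train_test_split(
        df, test_size=0.30, stratify=df[sensitive_col], random_state=seed
    )
    val_df, test_df = train_test_split(
        temp_df, test_size=0.50, stratify=temp_df[sensitive_col], random_state=seed
    )
    return train_df, val_df, test_df
```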

Similar to the obligation to report serious incidents under the Brazilian General Data Protection Law (“LGPD”), Bill 2338/2023 requires agents to notify the competent authority of serious security incidents, including those that pose risks to the life and physical integrity of individuals, disrupt critical infrastructure, cause property or environmental damage, or seriously violate fundamental rights. This broad provision will still be subject to specific regulation.

Finally, on the issue of civil liability, in the event of damage caused by high-risk or excessive-risk systems, the text imposes strict liability on the supplier or operator of the system for the damage caused, to the extent of its participation. For systems not classified as high-risk or excessive-risk, fault will be presumed and the burden of proof reversed in favor of the victim.

(iv) Supervision and Oversight
Regarding the oversight of artificial intelligence regulation, Bill 2338/2023 stipulates that a competent authority, a body or entity of the Federal Public Administration, shall ensure compliance with the enacted rules, exercise the duties specified in the bill, and impose administrative sanctions.


As a measure to foster innovation, the proposed text allows the designated competent authority to authorize the operation of an experimental regulatory environment for artificial intelligence innovation (regulatory sandbox) for entities that request it and meet the requirements specified in the bill and in the applicable specific regulations.

Lastly, among the administrative sanctions applicable to artificial intelligence agents for infractions committed, the bill lists the following penalties: a warning; public disclosure of the infraction, after it has been duly investigated and confirmed; a simple fine, limited in total to R$ 50 million per infraction and, in the case of private legal entities, of up to 2% of the revenue of the entity, its group, or its conglomerate in Brazil in its last fiscal year, excluding taxes; prohibition from or restriction on participating in a regulatory sandbox regime for up to five years; partial or total, temporary or permanent, suspension of the development, provision, or operation of the artificial intelligence system; and prohibition from processing certain databases.
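
To make the fine ceiling concrete, the sketch below assumes, as the wording above suggests and as in the analogous LGPD rule, that the R$ 50 million per-infraction cap also bounds the 2%-of-revenue fine for private legal entities; the figures are illustrative only.

```python
# Illustrative only: upper bound of the simple fine for a private legal entity,
# assuming the 2%-of-revenue fine is capped at R$ 50 million per infraction.
FINE_CAP_BRL = 50_000_000
REVENUE_FINE_RATE = 0.02

def max_fine_for_legal_entity(revenue_brl: float) -> float:
    """Maximum simple fine given the entity's (or its group's) Brazilian revenue."""
    return min(REVENUE_FINE_RATE * revenue_brl, FINE_CAP_BRL)

# Example: R$ 1 billion in revenue -> min(R$ 20 million, R$ 50 million) = R$ 20 million.
print(max_fine_for_legal_entity(1_000_000_000))  # 20000000.0
```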

By pursuing a legal framework for artificial intelligence, Brazil takes an important step in the discussion and understanding of the economic and social impacts that this technology can bring to the country. The bill also seeks to regulate the use and application of artificial intelligence in order to protect fundamental rights and to provide legal certainty for the technological development of these systems and their various functionalities.

Still in the early stages of the legislative process, the bill, presented to the Plenary of the Brazilian Federal Senate on May 3, 2023, awaits further action by the Legislative Secretariat, the next step being its referral to one or more standing committees for examination.

Kamila Ribeiro Lima
Beatriz Lindoso
Cesar R. Carvalho

[1] Article 1 of Bill 2338/2023, available at https://www25.senado.leg.br/web/atividade/materias/-/materia/157233