VIETNAMESE ARTIFICIAL INTELLIGENCE LAW - Some observations on the risk-based management approach

On 10 December 2025, Viet Nam officially passed the Artificial Intelligence Law, which is expected to take effect on 1 March 2026. This is a foundational legal instrument that, for the first time, establishes a unified framework for the management, control, and promotion of the development of artificial intelligence systems in Viet Nam.

January 13, 2026

After three significant rounds of revision, the law has been shaped to align more closely with Viet Nam’s practical conditions: the country currently focuses mainly on the application, deployment, and operation of AI rather than on the large-scale development of foundation models.

This article discusses the risk-based management approach adopted by the Vietnamese AI Law, highlighting its positive aspects while pointing out certain limitations that will need to be refined during implementation.

1. Some positive aspects of the risk-based management approach

In terms of regulatory form, Viet Nam’s AI Law adopts a risk-based management approach. This approach allows the State to concentrate supervisory resources on AI systems that pose a high risk of significant impact, while limiting unnecessary intervention in low-risk applications. At the initial drafting stage, this regulatory model was constructed in a manner quite similar to the EU AI Act, demonstrating an effort to approximate international standards of AI governance.

In the official version of the law, the risk-classification framework has been adjusted in a more streamlined direction.

Specifically, the law reduces the number of risk levels from four to three: high risk, medium risk, and low risk (Article 9, Chapter II). At the same time, instead of maintaining a category of “unacceptable risk” as in the November 2025 draft, the law introduces a separate article regulating prohibited acts (Article 7, Chapter I), in order to clearly define the minimum legal boundaries that must be controlled.

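To make this structure concrete, the sketch below (in Python, purely illustrative and not part of the law) shows one way a provider might model the three statutory risk levels of Article 9 alongside the separate prohibited-acts check of Article 7 in an internal compliance tool; all names, fields, and criteria are hypothetical assumptions.

```python
# Illustrative sketch only: the enum values mirror the three risk levels in
# Article 9; prohibited acts under Article 7 sit outside the risk scale and
# are checked separately. Names and criteria are hypothetical, not statutory.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class AISystemProfile:
    name: str
    intended_purpose: str
    involves_prohibited_use: bool   # e.g. an act falling under Article 7
    risk_level: RiskLevel


def pre_market_check(profile: AISystemProfile) -> str:
    """Hypothetical pre-market gate: prohibited uses are barred outright;
    all other systems proceed under the obligations of their risk level."""
    if profile.involves_prohibited_use:
        return "blocked: falls under the prohibited acts (Article 7)"
    return f"proceed under {profile.risk_level.value}-risk obligations (Article 9)"
```
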
In addition, several types of “unacceptable risks” included in the draft law were removed and do not appear among the prohibited acts in the official law, specifically:

– Social credit scoring of individuals on a broad scale by state authorities, leading to adverse or unfair treatment in social contexts that are unrelated. (Clause 3, Article 11 of the Draft Law)

– The use of real-time remote biometric identification systems in public spaces for law-enforcement purposes, except in special cases provided for by sector-specific laws for the prevention and combating of serious crime, and subject to authorization by a competent state authority under a special procedure. (Clause 4, Article 11 of the Draft Law)

– The use of emotion-recognition systems in workplaces and educational institutions, except where permitted by sector-specific laws for medical or safety reasons under strict conditions. (Clause 6, Article 11 of the Draft Law)

Overall, the Vietnamese AI Law has been developed in a more streamlined manner than the EU AI Act, with a lower level of supervision and intervention.

However, this also raises a question: why were certain risks that are prohibited in the EU included in the draft law but then removed from the official law? And how might the removal of certain AI systems that were previously considered “unacceptable risk” affect the protection of fundamental human rights in the future, especially in the context of the increasingly widespread deployment of AI in the public sector and in state-management activities?

For comparison, other acts prohibited under the EU AI Act include:

  • The placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity (point (d), Clause 1, Article 5, EU AI Act)
  • The placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement (point (g), Clause 1, Article 5, EU AI Act)

2. Some limitations and issues requiring further clarification

2.1. A framework law heavily dependent on subordinate legislation

The current AI Law is a framework law, under which many important matters are delegated to the Government and line ministries for detailed regulation. This approach enhances flexibility, but also entails risks if:

  • Subordinate legal instruments are issued slowly;
  • Guidance is inconsistent among authorities;
  • The content of the guidance departs from the law’s spirit of encouraging innovation.

For a rapidly evolving field such as AI, delays or a lack of clarity at the level of subordinate legislation may undermine the regulatory effectiveness of the entire legal framework.

2.2. Unclear boundaries in the post-market supervision regime

The Artificial Intelligence Law adopts a management approach based on notification and post-market supervision, but it also equips regulators with several tools, such as:

  • Requirements to establish and retain technical documentation;
  • Obligations to report and provide information upon request;
  • Powers to suspend, withdraw, or re-evaluate systems where necessary.

If these tools are not accompanied by specific guidance on the conditions for application, scope, and degree of intervention, they may lead in practice to overly cautious interpretations and implementation. This may increase compliance costs and affect the pace of testing and deployment of AI systems, particularly for small-scale innovative organisations.

Therefore, issuing detailed implementing instruments that transparently define the triggering criteria and uphold the supervisory principles is crucial to preserving the post-market supervision approach and the law’s objective of promoting innovation.

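As an illustration of what such implementing guidance might eventually need to pin down, the minimal sketch below shows one possible way a provider could structure the technical documentation and incident records that post-market supervision may request; the field names and structure are assumptions for discussion, not terms defined by the law.

```python
# Hypothetical record-keeping structure for post-market supervision; field
# names are illustrative assumptions, not terminology defined by the law.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TechnicalDossier:
    system_name: str
    version: str
    declared_risk_level: str          # self-declared level under Article 10
    intended_purpose: str
    training_data_summary: str
    human_oversight_measures: list[str] = field(default_factory=list)
    incident_log: list[dict] = field(default_factory=list)

    def record_incident(self, description: str, severity: str) -> None:
        """Append an incident entry so it can be reported upon request."""
        self.incident_log.append({
            "date": date.today().isoformat(),
            "description": description,
            "severity": severity,
        })
```
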
2.3. The risk self-classification mechanism requires very clear technical guidance

Pursuant to Clause 1, Article 10 on the classification and notification of artificial intelligence systems, providers are responsible for self-classifying the risk level of AI systems prior to placing them on the market or deploying them. This mechanism helps reduce the administrative burden on enterprises.

However, the effectiveness of this mechanism depends heavily on:

  • The clarity of classification criteria;
  • The issuance of technical annexes and detailed guidelines;
  • Mechanisms for reviewing and correcting misclassification.

According to Clause 5, Article 10, inspection and supervision are conducted based on the system’s risk level:

  a) High-risk systems are subject to periodic inspections or inspections triggered by signs of violations;
  b) Medium-risk systems are supervised through reporting, sample inspections, or assessments by independent organisations.

For point (a), clarification is needed as to:

  • What “periodic” means: annually or according to the usage cycle?
  • What constitutes “signs of violations”: this cannot be left to subjective assessment.

For point (b), clarification is needed as to:

  • What type of reports are required: technical reports, compliance reports, or incident reports?
  • The criteria for sample inspections: random, sector-based, or impact-based?
  • Who qualifies as an “independent organisation”: who accredits it, who pays for the assessment, and who bears responsibility in cases of erroneous evaluation?

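By way of illustration, the minimal sketch below assumes hypothetical field names and a placeholder mapping to show how a provider might document its self-classification under Article 10 and anticipate the supervision mode described in Clause 5; none of the schedules, criteria, or thresholds shown here are set by the law itself.

```python
# Hypothetical sketch: a provider-side record of self-classification and a
# placeholder mapping from declared risk level to the supervision mode
# described in Clause 5, Article 10. Values are assumptions, not statutory.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class SelfClassificationRecord:
    system_name: str
    declared_risk_level: str             # "high" | "medium" | "low"
    rationale: str                        # why the provider chose this level
    notified_on: Optional[date] = None    # date of notification to the authority


def expected_supervision(declared_risk_level: str) -> str:
    """Placeholder mapping of declared level to supervision mode (Clause 5)."""
    if declared_risk_level == "high":
        return "periodic inspection, or inspection on signs of violation"
    if declared_risk_level == "medium":
        return "reporting, sample inspection, or independent assessment"
    return "general post-market monitoring only"
```
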
2.4. Overlapping responsibilities among stakeholders

The law distinguishes between different actors such as developers, providers, deployers, and users. However, in Article 12 and other provisions, the boundaries of responsibility are not sufficiently clear or robust to allocate liability when incidents occur.

This may create practical difficulties in:

  • Determining whether faults arise from design, data, or use;
  • Identifying primary responsibility in complex incidents;
  • Resolving disputes and compensating for damages.

2.5. The “human-centred” principle has not been sufficiently operationalised

The law affirms the human-centred principle in its programmatic provisions. However, at the level of specific norms, the rights of individuals affected by AI are still addressed mainly indirectly through the obligations of providers and deployers.

The law has not yet clearly established a distinct set of rights for:

  • Individuals evaluated or classified by AI;
  • Individuals affected by AI-influenced decisions;
  • Individuals suffering indirect harm from the operation of AI systems.

There is a need to systematise these into a separate group of rights or a dedicated protection mechanism within the law, and to further concretise this principle through detailed provisions or implementing guidance during the implementation phase.

2.6. Technical terminology lacks clear legal definitions

The AI Law refers to many important concepts such as risk, bias, human oversight, transparency, and accountability. However, these concepts have not yet been clearly defined or accompanied by specific guidance. As a result, in practice, different organisations may interpret and apply them differently, creating difficulties for the consistent deployment and supervision of AI systems.

3. Issues requiring further research

From the perspective of contributing to the improvement of the law

  • Developing specific criteria for differentiating levels of risk and applying appropriate management measures. This is the direction currently being researched and developed by the Omicron team.
  • Recommending principles for drafting laws governing the application of AI in key sectors such as education, healthcare, law enforcement, and finance.
  • Developing a list of sectors and cases of artificial intelligence applications classified as high risk, including: healthcare; education and competency assessment; transport and the management of critical infrastructure; finance, banking, and credit; labour, recruitment, and human resource management; public administration and essential public services; the judiciary and law enforcement; social security; energy; and other sectors with comparable risks (Clause 2, Article 13 of the Draft Law).
  • Developing a recommended framework for assessing safety, reliability, and transparency in the process of building AI systems.
  • Establishing mechanisms for allocating responsibility among stakeholders in the handling of violations and the resolution of disputes.

From the perspective of contributing to academic research

  • Studying the impact of developing artificial intelligence infrastructure and national autonomous capabilities on socio-economic development.
  • Studying the impact of the use of emotion-recognition systems on user responses, particularly in the healthcare and education sectors.
  • Studying the impact of AI systems on psychological and behavioural outcomes.
  • Assessing the effectiveness of support for small and medium-sized enterprises in using shared digital infrastructure.
  • Studying the application of AI in public governance.

BAI Lab welcomes comments, critical feedback, and contributions from the community to support the effective and sustainable implementation of the AI Law.

Vietnamese artificial intelligence law: https://duthaoonline.quochoi.vn/dt/luat-tri-tue-nhan-tao/251009091536864496