Thailand: Recent updates on AI Legislation
Since the EU first proposed its AI Act in 2021, various countries have adopted or amended their own AI legislation.
For Thailand, it has been almost two years since the Electronic Transactions Development Agency (“ETDA”) proposed the following three draft laws (the “Previous AI Draft Laws”) to regulate AI in Thailand:
- Draft Artificial Intelligence Innovation Promotion Act (“Draft Act”)
- Draft Notification of the ETDA regarding AI Sandbox (“Draft AI Sandbox Notification”)
- Draft Notification of the ETDA regarding AI Risk Assessment (“Draft AI Risk Assessment Notification”)
We covered these Previous AI Draft Laws in our previous newsletter. To summarize, they introduced an AI regulatory sandbox, data sharing, AI standards, contract standards, and risk assessment.
Because recent AI developments pose novel legal challenges to regulators, the principles of the Previous AI Draft Laws need to be revised to suit today’s context. Accordingly, in May 2025 ETDA launched a public hearing on a draft of the principles for AI legislation (the “Draft Principles Law”), which introduces new principles and expands on those in the Previous AI Draft Laws, as detailed below.
A. Expansion/Reaffirmation of Principles in the Previous AI Draft Laws
1. AI Regulatory Sandbox
The Draft Principles Law recognizes the need to support the development of new AI and reaffirms the AI regulatory sandbox framework introduced in the Previous AI Draft Laws, which includes:
- Definitions of “AI Entrepreneur” and “AI Innovation”;
- Voluntary participation; and
- ETDA’s oversight where supervision is necessary.
In addition to the existing principles in the Previous AI Draft Laws, a safe harbor principle has been introduced into the regulatory sandbox framework:
- Entities participating in the sandbox and acting in good faith will not face penalties for harm that occurs during the testing phase.
- This protection does not cover civil damages caused to the public.
ETDA also suggested that the AI sandbox should be considered under a three-issue framework:
- Testing in real-world conditions: testing is allowed in a real-world environment under supervision, ensuring that the rules are designed to be consistent with reality; there must be an agreement between the private sector and the government agency that will set up the environment.
- Reuse of personal data in AI development for the public interest: personal data previously collected for other purposes may be used to develop or test AI intended for public benefit; criteria must be established to protect the privacy of data owners.
- Safe harbor: government agencies should not penalize testers who complied with all testing rules and recommendations and acted honestly; however, testers remain liable for civil damages caused to the public.
The table below shows the differences between the Previous AI Draft Laws and the Draft Principles Law:
| Subject | Previous AI Draft Laws | Draft Principles Law |
|---|---|---|
| Definitions | ✓ | ✓ |
| Voluntary participation | ✓ | ✓ |
| ETDA oversight | ✓ | ✓ |
| Safe harbor | ✖ | ✓ |
| Three-issue framework | ✖ | ✓ |
2. Data Sharing
The Draft Principles Law recognizes the data-sharing regime of the Previous AI Draft Laws to bolster the development of AI, as follows:
| Subject | Detail | Previous AI Draft Laws | Draft Principles Law |
|---|---|---|---|
| ETDA’s role | ETDA must promote, support, and assist governmental entities and individuals in data sharing for the purpose of innovation. | ✓ | ✓ |
| Regulation of intermediaries | Intermediaries involved in sharing, exchanging, selling, and buying data used in AI development must be regulated. | ✓ | ✓ |
| Distinction between non-commercial and commercial use | Mining online data for non-commercial AI purposes will be permitted; commercial use, however, will remain subject to the data owner’s rights. | ✖ | ✓ |
3. AI Risk Assessment
The Draft Principles Law reaffirms the mechanism in the Previous AI Draft Laws that delegates power to the relevant authorities, such as ETDA, to decide whether an AI poses a high risk. The Draft Principles Law also provides more detail on the duties of high-risk AI providers:
- Risk management: high-risk AI providers must implement a risk management framework. ETDA cites ISO/IEC 42001:2023 and the NIST Risk Management Framework as examples. Failure to implement such a framework does not in itself constitute a violation, since the duty serves only as a clarification of the duty of care; this means that if a person suffers harm from a high-risk AI that lacks risk management, the provider will be liable for the damage.
- Legal Representative: Foreign high-risk AI providers will be required to appoint a local representative in Thailand and notify the relevant authorities of such appointments.
- Serious Incident: High-risk AI providers will be required to report serious incidents to the enforcement agency.
In addition to the duties introduced for high-risk AI providers, the Draft Principles Law also introduces duties for high-risk AI deployers:
- Human oversight: maintain human oversight of the AI system.
- Logging: keep reasonable documentation of operational log files.
- Input data: ensure the quality of input data.
- Notification: notify affected individuals where the AI system may cause serious damage.
- Compliance: comply with the relevant authorities regarding the application of the AI.
B. Newly Introduced Principles
1. AI Governance Center (“AIGC”) as an Enforcement Institution
Under the Draft Principles Law, the AIGC, which operates under ETDA, will play a more significant role in regulating AI, including, among other things, developing AI governance, providing recommendations to developers, supporting AI sandbox testing, compiling AI-readiness statistics, and handling general AI-related issues.
2. General Principles
While the Previous AI Draft Laws did not explicitly state any general principles, the Draft Principles Law introduces several general principles applicable in a broad context, including:
- Non-discrimination: actions performed solely by AI, without human intervention, have binding legal status unless an explicit exception applies.
- AI as a tool: confirms the assumption that all acts performed by AI must be attributable to a human; developers cannot invoke the AI’s unpredictability as a defense.
- Unexpected action: where the acting party could not foresee the consequences of its action while the other party could, the act constitutes an unexpected action, and the acting party will not be liable.
- Right to explanation/appeal: confirms the right to know how an AI system is developed and the right to appeal its decisions. These principles have not yet been confirmed for inclusion in the draft; changes are possible, such as limiting their applicability to high-risk AI only.
The Draft Principles Law has only just completed its first public hearing, which ended on 9 June 2025, so stakeholders should stay updated on revisions to it, as ETDA may introduce more principles as the situation evolves. We will continue to follow developments in AI regulation in Thailand and will provide updates as soon as they become available.
Authors: Praewpan (Am) Hinchiranan / Tanik Tangburanakij