In this context, Law No. 27 of 2022 concerning Personal Data Protection (UU PDP) serves as your "operating license". This law, which has been fully in effect since October 2024, strictly regulates how you may obtain, process, and use this "fuel".9 Compliance with the PDP Law is not an option, but an absolute prerequisite for operating AI in Indonesia.

Navigating Law No. 27 of 2022 concerning Personal Data Protection (UU PDP)

The PDP Law introduces a series of strict obligations for Personal Data Controllers (your company) and grants strong rights to Data Subjects (customers, employees, or other individuals). For businesses using AI, several aspects of the PDP Law are crucial.

Identify Your "Fuel": General vs. Specific Personal Data

The first step in compliance is understanding the type of data you process. The PDP Law distinguishes between two categories of personal data:10

  1. Personal Data of a General Nature: Includes full name, gender, nationality, religion, marital status, and/or personal data combined to identify a person (e.g., a telephone number combined with an IP address).
  2. Personal Data of a Specific Nature: Data whose processing can have a greater impact and risk for individuals, such as discrimination. This category includes:
     • Health data and information.
     • Biometric data (e.g., facial scans, fingerprints).
     • Genetic data.
     • Criminal records.
     • Children's data.
     • Personal financial data.
     • Other data in accordance with laws and regulations.

The practical implications are clear: if your AI model is trained or operates using specific personal data—such as in a fintech application that processes financial data or health-tech that analyzes health data—you are in a high-risk zone. This demands a much stricter level of protection, security, and legal justification.

Establishing a Legal Basis: Consent vs. Legitimate Interest

You cannot simply take data and feed it into an AI model. The PDP Law requires a legitimate legal basis for each data processing activity.12 In the AI context, the two most relevant legal bases are:

  • Consent: This is the safest foundation. However, consent must be explicit, specific, and informed. A general "I agree to the Terms and Conditions" checkbox is no longer sufficient, especially if the data will be used to train complex AI models. You must clearly state for what purpose the data will be used (e.g., "to train our product recommendation algorithm") and obtain explicit consent for that purpose.
  • Legitimate Interest: This legal basis offers greater flexibility but carries a higher risk. To use it, companies must conduct a "balancing test": prove that your legitimate interest in processing the data (e.g., to detect fraud) outweighs the rights and freedoms of the data subject. Relying on this basis to extensively train AI models on personal data is a very risky strategy and demands very strong justification and documentation through a Data Protection Impact Assessment (DPIA).13

Risk Mitigation Techniques: Anonymization and Pseudonymization

To mitigate legal risks, especially during the model training phase, companies can use two main techniques:

  1. Anonymization: The process of permanently removing all personal identifiers from a dataset so that individuals can no longer be identified. If data is truly anonymized, it is no longer considered personal data and falls outside the scope of the PDP Law. This is the gold standard for risk mitigation.
  2. Pseudonymization: The process of replacing personal identifiers (such as names or the NIK, Indonesia's national identity number) with pseudonyms or artificial tokens. While this is a good security measure, pseudonymized data is still considered personal data under the PDP Law because the original identity can, in theory, still be recovered. All obligations under the PDP Law therefore remain in effect.

Understanding this distinction is crucial. Many companies mistakenly consider pseudonymization to be the same as anonymization, when from a legal perspective, the level of risk is very different.
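To make the distinction concrete, here is a minimal Python sketch contrasting the two techniques; the record fields and key handling are hypothetical illustrations, not prescriptions from the PDP Law:

```python
import hashlib
import secrets

record = {"nik": "3171234567890001", "name": "Budi", "purchase_total": 1_250_000}

# Pseudonymization: replace the identifier with a keyed token.
# Whoever holds the key can re-identify the individual, so the
# result is still personal data under the PDP Law.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(rec: dict) -> dict:
    token = hashlib.blake2b(
        rec["nik"].encode(), key=SECRET_KEY, digest_size=16
    ).hexdigest()
    return {"subject_token": token, "purchase_total": rec["purchase_total"]}

# Anonymization: irreversibly remove all identifiers. Only if no
# party can re-identify the individual does the result fall
# outside the scope of the PDP Law.
def anonymize(rec: dict) -> dict:
    return {"purchase_total": rec["purchase_total"]}

pseudo = pseudonymize(record)
anon = anonymize(record)
```

Note that in practice, true anonymization requires more than dropping direct identifiers: combinations of remaining attributes (quasi-identifiers) must also be assessed for re-identification risk.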

Biggest Challenge: Fulfilling Data Subject Rights in AI Models

The PDP Law grants a series of rights to individuals, including the rights to access and correct their data and, most challenging for AI, the right to erasure.12

The challenge is both technical and fundamental. Deleting a person's data from a traditional database is as simple as deleting a single row. In complex machine learning models, however, individual training data has been absorbed into millions or billions of interconnected parameters (weights) that form the model's "intelligence". Removing the influence of a single individual's data is like trying to extract a single egg from a baked cake. This process, known as machine unlearning, is complex, expensive, and still an active area of research.15

Failure to have a technical strategy in place to fulfill the right to erasure means that your AI system is inherently non-compliant with the PDP Law. This is no longer just a legal issue, but an engineering and product design issue (privacy engineering) that must be addressed by the technical and product teams from the outset of development.

Proactive Obligation: Data Protection Impact Assessment (DPIA)

The PDP Law requires Data Controllers to conduct a DPIA before carrying out data processing activities that pose a "high potential risk" to data subjects.12 Explicit triggers for a DPIA include:

  • Automated decision-making with significant impact.
  • Processing of specific personal data.
  • Large-scale data processing.
  • Use of new technologies.

Almost every significant AI implementation in business will trigger at least one, if not all, of these conditions. Therefore, conducting a comprehensive DPIA is not optional, but a non-negotiable legal obligation. A DPIA serves as a proactive risk management tool to identify and mitigate potential privacy hazards before AI systems are launched.
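The trigger logic above can be sketched as a simple screening checklist; the flag names below are illustrative paraphrases of the statutory triggers, not terms from the PDP Law's implementing rules:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Illustrative description of a planned data-processing activity."""
    automated_decisions_with_significant_impact: bool
    processes_specific_personal_data: bool
    large_scale: bool
    uses_new_technology: bool

def dpia_required(activity: ProcessingActivity) -> bool:
    """Any single trigger is enough to require a DPIA."""
    return any([
        activity.automated_decisions_with_significant_impact,
        activity.processes_specific_personal_data,
        activity.large_scale,
        activity.uses_new_technology,
    ])

# An AI credit-scoring system typically hits several triggers at once.
credit_scoring = ProcessingActivity(
    automated_decisions_with_significant_impact=True,
    processes_specific_personal_data=True,  # personal financial data
    large_scale=True,
    uses_new_technology=True,
)
```

The point of the sketch is the `any()` call: a DPIA obligation arises as soon as one trigger applies, so the assessment cannot be skipped just because the other conditions are absent.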

Although the PDP Law has come into full effect, the independent supervisory institution mandated by the law to enforce administrative sanctions, including fines of up to 2% of annual revenue,17 has not yet been formed.18 Currently, the Ministry of Communication and Informatics (Kominfo) acts as the interim authority.9 This delay creates an enforcement gap that may give some companies a false sense of security. However, this is the calm before the storm. Once the new supervisory institution is formed, it will likely be under pressure to demonstrate its effectiveness, which could lead to high-profile investigations and sanctions against non-compliant companies. Companies that proactively build compliant AI systems from the outset (privacy by design) will not only avoid future sanctions, but will also hold a significant defensive advantage when the era of strict enforcement begins.

Two Central Questions Haunting Every Generative AI User

For businesses using generative AI to create content—be it images for marketing campaigns, text for websites, or code for software—there are two fundamental legal questions lurking behind every "generate" click:

  1. From the input side: Is it legal for the AI platform we use to train its models by absorbing billions of images and texts from the internet, most of which are copyrighted?
  2. From the output side: Who actually owns the copyright to the logo, article, or music generated by AI? Is it our company, the AI developer, or no one at all?

The answers to these questions under Indonesian law have profound implications for the value and protection of the intellectual property assets you create.

Input Risks: Using Copyrighted Material to Train AI

Sophisticated generative AI models, such as large language models (LLMs) or image diffusion models, learn by analyzing patterns in massive datasets. This process often involves scraping data from the internet at enormous scale, which inevitably sweeps in millions of copyrighted works such as news articles, photos, artwork, and music, without explicit permission from the owners.21

In some jurisdictions, such as the European Union and Japan, there are specific legal exceptions for Text and Data Mining (TDM) that allow the use of copyrighted material for research and data analysis purposes under certain conditions.22 However, Law No. 28 of 2014 on Copyright (the Copyright Law) in Indonesia has no explicit TDM exception.23

This creates significant legal risk. Training AI on copyrighted works from Indonesia without permission can be considered unauthorized reproduction and copyright infringement on a massive scale. Although primary responsibility will likely fall on the developers or providers of the AI platform,21 user companies can also be drawn into disputes, especially if they knew or should have known how the platform they use works.

This is the core of the AI copyright issue. Let's break it down based on the applicable law in Indonesia.

The status of copyright protection in Indonesia depends on the fundamental definitions of "Creator" and "Creation".

  • Article 1 number 2 of the Copyright Law defines a Creator as "a person or several people" who individually or jointly produce a creation that is distinctive and personal.21
  • Article 1 number 3 of the Copyright Law defines Creation as "every work of creation in the fields of science, art, and literature produced by inspiration, ability, thought, imagination, dexterity, skill, or expertise expressed in tangible form".

The keywords in these definitions are "person" and the human mind. Indonesian law, in its current framework, explicitly ties the act of creation to a human legal subject. An AI system, no matter how sophisticated, is not a "person" in the legal sense. It has no mind, inspiration, or personal will; it is a tool that executes complex algorithms based on the data given to it.

The conclusion is unavoidable: because AI is not a human legal subject, it cannot be a "Creator". Consequently, a work generated purely and autonomously by AI, without significant creative contribution from humans, does not qualify as a "Creation" that can be protected by copyright under Indonesian law.21

Official Confirmation: Views of the Directorate General of Intellectual Property (DJKI)

This legal analysis is not merely an academic interpretation; it aligns with the official view of the government. The Directorate General of Intellectual Property (DJKI) has explicitly stated that AI is viewed as a legal object (a tool, like a camera or a brush), not a legal subject (an entity that can hold rights and obligations like a creator).27 The computer program that underlies the AI itself can be copyrighted as software, but the work it produces cannot be attributed to the AI.

Critical Business Implications: Your Assets May Be Public Domain

The consequences of this legal vacuum are very real for businesses. If your company uses a generative AI platform to create logos, advertising slogans, or product designs, and the human contribution is limited to providing a simple prompt, then the resulting output is most likely in the public domain.28

This means that anyone, including your competitors, can legally copy, use, and modify these assets without permission and without paying royalties. Imagine investing significant resources to build a brand identity around a logo created by AI, only to find that your competitor launches a product with an identical logo, and you have no legal basis to stop them. This fundamentally undermines the value of intellectual property as a competitive defense.

This phenomenon creates the "AI Copyright Paradox": companies invest in tools to create valuable digital assets, but they essentially cannot own or legally protect those assets. This forces business leaders to rethink their protection strategies, perhaps by focusing more on trademark registration (which protects brand identity, not artistic works) or by carefully documenting the process of substantial human intervention and modification to strengthen copyright claims.

It should be noted that this legal landscape is not uniform globally. The positions of Indonesia and the United States, which require human authorship, contrast sharply with that of the UK.29 UK copyright law specifically protects "computer-generated works", assigning ownership to the person who made the arrangements necessary for the creation of the work.29 This difference is crucial for Indonesian companies operating in the global market, as the protectability of their digital assets can change drastically depending on the jurisdiction in which they operate.

Narrative Case Study: Who is Responsible?

Imagine an increasingly likely scenario. A leading fintech platform in Indonesia uses a sophisticated AI system to assess creditworthiness. Unbeknownst to the company, the algorithm, trained on historical data, has learned to associate certain postal codes, which happen to be predominantly inhabited by ethnic minority groups, with higher credit risk. As a result, the system systematically rejects all applications from those areas, triggering accusations of discrimination, severe reputational damage, and potential lawsuits.

Or, consider another scenario: a hospital adopts medical diagnostic AI to help doctors analyze radiology images. Due to undetected data anomalies in its training, the AI misidentifies benign tumors as malignant, leading to recommendations for unnecessary and harmful invasive procedures for patients.

In both of these cases, real harm has occurred. The crucial question for business leaders, boards of directors, and legal teams is: Who should be legally responsible?

Chain of Responsibility: Dissecting Potential Guilty Parties

The autonomous and distributed nature of AI complicates traditional lines of accountability. Unlike conventional defective products, AI errors can stem from various points in its lifecycle. Current legal frameworks, designed in the pre-AI era, are often inadequate to address these complexities.32 Potential parties that could be held liable include:

  • Developers: Parties who design, code, and train AI models. Their responsibility may arise from product liability. If losses are caused by design defects, biased algorithms, or careless selection of training data, developers may be held liable.32
  • Deployer/Operator: This is your company—the party that selects, integrates, and operates the AI system in its business processes. Responsibility may arise from negligent implementation. For example, using an AI system for purposes it was not designed for, failing to conduct adequate due diligence before adoption, or not providing sufficient human oversight of high-risk decisions.
  • End-User: In some cases, end-users (e.g., employees or customers interacting with the AI) may have contributory negligence. If they intentionally misuse the AI system or ignore clear warnings and instructions, their responsibility may be factored in to reduce the liability of other parties.

In Indonesia, the main basis for claiming damages for non-contractual fault is the doctrine of Tort, which is regulated in Article 1365 of the Civil Code. This article states, "Every unlawful act that causes harm to another person obliges the person who caused the harm through their fault to compensate for that harm."33

To succeed in a tort claim, the plaintiff must prove four cumulative elements 34:

  1. Existence of an Unlawful Act: Harmful AI actions or decisions (e.g., discriminatory decisions or incorrect diagnoses) can be considered unlawful acts, either because they violate the law or because they are contrary to "propriety, prudence, and morality" in society.
  2. Existence of Fault/Negligence: This is the most difficult element to prove in the context of AI. The fault may lie in the developer's negligence in designing the algorithm or in the user company's negligence in implementing it.
  3. Existence of Damage: The plaintiff must be able to prove that they suffered real damages, both material (e.g., financial loss) and immaterial (e.g., reputational damage).
  4. Existence of Causation: There must be a direct causal relationship between the fault (element 2) and the damage suffered (element 3).

The biggest challenge for the plaintiff is proving the elements of "fault" and "causation". The "black box" nature of many advanced AI systems makes it almost impossible for outsiders to pinpoint exactly where the defect lies in the algorithm and how that defect directly caused the damage.

Beyond Tort: The Doctrines of Strict Liability and Vicarious Liability

  • Strict Liability: This doctrine, rooted in Article 1367 of the Civil Code, states that a person is responsible for damage caused by goods under their supervision, regardless of fault.35 In this context, complex and potentially dangerous AI systems can be analogized to such "goods", imposing strict liability on their operators. The argument is that those who benefit from the use of advanced technology must also bear its inherent risks.35
  • Vicarious Liability: This doctrine typically applies in employment relationships, where an employer is liable for the actions of their employees. Under the ITE Law, AI can be considered an "electronic agent" operated by a party.32 Thus, companies using AI can be held liable for the "actions" of their electronic agents, just as they are liable for the actions of human employees.36

The "black box" nature of AI fundamentally challenges the traditional burden of proof in civil law. The difficulty plaintiffs face in proving specific fault may encourage Indonesian courts to adopt a less defendant-friendly approach. Instead of dismissing claims for lack of evidence, judges may be inclined to shift the burden of proof onto AI companies to prove that their systems are not at fault, or, more formally, to apply the doctrine of strict liability.

The implications for businesses are clear: hiding behind the technical complexity of AI is not a valid defense strategy. Rather, that complexity itself becomes a source of liability risk. The best defense is not opacity, but transparency and the ability to explain how the system works (explainability, or XAI). Companies that can proactively document their due diligence, bias audits, and AI decision-making processes will be in a much stronger position to defend themselves in court.

Pillar 4: Maintaining Fairness - Mitigating the Risks of Discrimination and Algorithmic Bias

Understanding "Algorithmic Bias": When Past Data Creates Future Risks

One of the most dangerous legal and reputational risks of AI is algorithmic bias. In simple business terms, algorithmic bias occurs when an AI system produces outputs that are systematically unfair or discriminatory towards certain groups of people. The root of the problem often lies in the data used to train the AI. If the training data reflects existing historical biases in society, the AI will not only learn those biases, but will also replicate and even amplify them on a large scale and at machine speed.16

Real-world examples of this risk are already emerging in Indonesia. In the fintech sector, there are concerns that credit-scoring algorithms may unfairly discriminate against potential borrowers. An AI system may not use explicit variables such as race or religion, but it may use proxy variables such as postal code, the average education level in an area, or even the type of mobile device used. If these variables are strongly correlated with socio-economic status or ethnicity, the result can be hidden but systematic discrimination.38

For companies, the result is a double disaster: it not only damages reputation and erodes customer trust, but also opens the door to lawsuits on the grounds of discrimination, which can violate various regulations, including the principle of fairness in consumer protection and human rights.

National Ethical Guidelines: Circular Letter of the Minister of Communication and Information No. 9 of 2023

Recognizing this risk, the Indonesian government has taken initial steps by issuing Circular Letter (SE) of the Minister of Communication and Informatics Number 9 of 2023 concerning Artificial Intelligence Ethics.8 Although it is a "soft regulation" and not legally binding like a law, this SE serves as a crucial moral compass and practical guide. It sets out the government's expectations for the responsible development and use of AI.

Unpacking Key Principles for Business

The SE of the Minister of Communication and Informatics outlines several ethical values that should guide companies. For business leaders, these values translate into concrete actions 39:

  • Inclusivity and Fairness: This means more than just good intentions. Companies must actively test their AI models to ensure that there are no detrimental outcomes for specific demographic groups. This involves statistical analysis to detect disproportionate impacts.
  • Transparency: Companies must be able to explain, at least at a comprehensible level, how their AI systems arrive at a decision or recommendation. This does not mean having to open source code, but being able to explain the main factors that influence the output.
  • Accountability and Credibility: There must be a clear person in charge within the organization for each AI system and the results it produces. When AI makes a mistake, there must be a clear path of accountability, not just blaming the "algorithm."
  • Security: The AI system itself must be protected from attacks or manipulation that could alter its behavior and cause unfair or harmful results.

In Indonesia, AI ethics is no longer just a philosophical debate or a topic for CSR reports. It has become a direct proxy for legal liability. The principles outlined in SE Menkominfo No. 9 of 2023 are likely to be adopted by courts as a "standard of care" in determining whether a company has acted negligently in tort cases. If a company fails to take reasonable steps to ensure fairness and transparency in its AI systems, as outlined in the SE, then that failure may be treated as evidence of negligence that caused harm. Thus, your company's AI ethics framework is one of its most important legal risk mitigation tools.

Practical Mitigation Steps to Tame Bias

Addressing algorithmic bias requires a proactive and multi-disciplinary approach. Here are practical steps that companies can take:

  1. Data Audit and Model Testing: Before launching an AI model, conduct a thorough audit of the training data to identify and mitigate historical biases. After launch, test periodically to monitor model drift and the emergence of new biases over time.38
  2. Building Strong Internal Governance: Form a cross-functional internal AI ethics review committee or board. This team should consist of representatives from legal, technology, product, and business teams. Their task is to review and approve high-risk AI projects, set standards, and oversee compliance with the company's ethical policies.43
  3. Promoting Team Diversity: Homogeneous development teams are more likely to unconsciously embed their own biases into the systems they build. Recruiting and retaining a diverse team from various demographic backgrounds, disciplines, and perspectives can significantly reduce the risk of creating biased products.42
  4. Implement Fairness by Design: Integrate fairness metrics (such as equalized odds or demographic parity) directly into the AI model development and evaluation process. This ensures that fairness is considered a design goal, not an afterthought.38

By taking these steps, companies can significantly reduce the legal and reputational risks associated with algorithmic bias, as well as build fairer, more reliable, and trustworthy AI products.
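As an illustration of the "fairness by design" step above, the sketch below computes a demographic parity difference for a hypothetical approval model; the group labels, the data, and the 0.10 threshold are illustrative assumptions, not regulatory values:

```python
def demographic_parity_difference(approvals, groups):
    """Absolute gap in approval rates between two groups.

    approvals: list of 0/1 model decisions
    groups:    parallel list of group labels ("A" or "B")
    """
    def rate(g):
        decisions = [a for a, grp in zip(approvals, groups) if grp == g]
        return sum(decisions) / len(decisions)
    return abs(rate("A") - rate("B"))

# Hypothetical audit sample: group B is approved far less often.
approvals = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(approvals, groups)
print(f"approval-rate gap: {gap:.2f}")  # 0.80 vs 0.20, a gap of 0.60
if gap > 0.10:  # illustrative internal threshold
    print("flag for bias review before deployment")
```

Monitoring a metric like this over time is also one concrete way to implement the periodic "model drift" testing described in step 1.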

Pillar 5: Fortress of Defense - Contract Clauses and Internal Policies

Contracts as a Key Risk Allocation Tool

After navigating the complexities of data, copyright, and liability, the final pillar is about building a solid fortress of defense. In the world of AI, your primary defense is carefully crafted contracts and clear internal policies. Contracts are no longer just static legal documents; they are dynamic risk management tools for allocating responsibilities, protecting assets, and limiting your exposure to losses.

When Purchasing AI Services from Vendors (e.g., OpenAI, Google Cloud, SAP)

When your company subscribes to AI services from a third party, you are essentially inviting the vendor's technology into the core of your operations. Without careful negotiation, you may unknowingly accept all the risks associated with that technology. It is important to proactively negotiate clauses that shift the risk back to the party best able to control it: the vendor itself.

Mandatory Clauses You Should Negotiate

Here is a checklist of critical clauses that should be the focus of your legal and business teams when reviewing AI service procurement contracts.45

Critical Clauses and Why They Matter to You:

  • Indemnity Clause: The vendor must agree to defend and indemnify your company against any third-party claims. This is especially important in two scenarios: (1) Copyright Infringement: if the vendor's AI was trained on illegally copied data and its output infringes the copyrights of others. (2) Data Privacy Breach: if the vendor's service causes a violation of the PDP Law. Without this clause, you will bear the legal costs and damages.
  • Compliance Warranties: The vendor must explicitly warrant that their services, including how they process data, fully comply with all applicable laws in Indonesia, especially the PDP Law. This gives you a legal basis to sue the vendor if their services turn out to be unlawful.
  • Data Confidentiality & Use: The contract must expressly prohibit the vendor from using the confidential data you enter (inputs/prompts) for any purpose other than providing services to you. Specifically, prohibit them from using your data to train AI models for other customers. This prevents your trade secrets from becoming part of the collective "intelligence" available to your competitors.
  • Intellectual Property (IP) Ownership: The contract should clearly state that you retain full ownership of your input data and, most importantly, that you own the rights (to the extent permitted by law) to the output the AI generates from your input. Avoid provisions that give the vendor rights to your output.
  • Limitation of Liability: Every vendor will try to limit its liability. Pay close attention to this clause. Avoid unreasonable limitations (e.g., liability capped at the service fees paid in the last three months). Negotiate higher caps, and ensure the limitations do not apply to breaches of confidentiality or to indemnification obligations.
  • Audit Rights: You should have the contractual right to audit the vendor's compliance with its data security and privacy obligations. This gives you a tool to verify that the vendor is actually doing what it promised.

When Providing AI Services to Your Customers

If your company is the party developing and providing AI services, the perspective reverses. Your goal is to draft Terms and Conditions (Terms of Service) that protect your company from unlimited liability. Some key elements to include are:

  • Accuracy Disclaimers: Clearly state that AI output may contain inaccuracies or "hallucinations" and should not be relied upon for critical advice (e.g., medical, legal, or financial) without verification by a human professional.
  • Acceptable Use Policy: Prohibit users from using your service for illegal purposes, infringing copyrights, or creating harmful content.
  • Usage Limitations: Define clear limitations on how the output can be used.
  • Limitation of Liability: Just as you should be wary of this clause when buying, you should use it effectively when selling. Limit your liability for losses arising from your customers’ use of your service, subject to applicable law.

For Internal Company Use: Responsible AI Usage Policy

One of the biggest and fastest-emerging risks of AI comes not from the outside, but from within: your employees. Without clear guidance, employees may inadvertently enter highly confidential company information—such as financial reports, client lists, product strategies, or source code—into public generative AI platforms like the free version of ChatGPT.47 Once that data is entered, it potentially becomes part of the model’s training data and could reappear in responses to other users, resulting in a massive and irreparable data leak.

Every company, regardless of size, needs an internal AI usage policy now. This policy should be a clear, practical, and widely communicated document.

Core Elements of an Internal AI Usage Policy:

  • Objectives and Scope: Explain why the policy exists (to protect company assets, ensure compliance, and encourage responsible innovation) and who it covers (all employees, contractors, etc.).41
  • Approved vs. Prohibited AI Tools: Create a clear list. For example: "The company-licensed Enterprise version of Microsoft Copilot is approved. The use of free versions of ChatGPT, Google Gemini, or other public AI platforms for work is strictly prohibited."47
  • Confidential & Personal Data Protection: This is the most important rule. Establish an absolute prohibition on entering confidential company information, customer personal data, or employee personal data into unapproved AI tools that lack adequate data security guarantees.47
  • Critical Verification of Output: Require employees to always review, fact-check, and critically edit any AI-generated output. Employees must understand that they are fully responsible for the accuracy and quality of the final work they submit.47
  • Copyright & Intellectual Property Compliance: Remind employees that they must not use AI to generate content that infringes third-party copyrights. Explain that AI outputs may not be copyrightable and should not be used for core company assets without legal review.
  • Accountability: Emphasize that AI is an aid, not a replacement. Employees using AI remain fully responsible for their work, including any errors or biases in AI-assisted output.
  • Reporting and Sanctions: Provide a clear channel for employees to ask questions or report potential misuse. Explain the consequences of violations, which can range from warnings to more serious disciplinary action.

By building this three-layered defense—strong purchase contracts, protective terms of service, and strict internal policies—companies can significantly mitigate the legal and operational risks inherent in AI technology adoption.

Closing: Towards Responsible and Legally Sound AI Adoption

An Integrated Framework for AI Governance

The journey of navigating the legal landscape of AI in Indonesia demands more than a partial understanding. The five pillars we have dissected (Data, Copyright, Liability, Ethics, and Contracts) are not separate silos, but interconnected components of an integrated AI governance framework. Compliance with the PDP Law (Pillar 1) directly affects your legal liability (Pillar 3). The copyright status of AI output (Pillar 2) must be explicitly regulated in your contracts (Pillar 5). And ethical principles (Pillar 4) serve as the standard of care courts will use to assess your negligence (Pillar 3).

Successful business leaders in the AI era are those who can see this big picture, integrating legal, ethical, and commercial considerations into every stage of the AI lifecycle, from conception and procurement to deployment and oversight.

An Ever-Moving Landscape

It is important to remember that AI regulation in Indonesia is a moving target. Some key developments that business leaders should continue to monitor include:

  • Establishment of the PDP Supervisory Institution: The creation of this independent body, mandated by the PDP Law, will be a turning point, marking the beginning of an era of more aggressive enforcement of administrative sanctions.20
  • Sectoral Regulations: Authorities such as the Financial Services Authority (OJK) have begun issuing specific guidelines for the use of AI in the banking and fintech sectors, and regulators in other sectors, such as health, are likely to follow.50
  • Regulatory Sandbox: The government, through Kominfo and OJK, uses the regulatory sandbox mechanism as a forum to test AI innovations in a controlled environment, the results of which will shape future regulations.50

Vigilance and the ability to adapt to regulatory changes will be key to long-term success.

Compliance Is Not a Burden, But a Competitive Advantage

Ultimately, navigating AI law should not be seen as a burden that hinders innovation. Rather, it is a strategic investment in a solid foundation for sustainable growth. Studies show that legal risks and misinformation are among the biggest concerns businesses have in adopting AI.3 By proactively addressing these risks, companies not only protect themselves from potential losses but also lay the groundwork for a genuine competitive advantage.

Companies that demonstrate a strong commitment to data privacy, algorithmic fairness, and transparency will build the most valuable asset in the digital age: trust. Trust from customers, trust from business partners, and trust from regulators. In the long term, it is this trust that will be a true competitive advantage, allowing your company to not only survive, but also lead in the artificial intelligence revolution in Indonesia.14

Works cited

  1. AI Technology Adoption Becomes a Major Issue in the Business World in 2024 - www.rumahmedia.com, accessed July 11, 2025, https://www.rumahmedia.com/insights/adopsi-teknologi-ai-jadi-isu-utama-dunia-bisnis-di-2024
  2. AI at Scale: The Key to Growth for Businesses in Indonesia, accessed July 11, 2025, https://www.cnbcindonesia.com/opini/20241224154619-14-598512/ai-dalam-skala-besar-kunci-pertumbuhan-bagi-bisnis-di-indonesia
  3. SAP reveals low-growth Indonesian midmarket businesses more ..., accessed July 11, 2025, https://news.sap.com/sea/2024/10/sap-reveals-low-growth-indonesian-midmarket-businesses-more-likely-to-prioritise-gen-ai/
  4. What are the Challenges of AI Adoption for Businesses in Indonesia? - Tempo.co, accessed July 11, 2025, https://www.tempo.co/ekonomi/apa-saja-tantangan-adopsi-ai-bagi-pelaku-usaha-di-indonesia--1663161
  5. Legal Reform of Restrictions on the Use of Artificial Intelligence (AI) in Order to Maintain Public Law in Indonesia - EUDL, accessed July 11, 2025, https://eudl.eu/pdf/10.4108/eai.25-5-2024.2349444
  6. The Urgency of Artificial Intelligence Regulation in Indonesia from the Perspective of Responsive Legal Theory and Sadd Al-Dzariah, accessed July 11, 2025, https://urj.uin-malang.ac.id/index.php/albalad/article/download/6329/1815/
  7. Indonesia: Kominfo announces Government approach for AI governance | News, accessed July 11, 2025, https://www.dataguidance.com/news/indonesia-kominfo-announces-government-approach-ai
  8. Circular Letter of the Minister of Communication and Informatics Number 9 of 2023 concerning the Ethics of Artificial Intelligence Tebit, accessed July 11, 2025, https://www.schoolmedia.id/lipsus/4089/surat-edaran-menkominfo-nomor-9-tahun-2023-tentang-etika-kecerdasan-buatan-tebit
  9. The Personal Data Protection Law Officially Takes Effect: Strict Sanctions for Personal Data Violators - Naval CSIRT, accessed July 11, 2025, https://naval-csirt.tnial.mil.id/uu-pdp-resmi-berlaku-sanksi-tegas-bagi-pelanggar-data-pribadi/
  10. Law No. 27/2022: Protection of Personal Data, accessed July 11, 2025, https://jdih.maritim.go.id/en/uu-no-272022-pelindungan-data-pribadi
  11. Law No. 27 of 2022 - BPK Regulations, accessed July 11, 2025, https://peraturan.bpk.go.id/Details/229798/uu-no-27-tahun-2022
  12. Law Number 27 of 2022 - JDIH Kemkomdigi, accessed July 11, 2025, https://jdih.komdigi.go.id/produk_hukum/view/id/832/t/undangundang+nomor+27+tahun+2022
  13. USE OF PERSONAL DATA AS A MEANS OF ARTIFICIAL INTELLIGENCE TRAINING, accessed July 11, 2025, https://ejournal.cahayailmubangsa.institute/index.php/causa/article/download/148/123
  14. Law Number 27 of 2022 concerning Personal Data Protection (PDP): Maintaining the Security and Privacy of Citizens' Data - JDIH - Kota Semarang, accessed July 11, 2025, https://jdih.semarangkota.go.id/artikel/view/undang-undang-nomor-27-tahun-2022-tentang-pelindungan-data-pribadi-pdp-menjaga-keamanan-dan-privasi-data-warga-negara
  15. Indonesia's Legal Policy on Protecting Personal Data from Artificial Intelligence Abuse, accessed July 11, 2025, https://www.shs-conferences.org/articles/shsconf/pdf/2024/24/shsconf_diges-grace2024_07002.pdf
  16. (PDF) Indonesia's Legal Policy on Protecting Personal Data from Artificial Intelligence Abuse - ResearchGate, accessed July 11, 2025, https://www.researchgate.net/publication/386107720_Indonesia's_Legal_Policy_on_Protecting_Personal_Data_from_Artificial_Intelligence_Abuse
  17. PDP Law Sanctions and How to Avoid Data Breaches - Logique, accessed July 11, 2025, https://www.logique.co.id/blog/2024/11/13/sanksi-uu-pdp/
  18. Mr. Prabowo, The Establishment of a Personal Data Protection Agency is Urgent!, accessed July 11, 2025, https://www.youtube.com/watch?v=N53BBxqAHPo
  19. The Urgency of Establishing a Personal Data Supervisory Institution as a Legal Protection Effort against Cross-Border Personal Data Transfer - Ejournal Undip, accessed July 11, 2025, https://ejournal2.undip.ac.id/index.php/jphi/article/view/23307
  20. Establishment of a PDP Institution as an Independent Institution? - HeyLaw.id, accessed July 11, 2025, https://heylaw.id/blog/pembentukan-lembaga-pdp-sebagai-lembaga-independen
  21. Analysis of Artificial Intelligence Creations According to the Law ..., accessed July 11, 2025, https://rayyanjurnal.com/index.php/jleb/article/view/1763/1647
  22. Copyright Protection Against Songs Involving Artificial Intelligence (AI) In the Music Industry Based on Indonesian Copyright L - CORE, accessed July 11, 2025, https://core.ac.uk/download/647872986.pdf
  23. Issues and Possibilities in Regulating Artificial Intelligence (AI) Related To Copyright in Indonesia - ResearchGate, accessed July 11, 2025, https://www.researchgate.net/publication/381570020_Issues_and_Possibilities_in_Regulating_Artificial_Intelligence_AI_Related_To_Copyright_in_Indonesia
  24. Law No. 28 of 2014 on Copyright, Indonesia, WIPO Lex, accessed July 11, 2025, https://www.wipo.int/wipolex/en/legislation/details/15600
  25. Undang-Undang Nomor 28 Tahun 2014 Tentang ... - JDIH DGIP - DJKI, accessed July 11, 2025, https://jdih.dgip.go.id/produk_hukum/view/id/3/t/undangundang+nomor+28+tahun+2014+tentang+hak+cipta
  26. Copyright Law Reform: Challenges and Opportunities in the Era of Artificial Intelligence, accessed July 11, 2025, https://lk2fhui.law.ui.ac.id/portfolio/reformasi-undang-undang-hak-cipta-tantangan-dan-peluang-era-kecerdasan-buatan/
  27. Artificial Intelligence (AI) as a Legal Object vs ... - Liputan Humas, accessed July 11, 2025, https://dgip.go.id/artikel/detail-artikel-berita/kecerdasan-buatan-ai-sebagai-objek-hukum-vs-subjek-hukum-dalam-pelindungan-kekayaan-intelektual?kategori=liputan-humas
  28. Can Artificial Intelligence (AI) Works Be Registered for Copyright? - Kunci Hukum - Artikel, accessed July 11, 2025, https://www.kuncihukum.com/artikelpage/92
  29. US ruling on AI and copyright reflects ongoing UK debate - Pinsent Masons, accessed July 11, 2025, https://www.pinsentmasons.com/out-law/news/us-ruling-on-ai-and-copyright-reflects-ongoing-uk-debate
  30. Copyright Regulation for AI-Generated Images Legal Approaches in Indonesia and the United States - ResearchGate, accessed July 11, 2025, https://www.researchgate.net/publication/387571040_Copyright_Regulation_for_AI-Generated_Images_Legal_Approaches_in_Indonesia_and_the_United_States
  31. PROBLEMATIKA PERLINDUNGAN HAK CIPTA YANG DIHASILKAN ARTIFICIAL INTELLIGENCE STUDI PERBANDINGAN KONSEP HUKUM INDONESIA DENGAN UNI, accessed July 11, 2025, https://digilib.uin-suka.ac.id/id/eprint/63210/1/17103040029_BAB-I_IV-atau-V_DAFTAR-PUSTAKA.pdf
  32. AI Product Liability in the Indonesian Legal System: Implications for ..., accessed July 11, 2025, https://jurnal.bundamediagrup.co.id/index.php/iuris/article/download/808/522
  33. Unlawful Acts in Introductory Civil Law, accessed July 11, 2025, https://www.pa-sungguminasa.go.id/pdf/Artikel_Pengadilan/73-Perbuatan%20Melawan%20Hukum%20dalam%20Hukum%20Perdata.pdf
  34. Review of Lawsuits Against Acts Against ... - Website DJKN, accessed July 11, 2025, https://www.djkn.kemenkeu.go.id/artikel/baca/14384/Tinjauan-terhadap-Gugatan-Perbuatan-Melawan-Hukum.html
  35. CIVIL LIABILITY FOR ARTIFICIAL INTELLIGENCE THAT CAUSES LOSSES ACCORDING TO LAW IN INDONESIA, accessed July 11, 2025, https://journal.unpar.ac.id/index.php/veritas/article/download/6037/4048/21423
  36. The Urgency of Regulating Artificial Intelligence in the Online Business Sector in Indonesia, accessed July 11, 2025, https://ojs.rewangrencang.com/index.php/JHLG/article/download/104/53/523
  37. RISKS AND MITIGATION OF ARTIFICIAL INTELLIGENCE USE IN EDUCATION - Proceeding of Pekalongan University, accessed July 11, 2025, https://proceeding.unikal.ac.id/index.php/kip/article/download/1640/1186/
  38. Fintech Platform Injustice: AI Bias in Credit Offering and Segmentation, accessed July 11, 2025, https://kumparan.com/yudhi-mada/ketidakadilan-platform-fintech-bias-ai-dalam-penawaran-kredit-dan-segmentasi-24Tbd2eNGTp
  39. Circular Letter of the Minister of Communication and ... - JDIH Kemkomdigi, accessed July 11, 2025, https://jdih.komdigi.go.id/produk_hukum/view/id/883/t/surat+edaran+menteri+komunikasi+dan+informatika+nomor+9+tahun+2023
  40. Don't Misuse It! Here are the Ethics of AI Usage Based on the Circular Letter of the Minister of Communication and Information - Sah! News, accessed July 11, 2025, https://news.sah.co.id/jangan-disalahgunakan-berikut-etika-penggunaan-ai-berdasarkan-surat-edaran-menkominfo/
  41. ARTIFICIAL INTELLIGENCE (AI) USAGE POLICY IN THE WORK ENVIRONMENT (“POLICY”) PT Elang Mahkota Teknologi Tbk (“Pers, accessed July 11, 2025, https://cms.emtek.co.id/media/policy_attachment_en/fajb/Utilizing-Artificial-Intelegence-%28AI%29-In-The-Workplace_Policy_IND.pdf
  42. What is responsible AI? - IBM, accessed July 11, 2025, https://www.ibm.com/id-id/think/topics/responsible-ai
  43. AI In the Workplace Guide and Policy Template, accessed July 11, 2025, https://www.nlbenefits.com/learning-center/guides-templates/comprehensive/ai-in-the-workplace-guide-and-policy-template
  44. AI Strategy - Cloud Adoption Framework - Learn Microsoft, accessed July 11, 2025, https://learn.microsoft.com/id-id/azure/cloud-adoption-framework/scenarios/ai/strategy
  45. Implementation Service Agreement bookmark_border - Google Cloud, accessed July 11, 2025, https://cloud.google.com/terms/professional-services?hl=id
  46. VA Contract Template Bilingual | PDF - Scribd, accessed July 11, 2025, https://id.scribd.com/document/654987403/VA-Contract-Template-Bilingual
  47. Generative AI Use Policy Template for the Social Sector 2024 - NTEN, accessed July 11, 2025, https://word.nten.org/wp-content/uploads/2024/07/GAI-Policy-Template.pdf
  48. AI Usage Policy Template - Lattice, accessed July 11, 2025, https://lattice.com/templates/ai-usage-policy-template
  49. Personal Data Protection Agency, Key to Enforcing PDP Law Compliance - ELSAM, accessed July 11, 2025, https://www.elsam.or.id/siaran-pers/lembaga-pelindungan-data-pribadi--kunci-penegakan-kepatuhan-uu-pdp
  50. Navigating Innovation: Indonesia's Regulatory Sandbox Journey - Tech For Good Institute, accessed July 11, 2025, https://techforgoodinstitute.org/blog/expert-opinion/navigating-innovation-indonesias-regulatory-sandbox-journey/
  51. OJK publishes AI guidance for Indonesian banks - TechTrade Asia, accessed July 11, 2025, https://www.techtradeasia.com/2025/05/ojk-publishes-ai-guidance-for.html
  52. Indonesia's AI Moment for Southeast Asia: Powering Innovation and Accelerating Digital Economy - Source Asia - Microsoft News, accessed July 11, 2025, https://news.microsoft.com/source/asia/features/momen-ai-indonesia-untuk-asia-tenggara-mendukung-inovasi-dan-mengakselerasi-ekonomi-digital/