Taiwan's Approach to AI Governance
2024/06/19
In an era where artificial intelligence (AI) reshapes every facet of life, governance plays a pivotal role in harnessing its benefits while mitigating associated risks. Taiwan, recognizing the dual-edged nature of AI, has embarked on a comprehensive strategy to ensure its development is both ethical and effective. This article delves into Taiwan's AI governance framework, exploring its strategic pillars, regulatory milestones, and future directions.
I. Taiwan's AI Governance Vision: Taiwan AI Action Plan 2.0
Taiwan has long viewed AI as a transformative force that must be guided with a careful balance of innovation and regulation. With the advent of technologies capable of influencing democracy, privacy, and social stability, Taiwan's approach is rooted in human-centric values. The nation's strategy is aligned with global movements towards responsible AI, drawing lessons from international standards like those set by the European Union's Artificial Intelligence Act.
The "Taiwan AI Action Plan 2.0" is the cornerstone of this strategy. It is a multi-faceted plan designed to boost Taiwan's AI capabilities through five key components:
1. Talent Development: Enhancing the quality and quantity of AI professionals while improving public AI literacy through targeted education and training initiatives.
2. Technological and Industrial Advancement: Focusing on critical AI technologies and applications to foster industrial growth and creating the Trustworthy AI Dialogue Engine (TAIDE) that communicates in Traditional Chinese.
3. Supportive Infrastructure: Establishing robust AI governance infrastructure to facilitate industry and governmental regulation, and to foster compliance with international standards.
4. International Collaboration: Expanding Taiwan's role in international AI forums, such as the Global Partnership on AI (GPAI), to collaborate on developing trustworthy AI practices.
5. Societal and Humanitarian Engagement: Utilizing AI to tackle pressing societal challenges like labor shortages, an aging population, and environmental sustainability.
II. Guidance-before-legislation
To facilitate gradual adaptation to the evolving legal landscape of artificial intelligence and to maintain flexibility in governance, Taiwan employs a "guidance-before-legislation" approach. This strategy prioritizes the rollout of non-binding guidelines as an initial step, allowing agencies and industry to adjust before formal legislation is enacted where needed.
Taiwan adopts a proactive approach in AI governance, facilitated by the Executive Yuan. This method involves consistent inter-departmental collaborations to create a unified regulatory landscape. Each ministry is actively formulating and refining guidelines to address the specific challenges and opportunities presented by AI within their areas of responsibility, spanning finance, healthcare, transportation, and cultural sectors.
III. Next step: Artificial Intelligence Basic Act
The drafting of the "Artificial Intelligence Basic Act," anticipated for legislative review in 2024, marks a significant step towards codifying Taiwan's AI governance. Built on seven foundational principles—transparency, privacy, autonomy, fairness, cybersecurity, sustainable development, and accountability—the act will serve as the backbone for all AI-related activities and developments in Taiwan.
By establishing rigorous standards and evaluation mechanisms, this law will not only govern but also guide the ethical deployment of AI technologies, ensuring that they are beneficial and safe for all.
IV. Conclusion
As AI continues to evolve, the need for robust governance frameworks becomes increasingly critical. Taiwan is setting a global standard for AI governance that is both ethical and effective. Through legislation, active international cooperation, and a steadfast commitment to human-centric values, Taiwan is shaping a future where AI technology not only thrives but also aligns seamlessly with societal norms and values.
Blockchain and General Data Protection Regulation (GDPR) compliance issues
2019
I. Brief
Blockchain technology can solve the problem of trust between data demanders and data providers. In a centralized model, data demanders can only trust that the centralized platform does not contain false information. In a decentralized model, by contrast, data is not controlled by any single group or organization[1], so data demanders can directly verify information such as data source, time, and authorization on the blockchain without worrying about the correctness and authenticity of the data. Some blockchain characteristics sit uneasily with the GDPR: immutability, for example, conflicts with the right to erasure (also known as the right to be forgotten). With encryption and one-time pad (OTP) techniques, however, data subjects can have their data stored off-chain or modified at any time on a decentralized platform, so the concern that data on a blockchain cannot meet GDPR requirements has gradually faded.
II. What is the GDPR?
The purpose of the EU GDPR is to protect users' data and to prevent large online platforms or enterprises from collecting or using that data without permission. Violators can be fined up to 20 million euros (roughly 700 million NT dollars) or 4% of worldwide annual revenue of the prior financial year. The aim is to promote the free movement of personal data within the European Union while maintaining an adequate level of data protection. The GDPR is technology-neutral: it applies to any technology used to process personal data. The question therefore arises whether data on a blockchain is subject to the GDPR. Since blockchains are decentralized, one of their original design goals is to prevent large amounts of centralized data from being abused. Blockchains can be divided into permissioned and permissionless blockchains.
The former are also called "private chains" or "enterprise chains," meaning no one can join the blockchain without consent. The latter are also called "public chains," meaning anyone can participate without obtaining consent. A private chain is sometimes not completely decentralized, and demand for blockchain applications has produced a hybrid of the two types, the "consortium chain" (or "alliance chain"), which maintains the privacy of a private chain while retaining characteristics of public chains. Information on a consortium chain is open and transparent, which can conflict with the GDPR.
III. How does the GDPR apply to blockchain?
First, it must be determined whether data on the blockchain is personal data protected by the GDPR. Second, what are the relationships and respective responsibilities of the data subject, the data controller, and the data processor? Finally, we discuss common technical characteristics of blockchain and how the GDPR applies to them.
1. Is data on the blockchain personal data protected by the GDPR?
Starting from its technical characteristics, blockchain technology is commonly decentralized, pseudonymous, immutable, trackable, and encrypted; it is also often described in terms of immutability, authenticity, transparency, uniqueness, and collective consensus. A blockchain is an open, decentralized ledger that can effectively verify and permanently store transactions between two parties. It is a distributed database: all users on the chain can access the database and its history and can directly verify transaction records. Each node uses peer-to-peer transmission to upload or transfer information without third-party intermediation, which is the distinctive "decentralization" feature of the blockchain.
In addition, each node or user on the chain has a unique, identifiable address of more than 30 alphanumeric characters. A user may choose to remain pseudonymous or provide identification, a feature of transparency with pseudonymity[2]. Data on the blockchain is also irreversible: once a transaction is recorded and confirmed on the chain, it is difficult to change and is permanently stored in the database; that is, it is "tamper-resistant"[3]. According to Article 4(1) of the GDPR, "personal data" means any information relating to an identified or identifiable natural person ("data subject"); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. Therefore, if the data subject cannot be identified from the data on the blockchain, the data is anonymous and falls outside the GDPR.
(1) What is anonymization?
According to Opinion 05/2014 on Anonymisation Techniques by the EU's Article 29 Data Protection Working Party, "anonymization" is a technique applied to personal data to achieve irreversible de-identification[4]. The Opinion also treats the hash functions used in blockchains as a pseudonymization technique, because the personal data may still be re-identified. Hashing is therefore not anonymization, and hashed data on a blockchain may still be personal data under the GDPR. As blockchain evolves, techniques may be developed, such as certain encryption processes, that satisfy the anonymization requirements of courts or European data protection authorities and so fall outside the GDPR.
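The point that hashing is pseudonymization rather than anonymization can be illustrated with a minimal sketch: when the input space of an identifier is small or its format is guessable, a hash can be reversed by exhaustive search. The identifier and candidate range below are purely hypothetical.

```python
# Minimal sketch: a hash of an identifier is reversible by brute force
# when the input space is enumerable, so it is pseudonymization, not
# anonymization. The ID value and search range here are hypothetical.
import hashlib

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with its SHA-256 digest, as a chain might store it."""
    return hashlib.sha256(identifier.encode("utf-8")).hexdigest()

# Suppose an on-chain record stores only the hash of a national ID.
on_chain_value = pseudonymize("A123456789")

# An attacker who knows the identifier format simply hashes candidates
# (narrowed to 100 here for illustration) until one matches.
candidates = [f"A1234567{d:02d}" for d in range(100)]
recovered = next((c for c in candidates if pseudonymize(c) == on_chain_value), None)
print(recovered)  # A123456789 -- re-identified, hence still personal data
```

Because the link between digest and identity can be restored, a data subject behind such a record remains "identifiable" in the sense of Article 4(1).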
There are also technical compliance solutions in the industry, such as avoiding storing transaction data directly on the chain.
2. International data transfers
Furthermore, Article 3 of the GDPR provides: "This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not. This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behaviour as far as their behaviour takes place within the Union."[5] In other words, the GDPR applies only when data on the blockchain is not anonymized and the processing involves personal data of data subjects in the EU.
3. Identifying data controllers and data processors
A party that publicly stores EU data subjects' personal data on chain or passes it to third parties may qualify as a "data controller" under Article 4 of the GDPR, and all nodes and miners of the platform may be deemed "joint controllers," assuming joint responsibility with the data controller under the GDPR. Data subjects can then, for example, assert the right to erasure against the data controller. In addition, a blockchain operator may be classified as a "processor": in Backend-as-a-Service (BaaS) products, for instance, third parties provide network infrastructure on which users manage and store personal data. Such cloud service companies provide online services on behalf of customers and do not act as data controllers.
Some commentators believe that private or consortium chains used for applications such as land-record transfers or inter-bank customer-information sharing are more likely to meet GDPR requirements than public-chain applications such as cryptocurrencies (Bitcoin, for example), because they are not completely decentralized[6]. A private or consortium chain is a closed platform containing only a small number of trusted nodes, which makes complying with the GDPR more feasible.
4. Data subject claims
Under Article 17 of the GDPR, the data subject has the right to obtain from the controller the erasure of personal data concerning him or her without undue delay, and the controller is obliged to erase personal data without undue delay on certain grounds. Off-chain storage can help the blockchain industry comply: personal data is stored off the chain, or trusted nodes are allowed to delete the private key of encrypted information, leaving data on the chain that can no longer be read or identified. If the result meets the GDPR's definition of anonymization, the GDPR no longer applies.
IV. Conclusion
In summary, the main difficulties in applying the GDPR to blockchain are: (a) identifying the data controllers and data processors once data subjects have uploaded their data; and (b) the inherently transnational nature of decentralized storage, which raises the question whether the country where each node is located is covered by an "adequacy decision" under Article 45 of the GDPR. If not, it must be considered whether the transfer satisfies the appropriate safeguards of Article 46 or the derogations for specific situations of Article 49.
Reference:
[1] How to Trade Cryptocurrency: A Guide for (Future) Millionaires, https://wikijob.com/trading/cryptocurrency/how-to-trade-cryptocurrency
[2] Donna K. Hammaker, Health Records and the Law 392 (5th ed. 2018).
[3] Marco Iansiti & Karim R. Lakhani, The Truth about Blockchain, Harvard Business Review 95, no. 1 (January-February 2017): 118-125, https://hbr.org/2017/01/the-truth-about-blockchain
[4] Article 29 Data Protection Working Party, Opinion 05/2014 on Anonymisation Techniques (2014), https://www.pdpjournals.com/docs/88197.pdf
[5] Regulation (EU) 2016/679 (General Data Protection Regulation), https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN
[6] Queen Mary University of London, Are blockchains compatible with data privacy law?, https://www.qmul.ac.uk/media/news/2018/hss/are-blockchains-compatible-with-data-privacy-law.html
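The off-chain storage and key-deletion pattern described in section IV.4 above is sometimes called "crypto-shredding." The sketch below illustrates the idea under assumptions of our own (names, the sample record, and the use of a one-time pad, which the article mentions, are illustrative only): the immutable chain stores only a hash commitment, the ciphertext lives off-chain, and erasing the key renders the record permanently unreadable without touching the chain.

```python
# Hypothetical "crypto-shredding" sketch: only a hash commitment goes on
# the immutable chain; the encrypted record stays off-chain; deleting the
# one-time-pad key makes the data unrecoverable, approximating erasure.
import hashlib
import os

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = os.urandom(len(plaintext))          # one-time pad: key as long as the data
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"name=Alice;email=alice@example.com"   # hypothetical personal data
ciphertext, key = otp_encrypt(record)

# Only an integrity commitment is written to the immutable chain.
on_chain_commitment = hashlib.sha256(ciphertext).hexdigest()

# While the key exists, the controller can still serve access requests.
assert otp_decrypt(ciphertext, key) == record

# Right to erasure: a trusted node deletes the key. The on-chain hash and
# even the off-chain ciphertext remain, but the content cannot be read.
del key
```

Whether such key deletion satisfies Article 17 is still debated among European regulators; the sketch only shows why the technique leaves nothing identifiable behind if the key is truly destroyed.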
After the European Union's Artificial Intelligence Act, the draft AI Basic Law is announced in Taiwan
2024/09/19
Countries around the world are currently seeking to establish AI governance principles. The U.S. currently has only AI executive orders and state bills, while the European Union's (EU) first AI law came into effect in August 2024. Taiwan announced a draft AI Basic Law for public comment on July 15, 2024, which, if passed by the Legislative Yuan, will become the world's second special legislation on AI.
Taiwan's Coming AI Basic Law - Legislative Development and Progress
With the successful conclusion of the 2024 Paris Olympics, AI technology has demonstrated its potential on the global stage, bringing new experiences to the public in varied areas such as competition analysis, athlete training, and referee assistance, and showing that AI has crossed over into the sports industry in addition to its known applications in healthcare, finance, transportation, and the arts and culture fields. As AI is applied across industries, it may also bring new risks or impacts to individuals and society, and countries are seeking to establish guidelines and principles for AI governance. The EU's Artificial Intelligence Act took effect in August 2024, while even the U.S., an AI pioneer, currently has only the President's AI executive orders and bills introduced by Congress or state governments. As President Lai announced the promotion of Taiwan as an "Island of Artificial Intelligence," the National Science and Technology Council (NSTC) announced a draft AI Basic Law for public comment on July 15, 2024, proposing the basic values and principles for the development of AI in Taiwan.
What is a basic law in Taiwan?
There are 11 basic laws/acts in Taiwan, including the Fundamental Science and Technology Act and the Ocean Basic Act. A basic law/act is a legislative model that sets out principles, programs, or guidelines for a specific important matter. The AI Basic Law serves as a declaration of policy integration: it states the government's goals and principles and regulates the executive branch, without directly regulating the people or creating rights for substantive claims.
Why does Taiwan need a basic law on artificial intelligence?
AI is evolving rapidly, and its applications are spreading to an ever wider range of areas. If every sector and agency pursued different values, the country could not develop AI coherently. The NSTC's announced draft contains 18 articles. Article 3 first sets out seven common principles, such as human autonomy, with which everything from AI research and development to final market application must comply; Article 4 then declares the government's four major promotional focuses. The most important provision is Article 17, which requires government ministries to review and adjust the functions, businesses, and regulations under their scope in accordance with the Basic Law, so that the executive branch can respond quickly to the changes brought about by AI while sharing a common set of values: promoting innovation while taking human rights into consideration.
7 basic principles
The announced draft AI Basic Law contains the following 7 basic principles:
1. Sustainable development and well-being: Social equity and environmental sustainability should be taken into account. Appropriate education and training should be provided to minimize the possible digital gap, so that people can adapt to the changes brought about by AI.
2. Human autonomy: AI shall support human autonomy, respect fundamental human rights and cultural values such as the right to personal integrity, and allow for human oversight, thereby implementing a human-centered approach that upholds the rule of law and democratic values.
3. Privacy protection and data governance: The privacy of personal data should be properly protected to avoid the risk of data leakage, and the principle of data minimization should be adopted; at the same time, the opening and reuse of non-sensitive data should be promoted.
4. Security and safety: In the research, development, and application of AI, security measures should be established to prevent threats and attacks and to ensure the robustness and safety of the system.
5. Transparency and explainability: The output of AI should be appropriately disclosed or labeled to facilitate the assessment of possible risks and the understanding of the impact on related rights and interests, thereby enhancing the trustworthiness of AI.
6. Fairness and non-discrimination: In the research, development, and application of AI, the risks of bias and discrimination in algorithms should be avoided as much as possible, and AI should not produce discriminatory results for specific groups.
7. Accountability: Corresponding responsibilities, including internal governance responsibilities and external social responsibilities, should be ensured.
4 key areas of promotion
1. Innovative collaboration and talent cultivation: Ensuring the resources and talent needed for AI.
2. Risk management and application responsibility: Risks must be identified and managed before AI systems can be safely applied.
3. Protection of rights and access to data: People's basic rights, such as privacy, cannot be compromised.
4. Regulatory adaptation and business review: Policies and regulations must be agile to keep pace with AI development.
The AI Basic Law is paving the way for Taiwan's future opportunities and challenges. AI development requires sufficient resources, data, and a friendly environment; to ensure the safe application of AI, the different possible risks must first be identified and planned for, and the draft AI Basic Law has drawn an initial blueprint for such innovative development and safe application. In the future, government ministries will need to work together to keep up with the wave of AI innovation in the business and legal regulation of many fields and industries. With a sound legal environment, Taiwan can leverage its advantages in the semiconductor industry and its talent resources to gain a favorable global strategic position in AI development and to pursue the goal of "AI for good," enhancing the well-being of the Taiwanese people.
Hard Law or Soft Law? - Global AI Regulation Developments and Regulatory Considerations
2023/08/18
Since the launch of ChatGPT on November 30, 2022, the technology has been disrupting industries, shifting the way things used to work, and bringing benefits but also problems. Several lawsuits have been filed by artists, writers, and voice actors in the US, claiming that the use of copyrighted materials in training generative AI violates their copyright.[1] AI deepfakes, hallucination, and bias have also become the center of discussion, as the generation of fake news, false information, and biased decisions could deeply affect human rights and society as a whole.[2] To retain the benefits of AI without damaging society, regulators around the world have been accelerating their pace in establishing AI regulations. However, with the technology evolving at such speed and with such uncertainty, there is no consensus on which regulatory approach can effectively safeguard human rights while promoting innovation. This article provides an overview of current AI regulation developments around the world and a preliminary analysis of the pros and cons of different regulatory approaches, and points out further elements that regulators should consider.
I. An overview of the current AI regulation landscape around the world
The EU leads in legislation, with its parliament adopting its position on the AI Act in June 2023 and heading into trilogue meetings that aim to reach an agreement by the end of the year.[3] China has also announced a draft national AI act, scheduled to enter its National People's Congress before the end of 2023.[4] It already has several administrative rules in place, such as the 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis, and the 2023 draft rules on generative AI.[5] Some other countries have taken a softer approach, preferring voluntary guidelines and testing schemes. The UK published its AI regulation plans in March, seeking views on its sectoral, guideline-based, pro-innovation approach.[6] To minimize uncertainty for companies, it proposed a set of regulatory principles to ensure that government bodies develop guidelines in a consistent manner.[7] The US National Institute of Standards and Technology (NIST) released the AI Risk Management Framework in January[8], following the non-binding Blueprint for an AI Bill of Rights published in October 2022, which provides guidance on the design and use of AI through a set of principles.[9] Some states have also drafted regulations on specific subjects; for example, New York City's final regulations on the use of AI in hiring and promotion came into force in July 2023.[10] Singapore launched the world's first AI testing framework and toolkit international pilot in May 2022, with the assistance of AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, and others. After a year of testing, it open-sourced the software toolkit in July 2023 to further develop the system.[11] Some countries remain undecided on their regulatory approach.
Australia commenced a public consultation on its AI regulatory framework proposal in June[12], seeking views on its draft AI risk management approach.[13] Taiwan's government announced in July 2023 that it would propose a draft AI basic law by September 2023, covering topics such as AI-related definitions, privacy protection, data governance, risk management, ethical principles, and industrial promotion.[14] However, the plan was recently postponed, indicating a possible shift towards voluntary or mandatory government principles and guidance before establishing the law.[15]
II. Hard law or soft law? The pros and cons of different regulatory approaches
One of the key advantages of hard law in AI regulation is its ability to provide binding legal obligations and enforcement mechanisms that ensure accountability and compliance.[16] Hard law also provides greater legal certainty, transparency, and remedies for consumers and companies, which is especially important for smaller companies that lack the resources to influence and comply with fast-changing soft law.[17] However, the legislative process can be time-consuming, slow to update, and less agile.[18] This poses the risk of stifling innovation, as hard law inevitably cannot keep pace with rapidly evolving AI technology.[19] In contrast, soft law represents a more flexible and adaptive approach to AI regulation. As the potential of AI remains largely unknown, government bodies can formulate principles and guidelines tailored to the regulatory needs of different industry sectors.[20] In addition, if there are adequate incentives for actors to comply, the cost of enforcement can be much lower than that of hard law.
Governments can also experiment with several different soft law approaches to test their effectiveness.[21] However, the voluntary nature of soft law and the lack of legal enforcement mechanisms can lead to inconsistent adoption and undermine the effectiveness of these guidelines, potentially leaving critical gaps in addressing AI's risks.[22] Additionally, in cases of AI-related harm, soft law cannot offer effective protection of consumer rights and human rights, as there is no clear legal obligation to facilitate accountability and remedies.[23] Carlos Ignacio Gutierrez and Gary Marchant, faculty members at Arizona State University (ASU), analyzed 634 AI soft law programs against 100 criteria and found that two-thirds of the programs lack enforcement mechanisms to deliver their anticipated AI governance goals. They pointed out that credible indirect enforcement mechanisms and a perception of legitimacy are two critical elements that could strengthen soft law's effectiveness.[24] For example, to publish stem cell research in top academic journals, authors need to demonstrate that the research complies with the relevant research standards.[25] In addition, companies usually have a greater incentive to comply with private standards to avoid a regulatory shift towards hard law with higher costs and constraints.[26]
III. Other considerations
Apart from understanding the strengths and limitations of soft law and hard law, it is important for governments to consider each country's unique circumstances.
For example, Singapore has focused on voluntary approaches, acknowledging that as a small country, close cooperation with industry, research organizations, and other governments to formulate strong AI governance practice matters more than rushing into legislation.[27] The flexibility and lower cost of soft regulation give it time to learn from industry and to avoid forming rules that do not address real-world issues.[28] This process also prepares the ground for better legislation at a later stage. Japan has likewise shifted towards a softer approach to minimize legal compliance costs, recognizing its slower position in the AI race.[29] In Japan's view, the EU AI Act aims at regulating giant tech companies rather than promoting innovation,[30] so hard law does not suit the stage of industry development it is currently in.[31] It therefore seeks to address legal issues under current laws and to draft relevant guidance.[32]
IV. Conclusion
As the global AI regulatory landscape continues to evolve, governments should weigh the pros and cons of hard law and soft law, as well as country-specific conditions, in deciding what suits their country. A regular review of the effectiveness and impact of the chosen regulatory approach on AI development and society is also recommended.
Reference:
[1] ChatGPT and Deepfake-Creating Apps: A Running List of Key AI-Lawsuits, TFL, https://www.thefashionlaw.com/from-chatgpt-to-deepfake-creating-apps-a-running-list-of-key-ai-lawsuits/ (last visited Aug 10, 2023); Protection for Voice Actors is Artificial in Today's Artificial Intelligence World, The National Law Review, https://www.natlawreview.com/article/protection-voice-actors-artificial-today-s-artificial-intelligence-world (last visited Aug 10, 2023).
[2] The politics of AI: ChatGPT and political bias, Brookings, https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/ (last visited Aug 10, 2023); Prospect of AI Producing News Articles Concerns Digital Experts, VOA, https://www.voanews.com/a/prospect-of-ai-producing-news-articles-concerns-digital-experts-/7202519.html (last visited Aug 10, 2023).
[3] EU AI Act: first regulation on artificial intelligence, European Parliament, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (last visited Aug 10, 2023).
[4] China's State Council releases legislative plan; draft AI law to be reviewed within the year, Economic Daily (2023/06/09), https://money.udn.com/money/story/5604/7223533 (last visited Aug 10, 2023).
[5] id.
[6] A pro-innovation approach to AI regulation, GOV.UK, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (last visited Aug 10, 2023).
[7] id.
[8] AI Risk Management Framework, NIST, https://www.nist.gov/itl/ai-risk-management-framework (last visited Aug 10, 2023).
[9] The White House released an 'AI Bill of Rights', CNN, https://edition.cnn.com/2022/10/04/tech/ai-bill-of-rights/index.html (last visited Aug 10, 2023).
[10] New York City Adopts Final Regulations on Use of AI in Hiring and Promotion, Extends Enforcement Date to July 5, 2023, Littler, https://www.littler.com/publication-press/publication/new-york-city-adopts-final-regulations-use-ai-hiring-and-promotionv (last visited Aug 10, 2023).
[11] IMDA, Fact sheet - Open-Sourcing of AI Verify and Set Up of AI Verify Foundation (2023), https://www.imda.gov.sg/-/media/imda/files/news-and-events/media-room/media-releases/2023/06/7-jun---ai-annoucements---annex-a.pdf (last visited Aug 10, 2023).
[12] Supporting responsible AI: discussion paper, Australian Government Department of Industry, Science and Resources, https://consult.industry.gov.au/supporting-responsible-ai (last visited Aug 10, 2023).
[13] Australian Government Department of Industry, Science and Resources, Safe and responsible AI in Australia (2023), https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia-discussion-paper.pdf (last visited Aug 10, 2023).
[14] Chang Ai, Central News Agency, Draft AI Basic Law focuses on 7 aspects including privacy protection and lawful application; an anti-disinformation center is planned, https://www.cna.com.tw/news/ait/202307040329.aspx (last visited Aug 10, 2023).
[15] Su Szu-yun, Central News Agency, 2023/08/01, Cheng Wen-tsan: AI Basic Law postponed in view of fast-developing technology and broad applications, https://www.cna.com.tw/news/afe/202308010228.aspx (last visited Aug 10, 2023).
[16] supra note 13, at 27.
[17] id.
[18] id., at 28.
[19] Soft law as a complement to AI regulation, Brookings, https://www.brookings.edu/articles/soft-law-as-a-complement-to-ai-regulation/ (last visited Aug 10, 2023).
[20] supra note 5.
[21] Gary Marchant, "Soft Law" Governance of Artificial Intelligence (2019), https://escholarship.org/uc/item/0jq252ks (last visited Aug 10, 2023).
[22] How soft law is used in AI governance, Brookings, https://www.brookings.edu/articles/how-soft-law-is-used-in-ai-governance/ (last visited Aug 10, 2023).
[23] supra note 13, at 27.
[24] Why Soft Law is the Best Way to Approach the Pacing Problem in AI, Carnegie Council for Ethics in International Affairs, https://www.carnegiecouncil.org/media/article/why-soft-law-is-the-best-way-to-approach-the-pacing-problem-in-ai (last visited Aug 10, 2023).
[25] id.
[26] id.
[27] Singapore is not looking to regulate A.I. just yet, says the city-state's authority, CNBC, https://www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html#:~:text=Singapore%20is%20not%20rushing%20to,Media%20Development%20Authority%2C%20told%20CNBC (last visited Aug 10, 2023).
[28] id.
[29] Japan leaning toward softer AI rules than EU, official close to deliberations says, Reuters, https://www.reuters.com/technology/japan-leaning-toward-softer-ai-rules-than-eu-source-2023-07-03/ (last visited Aug 10, 2023).
[30] id.
[31] id.
[32] id.
A Preferred Model for Taiwan's Agency-Level AI Risk Categorization and Management: A Cross-Jurisdictional Perspective

2025/09/15

Taiwan's draft Artificial Intelligence Basic Law includes a provision allowing each government agency to establish its own risk-based AI management rules tailored to sector-specific regulatory needs[1]. To strike an effective balance between innovation and oversight, selecting an appropriate reference model is essential. After comparing major jurisdictions, this research argues that the United States Office of Management and Budget (OMB) Memorandum M-25-21—Accelerating Federal Use of AI through Innovation, Governance, and Public Trust[2]—offers the most balanced and practical approach for Taiwan's agencies to draw on at this initial stage of developing AI regulation and promoting AI adoption. This article first presents an overview of the M-25-21 framework and its key features, then explains why the U.S. model is more suitable for Taiwan than those of other jurisdictions, and concludes with recommendations for the government.

I. Overview of the U.S. M-25-21 Framework

Issued in April 2025 under Executive Order 14179, M-25-21 directs federal agencies to accelerate the adoption of artificial intelligence while maintaining a set of minimum safeguards. The memorandum identifies three priorities—innovation, governance, and public trust—and structures AI oversight around them. It requires every executive branch agency to designate a Chief AI Officer (CAIO), a senior official empowered to promote AI innovation, maintain a current inventory of AI use cases, and ensure that processes for identifying "high-impact" uses are in place.
Rather than imposing a centralized management system, M-25-21 allows each agency to make context-sensitive determinations and to accept or waive risk management requirements. This approach recognizes that agencies vary widely in mission and capacity and are best positioned to understand the potential risks and benefits of AI within their own domains.

The memorandum defines high-impact AI as systems whose outputs serve as a principal basis for decisions or actions with legal, material, binding, or significant consequences for rights and safety. It offers a non-exhaustive list of presumed high-impact categories, including safety-critical functions of critical infrastructure, traffic management, patient diagnosis, the blocking of protected speech, and law enforcement applications. If an agency official determines that a specific AI use within these categories does not meet the high-impact definition, the official must document that determination in writing and notify the CAIO. By tying the definition to the effect of an AI system's output rather than to a fixed sectoral list, M-25-21 provides a flexible method for identifying high-risk AI applications while preserving room for innovation.

II. Key Features of the U.S. M-25-21 Framework

A. Minimum Risk Management Practices

To ensure protection without creating excessive barriers, M-25-21 specifies a set of minimum risk management practices that each agency must apply when using high-impact AI. Agencies are required to conduct pre-deployment testing under realistic conditions to confirm that AI systems perform as intended and to prepare appropriate risk mitigation plans. Even when agencies lack access to source code or training data, they are expected to use alternative testing methods—such as querying the AI service and observing its outputs—to assess performance and potential risks. Before deploying a high-impact AI system, agencies must complete an AI impact assessment.
This assessment must explain the system's intended purpose and expected benefits, analyze the quality and appropriateness of the data used, and evaluate potential impacts on privacy, civil rights, and civil liberties. It should also include a cost analysis, planned reassessment schedules and procedures, and comments from an independent reviewer who was not involved in the system's development, highlighting potential concerns or gaps. Importantly, the assessment must carry the signature of an accountable official who formally accepts the risk of deploying the AI system.

Once deployed, agencies are expected to monitor AI systems continuously for performance drift, security vulnerabilities, and unforeseen adverse effects, to implement appropriate mitigations, and to maintain documentation. Human oversight is equally essential: operators must receive specific training to interpret AI outputs, intervene when necessary, and use fail-safes or override mechanisms to minimize the risk of significant harm in high-impact situations.

To protect the public, M-25-21 requires that individuals affected by AI-enabled decisions have access to timely human review and opportunities to appeal adverse outcomes, and that appeals impose no unnecessary burdens on either individuals or the administration. Agencies are further expected to seek feedback from end users and the public to inform AI-related decision-making. These combined practices—testing, assessment, independent review, monitoring, human oversight, remedies, and feedback—form a balanced foundation for responsible AI use. The memorandum also requires agencies to safely discontinue any high-impact use case that fails to comply with the minimum practices.

B. Waiver System: Purpose and Conditions

A distinctive feature of M-25-21 is its formal system of waivers from the minimum risk management practices.
The waiver mechanism exists to reconcile two priorities: ensuring safety and rights protections on the one hand, and enabling innovation and rapid response on the other. Waivers may be considered when following a particular requirement would actually increase overall risks to safety or rights, or when compliance would create an unacceptable impediment to critical agency operations. For example, during a natural disaster or public health emergency, strict adherence to every procedural requirement might delay the deployment of an AI application that could save lives. In such situations, the CAIO may authorize a waiver to permit rapid deployment while still tracking and reassessing the use.

Waivers for pilot programs are equally important for encouraging experimentation and innovation. They allow agencies to conduct small-scale, time-limited AI projects without implementing all minimum risk management practices, provided certain conditions are met: the pilot must be certified by the CAIO and centrally tracked, individuals must be able to opt in and opt out of participation, and the minimum risk management practices must still be applied where practicable.

The memorandum imposes safeguards on this flexibility. Every waiver must be documented with a written determination explaining the reasoning, centrally tracked, and reassessed annually or whenever significant changes occur in the AI application's conditions or context. CAIOs retain the power to revoke waivers at any time, and agencies must report any granted or revoked waiver to OMB annually and within 30 days of significant modifications. This approach maintains accountability while preventing rigid rules from becoming obstacles to effective governance.

C. Disclosure Requirements for High-Impact Use and Waivers

M-25-21 strongly emphasizes transparency as a pillar of public trust. Each agency must maintain an inventory of all its AI use cases, submit it to OMB, and post a public version on the agency's website.
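The waiver safeguards just described, a written determination, at least annual reassessment, and reporting of significant modifications within 30 days, are essentially bookkeeping rules, and a minimal sketch can make them concrete. All names and intervals below are illustrative restatements of the text above, not an official schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

REASSESSMENT_INTERVAL = timedelta(days=365)  # reassessed at least annually
REPORT_WINDOW = timedelta(days=30)           # report significant changes to OMB

@dataclass
class Waiver:
    use_case: str
    written_determination: str  # the documented reasoning for the waiver
    granted_on: date
    last_reassessed: date
    revoked: bool = False       # the CAIO may revoke at any time

def reassessment_due(w: Waiver, today: date) -> bool:
    """A live waiver must be reassessed annually (or on significant change)."""
    return not w.revoked and today - w.last_reassessed >= REASSESSMENT_INTERVAL

def report_deadline(modified_on: date) -> date:
    """Latest date by which a significant modification must reach OMB."""
    return modified_on + REPORT_WINDOW
```

For instance, a waiver granted and last reassessed on 2025-01-10 would come due for reassessment one year later, and a significant modification made on 2025-03-01 would have to be reported by 2025-03-31.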
This inventory should be updated annually and, ideally, throughout the year to reflect the agency's current use of AI. Transparency ensures that the public, civil society, and oversight bodies can understand where AI is influencing important government decisions, without exposing sensitive or classified details. Similarly, agencies must publicly release summaries of each waiver or determination, including its justification, or explicitly indicate when no determinations or waivers are active. By making these summaries visible, the system builds confidence that waivers are granted for legitimate reasons. At the same time, OMB retains the authority to request detailed records concerning exception determinations within presumed high-impact categories. This combination of public disclosure and federal oversight helps maintain trust while safeguarding privacy, national security, and proprietary information.

III. Why M-25-21 Stands Out for Taiwan's AI Governance among Global Approaches

Taiwan's draft AI Basic Law envisions a decentralized system in which each agency determines its own risk classification and management practices[3]. The U.S. framework aligns closely with this philosophy. By empowering agencies to identify high-risk AI use cases tailored to their specific contexts, M-25-21 helps ensure that AI governance remains grounded in operational realities. At the same time, adopting M-25-21's baseline practices, waiver safeguards, and disclosure requirements would provide consistency and public accountability across agencies. The combination of minimum risk management practices and transparent waiver use would encourage innovation while reassuring the public that any exceptions are justified, continuously monitored, and effectively controlled. Furthermore, embracing an approach that reflects emerging international consensus—particularly the emphasis on transparency in both U.S.
and EU regimes—would position Taiwan to harmonize with global AI governance trends and strengthen its credibility in international markets.

In contrast, the European Union's AI Act predefines high-risk categories and mandates strict conformity assessments, CE marking, and post-market monitoring[4]—an approach that is comprehensive but resource-intensive, and one that may not suit all agencies equally. Australia's policy discussions had been trending toward a similarly comprehensive model, although that approach has recently drawn backlash. Korea's AI Basic Act[5] refers to high-risk AI only in broad terms and leaves most operational details undefined. M-25-21 strikes a middle ground, offering minimum yet concrete safeguards while preserving the flexibility agencies need to tailor governance to their specific domains.

IV. Recommendations and Conclusion

Based on this analysis, this research recommends that each agency designate a senior AI leader similar to a CAIO, maintain a public inventory of high-impact AI use cases, and publish summaries of waivers and determinations while safeguarding sensitive information. Agencies should also be encouraged to share AI resources and lessons learned, reducing duplication and strengthening governance maturity across government. Over time, these risk management practices can be refined in response to operational experience and evolving international standards. By adopting these principles, Taiwan can empower its agencies to innovate responsibly, protect citizens' rights, and build public trust, ensuring that AI deployment across government remains both effective and aligned with global best practices.
[1] 〈政院通過「人工智慧基本法」草案 建構AI發展與應用良善環境 打造臺灣成為AI人工智慧島〉,行政院,https://www.ey.gov.tw/Page/9277F759E41CCD91/5d673d1e-f418-47dc-ab35-a06600f77f07 (last visited Sept 15, 2025).
[2] United States Office of Management and Budget (OMB), M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf (last visited Sept 15, 2025).
[3] 蘇文彬,〈行政院通過AI基本法草案,將不設立AI專責機關〉,iThome,https://www.ithome.com.tw/news/170874 (last visited Sept 15, 2025).
[4] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (last visited Sept 15, 2025).
[5] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법안 (Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trust), https://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5%20%EB%B0%9C%EC%A0%84%EA%B3%BC%20%EC%8B%A0%EB%A2%B0%20%EA%B8%B0%EB%B0%98%20%EC%A1%B0%EC%84%B1%20%EB%93%B1%EC%97%90%20%EA%B4%80%ED%95%9C%20%EA%B8%B0%EB%B3%B8%EB%B2%95/(20676,20250121) (last visited Sept 15, 2025).