The Institutionalization of the Taiwan Personal Data Protection Committee - Triumph of Digital Constitutionalism: A Legal Positivism Analysis


2023/07/13

The Legislative Yuan recently passed an amendment to the Taiwan Personal Data Protection Act, which resulted in the institutionalization of the Taiwan Personal Data Protection Commission (hereinafter the “PDPC”)[1]. This article analyzes the significance of this institutionalization from three perspectives: legal positivism, digital constitutionalism, and Millian liberalism. By examining these frameworks, we can better understand the constitutional essence of sovereignty, the power dynamics among individuals, businesses, and governments, and the paradox of freedom that the PDPC addresses through governance and trust.

I. Three Layers of Significance

1. Legal Positivism

The institutionalization of the PDPC fully demonstrates the constitutional essence of sovereignty resting in the hands of citizens. Legal positivism emphasizes the importance of recognizing and obeying laws enacted by legitimate authorities (the sovereign being, as Austin claims, the one whom all obey but who itself obeys no one)[2]. In this context, the institutionalization of the PDPC signifies the recognition of citizens' right to control their personal data and the acknowledgment of the sovereign's role in protecting their privacy. It underscores the idea that the power to govern personal data rests with individuals themselves, reinforcing the principles of legal positivism regarding sovereignty.

Moreover, legal positivism recognizes the authority of the state in creating and enforcing laws. The institutionalization of the PDPC as a specialized commission with the power to regulate and enforce personal data protection laws represents the state's recognition of the need to address the challenges posed by the digital age. By investing the PDPC with the authority to oversee the proper handling and use of personal data, the state acknowledges its responsibility to protect the rights and interests of its citizens.

2. Digital Constitutionalism

The institutionalization of the PDPC also rebalances the power structure among individuals, businesses, and governments in the digital realm[3]. Digital constitutionalism refers to the principles and norms that govern the relationship between individuals and the digital sphere, ensuring the protection of rights and liberties[4]. With the rise of technology and the increasing collection and use of personal data, individuals often find themselves at a disadvantage compared to powerful entities such as corporations and governments[5].

However, the PDPC acts as a regulatory body that safeguards individuals' interests, rectifying the power imbalances and promoting digital constitutionalism. By establishing clear rules and regulations regarding the collection, use, and transfer of personal data, the PDPC may set a framework that ensures the protection of individuals' privacy and data rights. It may enforce accountability among businesses and governments, holding them responsible for their data practices and creating a level playing field where individuals have a say in how their personal data is handled.

3. Millian Liberalism

The need for the institutionalization of the PDPC embodies the paradox of freedom raised in John Stuart Mill’s “On Liberty”[6], where Mill recognizes that absolute freedom can lead to the infringement of others' rights and well-being. In this context, the institutionalization of the PDPC acknowledges the necessity of governance to mitigate the risks associated with personal data protection.

In the digital age, the vast amount of personal data collected and processed by various entities raises concerns about privacy, security, and potential misuse. The institutionalization of the PDPC represents a commitment to address these concerns through responsible governance. By setting up rules, regulations, and enforcement mechanisms, the PDPC ensures that individuals' freedoms are preserved without compromising the rights and privacy of others. It strikes a delicate balance between individual autonomy and the broader social interest, shedding light on the paradox of freedom.

II. Legal Positivism: Function and Authority of the PDPC

1. John Austin's Concept of Legal Positivism: Sovereignty, Punishment, Order

To understand the function and authority of the PDPC, we turn to John Austin's concept of legal positivism. Austin posited that laws are commands issued by a sovereign authority and backed by sanctions[7]. Sovereignty entails the power to make and enforce laws within a given jurisdiction.

In the case of the PDPC, its institutionalization by the Legislative Yuan reflects the recognition of its authority to create and enforce regulations concerning personal data protection. The PDPC, as an independent and specialized committee, possesses the necessary jurisdiction and competence to ensure compliance with the law, administer punishments for violations, and maintain order in the realm of personal data protection.

2. Dire Need for the Institutionalization of the PDPC

There has been a dire need for the establishment of the PDPC since the Constitutional Court's decision in August 2022, which held that the government must establish a specific agency in charge of personal data-related issues[8]. This need reflects John Austin's concept of legal positivism, as it highlights the demand for a legitimate and authoritative body to regulate and oversee personal data protection. The PDPC's institutionalization serves as a response to the growing concerns surrounding data privacy, security breaches, and the increasing reliance on digital platforms. It signifies the de facto recognition of the need for a dedicated institution to safeguard individuals' personal data rights, reinforcing the principles of legal positivism.

Furthermore, the institutionalization of the PDPC demonstrates the responsiveness of the legislative branch to the evolving challenges posed by the digital age. The amendment to the Taiwan Personal Data Protection Act and the subsequent institutionalization of the PDPC are the outcomes of a democratic process, reflecting the will of the people and their desire for enhanced data protection measures. It signifies a commitment to uphold the rule of law and ensure the protection of citizens' rights in the face of emerging technologies and their impact on privacy.

3. Authority to Define Cross-Border Transfer of Personal Data

Upon the establishment of the PDPC, its authority to define what constitutes a cross-border transfer of personal data under Article 21 of the Personal Data Protection Act will align with John Austin's theory of order. According to Austin, laws bring about order by regulating behavior and ensuring predictability in society.

By granting the PDPC the power to determine cross-border data transfers, the legal framework brings clarity and consistency to the process. This promotes order by establishing clear guidelines and standards, reducing uncertainty, and enhancing the protection of personal data in the context of international data transfers.

The PDPC's authority in this regard reflects the recognition of the need to regulate and monitor the cross-border transfer of personal data to protect individuals' privacy and prevent unauthorized use or abuse of their information. It ensures that the transfer of personal data across borders adheres to legal and ethical standards, contributing to the institutionalization of a comprehensive framework for cross-border data transfer.

III. Conclusion

In conclusion, the institutionalization of the Taiwan Personal Data Protection Committee represents the convergence of legal positivism, digital constitutionalism, and Millian liberalism. It signifies the recognition of citizens' sovereignty over their personal data, rebalances power dynamics in the digital realm, and addresses the paradox of freedom through responsible governance. Analyzing the PDPC's function and authority through the lens of legal positivism clarifies its role as a regulatory body that maintains order and enforces the law. The institutionalization of the PDPC thus serves as a milestone in Taiwan's commitment to protecting individuals' personal data and safeguarding digital rights. In essence, it represents a triumph of digital constitutionalism, in which individuals' rights and interests are safeguarded and power imbalances are rectified, while also embodying the recognition of the paradox of freedom and the need for responsible governance in the digital age.


References:

[1] Lin Ching-yin & Evelyn Yang, Bill to establish data protection agency clears legislative floor, CNA English News, FOCUS TAIWAN, May 16, 2023, https://focustaiwan.tw/society/202305160014 (last visited July 13, 2023).

[2] Legal positivism, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/legal-positivism/?utm_source=fbia (last visited July 13, 2023).

[3] Edoardo Celeste, Digital constitutionalism: how fundamental rights are turning digital 13-36 (2023), https://doras.dcu.ie/28151/1/2023_Celeste_DIGITAL%20CONSTITUTIONALISM_%20HOW%20FUNDAMENTAL%20RIGHTS%20ARE%20TURNING%20DIGITAL.pdf (last visited July 13, 2023).

[4] GIOVANNI DE GREGORIO, DIGITAL CONSTITUTIONALISM IN EUROPE: REFRAMING RIGHTS AND POWERS IN THE ALGORITHMIC SOCIETY 218 (2022).

[5] Edoardo Celeste, Digital constitutionalism: how fundamental rights are turning digital (2023), https://doras.dcu.ie/28151/1/2023_Celeste_DIGITAL%20CONSTITUTIONALISM_%20HOW%20FUNDAMENTAL%20RIGHTS%20ARE%20TURNING%20DIGITAL.pdf (last visited July 13, 2023).

[6] JOHN STUART MILL, On Liberty (1859), https://openlibrary-repo.ecampusontario.ca/jspui/bitstream/123456789/1310/1/On-Liberty-1645644599.pdf (last visited July 13, 2023).

[7] Legal positivism, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/legal-positivism/?utm_source=fbia (last visited July 13, 2023).

[8] Lin Ching-yin & Evelyn Yang, Bill to establish data protection agency clears legislative floor, CNA English News, FOCUS TAIWAN, May 16, 2023, https://focustaiwan.tw/society/202305160014 (last visited July 13, 2023).

※ The Institutionalization of the Taiwan Personal Data Protection Committee - Triumph of Digital Constitutionalism: A Legal Positivism Analysis, STLI, https://stli.iii.org.tw/en/article-detail.aspx?no=105&tp=2&i=168&d=9023 (Date: 2025/11/12)
Quote this paper
You may be interested
The use of automated facial recognition technology and supervision mechanism in UK

The use of automated facial recognition technology and supervision mechanism in UK I. Introduction   Automatic facial recognition (AFR) technology has developed rapidly in recent years, and it can identify target people in a short time. The UK Home Office announced the "Biometrics Strategy" on June 28, 2018, saying that AFR technology will be introduced in the law enforcement, and the Home Office will also actively cooperate with other agencies to establish a new oversight and advisory board in order to maintain public trust. AFR technology can improve law enforcement work, but its use will increase the risk of intruding into individual liberty and privacy.   This article focuses on the application of AFR technology proposed by the UK Home Office. The first part of this article describes the use of AFR technology by the police. The second part focuses on the supervision mechanism proposed by the Home Office in the Biometrics Strategy. However, because the use of AFR technology is still controversial, this article will sort out the key issues of follow-up development through the opinions of the public and private sectors. The overview of the discussion of AFR technology used by police agencies would be helpful for further policy formulation. II. Overview of the strategy of AFR technology used by the UK police   According to the Home Office’s Biometrics Strategy, the AFR technology will be used in law enforcement, passports and immigration and national security to protect the public and make these public services more efficient[1]. Since 2017 the UK police have worked with tech companies in testing the AFR technology, at public events like Notting Hill Carnival or big football matches[2].   In practice, AFR technology is deployed with mobile or fixed camera systems. When a face image is captured through the camera, it is passed to the recognition software for identification in real time. 
Then, the AFR system will process if there is a ‘match’ and the alarm would solicit an operator’s attention to verify the match and execute the appropriate action[3]. For example, South Wales Police have used AFR system to compare images of people in crowds attending events with pre-determined watch lists of suspected mobile phone thieves[4]. In the future, the police may also compare potential suspects against images from closed-circuit television cameras (CCTV) or mobile phone footage for evidential and investigatory purposes[5].   The AFR system may use as tools of crime prevention, more than as a form of crime detection[6]. However, the uses of AFR technology are seen as dangerous and intrusive by the UK public[7]. For one thing, it could cause serious harm to democracy and human rights if the police agency misuses AFR technology. For another, it could have a chilling effect on civil society and people may keep self-censoring lawful behavior under constant surveillance[8]. III. The supervision mechanism of AFR technology   To maintaining public trust, there must be a supervision mechanism to oversight the use of AFR technology in law enforcement. The UK Home Office indicates that the use of AFR technology is governed by a number of codes of practice including Police and Criminal Evidence Act 1984, Surveillance Camera Code of Practice and the Information Commissioner’s Office (ICO)’s Code of Practice for surveillance cameras[9]. (I) Police and Criminal Evidence Act 1984   The Police and Criminal Evidence Act (PACE) 1984 lays down police powers to obtain and use biometric data, such as collecting DNA and fingerprints from people arrested for a recordable offence. The PACE allows law enforcement agencies proceeding identification to find out people related to crime for criminal and national security purposes. 
Therefore, for the investigation, detection and prevention tasks related to crime and terrorist activities, the police can collect the facial image of the suspect, which can also be interpreted as the scope of authorization of the  PACE. (II) Surveillance Camera Code of Practice   The use of CCTV in public places has interfered with the rights of the people, so the Protection of Freedoms Act 2012 requires the establishment of an independent Surveillance Camera Commissioner (SCC) for supervision. The Surveillance Camera Code of Practice  proposed by the SCC sets out 12 principles for guiding the operation and use of surveillance camera systems. The 12 guiding principles are as follows[10]: A. Use of a surveillance camera system must always be for a specified purpose which is in pursuit of a legitimate aim and necessary to meet an identified pressing need. B. The use of a surveillance camera system must take into account its effect on individuals and their privacy, with regular reviews to ensure its use remains justified. C. There must be as much transparency in the use of a surveillance camera system as possible, including a published contact point for access to information and complaints. D. There must be clear responsibility and accountability for all surveillance camera system activities including images and information collected, held and used. E. Clear rules, policies and procedures must be in place before a surveillance camera system is used, and these must be communicated to all who need to comply with them. F. No more images and information should be stored than that which is strictly required for the stated purpose of a surveillance camera system, and such images and information should be deleted once their purposes have been discharged. G. 
Access to retained images and information should be restricted and there must be clearly defined rules on who can gain access and for what purpose such access is granted; the disclosure of images and information should only take place when it is necessary for such a purpose or for law enforcement purposes. H. Surveillance camera system operators should consider any approved operational, technical and competency standards relevant to a system and its purpose and work to meet and maintain those standards. I. Surveillance camera system images and information should be subject to appropriate security measures to safeguard against unauthorised access and use. J. There should be effective review and audit mechanisms to ensure legal requirements, policies and standards are complied with in practice, and regular reports should be published. K. When the use of a surveillance camera system is in pursuit of a legitimate aim, and there is a pressing need for its use, it should then be used in the most effective way to support public safety and law enforcement with the aim of processing images and information of evidential value. L. Any information used to support a surveillance camera system which compares against a reference database for matching purposes should be accurate and kept up to date. (III) ICO’s Code of Practice for surveillance cameras   It must need to pay attention to the personal data and privacy protection during the use of surveillance camera systems and AFR technology. The ICO issued its Code of Practice for surveillance cameras under the Data Protection Act 1998 to explain the legal requirements operators of surveillance cameras. The key points of ICO’s Code of Practice for surveillance cameras are summarized as follows[11]: A. The use time of the surveillance camera systems should be carefully evaluated and adjusted. It is recommended to regularly evaluate whether it is necessary and proportionate to continue using it. B. 
A police force should ensure an effective administration of surveillance camera systems deciding who has responsibility for the control of personal information, what is to be recorded, how the information should be used and to whom it may be disclosed. C. Recorded material should be stored in a safe way to ensure that personal information can be used effectively for its intended purpose. In addition, the information may be considered to be encrypted if necessary. D. Disclosure of information from surveillance systems must be controlled and consistent with the purposes for which the system was established. E. Individuals whose information is recoded have a right to be provided with that information or view that information. The ICO recommends that information must be provided promptly and within no longer than 40 calendar days of receiving a request. F. The minimum and maximum retention periods of recoded material is not prescribed in the Data Protection Act 1998, but it should not be kept for longer than is necessary and should be the shortest period necessary to serve the purposes for which the system was established. (IV) A new oversight and advisory board   In addition to the aforementioned regulations and guidance, the UK Home Office mentioned that it will work closely with related authorities, including ICO, SCC, Biometrics Commissioner (BC), and Forensic Science Regulator (FSR) to establish a new oversight and advisory board to coordinate consideration of law enforcement’s use of facial images and facial recognition systems[12].   To sum up, it is estimated that the use of AFR technology by law enforcement has been abided by existing regulations and guidance. Firstly, surveillance camera systems must be used on the purposes for which the system was established. Secondly, clear responsibility and accountability mechanisms should be ensured. Thirdly, individuals whose information is recoded have the right to request access to relevant information. 
In the future, the new oversight and advisory board will be asked to consider issues relating to law enforcement’s use of AFR technology with greater transparency. IV. Follow-up key issues for the use of AFR technology   Regarding to the UK Home Office’s Biometrics Strategy, members of independent agencies such as ICO, BC, SCC, as well as civil society, believe that there are still many deficiencies, the relevant discussions are summarized as follows: (I) The necessity of using AFR technology   Elizabeth Denham, ICO Commissioner, called for looking at the use of AFR technology carefully, because AFR is an intrusive technology and can increase the risk of intruding into our privacy. Therefore, for the use of AFR technology to be legal, the UK police must have clear evidence to demonstrate that the use of AFR technology in public space is effective in resolving the problem that it aims to address[13].   The Home Office has pledged to undertake Data Protection Impact Assessments (DPIAs) before introducing AFR technology, including the purpose and legal basis, the framework applies to the organization using the biometrics, the necessity and proportionality and so on. (II)The limitations of using facial image data   The UK police can collect, process and use personal data based on the need for crime prevention, investigation and prosecution. In order to secure the use of biometric information, the BC was established under the Protection of Freedoms Act 2012. The mission of the BC is to regulate the use of biometric information, provide protection from disproportionate enforcement action, and limit the application of surveillance and counter-terrorism powers.   However, the BC’s powers do not presently extend to other forms of biometric information other than DNA or fingerprints[14]. 
The BC has expressed concern that while the use of biometric data may well be in the public interest for law enforcement purposes and to support other government functions, the public benefit must be balanced against loss of privacy. Hence, legislation should be carried to decide that crucial question, instead of depending on the BC’s case feedback[15].   Because biometric data is especially sensitive and most intrusive of individual privacy, it seems that a governance framework should be required and will make decisions of the use of facial images by the police. (III) Database management and transparency   For the application of AFR technology, the scope of biometric database is a dispute issue in the UK. It is worth mentioning that the British people feel distrust of the criminal database held by the police. When someone is arrested and detained by the police, the police will take photos of the suspect’s face. However, unlike fingerprints and DNA, even if the person is not sued, their facial images are not automatically deleted from the police biometric database[16].   South Wales Police have used AFR technology to compare facial images of people in crowds attending major public events with pre-determined watch lists of suspected mobile phone thieves in the AFR field test. Although the watch lists are created for time-limited and specific purposes, the inclusion of suspects who could possibly be innocent people still causes public panic.   Elizabeth Denham warned that there should be a transparency system about retaining facial images of those arrested but not charged for certain offences[17]. Therefore, in the future the UK Home Office may need to establish a transparent system of AFR biometric database and related supervision mechanism. (IV) Accuracy and identification errors   In addition to worrying about infringing personal privacy, the low accuracy of AFR technology is another reason many people oppose the use of AFR technology by police agencies. 
Silkie Carlo, director of Big Brother Watch, said the police must immediately stop using the AFR technology and avoid mistaking thousands of innocent citizens as criminals; Paul Wiles, Biometrics Commissioner, also called for legislation to manage AFR technology because of its accuracy is too low and the use of AFR technology should be tested and passed external peer review[18].   In the Home Office’s Biometric Strategy, the scientific quality standards for AFR technology will be established jointly with the FSR, an independent agency under the Home Office. In other words, the Home Office plans to extend the existing forensics science regime to regulate AFR technology.   Therefore, the FSR has worked with the SCC to develop standards relevant to digital forensics. The UK government has not yet seen specific standards for regulating the accuracy of AFR technology at the present stage. V. Conclusion   From the discussion of the public and private sectors in the UK, we can summarize some rules for the use of AFR technology. Firstly, before the application of AFR technology, it is necessary to complete the pre-assessment to ensure the benefits to the whole society. Secondly, there is the possibility of identifying errors in AFR technology. Therefore, in order to maintain the confidence and trust of the people, the relevant scientific standards should be set up first to test the system accuracy. Thirdly, the AFR system should be regarded as an assisting tool for police enforcement in the initial stage. In other words, the information analyzed by the AFR system should still be judged by law enforcement officials, and the police officers should take the responsibilities.   In order to balance the protection of public interest and basic human rights, the use of biometric data in the AFR technology should be regulated by a special law other than the regulations of surveillance camera and data protection. 
The scope of the identification database is also a key point, and it may need legislators’ approval to collect and store the facial image data of innocent people. Last but not least, the use of the AFR system should be transparent and the victims of human rights violations can seek appeal. [1] UK Home Office, Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/publications/home-office-biometrics-strategy (last visited Aug. 09, 2018), at 7. [2] Big Brother Watch, FACE OFF CAMPAIGN: STOP THE MET POLICE USING AUTHORITARIAN FACIAL RECOGNITION CAMERAS, https://bigbrotherwatch.org.uk/all-campaigns/face-off-campaign/ (last visited Aug. 16, 2018). [3] Lucas Introna & David Wood, Picturing algorithmic surveillance: the politics of facial recognition systems, Surveillance & Society, 2(2/3), 177-198 (2004). [4] Supra note 1, at 12. [5] Id, at 25. [6] Michael Bromby, Computerised Facial Recognition Systems: The Surrounding Legal Problems (Sep. 2006)(LL.M Dissertation Faculty of Law University of Edinburgh), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.7339&rep=rep1&type=pdf , at 3. [7] Owen Bowcott, Police face legal action over use of facial recognition cameras, The Guardian, Jun. 14, 2018, https://www.theguardian.com/technology/2018/jun/14/police-face-legal-action-over-use-of-facial-recognition-cameras (last visited Aug. 09, 2018). [8] Martha Spurrier, Facial recognition is not just useless. In police hands, it is dangerous, The Guardian, May 16, 2018, https://www.theguardian.com/commentisfree/2018/may/16/facial-recognition-useless-police-dangerous-met-inaccurate (last visited Aug. 17, 2018). [9] Supra note 1, at 12. [10] Surveillance Camera Commissioner, Surveillance camera code of practice, Oct. 28, 2014, https://www.gov.uk/government/publications/surveillance-camera-code-of-practice (last visited Aug. 17, 2018). 
[11] UK Information Commissioner’s Office, In the picture: A data protection code of practice for surveillance cameras and personal information, Jun. 09, 2017, https://ico.org.uk/for-organisations/guide-to-data-protection/encryption/scenarios/cctv/ (last visited Aug. 10, 2018). [12] Supra note 1, at 13. [13] Elizabeth Denham, Blog: facial recognition technology and law enforcement, Information Commissioner's Office, May 14, 2018, https://ico.org.uk/about-the-ico/news-and-events/blog-facial-recognition-technology-and-law-enforcement/ (last visited Aug. 14, 2018). [14] Monique Mann & Marcus Smith, Automated Facial Recognition Technology: Recent Developments and Approaches to Oversight, Automated Facial Recognition Technology, 10(1), 140 (2017). [15] Biometrics Commissioner, Biometrics Commissioner’s response to the Home Office Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/news/biometrics-commissioners-response-to-the-home-office-biometrics-strategy (last visited Aug. 15, 2018). [16] Supra note 2. [17] Supra note 13. [18] Jon Sharman, Metropolitan Police's facial recognition technology 98% inaccurate, figures show, INDEPENDENT, May 13, 2018, https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html (last visited Aug. 09, 2018).

Draft of AI Product and System Evaluation Guidelines Released by the Administration for Digital Industries to Enhance AI Governance

Draft of AI Product and System Evaluation Guidelines Released by the Administration for Digital Industries to Enhance AI Governance

2024/08/15

I. AI Taiwan Action Plan 2.0

In 2018, the Executive Yuan launched the “AI Taiwan Action Plan” to ensure that the country keeps pace with AI developments. This strategic initiative focuses on attracting top talent, advancing research and development, and integrating AI into critical sectors such as smart manufacturing and healthcare. The action plan has sparked growing discussion on AI regulation. Through these efforts, Taiwan aims to position itself as a frontrunner in the global smart technology landscape.

In 2023, the Executive Yuan updated the action plan, releasing “AI Taiwan Action Plan 2.0” to further strengthen AI development. “AI Taiwan Action Plan 2.0” outlines five main pillars:

1. Talent Development: Enhancing the quality and quantity of AI expertise, while improving public AI literacy through targeted education and training initiatives.
2. Technological and Industrial Advancement: Focusing on critical AI technologies and applications to foster industrial growth, and creating the Trustworthy AI Dialogue Engine (TAIDE), which communicates in Traditional Chinese.
3. Enhancing Work Environments: Establishing robust AI governance infrastructure to facilitate industry and governmental regulation and to foster compliance with international standards.
4. International Collaboration: Expanding Taiwan's role in international AI forums, such as the Global Partnership on AI, to collaborate on developing trustworthy AI practices.
5. Societal and Humanitarian Engagement: Utilizing AI to tackle pressing societal challenges such as labor shortages, an aging population, and environmental sustainability.

II. AI Product and System Evaluation Guidelines: A Risk-based Approach to AI Governance

To support this governance infrastructure, in March 2024 the Administration for Digital Industries issued the draft AI Product and System Evaluation Guidelines. The Guidelines are intended to serve as a reference for industry when developing and using AI products and systems, laying a crucial foundation for advancing AI-related policies in Taiwan. The Guidelines outline several potential risks associated with AI:

1. Third-Party Software and Hardware: While third-party software, hardware, and datasets can accelerate development, they may also introduce risks into AI products and systems. Effective risk management policies are therefore crucial.
2. System Transparency: A lack of transparency in AI products and systems makes risk assessment relatively challenging. Inadequate transparency in AI models and datasets also poses risks for development and deployment.
3. Differences in Risk Perception: Developers of AI products and systems may overlook risks specific to different application scenarios. Moreover, risks may gradually emerge as the product or system is used and trained over time.
4. Application Domain Risks: Variations between testing results and actual operational performance can lead to differing risk assessments for evaluated products and systems.
5. Deviation from Human Behavioral Norms: If AI products and systems behave unexpectedly compared to human operations, this can indicate drift in the product, system, or model, thereby introducing risks.

The Guidelines also require businesses to categorize risks when developing or using AI products and systems, and to manage them in accordance with these classifications. In alignment with the EU AI Act, risks are classified into four levels: unacceptable, high, limited, and minimal.

1. Unacceptable Risk: If AI systems used by public or private entities provide social scoring of individuals, this could lead to discriminatory outcomes and the exclusion of certain groups. Furthermore, if AI systems are employed to manipulate the cognitive behavior of individuals or vulnerable populations, causing physical or psychological harm, such systems are deemed unacceptable and prohibited.
2. High Risk: AI systems are classified as high-risk in several situations. These include applications used in critical infrastructure, such as transportation, where there is potential risk to citizens' safety and health. They also encompass AI systems involved in educational or vocational training (such as exam scoring), which can determine access to education or professional paths. AI used as a safety-critical product component, such as in robot-assisted surgery, also falls into this category. In the employment sector, AI systems used for managing recruitment processes, including CV-sorting software, are considered high-risk. Essential private and public services, such as credit scoring systems that affect loan eligibility, also fall under high risk, as does AI used in law enforcement in ways that may affect fundamental rights, such as evaluating the reliability of evidence. AI systems involved in migration, asylum, and border control, such as automated visa application examinations, are categorized as high-risk. Finally, AI solutions used in the administration of justice and democratic processes, such as court ruling searches, are also classified as high-risk. If an AI system is classified as high risk, it must be evaluated across ten criteria (Safety, Explainability, Resilience, Fairness, Accuracy, Transparency, Accountability, Reliability, Privacy, and Security) to ensure the system's quality.
3. Limited Risk: When an AI product or system is classified as having limited risk, it is up to the enterprise to determine whether an evaluation is required. The Guidelines also introduce specific transparency obligations to ensure that humans are informed when necessary, thus fostering trust. For instance, when using AI systems such as chatbots or systems for generating deepfake content, humans must be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
4. Minimal or No Risk: The Guidelines allow the free use of minimal-risk AI. This includes applications such as AI-enabled video games and spam filters.

III. Conclusion

The AI Product and System Evaluation Guidelines represent a significant step forward in establishing a robust, risk-based framework for AI governance in Taiwan. By aligning with international standards such as the EU AI Act, the Guidelines ensure that AI products and systems are rigorously assessed and categorized into four distinct risk levels: unacceptable, high, limited, and minimal. This structured approach allows businesses to manage AI-related risks more effectively, ensuring that systems are safe, transparent, and accountable. The emphasis on evaluating AI systems across ten critical criteria, including safety, explainability, and fairness, reflects a comprehensive strategy to mitigate potential risks. This proactive approach not only safeguards the public but also fosters trust in AI technologies. By setting clear expectations and responsibilities for businesses, the Guidelines promote responsible development and deployment of AI, ultimately contributing to Taiwan's goal of becoming a leader in the global AI landscape.
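As a purely illustrative sketch (not part of the Guidelines themselves; all names and the mapping of tiers to obligations are simplifications of the text above), the four-tier triage and the ten-criterion check for high-risk systems could be modeled as:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, cognitive manipulation: prohibited
    HIGH = "high"                  # e.g. critical infrastructure, recruitment, credit scoring
    LIMITED = "limited"            # e.g. chatbots, deepfake generation: transparency duties
    MINIMAL = "minimal"            # e.g. spam filters, AI-enabled video games: free use

# The ten evaluation criteria the Guidelines list for high-risk systems.
HIGH_RISK_CRITERIA = [
    "Safety", "Explainability", "Resilience", "Fairness", "Accuracy",
    "Transparency", "Accountability", "Reliability", "Privacy", "Security",
]

def required_actions(level: RiskLevel) -> list[str]:
    """Map a risk tier to the (simplified) obligations described in the Guidelines."""
    if level is RiskLevel.UNACCEPTABLE:
        return ["prohibited"]
    if level is RiskLevel.HIGH:
        return [f"evaluate: {c}" for c in HIGH_RISK_CRITERIA]
    if level is RiskLevel.LIMITED:
        return ["disclose machine interaction", "enterprise decides whether to evaluate"]
    return ["free use"]

print(len(required_actions(RiskLevel.HIGH)))  # → 10
```

The sketch only encodes the structure of the framework; in practice, determining which tier a given product falls into is itself part of the evaluation the Guidelines describe.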

Executive Yuan Promoted “Productivity 4.0” to Boost Global Competitiveness

Executive Yuan Promoted “Productivity 4.0” to Boost Global Competitiveness

1.Executive Yuan held the “Productivity 4.0: Strategy Review Board Meeting” to boost industrial transformation

The Executive Yuan held the “Productivity 4.0: Strategy Review Board Meeting” on June 4-5, 2015. The GDP per capita of the manufacturing and service industries, including machinery, metal processing, transportation vehicles, 3C, food, textiles, logistics, health care, and agriculture, is expected to exceed 10 million NT dollars by 2024. The meeting focused on three topics: the Productivity 4.0 industry and technology development strategy, the development strategy for advanced manufacturing and innovative applications, and the strategy for cultivating smart technology engineering talent and industry-academia cooperation. The three main pillars for putting advanced manufacturing into force are smart automation and robots, sensing and control technologies from the Internet of Things (IoT), and big data analytics. Accordingly, the digitalization of small- and medium-sized businesses and the smart operation of large businesses serve as the cornerstones for building service-oriented systems and developing advanced manufacturing in the R.O.C.

Facing the challenges of labor shortages and an aging labor force, the Executive Yuan is planning to implement “Productivity 4.0” to stimulate industrial transformation through value-added innovation and to provide new products and services in the global market. In implementing the above-mentioned policy goals, the Executive Yuan is planning to follow three directions. First, global competitiveness depends upon key technologies; as OEMs, the R.O.C.'s manufacturing industry is unable to offer products under its own brands and is vulnerable to challenges from transnational companies. Second, the Premier, Dr. Mao Chi-kuo, cited the bicycle industry's successful development model as an example for the Productivity 4.0 “A-Team” model: by combining technologies and organizations, the aim is to build competitive supply chains across small- and medium-sized businesses. Finally, new skills training and the cultivation of talent are more urgent than ever. While technical and vocational schools, universities, and postgraduate programs need to provide sufficient fundamental knowledge, those already in the job market have to learn the skills and knowledge necessary for industrial transformation so that they can contribute their capabilities and wisdom to our future.

2.Executive Yuan Approved “Productivity 4.0 Initiative” to Promote Industry Innovation and Transformation

The Executive Yuan approved the Productivity 4.0 Initiative on September 17, 2015. Before its approval, the Office of Science and Technology (OST) gave a presentation on the draft of the Productivity 4.0 Initiative on July 23, 2015, detailing the underlying motives behind the program. Confronted with the challenges that traditional industries and OEMs face, including labor shortages (the national working population aged 15 to 65 has seen a substantial decrease of 0.18 to 0.2 million annually) and an aging labor force, the Productivity 4.0 Initiative sets the directions for industrial development to tackle these issues through six main strategies: enhancing and fine-tuning flagship industries' smart-supply-chain ecosystems, encouraging the establishment of startups, localizing production and services, securing autonomy in key technologies, cultivating practical and technical talent, and deploying industrial policy tools. After hearing the presentation on the Initiative, the Premier, Mao Chi-kuo, made reference to the core ideas of the Productivity 4.0 Initiative in his concluding remarks.
“The core concept of the Productivity 4.0 Initiative is to propel the R.O.C. to a pivotal position in the global manufacturing supply chain by capitalizing on the nation's core strength in industrial technology, while fostering an outstanding work environment that stimulates synergy between employees and automated systems in order to cope with the R.O.C.'s imminent labor shortage,” Mao said.

Also focusing on the Productivity 4.0 Initiative, the Premier gave a keynote speech titled “Views on the current economic and social issues” at the Third Wednesday Club. He took the view that the GDP downslide is structural in nature and that the government will guide the economy toward an upward path by assisting industries to innovate and transform. In an effort to remove the three major obstacles to innovation and entrepreneurship (discouraging laws and regulations, difficulty in raising capital and financing, and a lack of international partnerships), the government has been diligently promoting the Third Party Payment Act as well as setting up the R.O.C. Rapid Innovation Prototyping League for Enterprises. Among these measures, Industry 4.0 has been at the core of the Initiative, in which cyber-physical production systems (CPS) are to be introduced by integrating cloud computing and Internet of Things technology to spur industrial transformation, specifically in industrial manufacturing, value-added services, and agricultural production. The Productivity 4.0 Initiative is an imperative measure for dealing with the R.O.C.'s imminent labor shortage and aging society, and its promising effects are waiting to unfold.
3.Executive Yuan’s Further Addendum to the “Productivity 4.0 Plan”: Attainment of Core Technologies and the Cultivation of Domestic Technical Talent

In a continued effort to put in place the most integrated infrastructural setting for the flourishing of its “Productivity 4.0 Plan”, Executive Yuan Premier Mao Chi-kuo announced on October 22 that the overall infrastructural set-up will focus on the development of core technologies and the cultivation of skilled technical labor. To this end, the Executive Yuan is gathering participation and resources from the Ministry of Economic Affairs (hereafter MOEA), the Ministry of Education, the Ministry of Science and Technology, the Ministry of Labor, and the Council of Agriculture, among other governmental bodies, and collecting experience and knowledge from academia and research, in order to improve the development of pivotal technologies and the training of skilled technical labor, and consequently to reform the present education system so as to meet the aforementioned goals.

Premier Mao pointed out that Productivity 4.0 is a production concept in which industry evolves from mere automation-based to intelligence-based manufacturing, shifting toward a “small-volume, large-variety” production paradigm and closing the gap between the production and consumption sides through direct communication, thus allowing industry to change its old efficiency-based production model into an innovation-driven one. Apart from the research and development efforts geared toward key technologies, Premier Mao stressed that the people involved in this transformative process are what dictates the Productivity 4.0 Plan's success; the cross-over, or multi-disciplinary, capability of the labor force is especially significant.
To develop the workforce needed for Productivity 4.0, besides raising support for the needed research and development, an extensive effort should be placed on reforming and upgrading the current educational system, as well as the technical labor and internal corporate training structures. Moreover, an efficient platform should be implemented so that opinions and experiences can be pooled, fostering closer ties between industry, academia, and research. The MOEA stated that the fundamental premise behind the Productivity 4.0 strategy is that only through the systematic, brand-oriented formation of technical support groups, constituted by members of industry, academia, and research, will it be possible to develop key sensor, internet, and core technologies for the manufacturing, business, and agriculture sectors. It is estimated that by the end of 2016, the Executive Yuan will have completed six major Productivity 4.0 production lines; supported the development of 2,500 technical personnel in smart manufacturing, smart business, and smart agriculture; and established four inter-university, inter-disciplinary strategic partnerships in order to prepare the labor force needed for the realization of the Productivity 4.0 Plan. It is further estimated that by 2020, industry will have developed the key technologies through the Productivity 4.0 platform, helping to decrease by 50% the time currently needed for research and development, increase technological sovereignty by 50%, and raise production efficiency by 15% or more. Furthermore, through educational reform, the nation will be able to lay solid foundations for its future talent, connecting them to the world at large and effectively making them fit to face global markets and to upgrade their production model.

Shifting AI Governance in East Asia: AI Legislative Progress in Japan, South Korea and Taiwan

Shifting AI Governance in East Asia: AI Legislative Progress in Japan, South Korea and Taiwan

2025/09/09

Keywords: artificial intelligence, artificial intelligence regulation

I.Introduction

The landscape of AI governance in East Asia is changing, with two new AI laws enacted and one on the way. In South Korea, an act titled “the Basic Act on the Development of Artificial Intelligence and the Establishment of Foundation for Trustworthiness” (“인공지능 발전과 신뢰 기반 조성 등에 관한 기본법”, henceforth referred to as “South Korea’s AI Act” or “SKAIA”)[1] was approved on December 26, 2024[2] and promulgated on January 21, 2025. The Act is designed to establish a national AI governance framework and systematically foster the AI industry while preventing potential AI risks.[3] A few months later, Japan’s first law regulating AI was passed by the National Diet on May 28, 2025. The new law is titled “the Act on Promotion of Research and Development, and Utilization of AI-related Technology” (“人工知能関連技術の研究開発及び活用の推進に関する法律”, henceforth referred to as “Japan's AI Act” or “JAIA”)[4], which reflects the government's strong will to catch up in the global AI race.[5] Elsewhere in the region, Taiwan’s Executive Yuan passed its draft AI Basic Act (“人工智慧基本法草案”) on August 28, 2025[6][7], which must now be submitted to the Legislative Yuan for deliberation. The government hopes the new law will lay the foundation for establishing Taiwan as an AI island and a key driving force in global AI development.[8] This article gives a quick overview of the key features of the three new AI regulations to illustrate the new landscape these countries are shaping in AI governance.
II.Key features of Japan’s AI Act (JAIA)

1.Purpose and principles of JAIA

Given Japan's lagging AI development and rising public concerns, JAIA reflects the government's worry about falling behind global peers in AI investment and adoption.[9] It is believed that new laws are needed, in addition to existing laws and regulations, to promote innovation and address risks.[10] Hence, JAIA aims to advance the R&D and application of AI through the formulation of basic principles and plans, and the establishment of an "AI Strategic Headquarters".[11] JAIA establishes basic principles for the promotion of the R&D and application of AI-related technologies[12], including enhancing industry R&D capabilities and competitiveness, systematically promoting AI collaboration from research to application with transparency, and enabling Japan to shape global norms through international cooperation.[13]

2.Industry Development and Promotion

JAIA requires the government to develop a National AI Basic Plan, in accordance with the basic principles, to promote the R&D and application of AI. The AI Basic Plan should set out fundamental policy guidelines and measures to comprehensively and systematically advance the R&D and application of AI-related technologies, along with other necessary provisions.[14] JAIA also specifies basic measures to be included in the plan, covering the promotion of R&D, the expansion and sharing of facilities and data, human resources and education, international engagement in AI norm-setting, and the drafting of domestic guidelines.
In addition, the government should monitor AI technology trends and analyze cases of rights violations arising from improper AI use, in order to develop countermeasures and provide guidance accordingly.[15]

3.Governance

JAIA stipulates that an AI Strategy Headquarters be established under the Cabinet, composed of all cabinet members and headed by the Prime Minister.[16] The AI Strategy Headquarters is tasked with comprehensively and systematically advancing AI-related technology R&D and application policies, including the formulation, promotion, and implementation of the AI Basic Plan and other related initiatives.[17] The Act also empowers the AI Strategy Headquarters to ask stakeholders to provide information, opinions, explanations, and other necessary assistance.[18]

4.Risk management and rights protection

JAIA does not impose direct compliance obligations, but AI companies and research institutions are required to cooperate with government investigations and follow government guidance in cases involving violations of human rights and interests.[19]

5.Implementation of JAIA and Follow-up Work

JAIA came into force in May 2025. The Japanese government is required to develop guidelines that align with international standards and to launch the Strategy Headquarters for the preparation and implementation of the National AI Basic Plan.
III.Key features of South Korea’s AI Act (SKAIA)

1.Purpose and principles of SKAIA

SKAIA is designed to establish a foundation for AI development and trustworthiness, strengthening the protection of citizens' rights and interests, improving quality of life, and enhancing the country's competitiveness.[20] It focuses on advancing national AI collaboration to foster a flourishing AI sector and on developing legal frameworks to mitigate risks.[21] Accordingly, the Act establishes basic AI development principles: prioritizing safety and reliability to improve quality of life, and ensuring that those affected by AI output receive clear, meaningful explanations within reasonable parameters.[22]

2.Industry development and promotion

Supporting AI technology and industry development is a key feature of SKAIA. It establishes comprehensive measures covering technology development, industry revitalization, SME support, industrial foundations, talent cultivation, regulatory adaptation, and international cooperation.[23]

3.Governance

SKAIA also strengthens the institutional framework for AI governance. The Ministry of Science and ICT (henceforth referred to as “MSIT”) is mandated to formulate an AI Master Plan every three years and is empowered to investigate violations, require corrective action, and impose fines on non-compliant entities.[24] The National AI Committee is authorized to review and decide on the AI Master Plan and other AI-related matters, making it the highest decision-making body for South Korea's AI policies. It is composed of the heads of central administrative agencies and civilian AI experts appointed by the president.[25] SKAIA also establishes the AI Policy Center to support MSIT in AI policy formulation, and the AI Safety Institute for AI safety matters.[26]

4.Risk management and rights protection

SKAIA imposes specific obligations on operators of high-impact AI and generative AI systems.
All operators must ensure system transparency and safety, while high-impact AI operators face additional responsibilities, including conducting fundamental-rights impact assessments.[27] High-impact AI systems are defined as AI systems that have a significant impact on, or may pose a risk to, human life, safety, and fundamental rights, and that are mainly utilized in critical infrastructure sectors and human-rights-sensitive areas, or in other areas specified by presidential decree.[28] The procedure for determining whether an AI system qualifies as high-impact AI will be established through subordinate legislation.[29]

5.Implementation of SKAIA and Follow-up Work

SKAIA will come into effect on January 1, 2026, and the formulation of subordinate statutes detailing enforcement mechanisms and guidelines is to be expedited. However, domestic critics argue that the corporate-obligation provisions may hinder AI development and advocate postponing their implementation.[30] Indeed, an amendment to the Act was proposed in April 2025, seeking such a postponement along with a three-year grace period.[31]

IV. Key features of Taiwan’s draft AI Basic Act

1.Purpose and Principles of the draft AI Basic Act[32]

Taiwan adopts a relatively conservative approach to AI policy, and measures to boost industrial development have long occupied the agenda of AI governance.
Given that AI is a crucial technology for national development, the draft AI Basic Act (henceforth referred to as "the draft Act") seeks to ensure that AI technology develops vigorously under a human-centered approach, to encourage innovation while considering human rights, and to safeguard Taiwan's national sovereignty and cultural values.[33] Hence, the draft Act establishes seven guiding principles in line with international norms: sustainability, human autonomy, privacy protection and data governance, security, transparency and explainability, fairness, and accountability.[34]

2.Industry Development and Promotion

It is the government's responsibility to promote the R&D and application of AI and to construct the infrastructure needed.[35] To facilitate AI innovation, competent authorities may provide a controlled environment for testing and validating innovative AI products and services before they are released to the market or put into use.[36] Considering the wide scope of AI application and development, the government is encouraged to collaborate with the private sector, including through public-private partnerships, and should promote international cooperation on AI matters.[37] The government should also continue to comprehensively promote AI education at all levels to enhance the public's AI literacy.[38] Data is crucial for AI development, so the draft Act mandates the government to establish mechanisms to enhance data availability, along with measures to facilitate AI outputs that maintain the country's multicultural values and to protect intellectual property rights.[39]

3.Risk Management and Rights Protection

(1) Risk Management

The draft Act includes several provisions addressing AI risks. First, the government should take steps to prevent AI from being used for illegal purposes.
For example, the Ministry of Digital Affairs (MODA) and other relevant agencies may provide or recommend tools or methods for AI evaluation and verification to avoid the misuse of AI.[40] Secondly, MODA is mandated to develop an AI risk classification framework, based on which sectoral competent authorities should establish risk-based tiered management standards.[41] Thirdly, the government may, through binding regulations or non-binding administrative guidance, promote safety standards, verification, transparent and explainable traceability, or accountability mechanisms to enhance the trustworthiness of AI development and application.[42] Lastly, the government should clarify the allocation of, and conditions for, liability for high-risk AI applications and establish relevant mechanisms for relief, compensation, or insurance to protect affected parties.[43] However, these AI application responsibility norms would not apply to pre-release activities, in order to support technological innovation.[44][45]

(2) Rights Protection

The draft Act concerns not only individuals' privacy rights but also labor rights. The government should, on the one hand, ensure the protection of personal data used throughout the AI lifecycle[46], and, on the other, protect workers' rights and provide necessary assistance to help them adapt to technological changes, especially those who have lost their jobs due to the use of AI.[47]

4.Governance and Implementation

Despite heated domestic debate over designating a dedicated AI regulatory authority, the Executive Yuan decided against establishing such an authority, given AI's cross-ministerial nature.
Relevant competent authorities will be responsible for formulating implementing regulations and guidelines, and the Executive Yuan will continue to guide relevant agencies and departments at all levels through the existing Digital Legal Coordination Meeting to facilitate the development of AI.[48]

V.Analysis and conclusion

Japan, South Korea, and Taiwan all seek to maintain their momentum in promoting AI development through AI legislation. All three emphasize trustworthy AI, though in practice they place greater emphasis on AI development. They share considerable common ground in their policies to foster AI industry development, such as promoting AI R&D and application and supporting infrastructure-building, but diverge in their approaches to addressing potential AI-related risks and in their governance structures. Japan adopts a “light touch” regulatory approach, maintaining coherent policy coordination that responds to domestic imperatives and global trends without imposing regulatory burdens on industry.[49] The country favors a soft approach based on governmental guidance. In contrast, South Korea incorporates regulatory provisions specifically targeting high-impact AI systems in its AI Basic Act, seeking to balance enhancing national competitiveness through AI against mitigating the potential risks of AI misuse, though this approach currently faces some domestic opposition. Taiwan adopts an approach similar to Japan's: the draft AI Basic Act avoids imposing regulatory obligations, and the government will prioritize AI verification and evaluation mechanisms to ensure trustworthy AI development. Regarding governance approaches, both Japan and South Korea seek to strengthen governmental AI governance functions through legislation, with Japan establishing an AI Strategy Headquarters and South Korea creating a National AI Committee, both operating under their respective cabinets.
In contrast, Taiwan's draft AI Basic Act does not address governance structural matters. Given the profound societal transformations that AI technology may bring, all three East Asian countries recognize the importance of sustained AI advancement while acknowledging the critical need to ensure AI safety and trustworthiness to protect human rights. In an era of intense global AI competition, it seems to be the best policy for governments to carefully design AI policies that strike a balance between fostering innovation and safeguarding human rights. This cautious approach is essential as significant challenges remain and AI risks demand comprehensive solutions. Reference: [1] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법(법률 제20676호, 2025. 1. 21, 제정),법제처 국가법령정보센터,https://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5%20%EB%B0% 9C%EC%A0%84%EA%B3%BC%20%EC%8B%A0%EB%A2%B0%20%EA%B8%B0%EB%B0%98%20%EC%A1%B0%EC% 84%B1%20%EB%93%B1%EC%97%90%20%EA%B4%80%ED%95%9C%20%EA%B8%B0%EB%B3%B8%EB%B2%95/(206 76,20250121) (最後瀏覽日:2025/09/11)。 [2] A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly‘s Plenary Session, Ministry of Science and ICT, https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=4&mPid=2&pageIndex=&bbsSeqNo=42&nttSeqNo=1071&searchOpt=ALL&searchTxt= (last visited Sept. 11, 2025). [3] A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly‘s Plenary Session, Ministry of Science and ICT, https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=4&mPid=2&pageIndex=&bbsSeqNo=42&nttSeqNo=1071&searchOpt=ALL&searchTxt= (last visited Sept. 11, 2025). [4] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号),e-Gov法令検索,https://laws.e-gov.go.jp/law/507AC0000000053(最後瀏覽日:2025/09/11)。 [5] CABINET OFFICE, GOVERNMENT OF JAPAN, Outline of the Act on Promotion of Research and Development, and Utilization of AI-related Technology (AI Act), https://www8.cao.go.jp/cstp/ai/ai_hou_gaiyou_en.pdf (last visited Sept. 11, 2025). 
[6] 〈政院通過「人工智慧基本法」草案 建構AI發展與應用良善環境 打造臺灣成為AI人工智慧島〉,行政院,https://www.ey.gov.tw/Page/9277F759E41CCD91/5d673d1e-f418-47dc-ab35-a06600f77f07(最後瀏覽日:2025/09/09)。 [7] There are other AI bills brought up by legislators in the Legislative Yuan. The purpose of this article is to analyze the AI governance priorities of the governments of Japan, South Korea, and Taiwan; therefore, other AI bills proposed by legislators are not included in the discussion. [8] 蘇文彬,〈行政院通過AI基本法草案,將不設立AI專責機關〉,iThome,2025/08/28,https://www.ithome.com.tw/news/170874 (最後瀏覽日:2025/09/09)。 [9] Japan’s AI Bill Advances Toward Enactment, Connect on Tech (May 27, 2025), https://connectontech.bakermckenzie.com/japans-ai-bill-advances-toward-enactment/ (last visited Sept. 9, 2025). [10] 松尾剛行,〈【2025年施行】AI新法とは?AIの研究開発・利活用を推進する法律を分かりやすく解説!〉,Keiyaku-Watch,https://keiyaku-watch.jp/media/hourei/2025-ai-law/(最後瀏覽日:2025/09/11)。 [11] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第1条。 [12] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第3条。 [13] Japan Enacts AI Promotion Act: Overview and Implications for Businesses, Zelo Law Square (May, 2025), https://zelojapan.com/en/lawsquare/56899 (last visited Sept. 9, 2025). [14] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第18条。 [15] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第11-17条。 [16] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第19、21-24条。 [17] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第20条。 [18] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第25条。 [19] 人工知能関連技術の研究開発及び活用の推進に関する法律(令和7年法律第53号)第16条。 [20] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제1조。 [21] The Korean AI Basic Act: Asia’s First Comprehensive Framework on AI, Lexology (Mar. 17, 2025), https://www.lexology.com/library/detail.aspx?g=f91ff0fb-94ed-4aa9-b667-65d6206a7227 (last visited Sept. 9, 2025). 
[22] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제3조。 [23] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제13-26조。 [24] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제40조。 [25] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제7조。 [26] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제6-12조。 [27] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제31-32조。 [28] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제4조。 [29] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법,제33조。 [30] Seungmin (Helen) Lee, South Korea’s Evolving AI Regulations, Stimson (June 12, 2025), https://www.stimson.org/2025/south-koreas-evolving-ai-regulations/ (last visited Sept. 9, 2025). [31] 〈인공지능 발전과 신뢰 기반 조성 등에 관한 기본법 일부개정법률안〉,대한민국국회,https://likms.assembly.go.kr/bill/bi/billDetailPage.do?billId=PRC_N2M5K0S3R2R0Q1O3X5X1W1U1T7P3Q6&currMenuNo=2600044(最後瀏覽日:2025/09/09)。 [32] 〈政院通過「人工智慧基本法」草案 建構AI發展與應用良善環境 打造臺灣成為AI人工智慧島〉,行政院,https://www.ey.gov.tw/Page/9277F759E41CCD91/5d673d1e-f418-47dc-ab35-a06600f77f07(最後瀏覽日:2025/09/09)。 [33] 人工智慧基本法草案第1條。 [34] 人工智慧基本法草案第3條。 [35] 人工智慧基本法草案第4條。 [36] 人工智慧基本法草案第5條。 [37] 人工智慧基本法草案第6條。 [38] 人工智慧基本法草案第7條。 [39] 人工智慧基本法草案第14條。 [40] 人工智慧基本法草案第8條。 [41] 人工智慧基本法草案第9條。 [42] 人工智慧基本法草案第10條。 [43] 人工智慧基本法草案第11條。 [44] 人工智慧基本法草案第11條。 [45] See also: Taiwan Rolls Out Draft Artificial Intelligence Law, OCACNEWS, July 18, 2024, https://ocacnews.net/article/374412 (last visited Sept. 3, 2025). [46] 人工智慧基本法草案第14條。 [47] 人工智慧基本法草案第12條。 [48] 蘇文彬,〈行政院通過AI基本法草案,將不設立AI專責機關〉,iThome,2025/08/28,https://www.ithome.com.tw/news/170874 (最後瀏覽日:2025/09/09)。 [49] Sun Ryung Park, Less Regulation, More Innovation in Japan’s AI Governance, East Asia Forum (May 21, 2025), https://eastasiaforum.org/2025/05/21/less-regulation-more-innovation-in-japans-ai-governance/ (last visited July 4, 2025).
