On March 6, 2014, the Bureau of Energy, Ministry of Economic Affairs, published a pre-announcement of a trial program for a voluntary green electricity framework (hereafter the Trial Program) and opened it for public comment. In light of the content of the Trial Program, STLI provides the following suggestions for the future planning of the related policy structure.
The institution of green electricity established by the Trial Program is one of several policies for promoting renewable energy. Despite its trial nature, we suggest that a policy design with more options would benefit the promotion of renewable energy, in light of the various measures undertaken by different countries.
According to the Trial Program, the planned rate for green electricity is calculated by dividing the total electricity subsidy to be paid by the Renewable Energy Development Fund by the total green electricity generation reported by Taiwan Power Company. The Ministry of Economic Affairs will adjust the green electricity rate based on both the number of users subscribing to green electricity and international green electricity market rates, and will then announce the rate in October of each year unless otherwise designated.
In addition, under the planned Trial Program, the subscription unit for green electricity is 100 kW·h. It is further reported that the currently planned green electricity surcharge is 1.06 NTD/kW·h, which yields 3.95 NTD/kW·h when added to the original rate, a 37% increase in price per kW·h. Under the existing content of the Trial Program, only a single rate will be offered during the trial period.
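As a quick arithmetic check of the quoted figures (a sketch only; the variable names are ours, and the base rate is inferred from the two quoted numbers rather than stated explicitly in the Trial Program):

```python
# Back-of-envelope check of the Trial Program's quoted figures.
# Rates are in NTD per kWh; the base rate is inferred, not quoted.
green_surcharge = 1.06                     # planned green electricity premium
total_rate = 3.95                          # quoted rate including the premium

base_rate = total_rate - green_surcharge   # implied original rate: 2.89
percent_increase = green_surcharge / base_rate * 100

print(f"implied base rate: {base_rate:.2f} NTD/kWh")
print(f"premium over base rate: {percent_increase:.0f}%")  # ~37%, as reported
```

The reported 37% figure is consistent: 1.06 / (3.95 - 1.06) is roughly 36.7%, which rounds to 37%.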
In this regard, we take the view that it would be beneficial to take into account similar approaches adopted by other countries. In Germany, for instance, renewable energy is furthered by an obligatory surcharge (EEG-Umlage) together with voluntary green electricity programs provided by private electricity retailers.
According to the German Federal Ministry for Economic Affairs and Energy (BMWi), the electricity price paid by the German public comprises three parts: (1) the cost of purchasing and distributing the electricity, including the retailer's margin; (2) regulated network fees, including those for the operation and reading of meters; and (3) charges imposed by the government, including taxes and the abovementioned obligatory renewable energy surcharge (EEG-Umlage), as prescribed by the Act on Renewable Energy (Gesetz für den Vorrang Erneuerbarer Energien, also known as the Erneuerbare-Energien-Gesetz, EEG).
In terms of implementation on the ground, an example of a green electricity price menu from the German electricity retailer Vattenfall is given below. In all price menus offered by Vattenfall in Berlin, for instance, 29.4% of the electricity comes from renewable energy as a result of the implementation of the Act on Renewable Energy.
Aside from the abovementioned percentage achieved through the existing obligatory measures, electricity retailers in Germany further provide price menus that are "greener". For example, among the options provided by Vattenfall (Chart I), under the 12-month program one can choose a menu that consists of 39.4% renewable energy, at a price of 0.2642 Euro/kW·h (about 10.96 NTD/kW·h). One can also opt for a menu whose energy supply comes 100% from renewable energy, at a price of 0.281 Euro/kW·h (about 11.66 NTD/kW·h).
Chart I: Green Electricity Price Menus Provided by Vattenfall in Berlin, Germany
| Program | Percentage of Renewable Energy Supply | Price |
|---|---|---|
| 12-month program | 39.4% | 0.2642 Euro/kW·h (about 10.96 NTD/kW·h) |
| All renewable energy program | 100% | 0.281 Euro/kW·h (about 11.66 NTD/kW·h) |
Source: Vattenfall website, translated and reorganized by STLI, April 2014.
In addition, Australia also has similar voluntary green electricity programs, with the goals of promoting renewable energy, reducing carbon emissions, and transforming the energy economy. Since 1997, GreenPower in Australia has been in charge of auditing and certifying green electricity retailers and power plants. The Australian model uses a certification mechanism conducted by an independent third party to ensure that the green electricity purchased by end users complies with specific standards.
As for price menu options, taking the green electricity programs offered by the Australian retailer Origin Energy as an example, users can choose among six different programs, composed of renewable energy supplies of 10%, 20%, 25%, 50%, 75%, and 100% respectively, at various rates (shown in Chart II).
Chart II: Australian Green Electricity Programs Provided by Origin Energy
| Percentage of Renewable Energy | Electricity Price per kW·h |
|---|---|
| 0% | 0.268 AUD (about 7.52 NTD) |
| 10% | 0.274868 AUD (about 7.69 NTD) |
| 20% | 0.28006 AUD (about 7.84 NTD) |
| 25% | 0.28292 AUD (about 7.92 NTD) |
| 50% | 0.2838 AUD (about 7.95 NTD) |
| 100% | 0.2992 AUD (about 8.37 NTD) |
Source: Origin Energy website, translated and reorganized by STLI, April 2014.
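From the figures in Chart II one can also work out how much each green option costs relative to the 0% baseline (a rough sketch; the AUD prices are copied from the chart, and the NTD conversion is omitted):

```python
# Premium of each Origin Energy option over the 0% renewable baseline,
# using the AUD per kWh prices from Chart II.
baseline = 0.268
options = {
    "10%": 0.274868,
    "20%": 0.28006,
    "25%": 0.28292,
    "50%": 0.2838,
    "100%": 0.2992,
}

for share, price in options.items():
    premium = (price - baseline) / baseline * 100
    print(f"{share:>4} renewable: +{premium:.1f}% over the baseline rate")
```

The spread is fairly narrow: even the 100% renewable option costs only about 11.6% more than the baseline rate.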
Given the information above, it can be inferred that international mechanisms for the promotion of green electricity often include a variety of price menus that give users more options, such as the two different programs offered by Vattenfall in Germany and the six rates for green electricity offered by Origin Energy in Australia.
It is the suggestion of the present brief that the Trial Program could reference these international examples and try to offer users greater flexibility in choosing the programs most suitable for them.
Draft of AI Product and System Evaluation Guidelines Released by the Administration for Digital Industries to Enhance AI Governance

2024/08/15

I. AI Taiwan Action Plan 2.0

In 2018, the Executive Yuan launched the "AI Taiwan Action Plan" to ensure that the country keeps pace with AI developments. This strategic initiative focuses on attracting top talent, advancing research and development, and integrating AI into critical sectors such as smart manufacturing and healthcare. The action plan has sparked growing discussion on AI regulation. Through these efforts, Taiwan aims to position itself as a frontrunner in the global smart technology landscape. In 2023, the Executive Yuan updated the action plan, releasing "AI Taiwan Action Plan 2.0" to further strengthen AI development. The plan outlines five main pillars:

1. Talent Development: Enhancing the quality and quantity of AI expertise, while improving public AI literacy through targeted education and training initiatives.
2. Technological and Industrial Advancement: Focusing on critical AI technologies and applications to foster industrial growth, and creating the Trustworthy AI Dialogue Engine (TAIDE), which communicates in Traditional Chinese.
3. Enhancing Work Environments: Establishing robust AI governance infrastructure to facilitate industry and governmental regulation, and to foster compliance with international standards.
4. International Collaboration: Expanding Taiwan's role in international AI forums, such as the Global Partnership on AI, to collaborate on developing trustworthy AI practices.
5. Societal and Humanitarian Engagement: Utilizing AI to tackle pressing societal challenges such as labor shortages, an aging population, and environmental sustainability.

II. AI Product and System Evaluation Guidelines: A Risk-Based Approach to AI Governance

To support this governance infrastructure, in March 2024 the Administration for Digital Industries issued the draft AI Product and System Evaluation Guidelines. The Guidelines are intended to serve as a reference for industry when developing and using AI products and systems, thus laying a crucial foundation for advancing AI-related policies in Taiwan. The Guidelines outline several potential risks associated with AI:

1. Third-Party Software and Hardware: While third-party software, hardware, and datasets can accelerate development, they may also introduce risks into AI products and systems. Effective risk management policies are therefore crucial.
2. System Transparency: A lack of transparency in AI products and systems makes risk assessment relatively challenging. Inadequate transparency in AI models and datasets also poses risks for development and deployment.
3. Differences in Risk Perception: Developers of AI products and systems may overlook risks specific to different application scenarios. Moreover, risks may gradually emerge as the product or system is used and trained over time.
4. Application Domain Risks: Variations between testing results and actual operational performance can lead to differing risk assessments for evaluated products and systems.
5. Deviation from Human Behavioral Norms: If AI products and systems behave unexpectedly compared to human operations, this can indicate a drift in the product, system, or model, thereby introducing risks.

The Guidelines also specify that businesses must categorize risks when developing or using AI products and systems, and manage them in accordance with these classifications. In alignment with the EU AI Act, risks are classified into four levels: unacceptable, high, limited, and minimal.

1. Unacceptable Risk: If AI systems used by public or private entities provide social scoring of individuals, this could lead to discriminatory outcomes and the exclusion of certain groups. Furthermore, if AI systems are employed to manipulate the cognitive behavior of individuals or vulnerable populations, causing physical or psychological harm, such systems are deemed unacceptable and prohibited.

2. High Risk: AI systems are classified as high-risk in several situations. These include applications used in critical infrastructure, such as transportation, where there is potential risk to citizens' safety and health. They also encompass AI systems involved in educational or vocational training (such as exam scoring), which can determine access to education or professional paths. AI used in safety-critical product components, such as robot-assisted surgery, also falls into this category. In the employment sector, AI systems used to manage recruitment processes, including CV-sorting software, are considered high-risk. Essential private and public services, such as credit scoring systems that affect loan eligibility, also fall under high risk. AI used in law enforcement in ways that may affect fundamental rights, such as evaluating the reliability of evidence, is also included. AI systems involved in migration, asylum, and border control, such as automated visa application examinations, are categorized as high-risk. Finally, AI solutions used in the administration of justice and democratic processes, such as court ruling searches, are also classified as high-risk. If an AI system is classified as high risk, it must be evaluated across ten criteria (Safety, Explainability, Resilience, Fairness, Accuracy, Transparency, Accountability, Reliability, Privacy, and Security) to ensure the system's quality.

3. Limited Risk: When an AI product or system is classified as having limited risk, it is up to the enterprise to determine whether an evaluation is required. The Guidelines also introduce specific transparency obligations to ensure that humans are informed when necessary, thus fostering trust. For instance, when using AI systems such as chatbots or systems for generating deepfake content, humans must be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.

4. Minimal or No Risk: The Guidelines allow the free use of minimal-risk AI. This includes applications such as AI-enabled video games and spam filters.

III. Conclusion

The AI Product and System Evaluation Guidelines represent a significant step forward in establishing a robust, risk-based framework for AI governance in Taiwan. By aligning with international standards like the EU AI Act, the Guidelines ensure that AI products and systems are rigorously assessed and categorized into four distinct risk levels: unacceptable, high, limited, and minimal. This structured approach allows businesses to manage AI-related risks more effectively, ensuring that systems are safe, transparent, and accountable. The emphasis on evaluating AI systems across ten critical criteria, including safety, explainability, and fairness, reflects a comprehensive strategy for mitigating potential risks. This proactive approach not only safeguards the public but also fosters trust in AI technologies. By setting clear expectations and responsibilities for businesses, the Guidelines promote responsible development and deployment of AI, ultimately contributing to Taiwan's goal of becoming a leader in the global AI landscape.
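The four-tier scheme above can be pictured as a simple lookup from a use case to its tier and the follow-up the Guidelines attach to that tier. This is an illustrative sketch only: the example use cases and the mapping paraphrase the text and are not an official taxonomy.

```python
# Illustrative sketch of the four-tier, risk-based classification described
# above. The example mapping paraphrases the text; it is not official.
EXAMPLE_TIERS = {
    "social scoring of individuals": "unacceptable",
    "exam scoring in education": "high",
    "credit scoring for loan eligibility": "high",
    "customer-service chatbot": "limited",
    "spam filter": "minimal",
}

FOLLOW_UP = {
    "unacceptable": "prohibited",
    "high": "evaluate against the ten criteria (safety, explainability, etc.)",
    "limited": "transparency obligations; evaluation at the enterprise's discretion",
    "minimal": "free use",
}

def required_action(use_case: str) -> str:
    """Return the follow-up associated with a use case's risk tier."""
    tier = EXAMPLE_TIERS.get(use_case)
    return FOLLOW_UP.get(tier, "classify the risk level first")

print(required_action("social scoring of individuals"))  # prohibited
print(required_action("spam filter"))                    # free use
```

The default branch mirrors the Guidelines' requirement that a risk level be assigned before any other obligation can be determined.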
The Use of Automated Facial Recognition Technology and Supervision Mechanisms in the UK

I. Introduction

Automated facial recognition (AFR) technology has developed rapidly in recent years and can identify target people in a short time. The UK Home Office announced its "Biometrics Strategy" on June 28, 2018, stating that AFR technology will be introduced into law enforcement and that the Home Office will actively cooperate with other agencies to establish a new oversight and advisory board in order to maintain public trust. AFR technology can improve law enforcement work, but its use increases the risk of intrusion into individual liberty and privacy. This article focuses on the application of AFR technology proposed by the UK Home Office. The first part describes the use of AFR technology by the police. The second part focuses on the supervision mechanism proposed by the Home Office in the Biometrics Strategy. Because the use of AFR technology remains controversial, this article also sorts out the key issues for follow-up development through the opinions of the public and private sectors. An overview of the discussion of AFR technology used by police agencies should be helpful for further policy formulation.

II. Overview of the Strategy for AFR Technology Used by the UK Police

According to the Home Office's Biometrics Strategy, AFR technology will be used in law enforcement, passports and immigration, and national security to protect the public and make these public services more efficient[1]. Since 2017, the UK police have worked with tech companies to test AFR technology at public events such as the Notting Hill Carnival and big football matches[2]. In practice, AFR technology is deployed with mobile or fixed camera systems. When a face image is captured by a camera, it is passed to the recognition software for identification in real time.
Then, the AFR system determines whether there is a "match", and an alarm solicits an operator's attention to verify the match and execute the appropriate action[3]. For example, South Wales Police have used an AFR system to compare images of people in crowds attending events with pre-determined watch lists of suspected mobile phone thieves[4]. In the future, the police may also compare potential suspects against images from closed-circuit television (CCTV) cameras or mobile phone footage for evidential and investigatory purposes[5]. The AFR system may be used as a tool of crime prevention more than as a form of crime detection[6]. However, the uses of AFR technology are seen as dangerous and intrusive by the UK public[7]. For one thing, misuse of AFR technology by police agencies could cause serious harm to democracy and human rights. For another, it could have a chilling effect on civil society, and people may self-censor lawful behavior under constant surveillance[8].

III. The Supervision Mechanism of AFR Technology

To maintain public trust, there must be a supervision mechanism to oversee the use of AFR technology in law enforcement. The UK Home Office indicates that the use of AFR technology is governed by a number of codes of practice, including the Police and Criminal Evidence Act 1984, the Surveillance Camera Code of Practice, and the Information Commissioner's Office (ICO)'s Code of Practice for surveillance cameras[9].

(I) Police and Criminal Evidence Act 1984

The Police and Criminal Evidence Act (PACE) 1984 lays down police powers to obtain and use biometric data, such as collecting DNA and fingerprints from people arrested for a recordable offence. PACE allows law enforcement agencies to carry out identification to find people related to crime for criminal and national security purposes.
Therefore, for investigation, detection, and prevention tasks related to crime and terrorist activities, the police can collect the facial image of a suspect, which can be interpreted as within the scope of authorization of PACE.

(II) Surveillance Camera Code of Practice

The use of CCTV in public places has interfered with the rights of the people, so the Protection of Freedoms Act 2012 requires the establishment of an independent Surveillance Camera Commissioner (SCC) for supervision. The Surveillance Camera Code of Practice issued by the SCC sets out 12 principles guiding the operation and use of surveillance camera systems. The 12 guiding principles are as follows[10]:

A. Use of a surveillance camera system must always be for a specified purpose which is in pursuit of a legitimate aim and necessary to meet an identified pressing need.
B. The use of a surveillance camera system must take into account its effect on individuals and their privacy, with regular reviews to ensure its use remains justified.
C. There must be as much transparency in the use of a surveillance camera system as possible, including a published contact point for access to information and complaints.
D. There must be clear responsibility and accountability for all surveillance camera system activities, including images and information collected, held, and used.
E. Clear rules, policies, and procedures must be in place before a surveillance camera system is used, and these must be communicated to all who need to comply with them.
F. No more images and information should be stored than that which is strictly required for the stated purpose of a surveillance camera system, and such images and information should be deleted once their purposes have been discharged.
G. Access to retained images and information should be restricted, and there must be clearly defined rules on who can gain access and for what purpose such access is granted; the disclosure of images and information should only take place when it is necessary for such a purpose or for law enforcement purposes.
H. Surveillance camera system operators should consider any approved operational, technical, and competency standards relevant to a system and its purpose, and work to meet and maintain those standards.
I. Surveillance camera system images and information should be subject to appropriate security measures to safeguard against unauthorised access and use.
J. There should be effective review and audit mechanisms to ensure legal requirements, policies, and standards are complied with in practice, and regular reports should be published.
K. When the use of a surveillance camera system is in pursuit of a legitimate aim, and there is a pressing need for its use, it should then be used in the most effective way to support public safety and law enforcement, with the aim of processing images and information of evidential value.
L. Any information used to support a surveillance camera system which compares against a reference database for matching purposes should be accurate and kept up to date.

(III) ICO's Code of Practice for Surveillance Cameras

Attention must be paid to personal data and privacy protection during the use of surveillance camera systems and AFR technology. The ICO issued its Code of Practice for surveillance cameras under the Data Protection Act 1998 to explain the legal requirements for operators of surveillance cameras. Its key points are summarized as follows[11]:

A. The operating time of surveillance camera systems should be carefully evaluated and adjusted; it is recommended to regularly evaluate whether continued use is necessary and proportionate.
B. A police force should ensure effective administration of surveillance camera systems, deciding who has responsibility for the control of personal information, what is to be recorded, how the information should be used, and to whom it may be disclosed.
C. Recorded material should be stored in a safe way to ensure that personal information can be used effectively for its intended purpose. In addition, the information may be encrypted if necessary.
D. Disclosure of information from surveillance systems must be controlled and consistent with the purposes for which the system was established.
E. Individuals whose information is recorded have a right to be provided with that information or to view it. The ICO recommends that information be provided promptly and within no longer than 40 calendar days of receiving a request.
F. The minimum and maximum retention periods for recorded material are not prescribed in the Data Protection Act 1998, but material should not be kept for longer than necessary and should be retained for the shortest period necessary to serve the purposes for which the system was established.

(IV) A New Oversight and Advisory Board

In addition to the aforementioned regulations and guidance, the UK Home Office has stated that it will work closely with related authorities, including the ICO, SCC, Biometrics Commissioner (BC), and Forensic Science Regulator (FSR), to establish a new oversight and advisory board to coordinate consideration of law enforcement's use of facial images and facial recognition systems[12].

To sum up, the use of AFR technology by law enforcement is expected to abide by existing regulations and guidance. Firstly, surveillance camera systems must be used for the purposes for which they were established. Secondly, clear responsibility and accountability mechanisms should be ensured. Thirdly, individuals whose information is recorded have the right to request access to relevant information.
In the future, the new oversight and advisory board will be asked to consider issues relating to law enforcement's use of AFR technology with greater transparency.

IV. Follow-up Key Issues for the Use of AFR Technology

Regarding the UK Home Office's Biometrics Strategy, members of independent agencies such as the ICO, BC, and SCC, as well as civil society, believe that there are still many deficiencies. The relevant discussions are summarized as follows:

(I) The Necessity of Using AFR Technology

Elizabeth Denham, the Information Commissioner, called for looking at the use of AFR technology carefully, because AFR is an intrusive technology that increases the risk of intrusion into privacy. Therefore, for the use of AFR technology to be legal, the UK police must have clear evidence demonstrating that the use of AFR technology in public spaces is effective in resolving the problem it aims to address[13]. The Home Office has pledged to undertake Data Protection Impact Assessments (DPIAs) before introducing AFR technology, covering the purpose and legal basis, the framework that applies to the organization using the biometrics, necessity and proportionality, and so on.

(II) The Limitations of Using Facial Image Data

The UK police can collect, process, and use personal data based on the need for crime prevention, investigation, and prosecution. In order to secure the use of biometric information, the BC was established under the Protection of Freedoms Act 2012. The mission of the BC is to regulate the use of biometric information, provide protection from disproportionate enforcement action, and limit the application of surveillance and counter-terrorism powers. However, the BC's powers do not presently extend to forms of biometric information other than DNA or fingerprints[14].
The BC has expressed concern that while the use of biometric data may well be in the public interest for law enforcement purposes and to support other government functions, the public benefit must be balanced against the loss of privacy. Hence, legislation should be enacted to decide that crucial question, instead of depending on the BC's case-by-case feedback[15]. Because biometric data is especially sensitive and most intrusive of individual privacy, a governance framework seems to be required to make decisions on the use of facial images by the police.

(III) Database Management and Transparency

For the application of AFR technology, the scope of the biometric database is a disputed issue in the UK. It is worth mentioning that the British people distrust the criminal database held by the police. When someone is arrested and detained by the police, the police will take photos of the suspect's face. However, unlike fingerprints and DNA, even if the person is not charged, their facial images are not automatically deleted from the police biometric database[16]. South Wales Police have used AFR technology to compare facial images of people in crowds attending major public events with pre-determined watch lists of suspected mobile phone thieves in AFR field tests. Although the watch lists are created for time-limited and specific purposes, the inclusion of suspects who could possibly be innocent still causes public alarm. Elizabeth Denham warned that there should be a transparent system for retaining facial images of those arrested but not charged for certain offences[17]. Therefore, in the future the UK Home Office may need to establish a transparent system for the AFR biometric database and a related supervision mechanism.

(IV) Accuracy and Identification Errors

In addition to worries about infringing personal privacy, the low accuracy of AFR technology is another reason many people oppose its use by police agencies.
Silkie Carlo, director of Big Brother Watch, said the police must immediately stop using AFR technology and avoid mistaking thousands of innocent citizens for criminals; Paul Wiles, the Biometrics Commissioner, also called for legislation to manage AFR technology, because its accuracy is too low and its use should be tested and pass external peer review[18]. In the Home Office's Biometrics Strategy, the scientific quality standards for AFR technology will be established jointly with the FSR, an independent agency under the Home Office. In other words, the Home Office plans to extend the existing forensic science regime to regulate AFR technology. The FSR has therefore worked with the SCC to develop standards relevant to digital forensics. The UK government has not yet issued specific standards for regulating the accuracy of AFR technology at the present stage.

V. Conclusion

From the discussion of the public and private sectors in the UK, we can summarize some rules for the use of AFR technology. Firstly, before the application of AFR technology, it is necessary to complete a pre-assessment to ensure the benefits to the whole society. Secondly, there is the possibility of identification errors in AFR technology; therefore, in order to maintain the confidence and trust of the people, the relevant scientific standards should be set up first to test system accuracy. Thirdly, the AFR system should be regarded as an assisting tool for police enforcement in the initial stage. In other words, the information analyzed by the AFR system should still be judged by law enforcement officials, and the police officers should take responsibility. In order to balance the protection of the public interest and basic human rights, the use of biometric data in AFR technology should be regulated by a special law, rather than by the regulations on surveillance cameras and data protection.
The scope of the identification database is also a key point, and it may need legislators' approval to collect and store the facial image data of innocent people. Last but not least, the use of the AFR system should be transparent, and victims of human rights violations should be able to seek redress.

[1] UK Home Office, Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/publications/home-office-biometrics-strategy (last visited Aug. 09, 2018), at 7.
[2] Big Brother Watch, Face Off Campaign: Stop the Met Police Using Authoritarian Facial Recognition Cameras, https://bigbrotherwatch.org.uk/all-campaigns/face-off-campaign/ (last visited Aug. 16, 2018).
[3] Lucas Introna & David Wood, Picturing algorithmic surveillance: the politics of facial recognition systems, Surveillance & Society, 2(2/3), 177-198 (2004).
[4] Supra note 1, at 12.
[5] Id., at 25.
[6] Michael Bromby, Computerised Facial Recognition Systems: The Surrounding Legal Problems (Sep. 2006) (LL.M Dissertation, Faculty of Law, University of Edinburgh), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.7339&rep=rep1&type=pdf, at 3.
[7] Owen Bowcott, Police face legal action over use of facial recognition cameras, The Guardian, Jun. 14, 2018, https://www.theguardian.com/technology/2018/jun/14/police-face-legal-action-over-use-of-facial-recognition-cameras (last visited Aug. 09, 2018).
[8] Martha Spurrier, Facial recognition is not just useless. In police hands, it is dangerous, The Guardian, May 16, 2018, https://www.theguardian.com/commentisfree/2018/may/16/facial-recognition-useless-police-dangerous-met-inaccurate (last visited Aug. 17, 2018).
[9] Supra note 1, at 12.
[10] Surveillance Camera Commissioner, Surveillance camera code of practice, Oct. 28, 2014, https://www.gov.uk/government/publications/surveillance-camera-code-of-practice (last visited Aug. 17, 2018).
[11] UK Information Commissioner's Office, In the picture: A data protection code of practice for surveillance cameras and personal information, Jun. 09, 2017, https://ico.org.uk/for-organisations/guide-to-data-protection/encryption/scenarios/cctv/ (last visited Aug. 10, 2018).
[12] Supra note 1, at 13.
[13] Elizabeth Denham, Blog: facial recognition technology and law enforcement, Information Commissioner's Office, May 14, 2018, https://ico.org.uk/about-the-ico/news-and-events/blog-facial-recognition-technology-and-law-enforcement/ (last visited Aug. 14, 2018).
[14] Monique Mann & Marcus Smith, Automated Facial Recognition Technology: Recent Developments and Approaches to Oversight, 10(1), 140 (2017).
[15] Biometrics Commissioner, Biometrics Commissioner's response to the Home Office Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/news/biometrics-commissioners-response-to-the-home-office-biometrics-strategy (last visited Aug. 15, 2018).
[16] Supra note 2.
[17] Supra note 13.
[18] Jon Sharman, Metropolitan Police's facial recognition technology 98% inaccurate, figures show, Independent, May 13, 2018, https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html (last visited Aug. 09, 2018).
Introduction to Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical TrialsIntroduction to Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials 2023/12/15 The development of digital tools such as the internet, apps, and wearable devices have meant major breakthroughs for clinical trials. These advances have the potential to reduce the frequency of trial subject visits, accelerate research timelines, and lower the costs of drug development. The COVID-19 pandemic has further accelerated the use of digital tools, prompting many countries to adopt decentralized measures that enable trial subjects to participate in clinical trials regardless of their physical location. In step with the transition into the post-pandemic era, the Taiwan Food and Drug Administration (TFDA) issued the Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials in June, 2023[1]. The Guidelines are intended to cover a wide array of decentralized measures; they aim to increase trial subjects’ willingness to participate in trials, reduce the need for in-person visits to clinical trial sites, enhance real-time data acquisition during trials, and enable clinic sponsors and contract research organizations to process data remotely. I. Key Points of Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials The Guidelines cover primarily the following matters: General considerations for implementing decentralized measures; trial subject recruitment and electronic informed consent; delivery and provision of investigational medicinal products; remote monitoring of trial subject safety; trial subject reporting of adverse events; remote data monitoring; and information systems and electronic data collection/processing/storage. 1. 
General Considerations for Implementing Decentralized Measures
(1) During clinical trial execution, a reduction in trial subject in-person visits may present challenges to medical observation. It is recommended that home visits for any given trial subject be conducted by the principal investigator, sub-investigator, or a single, consistent delegated study nurse.
(2) Sponsors must carefully evaluate all of the trial design’s decentralized measures to ensure data integrity.
(3) Sponsors must conduct a risk assessment for each individual trial and confirm that the choice of decentralized measures is reasonable. These measures must also be incorporated into the protocol.
(4) When collecting data electronically, sponsors must ensure information system reliability and data security. Artificial intelligence may be considered for use in decentralized clinical trials; sponsors must evaluate such systems carefully, especially when they bear on determinations about critical data or strategies.
(5) Because decentralized clinical trials are designed to ensure equal access to healthcare services, they must offer patients a variety of ways to participate in clinical trials.
(6) When implementing any decentralized measure, it is essential to ensure that the principal investigator and sponsor adhere to the Regulations for Good Clinical Practice and bear their respective responsibilities for the trial.
(7) The use of decentralized measures must be stated in the regulatory application, and the Checklist of Decentralized Elements in Medicinal Product Clinical Trials must be included in the submission.
2. Subject Recruitment and Electronic Informed Consent
(1) Trial subject recruitment through social media or established databases may be implemented only after the Institutional Review Board reviews and approves the recruitment methods and content.
(2) Recruitment must comply with the Principles for Recruiting Clinical Trial Subjects in medicinal product trials, the Personal Data Protection Act, and other applicable regulations.
(3) Informed consent may be obtained through digital software or devices if the process complies with Article 4, Paragraph 2 of the Electronic Signatures Act; that is, if the content can be displayed in its entirety and remains accessible for subsequent reference, then, so long as the trial subject agrees, the signature may be made via a tablet or other electronic device. The storage of signed electronic informed consent forms (eICF) must align with the aforementioned Principles and meet the competent authority’s access requirements.
3. Delivery and Provision of Investigational Medicinal Products
(1) The method of delivering and providing investigational medicinal products, and whether trial subjects can use them on their own at home, depends largely on the investigational medicinal product’s administration route and safety profile.
(2) When investigational medicinal products are delivered and provided to trial subjects through decentralized measures, this must be documented in the protocol. The delivery process must also be clearly stated in the informed consent form; such decentralized measures may be used only after the trial team has explained them to the trial subject and obtained the subject’s consent.
(3) Investigational products prescribed by the principal investigator or sub-investigator must be reviewed by a delegated pharmacist to confirm that the investigational products’ specific items, dosage, duration, total quantity, and labeling align with the trial design.
The pharmacist must also review each trial subject’s medication history to ensure there are no medication-related issues; only then, and only in a manner that ensures the investigational product’s quality and the subject’s privacy, may delegated, specifically trained trial personnel provide the investigational product to the subject.
(4) Compliance is required with relevant regulations such as the Pharmaceutical Affairs Act, the Pharmacists Act, the Regulations on Good Practices for Drug Dispensation, and the Regulations for Good Clinical Practice.
4. Remote Monitoring of Subject Safety
(1) Decentralized trial designs involve trial subjects performing relatively large numbers of trial-related procedures at home. The principal investigator must delegate trained, qualified personnel to perform tasks such as collecting blood samples, administering investigational products, conducting safety monitoring, and tracking adverse events.
(2) If trial subjects receive protocol-prescribed testing at nearby medical facilities or laboratories rather than at the original trial site, these locations must be authorized by the trial sponsor and hold relevant laboratory certification before they may collect or analyze samples. Such locations must provide detailed records to the principal investigator, to be archived in the trial master file.
(3) The trial protocol and schedule must clearly specify which visits must be conducted at the trial site; which can be conducted via phone calls, video calls, or home visits; which tests must be performed at nearby laboratories; and whether trial subjects have multiple options or a single option at each visit.
5. Subject Reporting of Adverse Events
(1) If the trial uses a digital platform to enhance adverse event reporting, trial subjects must be able to report adverse events through that platform, such as via a mobile phone app, so that the principal investigator can immediately access the adverse event information.
(2) The principal investigator must handle such reports using risk-based assessment methods, must validate the adverse event reporting platform’s effectiveness, and must develop procedures to identify potential duplicate reports.
6. Remote Data Monitoring
(1) If a sponsor chooses to implement remote monitoring, it must perform a reasonableness assessment to confirm that such monitoring is appropriate, and must establish a remote monitoring plan.
(2) The monitoring plan must include the monitoring strategy, monitoring personnel responsibilities, monitoring methods, the rationale for implementation, and the critical data and processes to be monitored. Comprehensive monitoring reports must also be generated for audit purposes.
(3) The sponsor is responsible for ensuring the implementation of remote monitoring, and must conduct risk assessments regarding data protection and information confidentiality during the implementation process.
7. Information Systems and Electronic Data Collection, Processing, and Storage
(1) In accordance with the Regulations for Good Clinical Practice, data recorded in clinical trials must be trustworthy, reliable, and verifiable.
(2) It must be ensured that all organizations participating in the clinical trial have a full picture of the data flow. It is recommended that the trial protocol and trial-related documents include data flow diagrams and additional explanations.
(3) The types and scope of subject personal data to be collected must be defined, and every step in the process must properly protect that data in accordance with the Personal Data Protection Act.

II. A Comparison with Decentralized Trial Regulations in Other Countries

Denmark became the first country in the world to release regulatory measures on decentralized trials, issuing the “Danish Medicines Agency’s Guidance on the Implementation of Decentralized Elements in Clinical Trials with Medicinal Products” in September 2021[2].
In December 2022, the European Union as a whole released its “Recommendation Paper on Decentralized Elements in Clinical Trials”[3]. The United States issued the draft “Decentralized Clinical Trials for Drugs, Biological Products, and Devices” document in May 2023[4]. The comparison in Table 1 shows that Taiwan’s guidelines are relatively similar in structure to those of Denmark and the EU; the US guidelines also cover medical device clinical trials.

Table 1: Summary of Decentralized Clinical Trial Guidelines in Taiwan, Denmark, the European Union, and the United States

What do the guidelines apply to?
- Taiwan, Denmark, and the EU: medicinal products.
- US: medicinal products and medical devices.

Trial subject recruitment and electronic informed consent:
- Taiwan and the EU: cover the informed consent process, informed consent interview, digital information sheet, trial subject consent form signing, etc.
- Denmark: covers the informed consent process, informed consent interview, trial subject consent form signing, etc.
- US: covers the informed consent process, informed consent interview, etc.

Delivery and provision of investigational medicinal products:
- Taiwan: delegated, specifically trained trial personnel deliver and provide investigational medicinal products.
- Denmark: the investigator or delegated personnel deliver and provide investigational medicinal products.
- EU: the investigator, delegated personnel, or a third-party, Good Distribution Practice-compliant logistics provider deliver and provide investigational medicinal products.
- US: the principal investigator, delegated personnel, or a distributor deliver and provide investigational products.

Remote monitoring of trial subject safety:
- Taiwan, Denmark, and the US: trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits, and may undergo testing at nearby laboratories.
- EU: trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits.

Trial subject reporting of adverse events:
- All four: trial subjects may self-report adverse events through a digital platform.

Remote data monitoring:
- Taiwan, Denmark, and the US: the sponsor may conduct remote data monitoring.
- EU: the sponsor may conduct remote data monitoring (not permitted in some countries).

Information systems and electronic data collection, processing, and storage:
- Taiwan and the EU: the recorded data must be credible, reliable, and verifiable.
- Denmark: requires an information system that is validated, secure, and user-friendly.
- US: must ensure data reliability, security, privacy, and confidentiality.

III. Conclusion

The implementation of decentralized clinical trials must be approached with careful assessment of risks and rationality, with trial subject safety, rights, and well-being as top priorities. Since Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials were announced only in June 2023, the status of decentralized clinical trial implementation still awaits industry feedback to confirm feasibility. The overall goal is to enhance and optimize the clinical trial environment in Taiwan.
Reference:
[1] Taiwan Food and Drug Administration, Ministry of Health and Welfare, Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials (藥品臨床試驗執行分散式措施指引), Jun. 12, 2023, https://www.fda.gov.tw/TC/siteListContent.aspx?sid=9354&id=43548 (last visited Nov. 2, 2023).
[2] Danish Medicines Agency (DMA), The Danish Medicines Agency’s Guidance on the Implementation of Decentralised Elements in Clinical Trials with Medicinal Products (2021), https://laegemiddelstyrelsen.dk/en/news/2021/guidance-on-the-implementation-of-decentralised-elements-in-clinical-trials-with-medicinal-products-is-now-available/ (last visited Nov. 2, 2023).
[3] Heads of Medicines Agencies (HMA), European Commission (EC) & European Medicines Agency (EMA), Recommendation Paper on Decentralised Elements in Clinical Trials (2022), https://health.ec.europa.eu/latest-updates/recommendation-paper-decentralised-elements-clinical-trials-2022-12-14_en (last visited Nov. 2, 2023).
[4] US Food and Drug Administration (FDA), Decentralized Clinical Trials for Drugs, Biological Products, and Devices (draft, 2023), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/decentralized-clinical-trials-drugs-biological-products-and-devices (last visited Nov. 2, 2023).
Observing Recent Foreign Developments in Biomedicine, the Marketing of Medical Devices, Technology Development Projects, and the Newest Litigation Trends Concerning the Joint Infringement of Method/Process Patents

1. Chinese REACH Has Taken Shape; What about a Taiwan REACH? - A Perspective on China’s Measures on Environmental Management of New Chemical Substances

Taiwan’s food industry has been shaken by a government agency’s disclosure that certain dishonest manufacturers had been mixing toxic chemicals into food additives for the past 30 years, chemicals that may seriously threaten public health. This event not only shook customers’ confidence in the industry, but also drew public attention to the sound management and safe use of chemicals. To manage fast-advancing and widely applicable chemical substances appropriately, laws and regulations in many jurisdictions in recent years have tended to regulate unfamiliar chemicals as “new chemical substances” and to leverage registration systems to track their use and import. REACH, implemented by the European Union since 2006, is one of the most successful such models. China, one of our most important business partners, has learned from the EU experience and implemented its amended “Measures on Environmental Management of New Chemical Substances” (also known as “Chinese REACH”) last year. It is a necessity not only for our industries that have invested or are running businesses in China to understand how this new regulation may influence their business, but also for the authorities concerned to observe how our domestic laws and regulations may connect with this international trend. Therefore, besides briefing the content of Chinese REACH, this article also reviews the existing laws and regulations in Taiwan and observes the law-making movements taken by our authorities.
We expect that the comparison and observations in this article may serve as a reference for the authorities concerned in mapping out a better environment for new chemical management.

2. A Study on Taiwanese Businesses Joining the Bid Invitation and Bidding of Science and Technology Projects

The Chinese government invests great funds in its Science and Technology Project management system, which contains most of its innovative technology, and this creates great business opportunities for domestic industry. In recent years, the Chinese government has built a Bid Invitation and Bidding Procedure into the original Science and Technology Project regime in order to make the regime more open and transparent, as well as fairer and more efficient. Taiwanese industry may wish to apply for these Science and Technology Projects because of this attractive opportunity, but it should understand China’s legal system before doing so. This article introduces the “Bid Invitation and Bidding Law of the People’s Republic of China” and the “Provisional Regulation on Bid Invitation and Bidding of Science and Technology Projects”, then clarifies the relationship between the “Bid Invitation and Bidding Law of the People’s Republic of China” and the “Government Procurement Law of the People’s Republic of China”. It also analyzes the “Bid Invitation and Bidding Procedure”, “Administration of Contract Performance Procedure”, “Inspection and Acceptance Procedure”, and “Protest and Complaint Procedure”, finally providing complete legal observations and opinions for Taiwanese industry.
Keywords: Bid Invitation and Bidding Law of the People’s Republic of China; Government Procurement Law of the People’s Republic of China; Provisional Regulation on Bid Invitation and Bidding of Science and Technology Projects; Applying for the Science and Technology Project Regime; Bid Invitation and Bidding Procedure; Administration of Contract Performance Procedure; Inspection and Acceptance Procedure; Protest and Complaint Procedure.

3. Comparing the Decisions of the United States Supreme Court regarding the Preemption of State Tort Litigation over Marketed Medical Devices and Drugs with the Decision of a Hypothetical Case in Taiwan

The investment costs of complying with pertinent laws and regulations for manufacturing, marketing, and profiting from drugs and medical devices (abbreviated as MDs) are far higher than the costs necessary for securing a market permit. The use of MD products carries the risk of harming their users or the patients, who might sue the manufacturer for damages in court based on tort law. To help reduce the risk of such litigation, the industry should be aware of the laws governing state tort litigation and the preemption doctrine of the federal laws of the United States. This article collects four critical decisions by the United States Supreme Court to analyze the requirements for federal preemption of state tort litigation in these cases. The article also analyzes the issues of preemption in our legal system through a hypothetical case. These issues include the competing regulatory requirements of the laws and regulations on drugs and MDs and of the Drug Injury Relief Act versus the Civil Code and the Consumer Protection Law. The article concludes: 1. The pre-market approval of MDs in the United States is exempted from state tort litigation; 2. Brand-name-drug manufacturers must proactively update the drug label regarding severe risks evidenced by the latest findings; 3.
Generic-drug manufacturers are exempted from product liability litigation and are not required to comply with the aforementioned brand-name-drug manufacturers’ obligation; 4. No preemption issues are involved in these kinds of product liability litigation in our country; 5. The judge of a general court is not bound by the marketing approval of a drug or MD; 6. The judge of a general court is not bound by the determinations and verdicts under the Drug Injury Relief Act.

4. Discussing and Analyzing the Regulatory Requirements for Marketing Medical Devices in the United States and in Taiwan through Computer-Aided Detection Software

Computer-Aided Detection (CADe) software systematically assists medical doctors in detecting suspicious diseased sites inside patients’ bodies, helping patients receive proper medical treatment as soon as possible. Only a few medical devices (MDs) of this type have been legally marketed either in the United States of America (USA) or in Taiwan. This is a novel type of MD, and the rules regulating it are still under development; it is therefore valuable to investigate and discuss its regulation. To clarify the requirements for legally marketing the MD, this article not only collects and summarizes the latest draft guidance announced by the USA, but also compares and analyzes the similarities and differences between the USA and Taiwan, and further explains the logic that the USA applies to classify and qualify CADe for marketing, so that the Department of Health (DOH) in Taiwan could use it as a reference. Meanwhile, the article collects the related requirements under the Administrative Procedure Act and the Freedom of Government Information Law of our nation, and makes the following suggestions on MD regulation to the DOH: creating product codes in the categorization system, providing clearer definitions of classification, and actively announcing the (abbreviated) marketing route that secures legal permission for each individual product.
5. A Discussion of Recent Cases Concerning the Joint Infringement of Method/Process Patents in the U.S. and Japan

In the era of the internet and mobile communication, the practice of a method patent covering an innovative service may often involve several entities, and sometimes the method patent can only be infringed jointly. Joint infringement of method/process patents is an issue that patent law needs to address, since it is assumed that a method patent can only be directly infringed by one entity performing all the steps disclosed in the patent. In the U.S., the CAFC has established the “control or direction” standard to address the issue, but the standard has been criticized and is now under revision. In Japan, there is no clearly established standard for addressing joint infringement, but it appears that an entity that controls and benefits from the joint infringement might be held liable. Based on its discussion of recent developments in the U.S. and Japan, this article attempts to provide some suggestions for inventors of innovative service models on using patents to protect their inventions properly: they should try to avoid describing their inventions in a way that requires practice by multiple entities, they should try to claim both method and system/apparatus inventions, and they should try to anticipate potential infringement of their patents in order to address the problem of how to prove infringement.