Reviews on Taiwan Constitutional Court's Judgment no. 13 of 2022

2022/11/24

I.Introduction

  In 2012, the Taiwan Human Rights Promotion Association and other civil groups argued that the National Health Insurance Administration had released the national health insurance database and other health insurance data to scholars for research without the data subjects' consent, which might be unconstitutional, and petitioned for a constitutional interpretation.

  The Taiwan Human Rights Promotion Association contends that the state collects, processes, and uses personal data on a large scale under the Personal Data Protection Act (PDPA) without enacting a separate law of conduct to govern this exercise of state power, in violation of the principle of legal reservation. It further contends that providing the data to third parties for academic research while placing excessive restrictions on the data subjects' subsequent right to withdraw violates the principle of proportionality.

  The petitioners criticized that depriving citizens of prior consent and subsequent control over their medical data effectively forces all citizens to contribute their data unconditionally for uses beyond the original purpose as a condition of using the national health insurance. The PDPA was enacted to "prevent the infringement of personality rights and promote the reasonable use of personal data," but its insufficient and outdated design fails to protect citizens' information privacy and readily opens the door to the use of data for other purposes.

  In addition, even if the health insurance data is de-identified, it remains "individual-level data" from which individuals can be distinguished, not "aggregate data." Health insurance data can be linked with other datasets held by the Ministry of Health and Welfare, such as the physical and mental disability files and the sexual assault notification files, and researchers may also apply to bring in external data or to link with data held by other agencies. Although Taiwan prohibits the export of the original data, the risk of re-identification may still rise as the number of sources and types of linked data grows and as research purposes remain unspecified.
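  To make the linkage concern concrete, the following is a minimal Python sketch (using pandas, with entirely hypothetical column names and toy records rather than the actual NHI schema) of how joining a de-identified table with an external dataset on shared quasi-identifiers can re-attach identities to sensitive attributes; each additional linked source adds quasi-identifiers and makes more records unique.

```python
# Illustrative sketch only: toy example of how linking a "de-identified"
# health dataset with an external dataset can re-identify individuals.
# All column names and records are hypothetical, not the NHI database schema.
import pandas as pd

# De-identified claims data: direct identifiers removed, quasi-identifiers kept.
claims = pd.DataFrame({
    "birth_year": [1970, 1970, 1985],
    "zip3":       ["100", "106", "106"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "hypertension", "asthma"],
})

# External data (e.g., a public registry) that still carries names.
registry = pd.DataFrame({
    "name":       ["Lin", "Chen", "Wang"],
    "birth_year": [1970, 1970, 1985],
    "zip3":       ["100", "106", "106"],
    "sex":        ["F", "M", "F"],
})

# Joining on shared quasi-identifiers links diagnoses back to named persons.
linked = claims.merge(registry, on=["birth_year", "zip3", "sex"])
print(linked[["name", "diagnosis"]])
# The more sources are concatenated, the more quasi-identifier combinations
# become unique, and the higher the share of re-identifiable records.
```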

  The Constitutional Court of Taiwan has ruled on the constitutionality of the use of personal data in the National Health Insurance research database. The judgment, released on August 12, 2022, holds that Article 6 of the PDPA, which provides that "data pertaining to a natural person's medical records, healthcare, genetics, sex life, physical examination and criminal records shall not be collected, processed or used unless ... it is necessary for statistics gathering or academic research by a government agency or an academic institution for the purpose of healthcare, public health, or crime prevention, provided that such data, as processed by the data provider or as disclosed by the data collector, may not lead to the identification of a specific data subject," violates neither the principle of legal clarity nor the principle of proportionality. To that extent, the PDPA does not infringe the people's right to privacy and remains constitutional.

  However, the judgment finds that the absence of an independent supervisory authority responsible for ensuring that Taiwanese institutions and bodies comply with data protection law is unconstitutional, leaving the personal data protection system on the verge of failure. Accordingly, laws and regulations must be amended to protect the right to information privacy guaranteed by Article 22 of the Constitution of the Republic of China (Taiwan).

  In addition, the judgment states that Articles 79 and 80 of the National Health Insurance Act and other relevant laws are unconstitutional because they lack clear provisions on the storage, processing, and external transmission of the personal health insurance data held by the National Health Insurance Administration of the Ministry of Health and Welfare.

  Finally, the National Health Insurance Administration of the Ministry of Health and Welfare provides personal health insurance data to public agencies or academic research institutions for use beyond the original purpose of collection. Viewing the relevant regulations as a whole, there is no provision allowing data subjects to request an "opt-out"; within this scope, the regulations violate the intent of Article 22 of the Constitution to protect the people's right to information privacy.

II.Independent supervisory authority

  Under Article 3 of the Basic Code Governing Central Administrative Agencies Organizations, government agencies can be divided into independent agencies, which exercise their powers independently and operate autonomously, and non-independent agencies, which must follow the orders of their superiors. In Taiwan, a so-called "dedicated agency" (專責機關) does not fall into either category defined by that Code; it should be understood as an agency that is responsible for a specific task which no other agency shares.

  The European Union requires member states to establish independent supervisory authorities (see Articles 51 and 52 of the General Data Protection Regulation (GDPR)). The GDPR and the adequacy-decision reference guidelines set out the following requirements for personal data supervisory authorities: the country concerned should have one or more independent supervisory authorities; they should perform their duties with complete independence and may neither seek nor accept instructions; they should have the necessary and practicable powers, including investigative powers; and it should be considered whether their staffing and budget enable them to carry out their tasks effectively. Therefore, in order to obtain the EU's adequacy decision and to implement the protection of the people's privacy and informational self-determination, major countries have established independent supervisory authorities for personal data protection modeled on the GDPR standards.

  Comparative research shows that most countries have five to ten commissioners who independently exercise their powers to supervise data exchange and personal data protection. To implement these powers and avoid unnecessary conflicts of interest, most commissioners are full-time professionals. Article 3 of the Basic Code Governing Central Administrative Agencies Organizations defines an independent agency as "a commission-type collegial organization that exercises its powers and functions independently without the supervision of other agencies, and operates autonomously unless otherwise stipulated." This is similar to the arrangements in Japan, South Korea, and the United States.

III.Right to Opt-out

  The judgment pointed out that data subjects retain a right of subsequent control over personal data that may be collected, processed, and used without their consent or upon meeting certain requirements. Although Article 11 of the PDPA allows data subjects to exercise such control in certain situations, it does not cover all situations in which personal data is used, for example where correct personal data has been lawfully collected, processed, or used, its specific purpose has not ceased to exist, and the retention period has not yet expired. Because the data subjects' informational self-determination is therefore not fully protected, the amended law should clearly stipulate the subject, grounds, procedure, and effect of a request to cease use, and exceptions should not be allowed.

  The United Kingdom offers a useful reference. In 2017, after the British Information Commissioner's Office (ICO) determined that the data-sharing agreement between Google's artificial intelligence DeepMind and the National Health Service (NHS) violated UK data protection law, the Department of Health and Social Care introduced the national data opt-out in May 2018. British health and social care institutions may refer to the National Data Opt-out Operational Policy Guidance Document, published by the NHS in October of the same year, when planning the mechanism for exercising patients' opt-out right. The guidance document mainly explains the overall policy on exercising the right to opt out, as well as suggested practices for its implementation, such as opt-out response measures and the methods by which the right may be exercised.

  The National Data Opt-out Operational Policy Guidance Document also sets out exceptions and restrictions on the right to opt out. It stipulates that health and social care institutions may exceptionally restrict the exercise of a patient's opt-out where the sharing of patient data is based on the patient's consent, is needed to control communicable diseases and other risks to public health, serves an overriding public interest, fulfils a statutory obligation, or is required by law or court order for judicial investigations.
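  As a rough illustration of how such a mechanism might operate in practice, the following Python sketch applies a hypothetical opt-out register subject to the exceptional legal bases just described; the identifiers, labels, and register are invented for illustration and do not reflect the NHS's actual implementation.

```python
# Illustrative sketch only: how a data controller might honour a national
# opt-out register while applying the statutory exceptions described above.
# Record fields, exception labels, and the register are hypothetical.
from dataclasses import dataclass

OPT_OUT_REGISTER = {"patient-002"}  # patients who have exercised the opt-out

# Legal bases under which data may still be shared despite an opt-out.
EXCEPTIONS = {"explicit_consent", "public_health",
              "overriding_public_interest", "required_by_law_or_court_order"}

@dataclass
class Record:
    patient_id: str
    payload: dict

def releasable(record: Record, legal_basis: str) -> bool:
    """Return True if the record may be shared for secondary use."""
    if legal_basis in EXCEPTIONS:       # statutory exception overrides the opt-out
        return True
    return record.patient_id not in OPT_OUT_REGISTER  # otherwise honour the opt-out

records = [Record("patient-001", {"dx": "asthma"}),
           Record("patient-002", {"dx": "diabetes"})]

# Ordinary research use: opted-out patients are excluded.
shared = [r for r in records if releasable(r, legal_basis="research")]
print([r.patient_id for r in shared])   # only patient-001 is released
```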

  A point that distinguishes the UK from Taiwan is that when the UK first collects a person's data and enters it into the NHS database, there is already statutory authorization for the NHS to collect and use that personal data, and the person has a choice about whether to enter the system in the first place; after the personal data has entered the NHS database, the law then gives the public the right to opt out. Through special legislation, the UK has thus given the public two opportunities to choose, protecting their right to informational self-determination.

  At present, the secondary use of data in the health insurance database lacks a complete legal basis in Taiwan. The data was entered automatically at the outset without asking for anyone's consent, and there was no way to withdraw when it was used for other purposes, which is why the arrangement was found unconstitutional. Hence, in addition to considering what provisions to add to the PDPA as conditions for "exceptions under which cessation of use may not be requested," whether to enact a special law on secondary use is also worth the Taiwan government's consideration.

IV.De-identification

  The PDPA contains no definition of "de-identification," leaving a conceptual gap: from what angle, or by what standard, should one judge that processed data can no longer identify a specific person? Judicial practice has held that, for "data recipients," once data has been de-identified it loses its personal character and is no longer regulated by the PDPA, and some decisions have gone even further to suggest that de-identification is not necessary.

  However, Judgment No. 13 of the Constitutional Court pointed out that data subjected to de-identification measures such that ordinary people cannot identify a specific data subject without using additional information can be regarded as de-identified personal data. The Court did not give an objective standard for de-identification; instead, it held that the purpose of data use and the risk of re-identification should be weighed case by case, subject to strict review under the constitutional principle of proportionality. To that extent, the interpretation of the de-identification standard can be considered largely settled.
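  The Court deliberately declined to fix an objective threshold, but purely as an illustrative sketch of how re-identification risk might be assessed case by case, the following Python snippet counts how many records share each combination of quasi-identifiers (the fields are hypothetical); combinations held by only one person are the ones most likely to let an ordinary observer single out an individual without additional information.

```python
# Illustrative sketch only: a simple uniqueness count over quasi-identifiers
# as one case-by-case proxy for re-identification risk. The Court did not
# endorse any particular metric; fields and records here are hypothetical.
from collections import Counter

records = [
    ("1970", "Taipei", "F"),
    ("1970", "Taipei", "F"),
    ("1985", "Tainan", "M"),   # unique combination -> higher singling-out risk
]

group_sizes = Counter(records)
risky = [combo for combo, size in group_sizes.items() if size < 2]
print(risky)   # quasi-identifier combinations held by only one person
```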

V.Conclusions

  The Court first explained that where personal data has been processed but, given the type and nature of the data, can still objectively be restored so as to indirectly identify the data subject, it remains personal data: no matter how simple or difficult the restoration process is, the data subject can still be identified once the data is restored in a specific way. The data subjects' right to control such data therefore remains protected by Article 22 of the Constitution.

  Conversely, once processed data objectively can no longer be restored to identify individuals, it loses the essential character of personal data, and the data subjects are no longer protected by Article 22 of the Constitution with respect to that data.

  On this basis, the Court declared that, under Article 6, Paragraph 1, Proviso, Subparagraph 4 of the PDPA, providing health insurance data that has been processed so that specific data subjects cannot be identified to public agencies or academic research institutions for necessary statistics gathering or academic research for medical and health purposes complies with the principles of legal clarity and proportionality and does not violate the Constitution.

  However, the Court held that the PDPA and other relevant regulations still lack an independent supervisory mechanism for personal data protection, so the protection of information privacy is insufficient. In addition, Articles 79 and 80 of the National Health Insurance Act and other relevant laws lack clear provisions on important matters such as the storage, processing, and external transmission of personal health insurance data by the National Health Insurance Administration in a database; the subjects, purposes, requirements, scope, and methods of providing the data for external use; and the accompanying organizational and procedural supervision and protection mechanisms. They were therefore held unconstitutional.

  Finally, the Court found that the relevant laws and regulations lack provisions allowing data subjects to request that use of their data be stopped; there is no clear text on either the right to make such a request or the procedures to be followed, so the protection of information privacy is plainly insufficient. With respect to the unconstitutional issues, the Constitutional Court therefore ordered the relevant agencies to amend the National Health Insurance Act and related laws, or to enact specific legislation, within three years.

※ Reviews on Taiwan Constitutional Court's Judgment No. 13 of 2022, STLI, https://stli.iii.org.tw/en/article-detail.aspx?no=55&tp=2&i=168&d=8950 (last visited 2024/10/24).
Quote this paper
You may be interested
The Key Elements for Data Intermediaries to Deliver Their Promise

The Key Elements for Data Intermediaries to Deliver Their Promise 2022/12/13   As human history enters the era of data economy, data has become the new oil. It feeds artificial intelligence algorithms that are disrupting how advertising, healthcare, transportation, insurance, and many other industries work. The excitement of having data as a key production input lies in the fact that it is a non-rivalrous good that does not diminish by consumption.[1] However, the fact that people are reluctant in sharing data due to privacy and trade secrets considerations has been preventing countries to realize the full value of data. [2]   To release more data, policymakers and researchers have been exploring ways to overcome the trust dilemma. Of all the discussions, data intermediaries have become a major solution that governments are turning to. This article gives an overview of relevant policy developments concerning data intermediaries and a preliminary analysis of the key elements that policymakers should consider for data intermediaries to function well. I. Policy and Legal developments concerning data intermediaries   In order to unlock data’s full value, many countries have started to focus on data intermediaries. For example, in 2021, the UK’s Department for Digital, Culture, Media and Sport (DCMS) commissioned the Centre for Data Ethics and Innovation (CDEI) to publish a report on data intermediaries[3] , in response to the 2020 National Data Strategy.[4] In 2020, the European Commission published its draft Data Governance Act (DGA)[5] , which aims to build up trust in data intermediaries and data altruism organizations, in response to the 2020 European Strategy for Data.[6] The act was adopted and approved in mid-2022 by the Parliament and Council; and will apply from 24 September 2023.[7] The Japanese government has also promoted the establishment of data intermediaries since 2019, publishing guidance to establish regulations on data trust and data banks.[8] II. Key considerations for designing effective data intermediary policy 1.Evaluate which type of data intermediary works best in the targeted country   From CDEI’s report on data intermediaries and the confusion in DGA’s various versions of data intermediary’s definition, one could tell that there are many forms of data intermediaries. In fact, there are at least eight types of data intermediaries, including personal information management systems (PIMS), data custodians, data exchanges, industrial data platforms, data collaboratives, trusted third parties, data cooperatives, and data trusts.[9] Each type of data intermediary was designed to combat data-sharing issues in specific countries, cultures, and scenarios. Hence, policymakers need to evaluate which type of data intermediary is more suitable for their society and market culture, before investing more resources to promote them.   For example, data trust came from the concept of trust—a trustee managing a trustor’s property rights on behalf of his interest. This practice emerged in the middle ages in England and has since developed into case law.[10] Thus, the idea of data trust is easily understood and trusted by the British people and companies. As a result, British people are more willing to believe that data trusts will manage their data on their behalf in their best interest and share their valuable data, compared to countries without a strong legal history of trusts. 
With more people sharing their data, trusts would have more bargaining power to negotiate contract terms that are more beneficial to data subjects than what individual data owners could have achieved. However, this model would not necessarily work for other countries without a strong foundation of trust law. 2.Quality signals required to build trust: A government certificate system can help overcome the lemon market problem   The basis of trust in data intermediaries depends largely on whether the service provider is really neutral in its actions and does not reuse or sell off other parties’ data in secret. However, without a suitable way to signal their service quality, the market would end up with less high-quality service, as consumers would be reluctant to pay for higher-priced service that is more secure and trustworthy when they have no means to verify the exact quality.[11] This lemon market problem could only be solved by a certificate system established by actors that consumers trust, which in most cases is the government.   The EU government clearly grasped this issue as a major obstacle to the encouragement of trust in data intermediaries and thus tackles it with a government register and verification system. According to the Data Government Act, data intermediation services providers who intend to provide services are required to notify the competent authority with information on their legal status, form, ownership structure, relevant subsidiaries, address, public website, contact details, the type of service they intend to provide, the estimated start date of activities…etc. This information would be provided on a website for consumers to review. In addition, they can request the competent authority to confirm their legal compliance status, which would in turn verify them as reliable entities that can use the ‘data intermediation services provider recognised in the Union’ label. 3.Overcoming trust issues with technology that self-enforces privacy: privacy-enhancing technologies (PETs)   Even if there are verified data intermediation services available, businesses and consumers might still be reluctant to trust human organizations. A way to boost trust is to adopt technologies that self-enforces privacy. A real-world example is OpenSAFELY, a data intermediary implementing privacy-enhancing technologies (PETs) to provide health data sharing in a secure environment. Through a federated analytics system, researchers are able to conduct research with large volumes of healthcare data, without the ability to observe any data directly. Under such protection, UK NHS is willing to share its data for research purposes. The accuracy and timeliness of such research have provided key insights to inform the UK government in decision-making during the COVID-19 pandemic.   With the benefits it can bring, unsurprisingly, PETs-related policies have become quite popular around the globe. In June 2022, Singapore launched its Digital Trust Centre (DTC) for accelerating PETs development and also signed a Memorandum of Understanding with the International Centre of Expertise of Montreal for the Advancement of Artificial Intelligence (CEIMIA) to collaborate on PETs.[12] On September 7th, 2022, the UK Information Commissioners’ Office (ICO) published draft guidance on PETs.[13] Moreover, the U.K. and U.S. 
governments are collaborating on PETs prize challenges, announcing the first phase winners on November 10th, 2022.[14] We could reasonably predict that more PETs-related policies would emerge in the coming year. [1] Yan Carrière-Swallow and Vikram Haksar, The Economics of Data, IMFBlog (Sept. 23, 2019), https://blogs.imf.org/2019/09/23/the-economics-of-data/#:~:text=Data%20has%20become%20a%20key,including%20oil%2C%20in%20important%20ways (last visited July 22, 2022). [2] Frontier Economics, Increasing access to data across the economy: Report prepared for the Department for Digital, Culture, Media, and Sport (2021), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/974532/Frontier-access_to_data_report-26-03-2021.pdf (last visited July 22, 2022). [3] The Centre for Data Ethics and Innovation (CDEI), Unlocking the value of data: Exploring the role of data intermediaries (2021), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1004925/Data_intermediaries_-_accessible_version.pdf (last visited June 17, 2022). [4] Please refer to the guidelines for the selection of sponsors of the 2022 Social Innovation Summit: https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy(last visited June 17, 2022). [5] Regulation of the European Parliament and of the Council on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), 2020/0340 (COD) final (May 4, 2022). [6] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and The Committee of the Regions— A European strategy for data, COM/2020/66 final (Feb 19, 2020). [7] Proposal for a Regulation on European Data Governance, European Parliament Legislative Train Schedule, https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-data-governance-act(last visited Aug 17, 2022). [8] 周晨蕙,〈日本資訊信託功能認定指引第二版〉,科技法律研究所,https://stli.iii.org.tw/article-detail.aspx?no=67&tp=5&d=8422(最後瀏覽日期︰2022/05/30)。 [9] CDEI, supra note 3. [10] Ada Lovelace Institute, Exploring legal mechanisms for data stewardship (2021), 30~31,https://www.adalovelaceinstitute.org/wp-content/uploads/2021/03/Legal-mechanisms-for-data-stewardship_report_Ada_AI-Council-2.pdf (last visited Aug 17, 2022). [11] George A. Akerlof, The Market for "Lemons": Quality Uncertainty and the Market Mechanism, THE QUARTERLY JOURNAL OF ECONOMICS, 84(3), 488-500 (1970). [12] IMDA, MOU Signing Between IMDA and CEIMIA is a Step Forward in Cross-border Collaboration on Privacy Enhancing Technology (PET) (2022),https://www.imda.gov.sg/-/media/Imda/Files/News-and-Events/Media-Room/Media-Releases/2022/06/MOU-bet-IMDA-and-CEIMIA---ATxSG-1-Jun-2022.pdf (last visited Nov. 28, 2022). [13] ICO publishes guidance on privacy enhancing technologies, ICO, https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/09/ico-publishes-guidance-on-privacy-enhancing-technologies/ (last visited Nov. 27, 2022). [14] U.K. and U.S. governments collaborate on prize challenges to accelerate development and adoption of privacy-enhancing technologies, GOV.UK, https://www.gov.uk/government/news/uk-and-us-governments-collaborate-on-prize-challenges-to-accelerate-development-and-adoption-of-privacy-enhancing-technologies (last visited Nov. 
28, 2022); Winners Announced in First Phase of UK-US Privacy-Enhancing Technologies Prize Challenges, NIST, https://www.nist.gov/news-events/news/2022/11/winners-announced-first-phase-uk-us-privacy-enhancing-technologies-prize (last visited Nov. 28, 2022).

Introduction to Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials

Introduction to Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials 2023/12/15 The development of digital tools such as the internet, apps, and wearable devices have meant major breakthroughs for clinical trials. These advances have the potential to reduce the frequency of trial subject visits, accelerate research timelines, and lower the costs of drug development. The COVID-19 pandemic has further accelerated the use of digital tools, prompting many countries to adopt decentralized measures that enable trial subjects to participate in clinical trials regardless of their physical location. In step with the transition into the post-pandemic era, the Taiwan Food and Drug Administration (TFDA) issued the Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials in June, 2023[1]. The Guidelines are intended to cover a wide array of decentralized measures; they aim to increase trial subjects’ willingness to participate in trials, reduce the need for in-person visits to clinical trial sites, enhance real-time data acquisition during trials, and enable clinic sponsors and contract research organizations to process data remotely. I. Key Points of Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials The Guidelines cover primarily the following matters: General considerations for implementing decentralized measures; trial subject recruitment and electronic informed consent; delivery and provision of investigational medicinal products; remote monitoring of trial subject safety; trial subject reporting of adverse events; remote data monitoring; and information systems and electronic data collection/processing/storage. 1. General Considerations for Implementing Decentralized Measures (1) During clinical trial execution, a reduction in trial subject in-person visits may present challenges to medical observation. It is recommended that home visits for any given trial subject be conducted by the principal investigator, sub-investigator, or a single, consistent delegated study nurse. (2) Sponsors must carefully evaluate all of the trial design’s decentralization measures to ensure data integrity. (3) Sponsors must conduct risk assessments for each individual trial, and must confirm the rationality of choosing decentralized measures. These decentralized measures must also be incorporated into the protocol. (4) When electronically collecting data, sponsors must ensure information system reliability and data security. Artificial intelligence may be considered for use in decentralized clinical trials; sponsors must carefully evaluate such systems, especially when they touch on determinations for critical data or strategies. (5) As the design of decentralized clinical trials is to ensure equal access to healthcare services, it must provide patients with a variety of ways to participate in clinical trials. (6) When implementing any decentralized measures, it is essential to ensure that the principal investigator and sponsor adhere to the Regulations for Good Clinical Practice and bear their respective responsibilities for the trial. (7) The use of decentralized measures must be stated in the regulatory application, and the Checklist of Decentralized Elements in Medicinal Product Clinical Trials must be included in the submission. 2. 
Subject Recruitment and Electronic Informed Consent (1) Trial subject recruitment through social media or established databases may only be implemented after the Institutional Review Board reviews and approves of the recruitment methods and content. (2) Must comply with the Principles for Recruiting Clinical Trial Subjects in medicinal product trials, the Personal Data Protection Act, and other regulations. (3) Regarding clinical trial subject informed consent done through digital software or devices, if it complies with Article 4, Paragraph 2 of the Electronic Signatures Act, that is, if the content can be displayed in its entirety and continues to be accessible for subsequent reference, then so long as the trial subject agrees to do so, the signature may be done via a tablet or other electronic device. The storage of signed electronic Informed Consent Forms (eICF) must align with the aforementioned Principles and meet the competent authority’s access requirements. 3. Delivery and Provision of Investigational Medicinal Products (1) The method of delivering and providing investigational medicinal products and whether trial subjects can use them on their own at home depends to a high degree on the investigational medicinal product’s administration route and safety profile. (2) When investigational medicinal products are delivered and provided through decentralized measures to trial subjects, this must be documented in the protocol. The process of delivering and providing said products must also be clearly stated in the informed consent form; only after being explained to a trial subject by the trial team, and after the trial subject’s consent is obtained, may such decentralized measures be used. (3) Investigational products prescribed by the principal investigator/sub-investigator must be reviewed by a delegated pharmacist to confirm that the investigational products’ specific items, dosage, duration, total quantity, and labeling align with the trial design. The pharmacist must also review each trial subject’s medication history, to ensure there are no medication-related issues; only then, and only in a manner that ensures the investigational product’s quality and the subject’s privacy, may delegated and specifically-trained trial personnel provide the investigational product to the subject. (4) Compliance with relevant regulations such as the Pharmaceutical Affairs Act, Pharmacists Act, Regulations on Good Practices for Drug Dispensation, and Regulations for Good Clinical Practice is required. 4. Remote Monitoring of Subject Safety (1) Decentralized trial designs involve trial subjects performing relatively large numbers of trial-related procedures at home. The principal investigator must delegate trained, qualified personnel to perform tasks such as collecting blood samples, administering investigational products, conducting safety monitoring, doing adverse event tracking, etc. (2) If trial subjects receive protocol-prescribed testing at nearby medical facilities or laboratories rather than at the original trial site, these locations must be authorized by the trial sponsor and must have relevant laboratory certification; only then may they collect or analyze samples. Such locations must provide detailed records to the principal investigator, to be archived in the trial master file. 
(3) The trial protocol and schedule must clearly specify which visits must be conducted at the trial site; which can be conducted via phone calls, video calls, or home visits; which tests must be performed at nearby laboratories; and whether trial subjects have multiple or single options at each visit. 5. Subject Reporting of Adverse Events (1) If the trial uses a digital platform to enhance adverse event reporting, trial subjects must be able to report adverse events through the digital platform, such as via a mobile phone app; that is, the principal investigator must be able to immediately access such adverse event information. (2) The principal investigator must handle such reports using risk-based assessment methods. The principal investigator must validate the adverse event reporting platform’s effectiveness, and must develop procedures to identify potential duplicate reports. 6. Remote Data Monitoring (1) If a sponsor chooses to implement remote monitoring, it must perform a reasonability assessment to confirm the appropriateness of such monitoring and establish a remote monitoring plan. (2) The monitoring plan must include monitoring strategies, monitoring personnel responsibilities, monitoring methods, rationale for such implementation, and critical data and processes that must be monitored. It must also generate comprehensive monitoring reports for audit purposes. (3) The sponsor is responsible for ensuring the implementation of remote monitoring, and must conduct risk assessments regarding the implementation process’ data protection and information confidentiality. 7. Information Systems and Electronic Data Collection, Processing, and Storage (1) In accordance with the Regulations for Good Clinical Practice, data recorded in clinical trials must be trustworthy, reliable, and verifiable. (2) It must be ensured that all organizations participating in the clinical trial have a full picture of the data flow. It is recommended that the trial protocol and trial-related documents include data flow diagrams and additional explanations. (3) Define the types and scopes of subject personal data that will be collected, and ensure that every step in the process properly protects their data in accordance with the Personal Data Protection Act. II. A Comparison with Decentralized Trial Regulations in Other Countries Denmark became the first country in the world to release regulatory measures on decentralized trials, issuing the “Danish Medicines Agency’s Guidance on the Implementation of Decentralized Elements in Clinical Trials with Medicinal Products” in September 2021[2]. In December 2022, the European Union as a whole released its “Recommendation Paper on Decentralized Elements in Clinical Trials”[3]. The United States issued the draft “Decentralized Clinical Trials for Drugs, Biological Products, and Devices” document in May 2023[4]. The comparison in Table 1 shows that Taiwan’s guidelines a relatively similar in structure to those of Denmark and the EU; the US guidelines also cover medical device clinical trials. Table 1: Summary of Decentralized Clinical Trial Guidelines in Taiwan, Denmark, the European Union as a whole, and the United States Taiwan Denmark European Union as a whole United States What do the guidelines apply to? 
Medicinal products Medicinal products Medicinal products Medicinal products and medical devices Trial subject recruitment and electronic informed consent Covers informed consent process; informed consent interview; digital information sheet; trial subject consent form signing; etc. Covers informed consent process; informed consent interview; trial subject consent form signing; etc. Covers informed consent process; informed consent interview; digital information sheet; trial subject consent form signing; etc. Covers informed consent process; informed consent interview; etc. Delivery and provision of investigational medicinal products Delegated, specifically-trained trial personnel deliver and provide investigational medicinal products. The investigator or delegated personnel deliver and provide investigational medicinal products. The investigator, delegated personnel, or a third-party, Good Distribution Practice-compliant logistics provider deliver and provide investigational medicinal products. The principal investigator, delegated personnel, or a distributor deliver and provide investigational products. Remote monitoring of trial subject safety Trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits, and may undergo testing at nearby laboratories. Trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits, and may undergo testing at nearby laboratories. Trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits. Trial subjects may do return visits at trial sites, via phone calls, via video calls, or via home visits, and may undergo testing at nearby laboratories. Trial subject reporting of adverse events Trial subjects may self-report adverse events through a digital platform. Trial subjects may self-report adverse events through a digital platform. Trial subjects may self-report adverse events through a digital platform. Trial subjects may self-report adverse events through a digital platform. Remote data monitoring The sponsor may conduct remote data monitoring. The sponsor may conduct remote data monitoring. The sponsor may conduct remote data monitoring (not permitted in some countries). The sponsor may conduct remote data monitoring. Information systems and electronic data collection, processing, and storage The recorded data must be credible, reliable, and verifiable. Requires an information system that is validated, secure, and user-friendly. The recorded data must be credible, reliable, and verifiable. Must ensure data reliability, security, privacy, and confidentiality. III. Conclusion The implementation of decentralized clinical trials must be approached with careful assessment of risks and rationality, with trial subject safety, rights, and well-being as top priorities. Since Taiwan’s Guidelines for Implementing Decentralized Elements in Medicinal Product Clinical Trials were just announced in June of this year, the status of decentralized clinical trial implementation is still pending industry feedback to confirm feasibility. The overall goal is to enhance and optimize the clinical trial environment in Taiwan. 
[1] 衛生福利部食品藥物管理署,〈藥品臨床試驗執行分散式措施指引〉,2023/6/12,https://www.fda.gov.tw/TC/siteListContent.aspx?sid=9354&id=43548(最後瀏覽日:2023/11/2)。 [2] [DMA] DANISH MEDICINES AGENCY, The Danish Medicines Agency’s guidance on the Implementation of decentralised elements in clinical trials with medicinal products (2021),https://laegemiddelstyrelsen.dk/en/news/2021/guidance-on-the-implementation-of-decentralised-elements-in-clinical-trials-with-medicinal-products-is-now-available/ (last visited Nov. 2, 2023). [3] [HMA] HEADS OF MEDICINES AGENCIES, [EC] EUROPEAN COMMISSION & [EMA] EUROPEAN MEDICINES AGENCY, Recommendation paper on decentralised elements in clinical trials (2022),https://health.ec.europa.eu/latest-updates/recommendation-paper-decentralised-elements-clinical-trials-2022-12-14_en (last visited Nov. 2, 2023). [4] [US FDA] US FOOD AND DRUG ADMINISTRATION, Decentralized Clinical Trials for Drugs, Biological Products, and Devices (draft, 2023),https://www.fda.gov/regulatory-information/search-fda-guidance-documents/decentralized-clinical-trials-drugs-biological-products-and-devices (last visited Nov. 2, 2023).

Recommendation of the Regulations on the Legal and Effective Access to Taiwan’s Biological Resources

Preface Considering that, many countries and regional international organizations already set up ABS system, such as Andean Community, African Union, Association of Southeast Asia Nations (ASEAN), Australia, South Africa, and India, all are enthusiastic with the establishment of the regulations regarding the access management of biological resources and genetic resources. On the other hand, there are still many countries only use traditional and existing conservation-related regulations to manage the access of biological resources. Can Taiwan's regulations comply with the purposes and objects of CBD? Is there a need for Taiwan to set up specific regulations for the management of these access activities? This article plans to present Taiwan's regulations and review the effectiveness of the existing regulations from the aspect of enabling the legal and effective access to biological resources. A recommendation will be made on whether Taiwan should reinforce the management of the bio-resources access activities. Review and Recommendation of the Regulations on the Legal and Effective Access to Taiwan's Biological Rersearch Resources (1)Evaluate the Needs and Benefits before Establishing the Regulation of Access Rights When taking a look at the current development of the regulations on the access of biological resources internationally, we discover that some countries aggressively develop designated law for access, while some countries still adopt existing regulations to explain the access rights. Whether to choose a designated law or to adopt the existing law should depend on the needs of establishing access and benefit sharing system. Can the access and benefit sharing system benefit the functioning of bio-technological research and development activities that link closely to the biological resources? Can the system protect the interests of Taiwan's bio-research results? In Taiwan, in the bio-technology industry, Agri-biotech, Medical, or Chinese Herb Research & Development are the key fields of development. However, the biological resources they use for the researches are mainly supplied from abroad. Hence, the likelihood of violating international bio-piracy is higher. On the contrary, the incidence of international research houses searching for the biological resources from Taiwan is comparatively lower, so the possibility for them to violate Taiwan's bio-piracy is very low. To look at this issue from a different angle, if Taiwan establishes a separate management system for the access of biological resources, it is likely to add more restrictions to Taiwan's bio-tech R&D activities and impact the development of bio-industry. Also, under the new management system, international R&D teams will also be confined, if they wish to explore the biological resources, or conduct R&D and seek for co-operation activities in Taiwan. Not to mention that it is not a usual practice for international R&D teams to look for Taiwan's biological resources. A new management system will further reduce their level of interest in doing so. In the end, the international teams will then shift their focus of obtaining resources from other countries where the regulation on access is relatively less strict. 
Before Taiwan establishes the regulations on the legal and effective access to bio-research resources, the government should consider not only the practical elements of the principal on the fair and impartial sharing of the derived interests from bio-research resources, but also take account of its positive and negative impacts on the development of related bio-technological industries. Even if a country's regulation on the access and benefit sharing is thorough and comprehensive enough to protect the interests of bio-resource provider, it will, on the contrary, reduce the industry's interest in accessing the bio-resources. As a result, the development of bio-tech industry will be impacted and the resource provider will then be unable to receive any benefits. By then, the goal of establishing the regulation to benefit both the industry and resource provider will not be realized. To sum up, it is suggested to evaluate the suitability of establishing the management system for the access to biological resources through the cost-effect analysis first. And, further consider the necessity of setting up regulations by the access the economic benefits derived from the regulation for both resource provider and bio-tech industry. (2)The Feasibility of Managing the access to Bio-research Resources from existing Regulations As analysed in the previous paragraphs, the original intention of setting up the Wildlife Conservation Act, National Park Law, Forestry Act, Cultural Heritage Preservation Act, and Aboriginal Basic Act is to protect the environment and to conserve the ecology. However, if we utilize these traditional regulations properly, it can also partially help to manage the access to biological resources. When Taiwan's citizens wish to enter specific area, or to collect the biological resources within the area, they need to receive the permit from management authority, according to current regulations. Since these national parks, protection areas, preserved areas, or other controlled areas usually have the most comprehensive collections of valuable biological resources in a wide range of varieties, it is suggested to include the agreements of access and benefit sharing as the mandatory conditions when applying for the entrance permit. Therefore, the principal of benefit sharing from the access to biological resources can be assured. Furthermore, the current regulations already favour activities of accessing biological resources for academic research purpose. This practice also ties in with the international trend of separating the access application into two categories - academic and business. Australia's practice of access management can be a very good example of utilizing the existing regulations to control the access of resources. The management authority defines the guidelines of managing the entrance of control areas, research of resources, and the collection and access of resources. The authority also adds related agreements, such as PIC (Prior Informed Consent), MTA (Material Transfer Agreement), and benefit sharing into the existing guidelines of research permission. In terms of scope of management, the existing regulation does not cover all of Taiwan's bio-research resources. Luckily, the current environmental protection law regulates areas with the most resourceful resources or with the most distinctive and rare species. These are often the areas where the access management system is required. 
Therefore, to add new regulation for access management on top of the existing regulation is efficient method that utilizes the least administrative resources. This could be a feasible way for Taiwan to manage the access to biological resources. (3)Establish Specific Regulations to Cover the Details of the Scope of Derived Interests and the Items and Percentage of Funding Allocation In addition to the utilization of current regulations to control the access to biological resources, many countries establish specific regulations to manage the biological resources. If, after the robust economic analysis had been done, the country has come to an conclusion that it is only by establishing new regulations of access management the resources and derived interests of biological resources can be impartially shared, the CBD (Convention of Bio Diversity), the Bonn Guidelines, or the real implementation experiences of many countries can be an important guidance when establishing regulations. Taiwan has come up with the preliminary draft of Genetic Resources Act that covers the important aspects of international access guidelines. The draft indicates the definition and the scope of access activities, the process of access applications (for both business and academic purpose), the establishment of standardized or model MTA, the obligation of disclosing the sources of property rights (patents), and the establishment of bio-diversity fund. However, if we observe the regulation or drafts to the access management of the international agreements or each specific country, we can find that the degree of strictness varies and depends on the needs and situations. Generally speaking, these regulations usually do not cover some detailed but important aspects such as the scope of derived interests from biological resources, or the items and percentage of the allocation of bio-diversity fund. Under the regulation to the access to biological resources, in addition to the access fee charge, the impartial sharing of the derived interests is also an important issue. Therefore, to define the scope of interests is extremely important. Any interest that is out of the defined scope cannot be shared. The interest stated in the existing regulation generally refers to the biological resources or the derived business interests from genetic resources. Apart from describing the forms of interest such as money, non-money, or intellectual property rights, the description of actual contents or scope of the interests is minimal in the regulations. However, after realizing the importance of bio-diversity and the huge business potential, many countries have started to investigate the national and international bio-resources and develop a database system to systematically collect related bio-research information. The database comprised of bio-resources is extremely useful to the activities related to bio-tech developments. If the international bio-tech companies can access Taiwan's bio-resource database, it will save their travelling time to Taiwan. Also, the database might as well become a product that generates revenues. The only issue that needs further clarification is whether the revenue generated from the access of database should be classified as business interests, as defined in the regulations. As far as the bio-diversity fund is concerned, many countries only describe the need of setting up bio-diversity funds in a general manner in the regulations. 
But the definition of which kind of interests should be put into funds, the percentage of the funds, and the related details are not described. As a result, the applicants to the access of bio-resources or the owner of bio-resources cannot predict the amount of interests to be put into bio-diversity fund before they actually use the resources. This issue will definitely affect the development of access activities. To sum up, if Taiwan's government wishes to develop the specific regulations for the access of biological resources, it is advised to take the above mentioned issues into considerations for a more thoroughly described, and more effective regulations and related framework. Conclusion In recent years, it has been a global trend to establish the regulations of the access to and benefit sharing of bio-resources. The concept of benefit sharing is especially treated as a useful weapon for the developing countries to protect the interests of their abundant bio-research resources. However, as we are in the transition period of changing from free access to biological resources to controlled access, we are facing different regulations within one country as well as internationally. It will be a little bit disappointing for the academic research institution and the industry who relies on the biological resources to conduct bio-tech development if they do not see a clear principal direction to follow. The worse case is the violation of the regulation of the country who owns the bio-resources when the research institutions try to access, exchange, or prospect the biological resources without thorough understanding of related regulations. For some of Taiwan's leading fields in the bio-tech industry, such as Chinese and herbal medicine related products, agricultural products, horticultural products, and bio-tech products, since many resources are obtained from abroad, the incidence of violation of international regulation will increase, and the costs from complying the regulations will also increase. Therefore, not only the researcher but also the government have the responsibility to understand and educate the related people in Taiwan's bio-tech fields the status of international access management regulations and the methods of legally access the international bio-research resources. Currently in Taiwan, we did not establish specific law to manage the access to and benefit sharing of bio-resources. Comparing with the international standard, there is still room of improvement for Taiwan's regulatory protection to the provider of biological resources. However, we have to consider the necessity of doing so, and how to do the improvement. And Taiwan's government should resolve this issue. When we consider whether we should follow international trend to establish a specific law for access management, we should always go back to check the potential state interests we will receive and take this point into consideration. To define the interests, we should always cover the protection of biological resources, the development of bio-tech industry, and the administrative costs of government. Also the conservation of biological resources and the encouragement of bio-tech development should be also taken into consideration when the government is making decisions. In terms of establishing regulations for the access to biological resources and the benefit sharing, there are two possible solutions. 
The first solution is to utilize the existing regulations and add the key elements of access management into the scope of administrative management. The work is planned through the revision of related current procedures such as entrance control of controlled areas and the access of specific resources. The second solution is to establish new regulations for the access to biological resources. The first solution is relatively easier and quicker; while the second solution is considered to have a more comprehensive control of the issue. The government has the final judgement on which solution to take to generate a more effective management of Taiwan's biological resources.

The use of automated facial recognition technology and supervision mechanism in UK

The use of automated facial recognition technology and supervision mechanism in UK I. Introduction   Automatic facial recognition (AFR) technology has developed rapidly in recent years, and it can identify target people in a short time. The UK Home Office announced the "Biometrics Strategy" on June 28, 2018, saying that AFR technology will be introduced in the law enforcement, and the Home Office will also actively cooperate with other agencies to establish a new oversight and advisory board in order to maintain public trust. AFR technology can improve law enforcement work, but its use will increase the risk of intruding into individual liberty and privacy.   This article focuses on the application of AFR technology proposed by the UK Home Office. The first part of this article describes the use of AFR technology by the police. The second part focuses on the supervision mechanism proposed by the Home Office in the Biometrics Strategy. However, because the use of AFR technology is still controversial, this article will sort out the key issues of follow-up development through the opinions of the public and private sectors. The overview of the discussion of AFR technology used by police agencies would be helpful for further policy formulation. II. Overview of the strategy of AFR technology used by the UK police   According to the Home Office’s Biometrics Strategy, the AFR technology will be used in law enforcement, passports and immigration and national security to protect the public and make these public services more efficient[1]. Since 2017 the UK police have worked with tech companies in testing the AFR technology, at public events like Notting Hill Carnival or big football matches[2].   In practice, AFR technology is deployed with mobile or fixed camera systems. When a face image is captured through the camera, it is passed to the recognition software for identification in real time. Then, the AFR system will process if there is a ‘match’ and the alarm would solicit an operator’s attention to verify the match and execute the appropriate action[3]. For example, South Wales Police have used AFR system to compare images of people in crowds attending events with pre-determined watch lists of suspected mobile phone thieves[4]. In the future, the police may also compare potential suspects against images from closed-circuit television cameras (CCTV) or mobile phone footage for evidential and investigatory purposes[5].   The AFR system may use as tools of crime prevention, more than as a form of crime detection[6]. However, the uses of AFR technology are seen as dangerous and intrusive by the UK public[7]. For one thing, it could cause serious harm to democracy and human rights if the police agency misuses AFR technology. For another, it could have a chilling effect on civil society and people may keep self-censoring lawful behavior under constant surveillance[8]. III. The supervision mechanism of AFR technology   To maintaining public trust, there must be a supervision mechanism to oversight the use of AFR technology in law enforcement. The UK Home Office indicates that the use of AFR technology is governed by a number of codes of practice including Police and Criminal Evidence Act 1984, Surveillance Camera Code of Practice and the Information Commissioner’s Office (ICO)’s Code of Practice for surveillance cameras[9]. 
(I) Police and Criminal Evidence Act 1984
  The Police and Criminal Evidence Act (PACE) 1984 lays down police powers to obtain and use biometric data, such as collecting DNA and fingerprints from people arrested for a recordable offence. PACE allows law enforcement agencies to carry out identification in order to find people related to crime, for criminal justice and national security purposes. Therefore, for investigation, detection and prevention tasks related to crime and terrorist activities, the police can collect the facial image of a suspect, which can also be interpreted as falling within the scope of authorization of PACE.
(II) Surveillance Camera Code of Practice
  The use of CCTV in public places interferes with the rights of the people, so the Protection of Freedoms Act 2012 requires the establishment of an independent Surveillance Camera Commissioner (SCC) for supervision. The Surveillance Camera Code of Practice proposed by the SCC sets out 12 principles for guiding the operation and use of surveillance camera systems. The 12 guiding principles are as follows[10]:
A. Use of a surveillance camera system must always be for a specified purpose which is in pursuit of a legitimate aim and necessary to meet an identified pressing need.
B. The use of a surveillance camera system must take into account its effect on individuals and their privacy, with regular reviews to ensure its use remains justified.
C. There must be as much transparency in the use of a surveillance camera system as possible, including a published contact point for access to information and complaints.
D. There must be clear responsibility and accountability for all surveillance camera system activities including images and information collected, held and used.
E. Clear rules, policies and procedures must be in place before a surveillance camera system is used, and these must be communicated to all who need to comply with them.
F. No more images and information should be stored than that which is strictly required for the stated purpose of a surveillance camera system, and such images and information should be deleted once their purposes have been discharged.
G. Access to retained images and information should be restricted and there must be clearly defined rules on who can gain access and for what purpose such access is granted; the disclosure of images and information should only take place when it is necessary for such a purpose or for law enforcement purposes.
H. Surveillance camera system operators should consider any approved operational, technical and competency standards relevant to a system and its purpose and work to meet and maintain those standards.
I. Surveillance camera system images and information should be subject to appropriate security measures to safeguard against unauthorised access and use.
J. There should be effective review and audit mechanisms to ensure legal requirements, policies and standards are complied with in practice, and regular reports should be published.
K. When the use of a surveillance camera system is in pursuit of a legitimate aim, and there is a pressing need for its use, it should then be used in the most effective way to support public safety and law enforcement with the aim of processing images and information of evidential value.
L. Any information used to support a surveillance camera system which compares against a reference database for matching purposes should be accurate and kept up to date.
(III) ICO's Code of Practice for surveillance cameras
  Attention must also be paid to personal data and privacy protection in the use of surveillance camera systems and AFR technology. The ICO issued its Code of Practice for surveillance cameras under the Data Protection Act 1998 to explain the legal requirements for operators of surveillance cameras. The key points of the ICO's Code of Practice for surveillance cameras are summarized as follows[11]:
A. The operating time of surveillance camera systems should be carefully evaluated and adjusted. It is recommended to regularly evaluate whether their continued use is necessary and proportionate.
B. A police force should ensure effective administration of surveillance camera systems, deciding who has responsibility for the control of personal information, what is to be recorded, how the information should be used, and to whom it may be disclosed.
C. Recorded material should be stored in a safe way to ensure that personal information can be used effectively for its intended purpose. In addition, the information may be encrypted if necessary.
D. Disclosure of information from surveillance systems must be controlled and consistent with the purposes for which the system was established.
E. Individuals whose information is recorded have a right to be provided with that information or to view that information. The ICO recommends that the information be provided promptly and within no longer than 40 calendar days of receiving a request.
F. The minimum and maximum retention periods of recorded material are not prescribed in the Data Protection Act 1998, but material should not be kept for longer than is necessary and should be retained for the shortest period necessary to serve the purposes for which the system was established.
(IV) A new oversight and advisory board
  In addition to the aforementioned regulations and guidance, the UK Home Office mentioned that it will work closely with related authorities, including the ICO, the SCC, the Biometrics Commissioner (BC) and the Forensic Science Regulator (FSR), to establish a new oversight and advisory board to coordinate consideration of law enforcement's use of facial images and facial recognition systems[12].
  To sum up, the use of AFR technology by law enforcement is expected to abide by existing regulations and guidance. Firstly, surveillance camera systems must be used only for the purposes for which they were established. Secondly, clear responsibility and accountability mechanisms should be ensured. Thirdly, individuals whose information is recorded have the right to request access to the relevant information. In the future, the new oversight and advisory board will be asked to consider issues relating to law enforcement's use of AFR technology with greater transparency.
IV. Follow-up key issues for the use of AFR technology
  Regarding the UK Home Office's Biometrics Strategy, members of independent agencies such as the ICO, the BC and the SCC, as well as civil society, believe that there are still many deficiencies. The relevant discussions are summarized as follows:
(I) The necessity of using AFR technology
  Elizabeth Denham, the Information Commissioner, has called for the use of AFR technology to be examined carefully, because AFR is an intrusive technology and can increase the risk of intrusion into individual privacy.
Therefore, for the use of AFR technology to be lawful, the UK police must have clear evidence to demonstrate that its use in public spaces is effective in resolving the problem it aims to address[13].
  The Home Office has pledged to undertake Data Protection Impact Assessments (DPIAs) before introducing AFR technology, covering the purpose and legal basis, the framework that applies to the organization using the biometrics, and the necessity and proportionality of the use, among other matters.
(II) The limitations of using facial image data
  The UK police can collect, process and use personal data based on the need for crime prevention, investigation and prosecution. In order to secure the use of biometric information, the BC was established under the Protection of Freedoms Act 2012. The mission of the BC is to regulate the use of biometric information, provide protection from disproportionate enforcement action, and limit the application of surveillance and counter-terrorism powers.
  However, the BC's powers do not presently extend to forms of biometric information other than DNA or fingerprints[14]. The BC has expressed concern that while the use of biometric data may well be in the public interest for law enforcement purposes and to support other government functions, the public benefit must be balanced against the loss of privacy. Hence, legislation should be enacted to decide that crucial question, instead of depending on the BC's case-by-case feedback[15].
  Because biometric data is especially sensitive and highly intrusive of individual privacy, a governance framework seems to be required to make decisions on the use of facial images by the police.
(III) Database management and transparency
  For the application of AFR technology, the scope of the biometric database is a disputed issue in the UK. It is worth mentioning that many British people distrust the criminal databases held by the police. When someone is arrested and detained by the police, the police will take photos of the suspect's face. However, unlike fingerprints and DNA, facial images are not automatically deleted from the police biometric database even if the person is never charged[16].
  South Wales Police have used AFR technology to compare facial images of people in crowds attending major public events with pre-determined watch lists of suspected mobile phone thieves in AFR field tests. Although the watch lists are created for time-limited and specific purposes, the inclusion of suspects who may well be innocent still causes public alarm.
  Elizabeth Denham has warned that there should be a transparent system for retaining facial images of those arrested but not charged for certain offences[17]. Therefore, in the future the UK Home Office may need to establish a transparent system for the AFR biometric database and a related supervision mechanism.
(IV) Accuracy and identification errors
  In addition to worries about infringing personal privacy, the low accuracy of AFR technology is another reason many people oppose its use by police agencies. Silkie Carlo, director of Big Brother Watch, said the police must immediately stop using AFR technology to avoid mistaking thousands of innocent citizens for criminals; Paul Wiles, the Biometrics Commissioner, also called for legislation to manage AFR technology because its accuracy is too low, and said that the use of AFR technology should be tested and subjected to external peer review[18].
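  Why low accuracy matters at scale can be shown with a simple, purely hypothetical calculation (the rates below are assumptions chosen for exposition, not figures reported from any UK trial). When watch-listed people make up only a tiny fraction of a scanned crowd, even a system with a seemingly modest false-positive rate will generate alerts that are mostly wrong:

# Hypothetical numbers for illustration only; not figures from any actual AFR trial.
crowd_size = 100_000          # faces scanned at a large public event (assumed)
watchlisted_present = 20      # watch-listed people actually in the crowd (assumed)
true_positive_rate = 0.90     # chance a watch-listed face triggers an alert (assumed)
false_positive_rate = 0.001   # chance an ordinary face wrongly triggers an alert (assumed)

true_alerts = watchlisted_present * true_positive_rate
false_alerts = (crowd_size - watchlisted_present) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts: {true_alerts:.0f}, false alerts: {false_alerts:.0f}")
print(f"Share of alerts that point at a watch-listed person: {precision:.0%}")
# With these assumptions: 18 true alerts versus about 100 false ones,
# so only roughly 15% of alerts concern a watch-listed person.

  This base-rate effect is the kind of problem behind the reported concerns about match accuracy, and it is why commentators call for accuracy standards and external testing before wider deployment.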
  Under the Home Office's Biometrics Strategy, scientific quality standards for AFR technology will be established jointly with the FSR, an independent agency under the Home Office. In other words, the Home Office plans to extend the existing forensic science regime to regulate AFR technology. To this end, the FSR has been working with the SCC to develop standards relevant to digital forensics. At the present stage, however, the UK government has not yet issued specific standards for regulating the accuracy of AFR technology.
V. Conclusion
  From the discussion among the public and private sectors in the UK, some rules for the use of AFR technology can be summarized. Firstly, before AFR technology is applied, a prior assessment should be completed to ensure the benefits to society as a whole. Secondly, because identification errors are possible in AFR technology, the relevant scientific standards should be established first to test system accuracy, in order to maintain the confidence and trust of the people. Thirdly, the AFR system should be regarded as an assisting tool for law enforcement in the initial stage. In other words, the information analyzed by the AFR system should still be judged by law enforcement officials, and the police officers should bear the responsibility for the decisions taken.
  In order to balance the protection of the public interest and basic human rights, the use of biometric data in AFR technology should be regulated by a special law, separate from the surveillance camera and data protection regulations. The scope of the identification database is also a key point, and legislators' approval may be needed to collect and store the facial image data of innocent people. Last but not least, the use of AFR systems should be transparent, and victims of human rights violations should be able to seek redress.

[1] UK Home Office, Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/publications/home-office-biometrics-strategy (last visited Aug. 09, 2018), at 7.
[2] Big Brother Watch, FACE OFF CAMPAIGN: STOP THE MET POLICE USING AUTHORITARIAN FACIAL RECOGNITION CAMERAS, https://bigbrotherwatch.org.uk/all-campaigns/face-off-campaign/ (last visited Aug. 16, 2018).
[3] Lucas Introna & David Wood, Picturing algorithmic surveillance: the politics of facial recognition systems, Surveillance & Society, 2(2/3), 177-198 (2004).
[4] Supra note 1, at 12.
[5] Id. at 25.
[6] Michael Bromby, Computerised Facial Recognition Systems: The Surrounding Legal Problems (Sep. 2006) (LL.M Dissertation, Faculty of Law, University of Edinburgh), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.7339&rep=rep1&type=pdf, at 3.
[7] Owen Bowcott, Police face legal action over use of facial recognition cameras, The Guardian, Jun. 14, 2018, https://www.theguardian.com/technology/2018/jun/14/police-face-legal-action-over-use-of-facial-recognition-cameras (last visited Aug. 09, 2018).
[8] Martha Spurrier, Facial recognition is not just useless. In police hands, it is dangerous, The Guardian, May 16, 2018, https://www.theguardian.com/commentisfree/2018/may/16/facial-recognition-useless-police-dangerous-met-inaccurate (last visited Aug. 17, 2018).
[9] Supra note 1, at 12.
[10] Surveillance Camera Commissioner, Surveillance camera code of practice, Oct. 28, 2014, https://www.gov.uk/government/publications/surveillance-camera-code-of-practice (last visited Aug. 17, 2018).
[11] UK Information Commissioner's Office, In the picture: A data protection code of practice for surveillance cameras and personal information, Jun. 09, 2017, https://ico.org.uk/for-organisations/guide-to-data-protection/encryption/scenarios/cctv/ (last visited Aug. 10, 2018).
[12] Supra note 1, at 13.
[13] Elizabeth Denham, Blog: facial recognition technology and law enforcement, Information Commissioner's Office, May 14, 2018, https://ico.org.uk/about-the-ico/news-and-events/blog-facial-recognition-technology-and-law-enforcement/ (last visited Aug. 14, 2018).
[14] Monique Mann & Marcus Smith, Automated Facial Recognition Technology: Recent Developments and Approaches to Oversight, Automated Facial Recognition Technology, 10(1), 140 (2017).
[15] Biometrics Commissioner, Biometrics Commissioner's response to the Home Office Biometrics Strategy, Jun. 28, 2018, https://www.gov.uk/government/news/biometrics-commissioners-response-to-the-home-office-biometrics-strategy (last visited Aug. 15, 2018).
[16] Supra note 2.
[17] Supra note 13.
[18] Jon Sharman, Metropolitan Police's facial recognition technology 98% inaccurate, figures show, Independent, May 13, 2018, https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html (last visited Aug. 09, 2018).
