Brief Introduction to Taiwan Social Innovation Policies

2021/09/13

1. Introduction

  The Millennium Development Goals (MDGs)[1] set forth by the United Nations in 2000 were pursued primarily by nations and international organizations. The Sustainable Development Goals (SDGs), adopted by the United Nations in 2015, subsequently extended this work to organizations at all levels. There is now a global awareness of the importance of balancing “economic growth”, “social progress”, and “environmental protection” at the same time during development. In this context, many related concepts have arisen worldwide, including the social/solidarity economy, social entrepreneurship and social enterprise, and social innovation.

  Generally, social innovation aims to change how various groups in society interact with one another, through innovative applications of technology or business models, and to use those changes to find new ways of solving social problems. In other words, the goal is to solve social problems through innovative methods. The difference between social innovation and social enterprise is that a social enterprise harnesses commercial activity to achieve its social mission from a particular organizational standpoint, whereas social innovation creates social value through cooperation with, and coordination among, technology, resources, and communities in a more diversified manner.

2. Overview of Taiwan Social Enterprise Policy

  To integrate into the global community and assist the development of domestic social innovation, Taiwan’s Executive Yuan launched the “Social Enterprise Action Plan” (2014 to 2016) in 2014, the first policy initiative to support social enterprises. Under this initiative, the government consulted with the various ministries and applied methods such as “amending regulations”, “building platforms”, and “raising funds” to create an environment with favorable conditions for social innovation and start-ups. At this stage, the initiative followed the principle of “administrative guidance before legislation” in order to encourage private enterprise development without excessive burdens and to avoid regulations, such as an overly narrow definition of social enterprise, that could restrict the sector’s development. Moreover, to preserve the original forms of these enterprises, the Action Plan did not limit social enterprises to companies, non-profit organizations, or other specific types of organizations.

  To carry forward the purpose of the Social Enterprise Action Plan and to respond to the 17 Sustainable Development Goals proposed by the United Nations, the Executive Yuan launched the “Social Innovation Action Plan” (effective from 2018 to 2022) in 2018 to establish a friendly development environment for social innovation and to develop diversified social innovation models under the concepts of “openness, gathering, practicality, and sustainability”. In this Action Plan, “social innovation” refers to “social innovation organizations” that solve social problems through technology or innovative business models. Balancing the three managerial goals of social value, environmental value, and profitability is the best demonstration of the concept of social innovation.

3. Government’s Relevant Social Enterprise Policy and Resources

  The ministries of the Taiwan Government have been promoting relevant policies in accordance with the Social Innovation Action Plan issued by the Executive Yuan in 2018, such as the “Registration System for Social Innovation Enterprises” (counseling of social enterprises), the “Buying Power - Social Innovation Products and Services Procurement”, the “Social Innovation Platform” established by the Ministry of Economic Affairs, and the “Social Innovation Manager Training Courses”, as well as the “Promoting Social Innovation and Employment Opportunities” program administered by the Ministry of Labor and the “University Social Responsibility Program” published by the Ministry of Education. Among these policies, the measures adopted by the Ministry of Economic Affairs stand out; they are briefly introduced as follows:

i. Social Innovation Platform

  To connect all of the resources involved in social issues and to promote the development of social innovation in Taiwan, the Ministry of Economic Affairs established the “Social Innovation Platform”.[2] With the visibility the Platform provides, it has become more efficient to search for potential partners in a public and transparent way and to channel resources originally belonging to different fields into expanding social influence.

  As a digital platform gathering social innovation issues in Taiwan, the Social Innovation Platform brings together a broad range of social innovation resources. The “SDGs Map” built on the Platform shows how county and city governments in Taiwan implement the SDGs and their Voluntary Local Review Reports. The Platform also allows users to search the Social Innovation Database[3] and the organizations registered in it, so that citizens, enterprises, organizations, and even local governments concerned with local development can find partners as expediently as possible; service lines have also been established to proactively assist public and private entities with their needs and resources. In addition, the “Social Innovation Proposals” function enables regional revitalization organizations, ministries, and enterprises to identify and put forward their needs for social innovation, which gives social innovation organizations greater visibility while advancing cooperation and expanding social influence.

  In addition, an “Event Page” has been established on the Social Innovation Platform. It offers functions for publishing, searching, and sorting events across four major dimensions, namely social innovation organizations, governments, enterprises, and citizens, and it encourages all of them to devote themselves, through open participation, to continuously expanding the influence of the (civic technology) Social Innovation Platform. The “Corporate Social Responsibility Report” section collects corporate social responsibility reports and shows how corporations in Taiwan allocate resources for sustainable development; it offers filtering by region, keyword, popularity ranking, and SDG type, and it provides contact information and downloads of previous years’ reports, effectively helping social innovation organizations obtain a more precise understanding of the status quo, needs, and trends relevant to the development of their products and services.


Figure 1: SDGs Map
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)


Figure 2: Social Innovation Database
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)


Figure 3: Social Innovation Proposals
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)


Figure 4: Event Page
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)


Figure 5: Corporate Social Responsibility Report
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)

ii. Social Innovation Database

  To encourage social innovation organizations to disclose their social missions, products, and services, to help society understand the substance of social innovation, and to enable the administrative ministries to make use of such information, the Ministry of Economic Affairs issued the “Principles of Registration of Social Innovation Organizations” and thereby established the “Social Innovation Database”.

  Once a social innovation organization discloses items such as its social mission, business model, or social influence, it may obtain relevant promotional assistance and resources, including becoming a trading partner under Buying Power (Social Innovation Products and Services Procurement), receiving exclusive consultation and assistance from professionals serving social innovation organizations, and becoming eligible to apply for admission to the Social Innovation Lab. Moreover, the Ministry of Economic Affairs is also consolidating, identifying, and designating the awards and grants offered by the various ministries, together with policies and measures on investment, financing, and assistance, as resources made available to registered organizations.

  As of 25 May 2021, there were 658 registered social innovation organizations and 96 Social Innovation Partners (enterprises with CSR or ESG resources that endorse cooperation under the social innovation model may register as “Social Innovation Partners”). The public and enterprises can search for organizations registered in the Social Innovation Database through the Social Innovation Platform described above, and this searchability increases the exposure of social innovation organizations and their opportunities for cooperation.


Figure 6: Number of registered social innovation organizations and accumulated value of purchases under Buying Power
Reference: Social Innovation Platform (https://si.taiwan.gov.tw/)

iii. Buying Power - Social Innovation Products and Services Procurement

  To continue raising awareness of social innovation organizations and related issues, to promote responsible consumption and production in Taiwan, and to draw the commercial sector’s attention to sustainability-driven procurement models, the Ministry of Economic Affairs held the first “Buying Power - Social Innovation Products and Services Procurement” event in 2017. Through its award system, Buying Power continues to encourage governments, state-owned enterprises, private enterprises, and organizations to take the lead in purchasing products or services from social innovation organizations and in providing relevant resources, thereby helping social innovation organizations obtain resources and explore business opportunities in the markets, practicing responsible consumption and production, and promoting innovative cooperation between industry and commerce and social innovation organizations.

  Buying Power is implemented by encouraging the central and local governments, state-owned enterprises, private enterprises, and non-governmental organizations to purchase products or services from organizations registered in the Social Innovation Database; prizes are then awarded based on the purchase amounts accumulated during the calculation period, as illustrated in the sketch below. The winners can obtain priority in applying for membership in the Social Innovation Partner Group, with the corresponding member services, in the future.
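
  To make the award mechanism concrete, the following sketch aggregates each buyer’s purchases from registered organizations over a calculation period and ranks buyers by accumulated amount. It is a hypothetical illustration only: the record layout, the dates, the NT$ figures, and the ranking logic are assumptions for clarity and do not reflect the official Buying Power rules or data formats.

```python
from collections import defaultdict
from datetime import date

# Hypothetical purchase records: (buyer, registered organization, amount in NT$, purchase date).
# The data and layout are illustrative only, not the official Buying Power format.
purchases = [
    ("City Government A", "Social Innovation Org X", 500_000, date(2021, 3, 1)),
    ("Enterprise B", "Social Innovation Org Y", 1_200_000, date(2021, 4, 15)),
    ("Enterprise B", "Social Innovation Org X", 300_000, date(2021, 6, 30)),
]

# Assumed calculation period for one award cycle (dates are illustrative).
PERIOD_START, PERIOD_END = date(2021, 1, 1), date(2021, 12, 31)

def accumulate_purchases(records):
    """Sum each buyer's purchases from registered organizations within the period."""
    totals = defaultdict(int)
    for buyer, _organization, amount, purchased_on in records:
        if PERIOD_START <= purchased_on <= PERIOD_END:
            totals[buyer] += amount
    return totals

def rank_buyers(totals):
    """Rank buyers by accumulated purchase amount, highest first."""
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for rank, (buyer, total) in enumerate(rank_buyers(accumulate_purchases(purchases)), start=1):
        print(f"{rank}. {buyer}: NT${total:,}")
```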

  As tracked on the Social Innovation Platform, both the purchase amounts recognized under the awards and the number of applicants for the special awards continue to increase. So far, purchases have accumulated to a value of more than NT$1.1 billion (see Figure 6), and more than 300 organizations have proactively participated.

iv. Social Innovation Mark

  In order to promote public awareness of social innovation, the Ministry of Economic Affairs has been tasked with promoting the Social Innovation Mark and issued the “Small and Medium Enterprise Administration of the Ministry of Economic Affairs Directions for Authorization of the Social Innovation Mark” as the standard for authorizing use of the Mark. With such authorization, social innovation organizations can use the Mark to hold Social Innovation Summits or other social innovation activities that promote social innovation concepts.

  In order to establish the Mark as a symbol of the social innovation concept, the Ministry of Economic Affairs has been using the Social Innovation Mark in connection with various social innovation activities, such as the Social Innovation Platform, Buying Power, and the annual Social Innovation Summit. Taking the selection of sponsors of the 2022 Social Innovation Summit as an example,[4] only organizations that have obtained authorization for the Social Innovation Mark may use it to hold the Summit.


Figure 7: The Social Innovation Mark of the Small and Medium Enterprise Administration, Ministry of Economic Affairs

4. Conclusion

  The Organisation for Economic Co-operation and Development (OECD) regards social innovation as a new strategy for solving future social problems and as an important vehicle for youth entrepreneurship and social enterprise development. Taiwan’s social innovation momentum has entered a stage of expansion and development. Through the promotion of the “Social Innovation Action Plan”, resources from the central and local governments have been integrated to establish the Social Innovation Platform, the Social Innovation Database, the Social Innovation Lab, and the Social Innovation Mark. In addition, incentives such as Buying Power have been created, demonstrating the positive influence of Taiwan’s social innovation.

 

 

[1] The MDGs, put forward by the United Nations in 2000, committed all 191 UN member states and at least 22 international organizations to making their best efforts toward eight goals: 1. eradicating extreme poverty and hunger; 2. achieving universal primary education; 3. promoting gender equality and empowering women; 4. reducing child mortality; 5. improving maternal health; 6. combating HIV/AIDS, malaria, and other diseases; 7. ensuring environmental sustainability; and 8. developing a global partnership for development.

[2] Please refer to the Social Innovation Platform: https://si.taiwan.gov.tw/.

[3] Please refer to the Social Innovation Database: https://si.taiwan.gov.tw/Home/Org_list.

[4] Please refer to the guidelines for the selection of sponsors of the 2022 Social Innovation Summit: https://www.moeasmea.gov.tw/files/6221/4753E497-B422-4303-A8D4-35AE0B4043A9
