Hard Law or Soft Law? –Global AI Regulation Developments and Regulatory Considerations


2023/08/18

Since the launch of ChatGPT on November 30, 2022, generative AI has been disrupting industries and reshaping how things work, bringing benefits but also problems. Several lawsuits have been filed by artists, writers, and voice actors in the US, claiming that the use of copyrighted materials to train generative AI violates their copyright.[1] AI deepfakes, hallucinations, and bias have also become the center of discussion, as the generation of fake news, false information, and biased decisions could deeply affect human rights and society as a whole.[2]

To retain the benefits of AI without harming society, regulators around the world have been accelerating their efforts to establish AI regulations. However, with the technology evolving at such speed and with so much uncertainty, there is no consensus on which regulatory approach can effectively safeguard human rights while promoting innovation. This article provides an overview of current AI regulation developments around the world, offers a preliminary analysis of the pros and cons of different regulatory approaches, and points out other elements that regulators should consider.

I. An overview of the current AI regulation landscape around the world

The EU is leading in legislation: the European Parliament adopted its negotiating position on the AI Act in June 2023, and the file is now heading into trilogue meetings that aim to reach an agreement by the end of this year.[3] China has also announced plans for a national AI law, with a draft scheduled to be submitted to the National People's Congress for deliberation before the end of 2023.[4] It already has several administrative rules in place, such as the 2021 regulation on recommendation algorithms, the 2022 rules on deep synthesis, and the 2023 draft rules on generative AI.[5]

Some other countries have been taking a softer approach, preferring voluntary guidelines and testing schemes. The UK published its AI regulation plans in March 2023, seeking views on its sectoral, guideline-based, pro-innovation regulatory approach.[6] To minimize uncertainty for companies, it proposed a set of regulatory principles to ensure that government bodies develop guidelines in a consistent manner.[7] In the US, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework in January 2023,[8] following the White House's non-binding Blueprint for an AI Bill of Rights published in October 2022, which provides guidance on the design and use of AI through a set of principles.[9] It is worth noting that some state and local governments have adopted regulations on specific subjects; for example, New York City's final regulations on the use of AI in hiring and promotion came into force in July 2023.[10] Singapore launched the world's first AI testing framework and toolkit international pilot in May 2022, with participation from AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, and others. After a year of testing, it open-sourced the software toolkit in June 2023 to further develop the system.[11]

There are also some countries still undecided on their regulatory approach. Australia commenced a public consultation on its AI regulatory framework proposal in June 2023,[12] seeking views on its draft AI risk management approach.[13] Taiwan's government announced in July 2023 that it would propose a draft AI basic law by September 2023, covering topics such as AI-related definitions, privacy protection, data governance, risk management, ethical principles, and industrial promotion.[14] However, the plan was recently postponed, indicating a possible shift towards voluntary or mandatory government principles and guidance before legislation is established.[15]

II. Hard law or soft law? The pros and cons of different regulatory approaches

One of the key advantages of hard law in AI regulation is its ability to provide binding legal obligations and enforcement mechanisms that ensure accountability and compliance.[16] Hard law also provides greater legal certainty, transparency, and remedies for consumers and companies, which is especially important for smaller companies that lack the resources to influence and comply with fast-changing soft law.[17] However, the legislative process can be time-consuming, slower to update, and less agile.[18] This poses the risk of stifling innovation, as hard law inevitably struggles to keep pace with rapidly evolving AI technology.[19]

In contrast, soft law represents a more flexible and adaptive approach to AI regulation. As the potential of AI remains largely unknown, government bodies can formulate principles and guidelines tailored to the regulatory needs of different industry sectors.[20] In addition, if adequate incentives are in place for actors to comply, the cost of enforcement could be much lower than that of hard law. Governments can also experiment with several different soft law approaches to test their effectiveness.[21] However, the voluntary nature of soft law and the lack of legal enforcement mechanisms could lead to inconsistent adoption and undermine the effectiveness of these guidelines, potentially leaving critical gaps in addressing AI's risks.[22] Additionally, in cases of AI-related harm, soft law may not offer effective protection of consumer and human rights, as there are no clear legal obligations to facilitate accountability and remedies.[23]

Carlos Ignacio Gutierrez and Gary Marchant, faculty members at Arizona State University (ASU), analyzed 634 AI soft law programs against 100 criteria and found that two-thirds of the programs lack enforcement mechanisms to deliver their anticipated AI governance goals. They pointed out that credible indirect enforcement mechanisms and a perception of legitimacy are two critical elements that could strengthen soft law's effectiveness.[24] For example, to publish stem cell research in top academic journals, authors need to demonstrate that the research complies with relevant research standards.[25] In addition, companies usually have a strong incentive to comply with private standards in order to avoid a regulatory shift towards hard law, with its higher costs and constraints.[26]

III. Other considerations

Apart from understanding the strengths and limitations of soft law and hard law, it is important for governments to consider each country's unique circumstances. For example, Singapore has consistently favored voluntary approaches, acknowledging that, as a small country, close cooperation with industry, research organizations, and other governments to formulate strong AI governance practices is more important than rushing into legislation.[27] The flexibility and lower cost of soft regulation give it time to learn from industry and avoid forming rules that do not address real-world issues.[28] This process lays the groundwork for better legislation at a later stage.

Japan has also shifted towards a softer approach to minimize legal compliance costs, recognizing that it lags behind in the AI race.[29] In its view, the EU AI Act aims at regulating giant tech companies rather than promoting innovation,[30] so hard law does not suit the stage of industry development Japan is currently in.[31] It therefore seeks to address legal issues with existing laws and to draft relevant guidance.[32]

IV. Conclusion

As the global AI regulatory landscape continues to evolve, it is important for governments to weigh the pros and cons of hard law and soft law, together with country-specific conditions, in deciding what is suitable for them. A regular review of the effectiveness of the chosen regulatory approach and its impact on AI's development and on society is also recommended.

 

Reference:

[1] ChatGPT and Deepfake-Creating Apps: A Running List of Key AI-Lawsuits, TFL, https://www.thefashionlaw.com/from-chatgpt-to-deepfake-creating-apps-a-running-list-of-key-ai-lawsuits/ (last visited Aug 10, 2023); Protection for Voice Actors is Artificial in Today’s Artificial Intelligence World, The National Law Review, https://www.natlawreview.com/article/protection-voice-actors-artificial-today-s-artificial-intelligence-world (last visited Aug 10, 2023).

[2] The politics of AI: ChatGPT and political bias, Brookings, https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/ (last visited Aug 10, 2023); Prospect of AI Producing News Articles Concerns Digital Experts, VOA, https://www.voanews.com/a/prospect-of-ai-producing-news-articles-concerns-digital-experts-/7202519.html (last visited Aug 10, 2023).

[3] EU AI Act: first regulation on artificial intelligence, European Parliament, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (last visited Aug 10, 2023).

[4] 中國國務院發布立法計畫 年內審議AI法草案 [China's State Council Releases Legislative Plan; Draft AI Law to Be Deliberated Within the Year], 經濟日報 [Economic Daily News] (2023/06/09), https://money.udn.com/money/story/5604/7223533 (last visited Aug 10, 2023).

[5] id.

[6] A pro-innovation approach to AI regulation, GOV.UK, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (last visited Aug 10, 2023).

[7] id.

[8] AI RISK MANAGEMENT FRAMEWORK, NIST, https://www.nist.gov/itl/ai-risk-management-framework (last visited Aug 10, 2023).

[9] The White House released an ‘AI Bill of Rights’, CNN, https://edition.cnn.com/2022/10/04/tech/ai-bill-of-rights/index.html (last visited Aug 10, 2023).

[10] New York City Adopts Final Regulations on Use of AI in Hiring and Promotion, Extends Enforcement Date to July 5, 2023, Littler, https://www.littler.com/publication-press/publication/new-york-city-adopts-final-regulations-use-ai-hiring-and-promotionv (last visited Aug 10, 2023).

[11] IMDA, Fact sheet - Open-Sourcing of AI Verify and Set Up of AI Verify Foundation (2023), https://www.imda.gov.sg/-/media/imda/files/news-and-events/media-room/media-releases/2023/06/7-jun---ai-annoucements---annex-a.pdf (last visited Aug 10, 2023).

[12] Supporting responsible AI: discussion paper, Australian Government Department of Industry, Science and Resources, https://consult.industry.gov.au/supporting-responsible-ai (last visited Aug 10, 2023).

[13] Australian Government Department of Industry, Science and Resources, Safe and responsible AI in Australia (2023), https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia-discussion-paper.pdf (last visited Aug 10, 2023).

[14] 張璦, AI基本法草案聚焦隱私保護、應用合法性等7面向 擬設打假中心 [Draft AI Basic Law Focuses on Seven Aspects Including Privacy Protection and Lawful Application; Anti-Disinformation Center Planned], 中央通訊社 [Central News Agency], https://www.cna.com.tw/news/ait/202307040329.aspx (last visited Aug 10, 2023).

[15] 蘇思云, 鄭文燦:考量技術發展快應用廣 AI基本法延後提出 [Cheng Wen-tsan: Given Rapid Technological Development and Broad Application, Draft AI Basic Law Postponed], 中央通訊社 [Central News Agency] (2023/08/01), https://www.cna.com.tw/news/afe/202308010228.aspx (last visited Aug 10, 2023).

[16] supra, note 13, at 27.

[17] id.

[18] id., at 28.

[19] Soft law as a complement to AI regulation, Brookings, https://www.brookings.edu/articles/soft-law-as-a-complement-to-ai-regulation/ (last visited Aug 10, 2023).

[20] supra, note 5.

[21] Gary Marchant, “Soft Law” Governance of Artificial Intelligence (2019), https://escholarship.org/uc/item/0jq252ks (last visited Aug 10, 2023).

[22] How soft law is used in AI governance, Brookings, https://www.brookings.edu/articles/how-soft-law-is-used-in-ai-governance/ (last visited Aug 10, 2023).

[23] supra, note 13, at 27.

[24] Why Soft Law is the Best Way to Approach the Pacing Problem in AI, Carnegie Council for Ethics in International Affairs, https://www.carnegiecouncil.org/media/article/why-soft-law-is-the-best-way-to-approach-the-pacing-problem-in-ai (last visited Aug 10, 2023).

[25] id.

[26] id.

[28] id.

[29] Japan leaning toward softer AI rules than EU, official close to deliberations says, Reuters, https://www.reuters.com/technology/japan-leaning-toward-softer-ai-rules-than-eu-source-2023-07-03/ (last visited Aug 10, 2023).

[30] id.

[31] id.

[32] id.

 
